TAC Relation Extraction Dataset
|Item Name:||TAC Relation Extraction Dataset|
|Author(s):||Victor Zhong, Yuhao Zhang, Danqi Chen, Gabor Angeli, Christopher Manning|
|LDC Catalog No.:||LDC2018T24|
|Release Date:||December 15, 2018|
|Data Source(s):||newswire, web collection|
|License(s):||LDC User Agreement for Non-Members|
|Online Documentation:||LDC2018T24 Documents|
|Licensing Instructions:||Subscription & Standard Members, and Non-Members|
|Citation:||Zhong, Victor, et al. TAC Relation Extraction Dataset LDC2018T24. Web Download. Philadelphia: Linguistic Data Consortium, 2018.|
TAC Relation Extraction Dataset (TACRED), developed by the Stanford NLP Group, is a large-scale relation extraction dataset with 106,264 examples built over English newswire and web text used in the NIST TAC KBP English slot filling evaluations during the period 2009-2014. The annotations were derived from TAC KBP relation types (see the guidelines), from human annotations developed by the Linguistic Data Consortium, and from crowdsourcing via Mechanical Turk.
In each year of the slot filling evaluation, 100 entities (people or organizations) were given as queries (i.e., subjects), for which participating systems were to find associated relations and object entities. TACRED consists of all sentences judged in the TAC KBP evaluation together with a sampling of other sentences from the evaluation corpus that contain the query entities. Each sentence was crowd-annotated using Mechanical Turk, where each annotator was asked to mark the subject and object entity spans and the corresponding relation.
Data is presented in both CoNLL and JSON formats, encoded in UTF-8. Scoring tools and gold relation labels are also included. Source corpora used for this dataset were TAC KBP Comprehensive English Source Corpora 2009-2014 (LDC2018T03) and TAC KBP English Regular Slot Filling - Comprehensive Training and Evaluation Data 2009-2014 (LDC2018T22).
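As a minimal sketch of how the JSON data might be consumed, the snippet below parses one TACRED-style example and recovers the subject and object text from token offsets. The field names (`token`, `relation`, `subj_start`, etc.) and the inclusive-offset convention are assumptions based on common releases of the dataset, not a guaranteed schema; consult the included documentation for the authoritative format.

```python
import json

# A single hypothetical TACRED-style example for illustration; the
# field names and values here are assumptions, not taken from the
# actual release.
example_json = """
[
  {
    "id": "e7798fb926b9403cfcd2",
    "relation": "per:title",
    "token": ["Douglas", "Flint", "will", "become", "chairman"],
    "subj_start": 0,
    "subj_end": 1,
    "obj_start": 4,
    "obj_end": 4,
    "subj_type": "PERSON",
    "obj_type": "TITLE"
  }
]
"""

def extract_spans(example):
    """Return (subject text, object text, relation) for one example.

    Span indices are assumed to be inclusive token offsets, hence the
    +1 when slicing.
    """
    tokens = example["token"]
    subj = " ".join(tokens[example["subj_start"]:example["subj_end"] + 1])
    obj = " ".join(tokens[example["obj_start"]:example["obj_end"] + 1])
    return subj, obj, example["relation"]

examples = json.loads(example_json)
for ex in examples:
    subj, obj, rel = extract_spans(ex)
    print(f"{subj} --[{rel}]--> {obj}")
```

Keeping the raw token list alongside integer spans, rather than pre-extracted strings, lets downstream models mark entity positions with their own special tokens or position features.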
For detailed information about the dataset and benchmark results, please refer to the TACRED paper.