(1) OVERVIEW

The goal of this experiment is to evaluate the extraction of n-ary relations (n > 2). Sentences from the New York Times Corpus are annotated with one relation trigger and all of its arguments. For each sentence, a system may extract any number of relations. We select the extracted relation whose name contains the annotated trigger, if one exists; all other extracted relations are ignored. The selected relation is evaluated by the number of arguments it extracts correctly. An extracted argument is deemed correct if it is annotated in the sentence; otherwise, it is deemed incorrect. (An illustrative sketch of this selection and scoring rule is given in section (5) below.)

(2) RUNNING THE EVALUATION SCRIPT

$ perl nary-manual-evaluation.pl ground-truth system-output

(3) FILE FORMAT - GROUND TRUTH

The ground truth file contains one sentence per line. Each sentence is annotated with a trigger and its arguments. Entities are enclosed in triple square brackets; triggers are enclosed in triple curly brackets.

Example:

And [[[PER David Keh]]] has {{{transformed}}} [[[LOC Noodle Road]]] , 209 East 49th Street , into [[[PER Din Tai-Fone]]] of Taipei , a cafe serving northern Chinese noodles and dumplings .

(4) FILE FORMAT - SYSTEM OUTPUT

Each line of a system output file must contain 2 fields:

- Relation: the extracted relation. Systems must output three dashes ("---") to denote no relation.
- Entities: the list of entities involved in the relation.

In our system output files, we included a third field containing the annotated sentence from the ground truth. This field is used for manual analysis only and is ignored by the evaluation script. The sentence in each line of the system output must match the sentence on the corresponding line of the ground truth.
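
(5) ILLUSTRATIVE SKETCHES

The Perl snippets below are illustrations only; they are not part of the evaluation script, and all variable and function names are our own.

The first sketch shows how the ground-truth annotations described in section (3) can be read back: the trigger is wrapped in triple curly brackets and each entity is wrapped in triple square brackets as "[[[TYPE mention]]]".

#!/usr/bin/perl
use strict;
use warnings;

# Example sentence in the ground-truth format of section (3).
my $sentence = 'And [[[PER David Keh]]] has {{{transformed}}} [[[LOC Noodle Road]]] , 209 East 49th Street , into [[[PER Din Tai-Fone]]] of Taipei , a cafe serving northern Chinese noodles and dumplings .';

# The trigger is enclosed in triple curly brackets.
my ($trigger) = $sentence =~ /\{\{\{(.+?)\}\}\}/;

# Entities are enclosed in triple square brackets as "[[[TYPE mention]]]".
my @entities;
while ($sentence =~ /\[\[\[(\S+)\s+(.+?)\]\]\]/g) {
    push @entities, [ $1, $2 ];    # [ type, mention ]
}

print "trigger: $trigger\n";
printf "entity:  %s (%s)\n", $_->[1], $_->[0] for @entities;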
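
The second sketch restates the selection and scoring rule of section (1), under our own assumptions about data structures: each extracted relation is a hash with a "name" and a list of "args", and the gold arguments are plain strings.

# Select the extracted relation whose name contains the annotated trigger
# and count how many of its arguments are annotated in the sentence.
sub evaluate_sentence {
    my ($trigger, $gold_args, @extracted) = @_;

    # Keep the relation whose name contains the trigger; ignore the rest.
    my ($selected) = grep { index($_->{name}, $trigger) >= 0 } @extracted;
    return (0, 0) unless $selected;    # nothing to evaluate for this sentence

    # An extracted argument is correct iff it is annotated in the sentence.
    my %gold      = map { $_ => 1 } @$gold_args;
    my $correct   = grep {  $gold{$_} } @{ $selected->{args} };
    my $incorrect = grep { !$gold{$_} } @{ $selected->{args} };
    return ($correct, $incorrect);
}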
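
The last sketch produces one system output line as described in section (4). The field delimiters are an assumption on our part; check nary-manual-evaluation.pl for the separators it actually expects.

# ASSUMPTION: tab-separated fields and a comma-separated entity list.
sub format_output_line {
    my ($relation, @entities) = @_;
    $relation = '---' unless defined $relation;    # no relation extracted
    return join("\t", $relation, join(', ', @entities)) . "\n";
}

print format_output_line('transformed', 'David Keh', 'Noodle Road', 'Din Tai-Fone');
print format_output_line(undef);    # sentence with no extracted relation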