BOLT Information Retrieval Comprehensive Training and Evaluation Data

Linguistic Data Consortium


1.0 Overview

This package contains training and evaluation data produced in support of the Information Retrieval (IR) task within the DARPA Broad Operational Language Translation (BOLT) Program. The overall goal of the DARPA BOLT program was to improve machine translation capabilities in informal data genres (e.g. discussion forum posts) in Arabic, Chinese, and English. In support of BOLT Program goals, the BOLT IR task focused on advancing the state of the art of information retrieval in these same languages and genres. Specifically, BOLT IR sought to support development of systems that could:

  1) Take as input a natural language English query sentence,
  2) Return relevant responses to that query from a large corpus of informal documents in the three BOLT languages (Arabic, Chinese, and English), and
  3) Translate responses from non-English documents into English where necessary.

These objectives were chosen because they closely modeled the information retrieval needs of a monolingual English speaker working as a government intelligence analyst.

The National Institute of Standards and Technology (NIST) served as the evaluator and overall coordinator for all phases of BOLT Information Retrieval, providing evaluation task specifications and metrics for assessing system retrieval and translation capabilities.

This package contains all pilot, dry run, and evaluation data that was developed for each phase of the BOLT IR task within the BOLT Program (2012-2015). This includes: natural-language IR queries, system responses to queries, and manually-generated assessment judgments for system responses. This package also includes: BOLT discussion forum source documents (from which queries were developed and systems retrieved responses), scoring software for each phase's evaluation (developed by NIST), plus experimental data developed only in Phase 2 (to explore how effectively systems could reduce redundancy in their responses).
The datasets included in this package were originally released to BOLT and NIST as:

  LDC2012R34:  BOLT Phase 1 IR Dry Run Queries
  LDC2012R54:  BOLT Phase 1 IR Dry Run Assessments V1.1
  LDC2012R53:  BOLT Phase 1 IR Eval Queries V1.1
  LDC2012E118: BOLT Phase 1 IR Eval Assessment Results V1.1
  LDC2013E08:  BOLT Phase 2 IR Source Data Document List and Sample Query
  LDC2013E20:  BOLT Phase 2 IR Pilot Assessment Results V1.1
  LDC2013E46:  BOLT Phase 2 IR Dry Run Queries
  LDC2013E67:  BOLT Phase 2 IR Dry Run Assessment Results V1.1
  LDC2013E136: BOLT Phase 2 IR Eval Queries
  LDC2013E134: BOLT Phase 2 IR Eval Assessment Results Relevance Output from NIST
  LDC2014E01:  BOLT Phase 2 IR Eval Assessment Results Redundancy
  LDC2014R29:  BOLT Phase 3 IR Pilot Queries
  LDC2014E59:  BOLT Phase 3 IR Pilot Assessment Results Relevance
  LDC2014R60:  BOLT Phase 3 IR Dry Run Queries
  LDC2014R66:  BOLT Phase 3 IR Eval Queries
  LDC2015R06:  BOLT Phase 3 IR Eval Assessment Results Relevance V1.2

Summary of data included in this package, excluding Phase 2 experimental data (see section 2.0 Contents for more details on all datasets):

+-------+---------+-------------------+---------+-----------+------------+
|       |         |                   |         |           | Assessment |
| Phase | Task    | Source Documents  | Queries | Responses | Judgments  |
+-------+---------+-------------------+---------+-----------+------------+
| P1    | dry run | (same as P1 eval) |       9 |      1002 |       *793 |
|       | eval    |           1212102 |     146 |     18973 |      28270 |
| P2    | pilot   | (same as P2 eval) |       1 |       200 |        404 |
|       | dry run | (same as P2 eval) |      50 |      9726 |      19752 |
|       | eval    |           2020908 |     100 |     18545 |      98491 |
| P3    | pilot   | (same as P2 eval) |       6 |       799 |       3057 |
|       | dry run | (same as P2 eval) |      50 | **(none)  |      (N/A) |
|       | eval    | (same as P2 eval) |     150 |    122532 |     410909 |
+-------+---------+-------------------+---------+-----------+------------+

 * Phase 1 Dry Run assessments are available for only a subset of responses
** Phase 3 Dry Run queries were for research purposes only; systems did not produce responses, and there was no assessment


2.0 Contents

./data

  Directory containing all query, response, assessment, scoring, experimental data, and source data for Phases 1-3 of BOLT IR.

  NOTE: For convenience, ./data directories are listed in logical rather than default order.

./data/source_data

  Directory containing the discussion forum source data used to support Phases 1-3 of BOLT IR.

./data/source_data/lists

  Directory containing 6 TAB files. These files contain lists of threads (by language, where 'arz' = Arabic, 'cmn' = Chinese, 'eng' = English) used to support query development and system retrieval for Phase 1 and Phases 2-3 of BOLT IR:

  - p1_arz.tab
  - p1_cmn.tab
  - p1_eng.tab
  - p2-p3_arz.tab
  - p2-p3_cmn.tab
  - p2-p3_eng.tab

  These files each have three fields: Field 1 is a document (thread) ID, Field 2 is a token count for the XML source data file denoted by the document ID, and Field 3 contains a digit indicating the BOLT project source data corpus in which the document was originally released (1=LDC2012E04_V2, 2=LDC2012E16, 3=LDC2012E21, 4=LDC2012E54).

  NOTE: For Chinese source data documents, 1 word = 1.5 tokens

  NOTE: The file naming convention for the source data XML files is:

    bolt-<lng>-DF-<website_id>-<forum_id>-<thread_id>.<ext>

  where:
    <lng>        is one of the language IDs: arz, cmn and eng
    <website_id> is a numeric ID associated with the web site
    <forum_id>   is a numeric ID associated with the forum
    <thread_id>  is a numeric ID associated with the discussion thread
    <ext>        is the file extension "xml"

  NOTE: The threads selected for P1 of BOLT IR are a strict subset of those selected for P2-P3. (See 3.0 below for more detail on source data selection).
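  For users who want to work with the list files programmatically, here is a minimal, illustrative Python sketch (it is not part of the package). It assumes the three-field tab-separated layout described above with no header row, that document IDs follow the bolt-<lng>-DF-... naming convention, and that the package root is the working directory; the chosen list file and the placeholder names in the regex are examples only.

    # Illustrative only -- not distributed with the corpus. Parses one of the
    # ./data/source_data/lists TAB files described above. Assumes three
    # tab-separated fields per line (document ID, token count, corpus code)
    # and no header row.
    import csv
    import re

    LIST_FILE = "data/source_data/lists/p2-p3_arz.tab"  # example; any of the 6 files

    # Corpus codes from the field description above.
    CORPUS_BY_CODE = {
        "1": "LDC2012E04_V2",
        "2": "LDC2012E16",
        "3": "LDC2012E21",
        "4": "LDC2012E54",
    }

    # Thread-file naming convention; placeholder names are reconstructions.
    DOC_ID_RE = re.compile(
        r"bolt-(?P<lng>arz|cmn|eng)-DF-(?P<website_id>\d+)-"
        r"(?P<forum_id>\d+)-(?P<thread_id>\d+)(?:\.xml)?$"
    )

    with open(LIST_FILE, encoding="utf-8", newline="") as f:
        for doc_id, token_count, corpus_code in csv.reader(f, delimiter="\t"):
            tokens = int(token_count)
            # Per the NOTE above, for Chinese (cmn) documents 1 word = 1.5 tokens.
            words = tokens / 1.5 if "-cmn-" in doc_id else tokens
            parts = DOC_ID_RE.search(doc_id)
            print(doc_id,
                  tokens,
                  round(words),
                  CORPUS_BY_CODE.get(corpus_code, "unknown"),
                  parts.group("thread_id") if parts else "unparsed")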
./data/source_data/xml/arz

  Directory containing 272 ZIP archives of 773861 Arabic discussion forum threads used to develop queries and do system retrieval in Phases 1-3 of BOLT IR.

./data/source_data/xml/cmn

  Directory containing 218 ZIP archives of 789077 Chinese discussion forum threads used to develop queries and do system retrieval in Phases 1-3 of BOLT IR.

./data/source_data/xml/eng

  Directory containing 222 ZIP archives of 457970 English discussion forum threads used to develop queries and do system retrieval in Phases 1-3 of BOLT IR.

./data/queries

  Directory containing all query files for Phases 1-3 of BOLT IR.

./data/queries/full

  Directory containing 8 XML files. These files contain the full forms of all queries for Phases 1-3 of BOLT IR:

  - bolt-ir-p1-dryrun-queries.xml
  - bolt-ir-p1-eval-queries.xml
  - bolt-ir-p2-dryrun-queries.xml
  - bolt-ir-p2-eval-queries.xml
  - bolt-ir-p2-pilot-queries.xml
  - bolt-ir-p3-dryrun-queries.xml
  - bolt-ir-p3-eval-queries.xml
  - bolt-ir-p3-pilot-queries.xml

./data/queries/summary

  Directory containing 8 XML files. These files contain the summary forms of all queries for Phases 1-3 of BOLT IR:

  - bolt-ir-p1-dryrun-queries-summary.xml
  - bolt-ir-p1-eval-queries-summary.xml
  - bolt-ir-p2-dryrun-queries-summary.xml
  - bolt-ir-p2-eval-queries-summary.xml
  - bolt-ir-p2-pilot-queries-summary.xml
  - bolt-ir-p3-dryrun-queries-summary.xml
  - bolt-ir-p3-eval-queries-summary.xml
  - bolt-ir-p3-pilot-queries-summary.xml

./data/responses

  Directory containing system responses to all queries for Phases 1-3 of BOLT IR.

./data/responses/p1_dryrun

  Directory containing 3 XML files. These files contain unpooled system responses to the Phase 1 dry run queries, from three of the Phase 1 performers:

  - 15e87a-p1-dryrun-responses.xml
  - 3d2213-p1-dryrun-responses.xml
  - 671df0-p1-dryrun-responses.xml

  NOTE: In these and other performer-generated files, each team name has been replaced with an anonymized team ID (consisting of a 6-character alphanumeric string), and any other identifying information (e.g. organization name, team member names, email addresses, domain names, etc.) has been stripped out.

./data/responses/p1_eval/phase1-eval-pool-50.xml

  XML file containing NIST's pooled system responses to the Phase 1 evaluation queries, from all Phase 1 performers.

./data/responses/p2_pilot

  Directory containing 2 XML files. These files contain unpooled system responses to the Phase 2 pilot queries, from two of the Phase 2 performers:

  - 3d2213-p2-pilot-responses.xml
  - 671df0-p2-pilot-responses.xml

  NOTE: See above comments on anonymization.

./data/responses/p2_dryrun

  Directory containing 50 XML files. These files contain pooled system responses to each of the Phase 2 dry run queries, from all Phase 2 performers.

./data/responses/p2_eval

  Directory containing 100 XML files. These files contain pooled system responses to each of the Phase 2 evaluation queries, from all Phase 2 performers.

./data/responses/p3_pilot

  Directory containing 6 XML files. These files contain pooled system responses to each of the Phase 3 pilot queries, from all Phase 3 performers.

./data/responses/p3_eval

  Directory containing 150 XML files. These files contain pooled system responses to each of the Phase 3 evaluation queries, from all Phase 3 performers.

./data/assessments

  Directory containing assessment data for system responses to all queries in Phases 1-3 of BOLT IR.

./data/assessments/p1_dryrun
  Directory containing 3 XML files. These files contain LDC's relevance assessments for unpooled system responses to the Phase 1 dry run queries, from three of the Phase 1 performers:

  - 15e87a-p1-dryrun-responses-assessments.xml
  - 3d2213-p1-dryrun-responses-assessments.xml
  - 671df0-p1-dryrun-responses-assessments.xml

  NOTE: Relevance assessments are available for system responses to all Phase 1 dry run queries (BIR_100000-BIR_100008) for teams 3d2213 and 671df0. However, due to technical issues, relevance assessments are only available for the first four Phase 1 dry run queries (BIR_100000-BIR_100004) for team 15e87a.

./data/assessments/p1_eval/phase1-pool-50-assessments.xml

  XML file containing LDC's relevance assessments for pooled system responses to the Phase 1 evaluation queries, from all Phase 1 performers.

./data/assessments/p2_pilot

  Directory containing 2 XML files. These files contain LDC's relevance assessments for unpooled system responses to the Phase 2 pilot queries, from two of the Phase 2 performers:

  - 3d2213-p2-pilot-responses-assessments.xml
  - 671df0-p2-pilot-responses-assessments.xml

./data/assessments/p2_dryrun

  Directory containing 50 XML files. These files contain LDC's relevance assessments for pooled system responses to each of the Phase 2 dry run queries, from all Phase 2 performers.

./data/assessments/p2_eval

  Directory containing a TAB-delimited file and an XML file. The TAB file contains raw assessment judgments from LDC for pooled system responses to the Phase 2 evaluation queries, from all Phase 2 performers. The XML file contains interpreted assessment values (based on LDC's assessment judgments in the TAB file) from NIST:

  - bolt-p2-ir-eval-assessments-relevance-v2.0-edited.tab
  - p2-assessments-v2.0.1.xml

./data/assessments/p3_pilot/bolt-p3-ir-pilot-assessments-relevance.tab

  TAB-delimited file containing raw assessment judgments from LDC for pooled system responses to the Phase 3 pilot queries, from all Phase 3 performers.

./data/assessments/p3_eval/bolt-p3-ir-eval-assessments-relevance-v1.2.tab

  TAB-delimited file containing raw assessment judgments from LDC for pooled system responses to the Phase 3 evaluation queries, from all Phase 3 performers.

./data/scoring

  Directory containing materials used by NIST to produce evaluation scores for system submissions in the Phase 1, Phase 2, and Phase 3 evaluations of BOLT IR.

./data/scoring/phase1

  Directory containing the Phase 1 evaluation (P1 eval) team submissions (from performers), P1 eval queries and assessment files (from LDC) that are called with NIST's scoring scripts, P1 eval scoring scripts (from NIST), and evaluation outputs (from NIST). See the README.txt file in this directory for a full listing and description of directory contents, and instructions for running the Phase 1 evaluation scoring scripts.

./data/scoring/phase2

  Directory containing the Phase 2 evaluation (P2 eval) team submissions (from performers), P2 eval groupings (from performers) for the experimental redundancy assessment task, P2 eval queries and assessment files (from LDC) that are called with the scoring scripts, P2 eval scoring scripts (from NIST), and evaluation outputs (from NIST). See the README.txt file in this directory for a full listing and description of directory contents, and instructions for running the Phase 2 evaluation scoring scripts.
./data/scoring/phase3

  Directory containing the Phase 3 evaluation (P3 eval) team submissions (from performers), P3 eval queries and assessment files (from LDC) that are called with the scoring scripts, P3 eval scoring scripts (from NIST), and evaluation outputs (from NIST). See the README.txt file in this directory for a full listing and description of directory contents, and instructions for running the Phase 3 evaluation scoring scripts.

  NOTE: The following Phase 3 performer files do not validate against their referenced DTD "bolt-ir-cite-schema.dtd":

  ./data/scoring/phase3/team-submissions/baselines/3d2213.baseline_run3.01.xml
  ./data/scoring/phase3/team-submissions/baselines/3d2213.baseline_run3.02.xml
  ./data/scoring/phase3/team-submissions/baselines/3d2213.baseline_run3.03.xml
  ./data/scoring/phase3/team-submissions/baselines/3d2213.baseline_run3.04.xml
  ./data/scoring/phase3/team-submissions/baselines/3d2213.baseline_run3.05.xml
  ./data/scoring/phase3/team-submissions/baselines/3d2213.baseline_run3.06.xml
  ./data/scoring/phase3/team-submissions/baselines/3d2213.baseline_run3.07.xml
  ./data/scoring/phase3/team-submissions/baselines/3d2213.baseline_run3.08.xml
  ./data/scoring/phase3/team-submissions/baselines/3d2213.baseline_run3.09.xml
  ./data/scoring/phase3/team-submissions/baselines/3d2213.baseline_run3.10.xml
  ./data/scoring/phase3/team-submissions/baselines/3d2213.baseline_run3.11.xml

  However, NIST did not run an XML validator on the submissions, and the evaluation scripts do not rely on the XML being valid; rather, they are designed to be robust to possibly invalid (but still usable) XML.

./data/experimental

  Directory containing 3 TAB-delimited files. These files contain LDC's redundancy assessment judgments for 229 views and 1295 groups of 4408 relevant English citations to 61 P2 eval queries (see section 8.0 below for a description of this experimental redundancy data):

  - bolt-p2-ir-eval-assessments-redundancy-citations-v1.0.tab
  - bolt-p2-ir-eval-assessments-redundancy-groups-v1.0.tab
  - bolt-p2-ir-eval-assessments-redundancy-views-v1.0.tab

  NOTE: These groupings are experimental data produced only for Phase 2 of BOLT IR. System groupings and redundancy assessments are *not* needed to reproduce the Phase 2 evaluation or score IR systems.

./docs/p1

  Directory containing evaluation plans, guidelines, and other documentation for the Phase 1 BOLT IR data.

./docs/p1/bolt-ir-guidelines-v5.5_June_28_2012.docx

  NIST evaluation plan and task description for Phase 1 of BOLT IR.

./docs/p1/BOLT_IR_Query_Guidelines_v2.docx

  LDC query development guidelines for Phase 1 of BOLT IR.

./docs/p1/BOLT_IR_Assessment_Guidelines_V1.1.pdf

  LDC relevance assessment guidelines for Phase 1 of BOLT IR.

./docs/p2

  Directory containing evaluation plans, guidelines, and other documentation for the Phase 2 BOLT IR data.

./docs/p2/IR-guidelines-P2-v1.3.docx

  NIST evaluation plan and task description for Phase 2 of BOLT IR.

./docs/p2/BOLT_IR_Query_Development_Guidelines_V1.0.pdf

  LDC query development guidelines for Phase 2 of BOLT IR.

./docs/p2/BOLT_IR_Assessment_Guidelines_V1.1.pdf

  LDC relevance assessment guidelines for the Phase 2 pilot and Phase 2 dry run of BOLT IR.

./docs/p2/BOLT_IR_Assessment_Guidelines_V2.9.pdf

  LDC relevance assessment guidelines for the Phase 2 evaluation of BOLT IR. Note that these guidelines reflect the change in the Phase 2 evaluation to annotators providing individual relevance assessment judgments (rather than annotators assigning direct assessment values) for system answers.
./docs/p2/BOLT_IR_RedundancyAssessment_Guidelines_V1.0.pdf

  LDC experimental redundancy assessment guidelines for relevant system answers to the Phase 2 evaluation queries in BOLT IR.

./docs/p2/judgment2assessment_mappings.txt

  LDC understanding of mappings from relevance assessment judgments to relevance assessment values.

./docs/p3

  Directory containing evaluation plans, guidelines, and other documentation for the Phase 3 BOLT IR data.

./docs/p3/IR-guidelines-P3-v2.5.docx

  NIST evaluation plan and task description for Phase 3 of BOLT IR.

./docs/p3/BOLT_IR_Query_Development_Guidelines_V2.1.pdf

  LDC query development guidelines for Phase 3 of BOLT IR.

./docs/p3/BOLT_IR_Assessment_Guidelines_V3.2.pdf

  LDC relevance assessment guidelines for Phase 3 of BOLT IR.

./docs/p3/judgment2assessment_mappings.txt

  LDC understanding of mappings from relevance assessment judgments to relevance assessment values.

./dtd

  Directory containing 11 DTD files for validating query, response, and assessment XML files in Phases 1-3 of BOLT IR:

  - bolt-ir-citation-schema-v1.1.dtd
  - bolt-ir-citation-schema-v1.2.dtd
  - bolt-ir-citation-schema-v1.3.dtd
  - bolt-ir-submission.dtd
  - bolt-ir-topic-schema-summary-v1.0.dtd
  - bolt-ir-topic-schema-summary-v1.1.dtd
  - bolt-ir-topic-schema-v1.0.dtd
  - bolt-ir-topic-schema-v1.1.dtd
  - bolt-ir-topic-schema-v1.2.dtd
  - bolt-ir-v1.0.dtd
  - bolt-ir-v1.1.dtd

  NOTE: Calls to these DTDs can be found in the query, response, and assessment XML files under the ./data directory. If no DTD call is present, DTD validation was *not* available for that XML file when it was produced in the BOLT program, and only standard XML well-formedness checks can be run on the file (see the illustrative sketch below).
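  Because only some of the XML files in ./data declare a DOCTYPE, a quick first pass is to confirm well-formedness and report whether a DTD call is present. The sketch below is illustrative only (standard-library Python, not part of the package); full DTD validation would additionally require a validating parser (e.g. lxml) and DOCTYPE SYSTEM paths that resolve to the files in ./dtd.

    # Illustrative only -- not distributed with the corpus. Checks XML files
    # for well-formedness and reports whether a DOCTYPE declaration (DTD call)
    # appears near the top of each file.
    import sys
    import xml.etree.ElementTree as ET
    from pathlib import Path

    def check(path: Path) -> None:
        # A DOCTYPE declaration, if present, appears before the root element.
        head = path.read_text(encoding="utf-8", errors="replace")[:2000]
        has_doctype = "<!DOCTYPE" in head
        try:
            ET.parse(path)  # raises ParseError if the file is not well-formed
            status = "well-formed"
        except ET.ParseError as err:
            status = f"NOT well-formed ({err})"
        print(f"{path}\t{status}\tDTD call present: {has_doctype}")

    if __name__ == "__main__":
        # Example: python check_xml.py data/queries/full/*.xml
        for arg in sys.argv[1:]:
            check(Path(arg))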
./README.txt

  This file.


3.0 Source Data

Discussion forum data was used exclusively in all phases of BOLT IR, as this genre was most likely to contain entities, relations, and events of interest to the BOLT project.

In Phase 1 of BOLT IR, 400Mw of BOLT discussion forum data were selected for each language, prioritizing data from the fourth BOLT Phase 1 source data release (LDC2012E54), as this release was more likely to be rich in BOLT-targeted content. In Phases 2 and 3 of BOLT, approximately 300Mw of additional BOLT discussion forum source data was selected and added to the Phase 1 source data pool, resulting in a pool of 700Mw per language to support Phases 2 and 3 of BOLT IR.

NOTE: The same pool of source data was used in Phases 2 and 3 of BOLT IR, and the Phase 1 source data is a strict subset of the Phase 2-3 source data.


4.0 Query Development

The Arabic, Chinese, and English queries for each phase of BOLT IR were developed by native Egyptian Arabic, native Mandarin Chinese, and native English-speaking IR annotators, respectively, who conducted a time-limited search of the BOLT IR discussion forum source data, wrote natural language queries based on their exploration of a particular topic/area of interest in the corpus, and then annotated valid human answers to those queries. See NIST's Phase 1, 2, and 3 evaluation plans and LDC's query development guidelines in the ./docs directory for details on query development for each phase.

NOTE: In all three phases, queries were initially developed in full form (including a query description, query target and query type categories, and sample human answers). A summary form of the queries was then created using a subset of the information in the full queries files. These summary forms of the queries were distributed to performers at evaluation time, and the full forms of the queries were made available to teams after the conclusion of each phase's evaluation window.

NOTE: In Phase 3, an Egyptian Arabic dialect flag was provided for Arabic-language sample human answers. This flag provided an impressionistic judgment of whether or not an Arabic citation contains Egyptian Arabic. It is NOT a formal dialect annotation, and should not be interpreted as such. See the query development guidelines for more information on this flag.


5.0 Responses

For each phase's evaluation, NIST produced anonymized, pooled responses from each performer's system answers to the evaluation queries, which were then assessed for relevance by LDC.

NOTE: In the P1 dry run and P2 pilot, responses to queries were submitted by each performer individually (rather than being pooled by NIST). All other pilot and dry run performer submissions were pooled by NIST.

NOTE: Individual team response files (including performer-generated files in the ./data/scoring directory) have been anonymized by replacing each team name with a team ID (consisting of a 6-character alphanumeric string), and any other identifying information (e.g. organization name, team member names, email addresses, domain names, etc.) has been stripped from the files.


6.0 Assessment

In each phase's evaluation, LDC produced relevance assessments for pooled system responses to the evaluation queries for that phase. The assessment process and terminology were changed and updated between phases (most notably between Phases 1 and 2). The sections below outline these changes at a high level.

6.1 Phase 1 Assessment

In the Phase 1 evaluation, assessment was time-limited at three hours per kit, and as a result not all system responses (called "bullets" in Phase 1) were assessed. For each bullet assessed, assessors provided a three-way relevance assessment (yes, no, maybe), a "supports" judgment indicating whether the provenances for relevant bullets supported the system answer, and information indicating whether relevant bullets were coreferential with any sample human answers (called "facets" in Phase 1) in the evaluation queries. See the eval plans, guidelines, and other documentation in the ./docs/p1 directory for further details on Phase 1 assessment.

6.2 Phase 2 Assessment

In Phase 2, a number of aspects of the Phase 1 assessment process were changed. In Phase 2, assessment was exhaustive rather than time-limited, coreference of system answers with sample human answers was eliminated, the "supports" judgment was eliminated, and system and human sample answers were both referred to as "citations" or "cites". In addition to these changes, a three-way translation quality judgment was added (acceptable, problematic, not acceptable), and a "checked source" flag was added to indicate whether the assessor looked at the citation in its original context when making a relevance or translation quality judgment.

Beginning in the Phase 2 evaluation, the assessment process was overhauled, such that rather than directly assigning relevance assessment values (as they had in Phase 1 and the Phase 2 pilot and dry run), LDC annotators provided judgments about individual aspects of a system response's relevance to a query. These relevance assessment judgments were then transformed by NIST into relevance assessment values for purposes of scoring. See the eval plans, guidelines, and other documentation in the ./docs/p2 directory for further details on Phase 2 assessment.
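To make the Phase 2 evaluation assessment dimensions described above concrete, here is a purely illustrative Python sketch of a per-citation assessment record. The field and enum names are hypothetical and do not reflect the actual column layout of the assessment TAB/XML files; the authoritative definitions are in the guidelines under ./docs/p2 and ./docs/p3.

  # Hypothetical record structure for illustration only; see ./docs/p2 and the
  # assessment files themselves for the real field inventory and layout.
  from dataclasses import dataclass, field
  from enum import Enum
  from typing import Dict, Optional

  class TranslationQuality(Enum):
      # Three-way translation quality judgment introduced in Phase 2.
      ACCEPTABLE = "acceptable"
      PROBLEMATIC = "problematic"
      NOT_ACCEPTABLE = "not acceptable"

  @dataclass
  class CitationAssessment:
      query_id: str       # evaluation query the citation responds to
      citation_id: str    # system response ("cite") being assessed
      # From the Phase 2 evaluation onward, assessors recorded judgments about
      # individual aspects of relevance, which NIST later mapped to relevance
      # assessment values for scoring.
      relevance_judgments: Dict[str, str] = field(default_factory=dict)
      translation_quality: Optional[TranslationQuality] = None  # non-English cites
      checked_source: bool = False  # assessor viewed the cite in its original context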
6.3 Phase 3 Assessment

In Phase 3, the assessment process and terminology were kept largely the same as in the Phase 2 evaluation, with the following additions:

- An Egyptian Arabic dialect flag was provided for Arabic-language system answers for which assessors checked the source. This flag provided an impressionistic judgment of whether or not an Arabic citation contains Egyptian Arabic. It is NOT a formal dialect annotation, and should not be interpreted as such. See the assessment guidelines for more information on this flag.

- A "generosity" flag was provided, indicating whether any of the assessment judgments made by assessors were made generously.

See the eval plans, guidelines, and other documentation in the ./docs/p3 directory for further details on Phase 3 assessment.


7.0 Scoring

For each phase's evaluation, LDC assessment data was distributed to NIST, who scored system responses based on LDC's relevance assessments (in Phase 1) or on interpreted LDC relevance assessment judgments (in Phases 2 and 3). NIST's scoring materials for each phase (including scoring scripts, evaluation outputs, and performer submissions) can be found in the ./data/scoring/{phase1,phase2,phase3} directories. Please see NIST's README files for full details on the contents of those subdirectories, and instructions for running the scoring scripts for each phase.

NOTE: LDC successfully tested the scoring scripts and reproduced the evaluation outputs for NIST's phase1 and phase2 scoring scripts, but obtained errors when testing NIST's phase3 scoring scripts. NIST was alerted to this issue, but was unable to reproduce the errors LDC encountered during phase3 script testing. Any errors users of this package encounter when testing or running NIST scoring scripts should be directed to Ian Soboroff at the National Institute of Standards and Technology.


8.0 Experimental Data

In Phase 2 only, an experimental redundancy task was performed in which systems produced automated thematic groupings and labels for relevant citations at two levels (view and group). The purpose of this task was to determine how effectively systems could reduce redundant answers in their output. LDC then produced redundancy assessment judgments for these groupings at the view, group, and citation level:

- Views were assessed for 2 categories: appropriateness and informativeness of view label.
- Groups were assessed for 3 categories: cohesion, appropriateness, and informativeness of group label.
- Citations were assessed for 1 category: appropriateness (of inclusion in a group).

As with relevance assessment, LDC redundancy judgments were then interpreted by NIST as redundancy assessment values, which were used for analysis of system groupings.

NOTE: This data is called experimental because redundancy assessment did *not* affect performers' official BOLT Phase 2 evaluation scores. Please see the ./docs/p2 directory for eval plans, guidelines, and other details on redundancy assessment.


9.0 Acknowledgments

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-11-C-0145. The content does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.

Our thanks go to the National Institute of Standards and Technology (NIST), who designed and coordinated all evaluations for the BOLT Information Retrieval task.
Finally, our thanks to the LDC staff members whose work contributed to the development of this corpus: Jonathan Wright, Haejoong Lee, and Xiaoyi Ma (Technical support); Zhiyi Song, Ramez Zakhary, and Joseph Ellis (Annotation support).


10.0 Copyright Information

(c) 2016 Trustees of the University of Pennsylvania


11.0 Authors

For further information about this corpus, or the BOLT IR project, contact the following project staff at LDC:

  Stephanie Strassel, PI
  Kira Griffitt, Project Manager

----------------------------------------------------------------
README created by Kira Griffitt on March 23, 2015
README updated by Kira Griffitt on April 16, 2015
README updated by Kira Griffitt on June 11, 2015
README updated by Kira Griffitt on June 14, 2015
README updated by Kira Griffitt on August 31, 2015
README updated by Kira Griffitt on January 26, 2016
README updated by Kira Griffitt on January 27, 2016
README updated by Kira Griffitt on January 28, 2016
README updated by Kira Griffitt on November 4, 2016
README updated by Kira Griffitt on November 9, 2016
README updated by Kira Griffitt on September 4, 2018