Registration of Intent

To make your interest in the MRP 2019 task known and to receive updates on data and software, please self-subscribe to the mailing list for (moderated) MRP announcements.  The mailing list archives are available publicly.  To obtain the training data for the task, please (a) make sure your team is subscribed to the above mailing list and (b) fill in and return to the LDC the no-cost license agreement for the task.  We may ask for a somewhat more formal registration of candidate participants in early June, as the evaluation period nears (please see the task schedule and below; more information to come).

System Development

The task operates as what is at times called a closed track.  Beyond the training and ‘companion’ data provided by the co-organizers, participants are restricted in which additional data and pre-trained models they may legitimately use in system development.  These constraints are imposed to improve comparability of results and overall fairness.

Evaluation Period

The evaluation period of the task will run from Monday, July 8, to Monday, July 22, 2019.  At the start of the evaluation period, we will make available the evaluation data (in the same format as the training graphs, but without the nodes, edges, and tops values).  Participants will be expected to prepare their submission by processing all evaluation files using the same general parsing system.  Parser outputs have to be uploaded in the MRP common interchange format, for which there will be a basic validation service.  Participants must agree to place their submitted parser outputs in the public domain, such that all submissions can be made available for general download after completion of the evaluation period.
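To illustrate the kind of checks a basic validation service might perform, the sketch below inspects one record of JSON-lines parser output.  The field names (id, framework, input, tops, nodes, edges) follow the MRP interchange format as commonly described, but the exact schema and the official validator are authoritative; this is a hedged, illustrative sketch only.

```python
import json

# Illustrative field names only; consult the official MRP format
# description and validation service for the authoritative schema.
REQUIRED_KEYS = {"id", "framework", "input"}
GRAPH_KEYS = {"tops", "nodes", "edges"}


def validate_line(line):
    """Return a list of problems found in one JSON-lines graph record."""
    problems = []
    try:
        graph = json.loads(line)
    except json.JSONDecodeError as error:
        return [f"not valid JSON: {error}"]
    for key in REQUIRED_KEYS:
        if key not in graph:
            problems.append(f"missing required field: {key}")
    # Evaluation inputs omit the graph itself; parser outputs must supply it.
    for key in GRAPH_KEYS:
        if key not in graph:
            problems.append(f"missing graph field: {key}")
    # Every edge should reference existing node identifiers.
    node_ids = {node.get("id") for node in graph.get("nodes", [])}
    for edge in graph.get("edges", []):
        if edge.get("source") not in node_ids or edge.get("target") not in node_ids:
            problems.append(f"dangling edge: {edge}")
    return problems
```

Running such a check locally before uploading would catch malformed JSON, missing graph components, and edges that point at nonexistent nodes, which are the most common mechanical errors in submitted outputs.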

System Ranking

The primary evaluation metric for the task will be cross-framework MRP F1 scores.  Participating parsers will be ranked based on average F1 across all evaluation data and frameworks.  For broader comparison, additional per-framework scores will be published, both in the MRP metric and in applicable framework-specific metrics.  Although not the primary goal of the task, ‘partial’ submissions are possible, in the sense of not providing parser outputs for all target frameworks.
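Schematically, graph-level F1 scores of this kind are computed by decomposing each graph into comparable elements (for example top nodes, node labels, and edges) and counting matches between the gold and system graphs.  The sketch below assumes an alignment between the two graphs is already given; the official scorer additionally searches for the best node-to-node correspondence, and the per-framework numbers shown are purely hypothetical.

```python
def f1(gold, system):
    """Tuple-level F1 over two sets of comparable graph elements
    (e.g. tops, node labels, edges) under a fixed alignment.
    Schematic only: the official MRP scorer also searches for an
    optimal node-to-node correspondence between the graphs."""
    matched = len(gold & system)
    precision = matched / len(system) if system else 0.0
    recall = matched / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Cross-framework ranking averages F1 over all evaluation data and
# frameworks; the per-framework values below are made up for illustration.
scores = {"dm": 0.90, "psd": 0.88, "eds": 0.85, "ucca": 0.75, "amr": 0.70}
average = sum(scores.values()) / len(scores)
```

Because the ranking uses an unweighted average over frameworks, a ‘partial’ submission that omits one framework effectively scores zero for it, which is why partial participation, while possible, is not the primary goal of the task.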

Publication of Results

Last updated: 2019-06-25 (23:06)