This Page Pertains to the (Now Closed) 2019 Edition of the MRP Shared Task

Contact Addresses

There is a moderated mailing list mrp-users@nlpl.eu for task participants (or candidate future users of the data and evaluation software).  All interested parties are kindly asked to self-subscribe to this list.  The archives of the mailing list are open to the public, as the list is intended both for updates from the organizers and for general discussion among participants.

Additionally, the organizers of the MRP 2019 parsing task can be reached at the following address:

   mrp-organizers@nlpl.eu

News & Updates

March 30, 2020
There will be a second instance of the Cross-Framework Meaning Representation Parsing shared task (MRP 2020) at the 2020 Conference on Computational Language Learning (CoNLL).  Instructions for prospective participants are now available.
November 5, 2019
It is finished.  The proceedings from the shared task are now hosted in the ACL Anthology; also, slides from all oral presentations are available for download.  Please watch this space for news regarding MRP 2020, the follow-up task next year.
September 13, 2019
The schedule for the presentation of results has been posted: The shared task has a 90-minute oral plenary slot on November 3, 2019, plus a poster session.  Fifteen system descriptions will be presented through both an oral ‘blitz’ and a poster.
September 13, 2019
The official evaluation results have been updated for the MRP metric on the bi-lexical (DM and PSD) graphs: improved MCES search and initialization yields score increases of 1.2 points, on average.  The official relative ranking of systems is not affected.
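In broad strokes, the metric scores each pair of gold and system graphs by the largest set of corresponding tuples that any node-to-node correspondence can align; because the MCES search is approximate, a stronger search or better initialization can only enlarge the overlap it finds, hence higher scores without any change to the graphs themselves.  The following minimal sketch (illustrative only, not mtool's actual code) shows how such per-graph counts turn into scores:

   # Illustrative sketch: turn MCES-style tuple counts into scores.
   # 'matched' is the size of the best correspondence found by the
   # search; a more exhaustive search can only increase it.
   def prf(matched, gold_total, system_total):
       precision = matched / system_total if system_total else 0.0
       recall = matched / gold_total if gold_total else 0.0
       f1 = (2 * precision * recall / (precision + recall)
             if precision + recall else 0.0)
       return precision, recall, f1
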
August 9, 2019
Evaluation results are now generally available, as are system submissions and a subset of the evaluation graphs.  A total of 16 teams submitted parser outputs, plus another two with involvement of task co-organizers (outside the primary ranking).
July 14, 2019
The CodaLab site for team registration and submission of parser outputs had been down for five days.  The site is open again, and the end of the evaluation period has been extended until Thursday, July 25, 12:00 noon in Central Europe (CEST).
July 10, 2019
The sentence and token counts for the UCCA and AMR training data have been corrected on the MRP web pages.  Please recall that the UCCA graphs from the LDC release of the training data have been superseded by an ‘overlay’ to the original package.
July 1, 2019
Additional information on the evaluation data and procedures for team registration and submission of parser outputs are now available.  The official scorer implementation in mtool has seen several bug fixes and efficiency improvements.
June 24, 2019
A re-release of the MRP companion data provides reference (if not gold-standard) ‘alignments’ (i.e. anchoring) for the AMR training graphs, obtained from the JAMR and ISI aligners.  mtool now offers basic graph validation.
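The new validation, for instance, can sanity-check a file of graphs with a call of the following general form (exact option spellings may vary across mtool revisions; the repository documentation is authoritative):

   python main.py --read mrp --validate all graphs.mrp
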
June 16, 2019
The MRP evaluation software (mtool, the Swiss Army Knife of Meaning Representation) now provides an implementation of the official MRP cross-framework metric.  Debugging and refinement are still ongoing.
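Scoring a file of system outputs against the corresponding gold-standard graphs should take an invocation of roughly the following form (see the mtool README for exact, up-to-date usage):

   python main.py --read mrp --score mrp --gold gold.mrp system.mrp
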
June 3, 2019
An initial release of the MRP evaluation tool is available on Microsoft GitHub.  Unified cross-framework evaluation, however, is still under development.  Please monitor the repository and its issue tracker for continuous updates.
May 25, 2019
We have clarified the constraints on which data resources can be used in addition to the training and companion data distributed by the task organizers.  The deadline for nominations of additional data has been extended to Monday, June 3, 2019.
May 21, 2019
We have released an update to the UCCA training graphs (improving consistency and adding more annotations), i.e. an ‘overlay’ to the original, full training data.  Also, tokenization and morpho-syntactic parses of the training data are now available.
April 26, 2019
We have moved the mid-May target date for extended (UCCA) training data and the morpho-syntactic ‘companion’ parses back by one week.  Also, the official scorer may not be available before early June, but its approach will mirror extant, framework-specific evaluation tools.
April 9, 2019
New information has been added to the task web site, including a description of the uniform serialization; a public sample of sentences annotated in all frameworks; and the no-cost evaluation license for access to the training data.
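By way of illustration, the uniform serialization renders each graph as one JSON object per line, combining the underlying input string with lists of nodes and edges and, where applicable, character-level anchoring.  The following hand-constructed (and abridged) fragment, with line breaks added for readability, conveys the general shape; please see the task documentation for the normative field inventory:

   {"id": "0", "flavor": 0, "framework": "dm", "version": 0.9,
    "time": "2019-04-10", "input": "Pierre slept.", "tops": [1],
    "nodes": [{"id": 0, "label": "Pierre", "anchors": [{"from": 0, "to": 6}]},
              {"id": 1, "label": "sleep", "anchors": [{"from": 7, "to": 12}]}],
    "edges": [{"source": 1, "target": 0, "label": "ARG1"}]}
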
March 6, 2019
The initial task web site is on-line, and the first call for participation has been posted to major mailing lists.  Please sign up for the task mailing list to receive continuous updates.

Task Co-Organizers

Acknowledgements

Several colleagues have assisted in designing the task and preparing its data and software resources.  Dan Flickinger and Emily M. Bender provided a critical and constructive review of (a sample of) the DM and EDS graphs.  Sebastian Schuster kindly made available a pre-release of the converter from PTB-style constituent trees to (basic) UD 2.x dependency graphs.  Milan Straka provided invaluable assistance in training and running the latest development version of his UDPipe system, to generate the morpho-syntactic companion trees for the MRP training sentences, as well as in improving the mtool software.  Zdeňka Urešová graciously took the time to provide (and quality-control) fresh gold-standard annotations for the PSD evaluation graphs.  Dotan Dvir coordinated the team of UCCA annotators, always ensuring that the corpora were ready in time.  The ‘companion’ alignments for AMR graphs were most helpfully prepared by Jayeol Chun, including constraining the aligners to the tokenization used in the MRP morpho-syntactic companion parses.  Andrey Kutuzov helped with the preparation of morpho-syntactic companion trees for the evaluation data.

We are grateful to the Nordic e-Infrastructure Collaboration for their support to the Nordic Language Processing Laboratory (NLPL), which provides technical infrastructure for the MRP 2019 task.  Also, we warmly acknowledge the assistance of the Linguistic Data Consortium (LDC) in distributing the training data for the task to participants at no cost.
