Motivation & Goals

The 2019 Conference on Computational Natural Language Learning (CoNLL) hosts a shared task (or ‘system bake-off’) on Cross-Framework Meaning Representation Parsing (MRP 2019). The goal of the task is to advance data-driven parsing into graph-structured representations of sentence meaning. All things semantic have received heightened attention in recent years, and despite remarkable advances in vector-based (continuous and distributed) encodings of meaning, ‘classic’ (discrete and hierarchically structured) semantic representations will continue to play an important role in ‘making sense’ of natural language. While parsing has long been dominated by tree-structured target representations, there is now growing interest in general graphs as more expressive and arguably more adequate target structures for sentence-level analysis beyond surface syntax, in particular for the representation of semantic structure.

For the first time, this task combines formally and linguistically different approaches to meaning representation in graph form in a uniform training and evaluation setup. Participants are invited to develop parsing systems that support five distinct semantic graph frameworks (all of which encode core predicate–argument structure, among other things) in a single implementation. Training and evaluation data will be provided for all five frameworks, and participants are asked to design and train one system that predicts sentence-level meaning representations in all frameworks in parallel. Architectures that utilize complementary knowledge sources (e.g. via parameter sharing) are encouraged, though not required: learning from multiple flavors of meaning representation in tandem has hardly been explored so far (notable exceptions include the parsers of Peng et al., 2017; 2018, and Hershcovich et al., 2018).

The task seeks to reduce framework-specific ‘balkanization’ in the field of meaning representation parsing. Expected outcomes include (a) a unifying formal model over different semantic graph banks, (b) uniform representations and scoring, (c) systematic contrastive evaluation across frameworks, and (d) increased cross-fertilization via transfer and multi-task learning. We hope to engage the combined community of parser developers for graph-structured output representations, including participants from the six prior framework-specific tasks held at the Semantic Evaluation (SemEval) exercises between 2014 and 2019. Owing to the scarcity of semantic annotations across frameworks, the shared task is regrettably limited to parsing English for the time being.

Some semi-formal definitions and a brief review of the five semantic graph frameworks represented in the shared task are available on separate pages.  The task will provide training data across frameworks in a uniform JSON serialization, as well as conversion and scoring software. If the task sounds potentially interesting to you, please follow the instructions for prospective participants.
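The exact interchange format is specified by the task organizers on the pages linked above. Purely as an illustration of what working with a graph-per-line JSON serialization can look like, here is a minimal sketch; the field names used (`id`, `input`, `nodes`, `edges`, and the labels inside them) are assumptions for this example, not the official specification:

```python
import json

# Illustrative only: this hand-written graph and its field names
# ("id", "input", "nodes", "edges") are assumptions about a
# graph-per-line JSON serialization, not the official MRP format.
example = (
    '{"id": "0", "input": "The dog barked.", '
    '"nodes": [{"id": 0, "label": "dog"}, {"id": 1, "label": "bark"}], '
    '"edges": [{"source": 1, "target": 0, "label": "ARG0"}]}'
)

def summarize(line: str) -> str:
    """Parse one JSON-serialized graph and report its size."""
    graph = json.loads(line)
    nodes = graph.get("nodes", [])
    edges = graph.get("edges", [])
    return f"graph {graph['id']}: {len(nodes)} node(s), {len(edges)} edge(s)"

print(summarize(example))  # → graph 0: 2 node(s), 1 edge(s)
```

One attraction of a line-oriented JSON serialization is that the same reader works unchanged for all five frameworks, which is what makes uniform scoring across graph banks practical.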

Tentative Schedule

March 6, 2019: First Call for Participation
March 25, 2019: Specification of Uniform Interchange Format; Availability of Sample Training Graphs
April 15, 2019: Second Call for Participation; Initial Release of Training Data; Availability of Evaluation Software
May 13, 2019: Closing Date for Companion Data Nominations; Update of Training Data (If Need Be)
July 8–22, 2019: Evaluation Period (Held-Out Data)
July 29, 2019: Official End-to-End Evaluation Results
September 2, 2019: Submission of System Descriptions
September 16, 2019: Reviewer Feedback Available
September 30, 2019: Camera-Ready Manuscripts
November 3–4, 2019: Presentation and Discussion of Results
Last updated: 2019-03-07 (10:03)