Emerging and Rare entity recognition

This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation), but recall on them is a real problem in noisy text, even among annotators. This drop in recall tends to be due to novel entities and surface forms. Take for example the tweet “so.. kktny in 30 mins?”: even human experts find the entity kktny hard to detect and resolve (here, it refers to the TV show Kourtney and Kim Take New York). This task will evaluate the ability to detect and classify novel, emerging, singleton named entities in noisy text.

The goal of this task is to provide a definition of emerging and rare entities and, based on that, datasets for detecting these entities.

Goals

Entities that are rare, or that have unusual surface forms, are tougher to detect than commonly-mentioned ones (Augenstein et al. 2017), and recall on them is a significant problem in rapidly-changing text types (Derczynski et al. 2015). Moreover, the entities that are common in newly-emerging text, such as fresh newswire or social media posts, are often themselves new, not having been mentioned in prior datasets. This poses a challenge to NER systems: in many deployments, unusual, previously-unseen entities need to be detected reliably and with high recall. The WNUT 2017 shared task poses this challenge directly to participants, with turbulent data containing few repeated entities, drawn from rapidly-changing text types and from sources of non-mainstream entities.

Evaluation

The shared task evaluates against two measures. The first is classical entity-level precision, recall and their harmonic mean, F1. The second evaluates surface forms: the sets of unique surface forms in the gold data and in the submission are compared, and precision, recall and F1 are computed over these. This second measure captures how well systems recognise a diverse range of entities, rather than just the most frequent surface forms. For example, the classical measure rewards a system that always recognises "London" accurately, so such a system would score highly on a corpus where 50% of the Location entities are just "London"; the surface form measure credits "London" only once, regardless of how many times it appears in the text.
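
As a rough illustration of the second measure, the sketch below (an assumption about the semantics, not the official scorer; wnuteval.py is authoritative) reduces entities to unique (surface form, type) pairs and scores the overlap between gold and prediction:

    # Sketch (assumed semantics): collapse entities to unique (surface form, type)
    # pairs, then score the overlap between the gold and predicted sets.
    def surface_form_scores(gold_entities, pred_entities):
        gold = set(gold_entities)   # e.g. {("London", "location"), ...}
        pred = set(pred_entities)
        correct = len(gold & pred)
        p = correct / len(pred) if pred else 0.0
        r = correct / len(gold) if gold else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f1

    # "London" predicted correctly many times still only counts once here.
    gold = [("London", "location"), ("kktny", "creative-work")]
    pred = [("London", "location"), ("London", "location")]
    print(surface_form_scores(gold, pred))  # (1.0, 0.5, 0.666...)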

Systems are evaluated using a modified version of conlleval.py, downloadable here: wnuteval.py.

Input is via stdin or a filename parameter; try something like: ./wnuteval.py datafilename

Each line of the system output data file takes the format: token gold-label predicted-label.
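
For illustration, a fragment of a system output file might look like the following (the tokens and labels here are invented for the example; columns are whitespace-separated, entity labels use BIO encoding, and a blank line separates documents; see the data README for the exact label strings):

    so       O                  O
    ..       O                  O
    kktny    B-creative-work    O
    in       O                  O
    30       O                  O
    mins     O                  O
    ?        O                  O

Here the gold annotation marks kktny as a creative work, but the system has predicted O, so this emerging entity counts against recall under both measures.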

Important dates
  • Training and dev data to be released: May/June 2017
  • Test data released: 21 June 2017
  • Result submission: 30 June 2017
  • Shared-task results and gold annotations for test data: 3 July 2017
  • System description papers due: 7 July 2017
  • Reviews returned: 15 July 2017
  • Camera-ready deadline: 21 July 2017
  • Workshop date: 7 September 2017

Entity classes

  1. Person
  2. Location (including GPE, facility)
  3. Corporation
  4. Consumer good (tangible goods, or well-defined services)
  5. Creative work (song, movie, book, and so on)
  6. Group (subsuming music band, sports team, and non-corporate organisations)
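
In the annotated files these classes appear as BIO-encoded token labels. The sketch below shows the implied tag space; the short class names used here (e.g. product for consumer good) are assumptions, and the data README gives the definitive label strings:

    # Sketch of the BIO tag space implied by the six classes above.
    # The short names (e.g. "product" for consumer good) are assumptions;
    # check the data README for the exact label strings.
    CLASSES = ["person", "location", "corporation", "product",
               "creative-work", "group"]
    TAGS = ["O"] + [prefix + "-" + c for c in CLASSES for prefix in ("B", "I")]
    print(TAGS)
    # ['O', 'B-person', 'I-person', 'B-location', 'I-location', ...]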

Downloads

If you use this data, please cite the task paper.

  • Training data: wnut17train.conll (Twitter)
  • Development data: emerging.dev.conll (YouTube)
  • Data README: README.md
  • Eval script: wnuteval.py
  • Test data (no tags): emerging.test (StackExchange and Reddit)
  • Test data with tags: emerging.test.annotated

Data is to be downloaded directly; links are given out via the WNUT mailing list and this page. All the data, as well as the team submissions, will be made available after the task has finished. The data is in CoNLL format; see the README files in the downloads for more details.
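
For convenience, such files can be loaded with a short reader; the sketch below assumes whitespace-separated columns and blank lines between documents (the README files are authoritative on the exact format):

    # Minimal CoNLL-style reader (sketch). Assumes one token per line,
    # whitespace-separated columns, and a blank line between documents.
    def read_conll(path):
        docs, current = [], []
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.rstrip("\n")
                if not line.strip():
                    if current:
                        docs.append(current)
                        current = []
                else:
                    current.append(tuple(line.split()))
        if current:              # file may not end with a blank line
            docs.append(current)
        return docs

    # docs = read_conll("wnut17train.conll")
    # print(len(docs), docs[0][:5])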

The dataset is kept live on GitHub, including source data: github.com/leondz/emerging_entities_17

Results

  Team            F1 (entity)   F1 (surface form)
  Arcada             39.98          37.77
  Drexel-CCI         26.30          25.26
  FLYTXT             38.85          36.31
  MIC-CIS            37.06          34.25
  SJTU-Adapt         40.42          37.62
  SpinningBytes      40.78          39.33
  UH-RiTUAL          41.86          40.24

You can also download the submissions:

Shared Task Organizers