Viviane T. Silva, Rodrigo Neumann Barros Ferreira, et al.
ACS Fall 2024
We describe a large, high-quality benchmark for the evaluation of Mention Detection tools. The benchmark contains annotations of both named entities and other types of entities, annotated over different types of text ranging from clean Wikipedia text to noisy spoken data. The benchmark was built through a highly controlled crowdsourcing process to ensure its quality. We describe the benchmark, the process, and the guidelines used to build it, and then report the results of a state-of-the-art system on the benchmark.
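The abstract does not spell out the scoring protocol, but mention detection systems are conventionally compared against such a benchmark with span-level exact-match precision, recall, and F1. The sketch below illustrates that standard scoring scheme; the function and data are hypothetical and are not taken from the paper.

```python
# Minimal sketch of span-level exact-match scoring for mention detection.
# All names and the example data are illustrative assumptions; the paper's
# actual evaluation protocol may differ.

from typing import Dict, List, Tuple

# A mention is represented as (start_offset, end_offset, entity_type).
Mention = Tuple[int, int, str]

def score_mentions(gold: List[Mention], predicted: List[Mention]) -> Dict[str, float]:
    """Compute exact-match precision, recall, and F1 over mention spans."""
    gold_set = set(gold)
    pred_set = set(predicted)

    # A prediction counts only if span boundaries and type all match exactly.
    true_positives = len(gold_set & pred_set)
    precision = true_positives / len(pred_set) if pred_set else 0.0
    recall = true_positives / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

if __name__ == "__main__":
    # Gold annotations for the sentence "Ada Lovelace visited London."
    gold = [(0, 12, "PERSON"), (21, 27, "LOCATION")]
    # A system that finds the person but gets the location span wrong.
    predicted = [(0, 12, "PERSON"), (13, 27, "LOCATION")]
    print(score_mentions(gold, predicted))  # precision = recall = f1 = 0.5
```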
Gabriele Picco, Lam Thanh Hoang, et al.
EMNLP 2021
Avishai Gretz, Alon Halfon, et al.
EMNLP 2023
Thomas Bohnstingl, Ayush Garg, et al.
ICASSP 2022