Wasserstein barycenter model ensembling
Pierre Dognin, Igor Melnyk, et al.
ICLR 2019
Image captioning has recently demonstrated impressive progress, largely owing to the introduction of neural network algorithms trained on curated datasets such as MS-COCO. Work in this field is often motivated by the promise of deploying captioning systems in practical applications. However, the scarcity of data and contexts in many competition datasets limits the utility of systems trained on them as assistive technologies in real-world settings, such as helping visually impaired people navigate and accomplish everyday tasks. This gap motivated the introduction of the novel VizWiz dataset, which consists of images taken by visually impaired people and captions that contain useful, task-oriented information. To help the machine learning and computer vision communities realize their promise of producing technologies with positive social impact, the curators of the VizWiz dataset host several competitions, including one for image captioning. This work details the theory and engineering behind our winning submission to the 2020 captioning competition. Our work provides a step towards improved assistive image captioning systems.