Best Paper Award at TALN 2019 for GETALP

At the 26th conference on Natural Language Processing (TALN 2019), held in Toulouse from 1 to 5 July and organised jointly with the Plateforme d'Intelligence Artificielle, Loïc Vial, Benjamin Lecouteux and Didier Schwab received the best paper award for Compression de vocabulaire de sens grâce aux relations sémantiques pour la désambiguïsation lexicale (sense vocabulary compression through semantic relations for word sense disambiguation).
The paper presents an original method that compensates for the lack of high-quality annotated data and achieves results that clearly surpass the state of the art on all evaluation tasks for word sense disambiguation.
As a reminder, word sense disambiguation (WSD) is the task of clarifying a text by assigning to each of its words the most appropriate sense label from a predefined sense inventory. For example, in the sentence La souris mange le fromage ("the mouse eats the cheese"), the word souris should be given its rodent sense rather than its electronic-device sense. The authors exploit this work in several natural language processing applications, such as machine translation, and in the design of tools for alternative communication, for instance for people with little or no command of the language or people with multiple disabilities.
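For readers unfamiliar with the task, the sketch below shows what a WSD system is asked to do, using NLTK's WordNet interface and the simplified Lesk baseline on the English counterpart of the example above. It is only a generic illustration, not the method of the awarded paper.

```python
# Minimal WSD illustration with the simplified Lesk baseline (not the awarded method).
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.wsd import lesk

# English counterpart of the example sentence: "The mouse eats the cheese".
context = "the mouse eats the cheese".split()

# Lesk selects the WordNet synset whose gloss overlaps most with the context.
sense = lesk(context, "mouse", pos="n")
if sense is not None:
    print(sense.name(), "->", sense.definition())
# A good WSD system prefers the rodent sense over the electronic-device sense here;
# Lesk is a weak baseline, so its prediction is not guaranteed.
```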

Two seminars by Steven Bird in January 2019

Wednesday 9 January at 2pm – room 306, IMAG building
Scalable Methods for Working with Unwritten Languages 1: Interactive Respeaking
Monday 14 January at 2pm – room 306, IMAG building
Scalable Methods for Working with Unwritten Languages 2: Talking about Places and Processes
Steven Bird, Charles Darwin University
Computational methods offer exciting new possibilities for recording and processing low-resource languages. If we are to extend these methods to encompass all languages, we run into a problem: most languages are unwritten. Existing attempts encounter the “transcription bottleneck”: it is extremely onerous to transcribe audio in a language that has no established orthography. These talks describe two new ways to address this bottleneck, by rethinking the tasks and the end products in the light of the capacities and motives of speakers, and of the requirements of the speech and language processing pipeline. Both talks present work in progress, based in a remote Indigenous community in Australia.

ICPhS 2019 Special Session

Welcome

This is the web page for the Computational Approaches for Documenting and Analyzing Oral Languages special session at ICPhS 2019, the International Congress of Phonetic Sciences, 5-9 August 2019, Melbourne, Australia.

Summary

The special session Computational Approaches for Documenting and Analyzing Oral Languages welcomes submissions presenting innovative speech data collection methods and/or assistance for linguists and communities of speakers: methods and tools that facilitate the collection, transcription and translation of primary language data. “Oral languages” is understood here as referring to spoken vernacular languages that depend on oral transmission, including endangered languages and (typically low-prestige) regional varieties of major languages.

The special session intends to provide up-to-date information to an audience of phoneticians about developments in machine learning that make it increasingly feasible to automate segmentation, alignment or labelling of audio recordings, even in less-documented languages. A methodological goal is to help establish the field of Computational Language Documentation and contribute to its close association with the phonetic sciences. Computational Language Documentation needs to build on the insights gained through phonetic research; conversely, research in phonetics stands to gain much from the availability of abundant and reliable data on a wider range of languages.
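As a concrete, deliberately simple illustration of the kind of automation mentioned above, the sketch below uses the librosa library to split a recording into speech-like segments based on signal energy. A real computational language documentation pipeline would rely on trained acoustic models instead, and the file name here is only a placeholder.

```python
# Toy energy-based segmentation of a recording; far simpler than the
# machine-learning approaches discussed in the session.
# Requires: pip install librosa soundfile
import librosa

# 'recording.wav' is a placeholder path, not a file associated with this page.
audio, sr = librosa.load("recording.wav", sr=16000)

# Split on low-energy regions; top_db controls how aggressively silence is trimmed.
intervals = librosa.effects.split(audio, top_db=30)

for start, end in intervals:
    print(f"segment: {start / sr:.2f}s to {end / sr:.2f}s")
```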

Our special session is mentioned on the ICPhS website here. You can find a poster of this session here.

Main goals

The special session aims to bring together phoneticians, computer scientists and developers interested in the following goals:

  • Rethinking documentary processes: recording, transcription and annotation;
  • Responding to the urgent need to document endangered languages and varieties;
  • Elaborating an agenda and establishing a roadmap for computational language documentation;
  • Ensuring that the requirements of phonetics research are duly taken into consideration in the agenda of Computational Language Documentation;
  • Attracting computer scientists to ICPhS and engaging them in discussions with phoneticians (and linguists generally).

Main topics

This special session will focus on documenting and analyzing oral languages, including topics such as the following:

  • large-scale phonetics of oral languages,
  • automatic phonetic transcription (and phonemic transcription),
  • mobile platforms for speech data collection,
  • creating multilingual collections of text, speech and images,
  • machine learning over these collections,
  • open source tools for computational language documentation,
  • position papers on computational language documentation.

Session format

Special sessions at ICPhS normally last 1.5 hours. For our accepted special session, we chose the “workshop” type, with a more open format suitable for discussion of methods and tools. The exact format is still to be determined; more details will be provided later.

How does the submission process work?

Papers should be submitted directly to the conference by December 4th and will then be evaluated according to the standard ICPhS review process. Accepted papers will be allocated either to this special session or to a general session. When submitting, you can specify whether you want to be considered for this special session.

Organizers

Laurent Besacier – LIG UGA (France)
Alexis Michaud – LACITO CNRS (France)
Martine Adda-Decker – LPP CNRS (France)
Gilles Adda – LIMSI CNRS (France)
Steven Bird – CDU (Australia)
Graham Neubig – CMU (USA)
François Pellegrino – DDL CNRS (France)
Sakriani Sakti – NAIST (Japan)
Mark Van de Velde – LLACAN CNRS (France)

Endorsement

This special session is endorsed by SIGUL, the joint ELRA and ISCA Special Interest Group on Under-resourced Languages.

Team seminar by Pedro Chahuara, Thursday 18 May at 2pm

Online Mining of Web Publisher RTB Auctions for Revenue Optimization
Online Mining of Web Publisher RTB Auctions for Revenue Optimization
In the online advertising market there are two main actors: the publishers, who offer advertising space on their websites, and the advertisers, who compete in an auction to show their advertisements in the available spaces. When a user accesses a website, an auction starts for each ad space: the user's profile is given to the advertisers, and each of them submits a bid to show an ad to that user. The publisher sets a reserve price, the minimum value at which it accepts to sell the space.
In this talk I will introduce a general setting for this ad market and present an engine to optimize publisher revenue from second-price auctions, which are widely used to sell online ad spaces in a mechanism called real-time bidding. The engine is fed with a stream of auctions in a time-varying environment (non-stationary bid distributions, new items to sell, etc.) and predicts in real time the optimal reserve price for each auction. This problem is crucial for web publishers, because setting an appropriate reserve price on each auction can significantly increase their revenue.
I consider here a realistic setting where the only available information consists of a user identifier and an ad placement identifier. Once the auction has taken place, we observe censored outcomes: if the auction has been won (i.e. the reserve price is smaller than the first bid), we observe the first bid and the closing price of the auction; otherwise we do not observe any bid value.
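To make the mechanism and the censorship concrete, here is a small sketch, with made-up bid values, of how a single second-price auction with a reserve price plays out and what the publisher gets to observe. It illustrates the setting only, not the speaker's engine.

```python
def run_auction(bids, reserve):
    """One second-price auction with a reserve price; returns (revenue, observation)."""
    bids = sorted(bids, reverse=True)
    first = bids[0]
    second = bids[1] if len(bids) > 1 else 0.0
    if first > reserve:
        # Auction won: the winner pays the larger of the second bid and the reserve.
        closing_price = max(second, reserve)
        # Observed outcome: the first bid and the closing price.
        return closing_price, {"won": True, "first_bid": first, "closing_price": closing_price}
    # Auction lost: no bid value is observed (censored).
    return 0.0, {"won": False}

print(run_auction([2.1, 1.4, 0.8], reserve=1.0))  # won, revenue 1.4
print(run_auction([2.1, 1.4, 0.8], reserve=1.8))  # won, revenue 1.8
print(run_auction([0.9, 0.5], reserve=1.0))       # lost, revenue 0.0
```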
The proposed approach combines two key components: (i) a non-parametric regression model of auction revenue based on dynamic, time-weighted matrix factorization, which implicitly builds adaptive user and placement profiles; (ii) a non-parametric model to estimate the revenue under censorship, based on an online extension of Aalen's additive model.
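As a rough sketch of what a dynamic, time-weighted matrix factorization of auction revenue could look like, the snippet below down-weights older auctions exponentially when updating user and placement latent profiles. The update rule and all hyper-parameters are illustrative assumptions, not the speaker's actual model, and the censorship component is omitted.

```python
import numpy as np

# Illustrative time-weighted matrix factorization for (user, placement) -> revenue.
rng = np.random.default_rng(0)
n_users, n_placements, k = 1000, 50, 8
U = 0.1 * rng.standard_normal((n_users, k))       # latent user profiles
P = 0.1 * rng.standard_normal((n_placements, k))  # latent placement profiles

lr, reg, half_life = 0.05, 0.01, 3600.0  # learning rate, L2 penalty, decay (seconds)

def update(user, placement, revenue, age_seconds):
    """One SGD step whose weight decays exponentially with the auction's age."""
    w = 0.5 ** (age_seconds / half_life)  # time weight in (0, 1]
    u, p = U[user].copy(), P[placement].copy()
    err = revenue - u @ p                 # prediction error on observed revenue
    U[user] += lr * w * (err * p - reg * u)
    P[placement] += lr * w * (err * u - reg * p)

# A recent auction influences the profiles more than an hour-old one.
update(user=42, placement=7, revenue=1.4, age_seconds=10.0)
update(user=42, placement=7, revenue=0.9, age_seconds=3600.0)
print(U[42] @ P[7])  # current revenue estimate for this (user, placement) pair
```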