Sign Language Computational Linguistics
About this research line
We research sign language processing in collaboration with (among others) the Flemish Sign Language community, to enable co-created AI-driven sign language technology.
In Flanders, only about 13,000 people can communicate in Flemish Sign Language (Vlaamse Gebarentaal, VGT). For many of those people, VGT is their preferred language.
Since most hearing people do not understand sign language, signers and non-signers mostly communicate through interpreters or through written language. Neither is practical for day-to-day interaction or for getting to know each other informally: interpreters are only available by appointment and must be paid, and not all signers are equally fluent in written communication.
If everyone could communicate in the language they are most comfortable with, communication would become much easier. In the European SignON project, we leveraged machine learning and AI to automatically translate between different European sign languages and different spoken or written languages. While automatic translation from or into sign languages for open, informal communication remains a distant goal, we believe that a first step, supporting communication in specific use cases or scenarios, is feasible within the next few years.
The whole SignON platform development is user-driven, with strong participation of native signers and the deaf communities from different countries. For the technical development, IDLab-AIRO investigated deep learning techniques to create a sign language recognition and understanding system. Other partners contributed the translation and language generation components.
Sign languages are complex visual languages with specific properties that are not present in spoken or written languages. To understand these properties and incorporate them in our model development, several experts in sign language linguistics are also involved in the project.
From a deep learning perspective, another difficulty is that only very small labeled datasets are available, at least compared to those for speech recognition and natural language processing of written text. Furthermore, sign languages have their own grammar and dialects. This makes sign language recognition and translation a challenging and exciting problem from the perspective of data efficiency.
At IDLab-AIRO, our current research builds on the pioneering work of one of our alumni, Dr. Lionel Pigou. His research into the use of convolutional neural networks for sign language recognition is still highly cited in the domain today.
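This line of work treats sign language recognition as a video classification problem. As a rough illustration of that general idea (not Pigou's actual architecture), the sketch below runs a tiny convolutional filter bank over the frames of a clip, max-pools the responses over space and time into a fixed-size feature vector, and scores a handful of sign classes with a linear layer. All data, filters, and weights are random stand-ins for what a trained network would learn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input: a clip of 8 grayscale frames of 16x16 pixels (hypothetical data).
clip = rng.standard_normal((8, 16, 16))

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D convolution (cross-correlation) of a single frame."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Four random 3x3 filters stand in for one learned convolutional layer.
kernels = rng.standard_normal((4, 3, 3))

# Per-frame convolution + ReLU: shape (filters, frames, height, width).
feats = np.array([
    [np.maximum(conv2d_valid(frame, k), 0.0) for frame in clip]
    for k in kernels
])

# Global max-pooling over time and space gives one value per filter,
# i.e. a fixed-size clip descriptor regardless of clip length.
feature_vec = feats.max(axis=(1, 2, 3))

# Linear classifier over 3 hypothetical sign classes (random weights).
W = rng.standard_normal((3, 4))
scores = W @ feature_vec
pred = int(np.argmax(scores))
```

A real recognition model would learn the filters and classifier weights from labeled clips, stack many such layers, and typically convolve over time as well as space; the pooling step above shows why convolutional features can cope with clips of varying length.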
Our goal is to use domain and task knowledge to increase the performance of sign language recognition models to the point of usability.