In Flanders, only about 13,000 people can communicate in Flemish sign language (Vlaamse Gebarentaal, VGT). For many of those people, VGT is their first language.
Since most hearing people do not understand sign language, signers and non-signers mostly communicate through interpreters or through written language. Neither is practical for day-to-day interaction or for getting to know each other informally: interpreters are only available by appointment and must be paid, and not all signers are equally fluent in written communication.
If each person could communicate in the language they are most comfortable with, communication would become much easier. In the European SignON project, we leverage machine learning and AI to automatically translate between different European sign languages and different spoken or written languages. While automatic translation from or into sign languages for open, informal communication remains a far-off goal, we believe that a first step, communication in specific use cases or scenarios, is feasible within the next few years.
The development of the SignON platform is user-driven, with strong participation from native signers and the deaf communities of different countries. On the technical side, IDLab-AIRO investigates the use of deep learning techniques to create a sign language recognition and understanding system; other partners contribute the translation and language generation components.
Sign languages are complex visual languages with specific properties that are not present in spoken or written languages. To understand these properties and incorporate them into our model development, several experts in sign language linguistics are also involved in the project.
From a deep learning perspective, a further difficulty is that only very small labeled datasets are available, at least compared to those for speech recognition and natural language processing of written text. Furthermore, sign languages have their own grammars and dialects. This makes sign language recognition and translation a very challenging, and very exciting, problem from the perspective of data efficiency.
At IDLab-AIRO, our current research builds on the pioneering work of one of our alumni, Dr. Lionel Pigou. His research into the use of convolutional neural networks for sign language recognition is still highly cited in the field today.
Our goal is to use domain and task knowledge to increase the performance of sign language recognition models to the point of usability.
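As a rough illustration of this framing, isolated sign recognition can be viewed as a video classification task: encode each frame into a feature vector, pool those features over time, and classify the pooled clip representation. The sketch below is a toy, pure-Python version of that pipeline; the random linear "frame encoder", the feature sizes, and the class count are all placeholder assumptions for illustration, not our actual models or the SignON system.

```python
import random

# Toy sketch: isolated sign recognition as video classification.
# All shapes, weights, and the linear "encoder" are illustrative
# placeholders, not the SignON or IDLab-AIRO pipeline.

random.seed(0)

NUM_CLASSES = 5   # hypothetical vocabulary of isolated signs
FEAT_DIM = 8      # per-frame feature size (e.g. pose keypoints)
FRAME_PIXELS = 32 * 32

def extract_frame_features(frame, proj):
    # Stand-in for a per-frame encoder (in practice a CNN or a pose
    # estimator): a fixed linear projection of the flattened frame.
    return [sum(p * f for p, f in zip(row, frame)) for row in proj]

def classify_sign(video, proj, w_cls):
    # 1. Encode each frame, 2. average-pool over time, 3. classify.
    feats = [extract_frame_features(frame, proj) for frame in video]
    clip = [sum(col) / len(feats) for col in zip(*feats)]
    logits = [sum(w * c for w, c in zip(row, clip)) for row in w_cls]
    return max(range(NUM_CLASSES), key=lambda i: logits[i])

# Random projection and classifier weights (untrained, for shape only).
proj = [[random.gauss(0, 0.01) for _ in range(FRAME_PIXELS)]
        for _ in range(FEAT_DIM)]
w_cls = [[random.gauss(0, 1) for _ in range(FEAT_DIM)]
         for _ in range(NUM_CLASSES)]

# A dummy "video" of 16 flattened 32x32 frames.
video = [[random.gauss(0, 1) for _ in range(FRAME_PIXELS)]
         for _ in range(16)]
pred = classify_sign(video, proj, w_cls)
print("predicted sign class:", pred)
```

In real systems the average pooling step is usually replaced by a temporal model (recurrent, temporal-convolutional, or self-attention based), which is where much of the modeling effort in the publications below goes.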
- Isolated sign recognition from RGB video using pose flow and self-attention. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2021.
- Frozen pretrained transformers for neural sign language translation. In Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL), 2021.
- Sign language recognition with transformer networks. In 12th International Conference on Language Resources and Evaluation (LREC 2020), Proceedings, 2020.
- Towards automatic sign language corpus annotation using deep learning. In 6th Workshop on Sign Language Translation and Avatar Technology, Proceedings, 2019.
- Gesture and sign language recognition with temporal residual networks. In 2017 IEEE International Conference on Computer Vision Workshops (ICCVW 2017), 2017.
- Sign classification in sign language corpora with deep neural networks. In International Conference on Language Resources and Evaluation (LREC), Workshop, Proceedings, 2016.