Will sign language avatars ever replace sign language interpreters?
As a consortium, we are aware of possible sensitivities in the deaf and sign language communities to the automatic translation of spoken and sign languages, as well as to the use of sign language avatars. It is not the intention of our SignON App to replace human interpreters. As a consortium member, EUD continuously conducts surveys and interviews with members of the deaf and hard of hearing (DHH) community in order to assess and better understand the concerns and expectations of these communities, including the settings in which they would or would not use the SignON service (e.g. brief communications, talking to a doctor in a hospital, watching television shows, etc.). We strongly believe that human sign language interpreters are extremely valuable and can never be fully replaced by sign language avatars. However, we are convinced that the SignON App could be an alternative tool for situations where one cannot, or does not want to, rely on a human interpreter. It would not be an obligation, but an option for the user, broadening the spectrum of communication media at their disposal.
Why does the SignON project only work with the languages English, Irish, Dutch, Spanish, BSL, ISL, NGT, VGT and LSE?
These languages have been chosen for the development of the SignON application and framework due to the expertise of the consortium partners. During the three years of EU project funding, the use of these languages will showcase the applicability and usefulness of the SignON framework. The plan is that this framework will be extensible, allowing new languages (sign and spoken) to be easily integrated in future versions.
Why isn’t sign language universal?
Sign languages, just like spoken languages, have evolved naturally. They developed out of human contact within deaf communities: in deaf schools, deaf clubs, etc. Since these concentrations of deaf people occurred at a regional level, unique "local" sign languages emerged worldwide, each with its own conventional lexicon and its own grammar rules. These sign languages were later recognised at a national level, and most of them are named after the country where they originated, e.g. British Sign Language, Dutch Sign Language and Irish Sign Language. However, even within national borders there can be several recognised sign languages, such as Flemish Sign Language (VGT) and French Belgian Sign Language (LSFB) in Belgium. Spain has Spanish Sign Language (LSE) and Catalan Sign Language (LSC).
Why don't all deaf or hard of hearing people use sign language?
First of all, it is important to note that being deaf is not necessarily related to sign language and vice versa. There are deaf people who do not use sign language and communicate orally. There are also hearing people who have sign language as their first language, for example, if they have deaf parents. There are also deaf people who can express themselves perfectly in spoken/written languages as well as in sign languages (bilingual). The reasons why a deaf or hard of hearing person does not use sign language can be very diverse. It can be dependent on the person themselves, such as not being interested or skilled enough, or not needing to because they can communicate orally. Many deaf or hard of hearing people who experience hearing loss at a later age do not choose to learn sign language. External factors also play a role in this; for example, the former ban on sign language in deaf education, parental choice, no access to sign language resources, society's view on sign languages, etc.
Does the SignON project focus only on translations between spoken languages and sign languages?
No, SignON covers several languages and modalities, including text, speech and sign language, and the translation between all of them. For example, a piece of Dutch text can be translated into Irish Sign Language. Flemish Sign Language can be translated into British Sign Language, which can then be translated into spoken Spanish, which in turn can be translated into Dutch text. All combinations of these modalities and languages will be facilitated. Even a deaf or hard of hearing person with atypical speech can use the SignON application.
Could an avatarised version of the signed videos help with privacy issues?
One of the advantages that a sign language avatar can offer is that it can be used to anonymise signed messages that a signer wants to send. Suppose you want to send a complaint in your national sign language, but you do not want people to see and recognise your face. You can then use an avatar that copies your signing but has a different appearance. It is important to note that the SignON application is a translation application that provides automatic translation from one language to another (both spoken and signed), so you cannot directly use it to create an avatar version of yourself. The underlying technology, developed by the UPF/GTI research group, could be extended to do so, but this is outside the scope of the SignON project.
Do I need an internet connection to use the SignON application?
While the SignON application runs on your phone, it depends on powerful AI models that a normal smartphone cannot support. These run in the cloud, which provides the necessary computational power. For this reason, you need an internet connection to connect to the SignON framework, which runs the AI models and delivers the output to you via the SignON App.
Is the SignON application also accessible for deafblind people?
It is important to note here that a deafblind person is someone with a combination of (partial) sight and hearing loss. There are deafblind individuals who have enough vision to move around and see the faces of their interlocutors. Under their personal ideal conditions (such as sufficient lighting, a good distance, their own viewing options, etc.) they can understand the sign languages of their interlocutors, with or without the use of (pro)tactile signing. In our co-creation events we survey deafblind people to take their needs into account, while recognising that every deafblind person is different and not all will experience the application in the same way. The appearance of the sign language avatar in the SignON application can be adjusted to personal preferences, for example by changing colours and contrast (e.g. clothes, skin colour, background, etc.) to suit use by a deafblind person.
Who can access the application now?
The SignON App can be accessed by “SignON Authorised Users” who are nominated by partners in the SignON project.

The current SignON Mobile App prototype is simple but powerful. Running on Android and Apple phones, it provides text-to-speech translations and indicates how SignON sign language translation and presentation might look in the future. The project still has a lot of research to do, but the current prototype provides early SignON features, so that authorised users can see, hold and experience the tangible mobile App and are thus empowered to provide realistic feedback on what they need from it.

Please contact a project partner if you wish to become a SignON Authorised User.

Where can I download the SignON app?
The current prototype SignON Mobile App is available to “SignON Authorised Users” for both Android and iOS mobile devices on the Google Play Store and Apple App Store, as “SignONMobile”.

A “SignON Authorised User” is described above.

Are avatars personalisable?
We are continuously developing and improving our avatar. At the moment the avatar can be rotated for a 360-degree viewpoint and zoomed in and out for a better view, and its clothes and background colours can be changed.
What technology do you use for machine translation (MT)?
MT for sign languages is a very broad topic. To recognise sign language, we use neural models built with a large amount of generic pose images and videos, which we adapt to the use case of sign language. To translate the output of this first step, we use a multilingual neural model that we call the InterL. This model is based on the well-known mBART [https://arxiv.org/abs/2001.08210], which was originally built for spoken languages and which we have fine-tuned for better performance on the sign and spoken languages and the use cases we want to address in SignON. To generate audio, we use text-to-speech software from the Acapela Group (www.acapela-group.com); to generate sign language, a series of linguistically motivated transformations use information drawn from the InterL to drive our avatar, one utterance at a time.
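The pipeline described above can be sketched as three stages: recognition, InterL translation, and synthesis. The following is a minimal illustrative sketch of that structure only; all function and class names here are hypothetical placeholders, not the project's real API, and the real system runs large neural models in the cloud rather than these stubs.

```python
from dataclasses import dataclass


@dataclass
class Utterance:
    language: str   # e.g. "VGT", "en", "es"
    modality: str   # "sign", "speech" or "text"
    content: str    # textual stand-in for the recognised/translated content


def recognise(video_frames, source_language):
    """Stage 1 (hypothetical stub): map pose video to a textual representation.

    A real system would run a neural sign language recognition model here.
    """
    return Utterance(source_language, "sign", " ".join(video_frames))


def interl_translate(utterance, target_language):
    """Stage 2 (hypothetical stub): the InterL multilingual translation model.

    Stands in for a fine-tuned mBART-style encoder-decoder.
    """
    return Utterance(target_language, "text",
                     f"[{target_language}] {utterance.content}")


def synthesise(utterance, target_modality):
    """Stage 3 (hypothetical stub): TTS audio or avatar signing."""
    return Utterance(utterance.language, target_modality, utterance.content)


def translate(video_frames, source_language, target_language, target_modality):
    # The three stages are chained one utterance at a time.
    recognised = recognise(video_frames, source_language)
    translated = interl_translate(recognised, target_language)
    return synthesise(translated, target_modality)


result = translate(["HELLO", "HOW", "YOU"], "VGT", "en", "speech")
print(result.language, result.modality)  # en speech
```

The point of the sketch is the separation of concerns: the InterL sits between recognition and synthesis, so adding a new language only requires adapting the models at each end, not rebuilding the whole pipeline.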
What is the name sign for SignON?
Watch the video to discover the name sign. It is a combination of two signs, 'SIGN' (mouth movement articulates 'sign' in English) and 'AVATAR' (mouth movement articulates 'on' in English). This name sign was proposed by the deaf community, alongside two other suggestions. Subsequently, these three name signs were presented to the deaf communities for voting, and the current name sign received the majority of votes.
Why was the SignON logo designed this way?
The logo was crafted by a deaf designer from the UK, Timur Mo. Given that the SignON project primarily focuses on creating a translation application between spoken/written and sign languages, a logo reminiscent of Google Translate was chosen. It features icons representing sign language and spoken language. Turquoise was selected because it is the symbolic colour for deaf communities worldwide. The SignON name logo also incorporates a power button, symbolising the intention that individuals can choose whether or not to use the application. It aims to be an additional tool in the array of communication options available to deaf and hard of hearing individuals, without replacing other communication methods, such as using sign language interpreters in person.
What are the outcomes of the three-year SignON project?
From the beginning, the SignON project focused on two complementary areas: (i) the technical side, i.e. the development of a mobile application for translation between signed and spoken languages and the related underlying research, and (ii) working with the stakeholder communities, i.e. co-creation. By the end of the project, on the one hand, we have established a fruitful co-creation framework that adheres strictly to privacy, ethical and legal considerations, and we have developed numerous models (for sign language recognition, natural language understanding, machine translation, synthesis, etc.), all the while pushing the state of the art in these fields and pipelines further. We have also created new data and processed existing data to make it more suitable for AI, and developed an application and a framework that use these models. Further information on the results is provided in our public deliverables. On the other hand, we have learned a lot. While the translation quality is nowhere near satisfactory, we have understood the factors and reasons behind this and know what is required to move beyond the current state of the art. We also learned how to collaborate effectively in such a diverse team. Finally, SignON enlarged the network of researchers and practitioners involved in sign language research.
What happens after the SignON project ends?
The extensive research conducted within SignON raised many new questions and ideas. Firstly, after SignON, different subgroups will continue their work on developing better models for recognition, synthesis, translation, etc. This will be possible not only for researchers from the SignON consortium but also for others, as the framework and code will be freely available as open-source software, and we will continue collaborating through the established network. Secondly, we will address one of the major issues for AI today: data. We will continue collecting new data and processing existing data to provide as many resources as possible for further development. We will also continue to organise events that promote work on sign languages, and sign languages themselves as linguistic and cultural treasures. Finally, we have drawn up a two-year plan for advancing the SignON application into a viable product.

About SignON

SignON is a user-centric and community-driven project that aims to facilitate the exchange of information among deaf, hard of hearing and hearing individuals across Europe, targeting the Irish, British, Dutch, Flemish and Spanish sign languages as well as the English, Irish, Dutch and Spanish spoken languages.
This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 101017255.