
A choral effort, bringing together many hands and many minds, culminated on 10 June 2025 at ENS Milano (Ente Nazionale Sordi Milano) with an event dedicated entirely to ai4Sign.

We had the privilege of hearing first-hand stories from professionals who remember what came long before ai4Sign, and who retraced with us some technological turning points: crucial moments that redefined accessibility and opened up new possibilities for deaf people.

There was talk of “technological empathy” and of the desire to embrace every nuance these tools offer, making the most of each feature. The focus then shifted to design and usability, and to the conviction that designers must be able to put themselves in the shoes of those who will use a given product.

The aim of ai4Sign is to give visibility to the deaf community without leaving anything to chance.

The intention is therefore to be able to rely on this tool in any situation where no interpreter is present to provide sign language translation. The decision to extend ai4Sign to other groups of people (elderly people, people with temporary or situational disabilities…) leaves no doubt about our goal: to facilitate communication for as many people as possible.

The meeting also revealed some unexpected background stories, such as the raise-hand feature in Teams calls: a solution devised by Microsoft to give deaf people a way to take the floor. An adaptation created ad hoc for a specific case that then became indispensable for everyone, especially in meetings with many items on the agenda.

From a technical point of view, the presentation of ai4Sign started from some foundations of the project (sign plan, variation analysis, sign analysis), and then introduced the concepts of neural networks and generative AI as tools that let machines understand natural language.

The ai4Sign project began in 2023 with the aim of addressing communication barriers for deaf people. In 2024 the project won the “AI for Inclusion” hackathon organised by Microsoft, which led to the creation of a dedicated dataset, and in 2025 the first avatars and a Proof of Concept (POC) were developed.

The technology behind ai4Sign is based on the interpretation of signs through a transcription and avatar system. The process begins with the collection and coding of signs, which are entered into a dedicated dataset; here, the analysis of variations and the filtering of noise (superfluous frames, and areas of the image not strictly related to the sign) ensure greater accuracy. A neural network trained on this dataset then interprets the signs, constantly refining the accuracy of the model. Finally, the transcription system converts the signs into text, while the avatar reproduces them in LIS (Italian Sign Language), thus facilitating communication.
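The pipeline described above — collect and code signs, filter out noise frames, match a clip against the dataset, then transcribe — can be sketched in a few lines of Python. Everything here is an illustrative assumption: the toy dataset, the 2-D hand-position features, the motion-based noise filter, and the nearest-match scoring (a crude stand-in for the project's trained neural network) are not ai4Sign's actual data or models.

```python
import math

# 1. Collection and coding (assumed representation): each sign is
#    coded as a sequence of frames, simplified here to 2-D
#    hand-position features. Glosses are hypothetical examples.
DATASET = {
    "CIAO":   [(0.1, 0.9), (0.4, 0.8), (0.7, 0.9)],
    "GRAZIE": [(0.5, 0.2), (0.5, 0.5), (0.5, 0.8)],
}

def filter_noise(frames, min_motion=0.05):
    """Drop superfluous frames: keep a frame only if it moved
    enough relative to the previously kept frame."""
    kept = [frames[0]]
    for f in frames[1:]:
        if math.dist(f, kept[-1]) >= min_motion:
            kept.append(f)
    return kept

def clip_distance(a, b):
    """Average frame-to-frame distance between two clips, compared
    over the shorter clip (stand-in for the model's scoring)."""
    n = min(len(a), len(b))
    return sum(math.dist(a[i], b[i]) for i in range(n)) / n

def recognise(frames):
    """Match a noise-filtered clip against the coded dataset and
    return the gloss of the closest sign."""
    clip = filter_noise(frames)
    return min(DATASET, key=lambda g: clip_distance(clip, DATASET[g]))

def transcribe(clips):
    """Transcription step: convert a sequence of sign clips into text."""
    return " ".join(recognise(c) for c in clips)

# A clip resembling "CIAO" with one duplicated (superfluous) frame.
noisy_clip = [(0.1, 0.9), (0.1, 0.9), (0.4, 0.8), (0.7, 0.9)]
print(transcribe([noisy_clip]))  # → CIAO
```

In the real system each stage would be far richer (video features, a trained network, continuous signing), but the flow — filter, recognise, transcribe — mirrors the steps listed in the paragraph above.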

Application areas

ai4Sign can be used in several contexts:

  • Collaboration platforms: the avatar can interpret meeting audio into LIS and generate subtitles.
  • Kiosks and information points: the system can present content or interact directly with users.
  • Customer support: the avatar can communicate with deaf users where no physical counter is available.
  • Integration into sites and chatbots: the platform can be integrated to render documents in LIS.

There were many questions from those present, who sought common ground with us and showed enormous attention and care for our project. What satisfies us most about the work done so far is the response from deaf people themselves, who thanked us for placing them at the centre and involving them in shaping many aspects of ai4Sign.

We thank EVERYONE present and every suggestion shared with us, even from afar, even from those who could not be present. The greatest achievement that we want to see realised in the coming months is a real and practical inclusion of those who face communication difficulties every day, because not being able to communicate freely makes one excluded, and exclusion is something that does not belong to us.