Europe Diplomatic Magazine

© Promobot

The Robo-C Project has developed a new dialogue system for its human-like service robot. Previously, its robots relied on a question-answer system called the language base. With the new dialogue system, Robo-C recognizes questions, clusters them by topic, and generates responses with neural networks.

The new dialogue system is built on finite automata. It is a linguistic system that performs natural language processing at three levels: analysis of named entities, of user intents, and of a wide array of topics.
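To make the idea concrete, here is a minimal, hypothetical sketch of such a system: each utterance is analysed at the three levels named above (entities, intent, topic), and the intent drives a transition in a small finite automaton. All function names, keywords, and transition rules are invented for illustration; they are not Promobot's actual code.

```python
import re

def analyze(utterance: str) -> dict:
    """Toy three-level analysis: named entities, user intent, topic."""
    # Level 1: named entities (illustrative airport-style patterns)
    entities = re.findall(r"\b(?:Gate \d+|Terminal [A-Z])\b", utterance)
    # Levels 2 and 3: intent and topic via simple keyword rules
    text = utterance.lower()
    if any(w in text for w in ("ticket", "bus", "train")):
        intent, topic = "transport", "travel"
    elif any(w in text for w in ("where", "address")):
        intent, topic = "locations", "navigation"
    else:
        intent, topic = "chat", "general"
    return {"entities": entities, "intent": intent, "topic": topic}

# Finite automaton: (current state, recognized intent) -> next state
TRANSITIONS = {
    ("idle", "transport"): "serving_transport",
    ("idle", "locations"): "serving_locations",
    ("idle", "chat"): "chatting",
}

def step(state: str, utterance: str) -> str:
    """Advance the dialogue automaton by one user utterance."""
    result = analyze(utterance)
    return TRANSITIONS.get((state, result["intent"]), state)

print(step("idle", "Where can I buy a bus ticket?"))  # -> serving_transport
```

The point of the automaton is that the robot's next move depends only on its current state and the analysed utterance, which keeps the dialogue predictable and testable.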

© Promobot

« This is the first time a dialogue system has theme depth based on four dimensions: time, place, topic, and negative or positive context. Unlike voice assistants, the robot’s responses depend on the time and place of the discussion. This is crucial for service robots because their operation depends greatly on their area of work, e.g., an airport, an office, or a children’s center. They must be aware of their surroundings to provide the best service possible, » says Andrew N., Ph.D., head of the dialogue system at the Robo-C Project.
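The four dimensions quoted above can be sketched as a context object that shapes the answer to one and the same question. The place names, hours, and canned responses below are all invented for the example, assuming a coarse positive/negative sentiment flag:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    hour: int       # time of day (dimension 1)
    place: str      # e.g. "airport", "office" (dimension 2)
    topic: str      # current conversation topic (dimension 3)
    positive: bool  # coarse sentiment of the exchange (dimension 4)

def respond(question: str, ctx: Context) -> str:
    """Pick a reply that depends on all four context dimensions."""
    if ctx.place == "airport" and ctx.topic == "food":
        if ctx.hour < 6:
            return "Most cafes open at 6 a.m.; vending machines are on level 1."
        return "The food court is past security, on level 2."
    if not ctx.positive:
        return "I'm sorry about the trouble. Let me call a staff member."
    return "Happy to help! Could you tell me more?"

# Same question, different context -> different answer
print(respond("Where can I eat?", Context(5, "airport", "food", True)))
```

The design choice here mirrors the quote: the robot's reply is a function of where and when it is asked, not just of the words themselves.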

The developers defined ten primary intents for the system’s modules, including ‘transport’ (how to get somewhere, where to buy tickets, etc.) and ‘locations’ (addresses, directions, etc.). Each intent requires two thousand request samples; in all, the data engineers needed 6.5 million lines of requests and eight thousand work hours, or a thousand workdays, to set up the ten intents.
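A rough sketch of how such intent data might be organised follows. The two intent names come from the article; the sample phrases and the dictionary layout are invented for illustration, and the final line simply sanity-checks the stated effort figures assuming 8-hour workdays:

```python
# Illustrative intent -> sample-requests mapping (phrases are made up)
SAMPLES = {
    "transport": [
        "How do I get to the city centre?",
        "Where can I buy a bus ticket?",
        # ... the article cites roughly 2,000 samples per intent
    ],
    "locations": [
        "What is the address of the museum?",
        "How do I get to Terminal B?",
    ],
}

# Sanity-checking the stated budget: 8,000 work hours at 8 hours per day
WORK_HOURS = 8_000
print(WORK_HOURS // 8)  # -> 1000 workdays, matching the article
```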

Each dialogue is split into five modules, or branches. The first branch is simple small talk to make communication comfortable; this dialogue has no specific target. The second branch is a business-specific module that fulfills user requests; it is targeted and excludes random responses. The third branch is a search module that constantly updates the robot’s database.

The “face” of Robo-C can display more than 600 variants of human facial expressions: the robot can move its eyes, eyebrows, lips, neck and “face muscles” © Promobot

The fourth module operates together with the third. When the robot has to evaluate data (‘what’s the most/least expensive,’ ‘where can I go,’ ‘who’s better’), it compares information gathered by the search module. The robot does not need a live Internet connection for this: the underlying data is refreshed continuously whenever the robot is online. The last module connects directly to the Robo-C Project’s language database, a repository of phrases and syntheses in 11 different languages that the company has been developing for the last five years.
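A hypothetical sketch of the fourth module's comparison step: evaluative questions are answered against a locally cached dataset that the search module refreshes whenever the robot is online, so no live connection is needed at question time. The cafe names and prices are invented for the example:

```python
# Local cache, assumed to be refreshed by the search module when online
CACHED_CAFES = [
    {"name": "Cafe A", "avg_price": 12.0},
    {"name": "Cafe B", "avg_price": 7.5},
    {"name": "Cafe C", "avg_price": 9.0},
]

def cheapest(entries: list[dict]) -> str:
    """Answer a 'least expensive' question by comparing cached entries."""
    return min(entries, key=lambda e: e["avg_price"])["name"]

print(cheapest(CACHED_CAFES))  # -> Cafe B
```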

« Most voice assistants operate like just one of our modules, the first one, responsible for chatting, » says a Robo-C Project representative. « Whenever you cut off the Internet, your assistant shuts down. Our voice assistant works offline and is here to fulfill people’s specific requests. A dialogue system that doesn’t require the Internet is unprecedented. »

The dialogue system is already in use on most Robo-C models. Solutions by the Robo-C Project operate in more than 40 countries worldwide and include service robots with a human-like appearance. Robo-C is an anthropomorphic robot that imitates human emotions: it moves its eyes, eyebrows, lips, and other artificial muscles. Mechanical muscles built with Robo-C’s patented technologies allow the robot to mimic more than 600 human facial expressions.

Promobot is the largest service robotics manufacturer in Northern and Eastern Europe.

The company carries out developments in the fields of mechatronics, electronics, artificial intelligence and neural networks, autonomous navigation, speech recognition, development of artificial skin and muscles, and human-machine interaction.

In January 2022, Promobot, a Russian-owned company, promised to pay US$200,000 for the perpetual rights to a person’s face and voice in order to create its next service-focused humanoid robot.

Once the right person is chosen, a 3D model of their face and body will be created, and over 100 hours of speech will be recorded so that the robot can respond to its owners. The robot is set to start appearing in 2023.
