Robots that accurately detect changes in facial expression

Developing an application with a technology such as artificial intelligence (AI) so that it serves a concrete purpose, and one that benefits people, is what the project I am going to talk about today seeks to do; in areas such as medical care, or even driving, it can solve many problems.

No, facial recognition, in this case, is not about having us watched or controlled. It is Fujitsu Laboratories, Ltd. and Fujitsu Laboratories of America, Inc., in collaboration with the School of Computer Science at Carnegie Mellon University in the United States, that have announced the development of a facial recognition technology that detects subtle changes in facial expression with a high degree of precision.

One of the obstacles to facial expression recognition is the difficulty of providing the large amounts of data necessary to train a detection model for each facial pose, because in real-world applications faces are captured in a wide variety of poses. To address the problem, Fujitsu has developed a technology that adapts a different normalization process to each facial image.

For example, when the angle of the subject’s face is oblique, the image can be adjusted to more closely resemble a frontal view of the face, allowing the detection model to be trained with a relatively small amount of data. The technology can accurately detect subtle emotional changes, including awkward or nervous laughter and confusion, even when the subject’s face is moving in a real-world context.
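To make this concrete, here is a minimal sketch in Python of that kind of pose normalization, using OpenCV to rotate, scale, and translate an oblique face toward a frontal reference. The landmark coordinates are hypothetical, and the sketch illustrates the general idea rather than Fujitsu’s actual implementation:

```python
import cv2
import numpy as np

# Hypothetical landmark positions (x, y) detected in an oblique face image,
# e.g. the two outer eye corners and the nose tip; a real system would get
# these from a facial landmark detector.
detected_pts = np.float32([[210, 180], [300, 170], [262, 240]])

# Positions of the same landmarks in a canonical frontal reference face.
frontal_pts = np.float32([[190, 175], [310, 175], [250, 245]])

def normalize_to_frontal(image, src_pts, dst_pts):
    """Rotate, scale, and translate the image so the detected landmarks
    approximate their positions in the frontal reference."""
    # Similarity transform: rotation + uniform scale + translation.
    matrix, _ = cv2.estimateAffinePartial2D(src_pts, dst_pts)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, matrix, (w, h))

# Usage:
# aligned = normalize_to_frontal(cv2.imread("face.jpg"), detected_pts, frontal_pts)
```

Because the warped image now resembles a frontal view, a model trained mostly on frontal faces can still be applied, which is what keeps the training data requirement small.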

The new technology will be used in a variety of real-world applications, from facilitating communication to improve employee engagement to optimizing safety for drivers and factory workers. In recent years, technologies that detect changes in facial expression from images and read human emotions have attracted increasing interest, and in the future they will be used in situations such as patient monitoring in healthcare or analyzing customer responses to products in marketing campaigns, for example.

To “read” human emotions more effectively, it is critical to capture the subtle facial changes associated with states such as understanding, bewilderment, and stress. To achieve this, developers have increasingly relied on Action Units (AUs), which express the “units” of motion of each facial muscle based on an anatomical classification system. AUs have been used by professionals in fields as varied as psychological research and animation, and are classified into approximately 30 types based on the movements of each facial muscle, including those of the eyebrows and cheeks.
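For illustration, here are a few of the best-documented AUs from the Facial Action Coding System (FACS), together with a hypothetical helper showing how a detector’s per-AU scores might be read; combinations of active AUs are then mapped to emotions (AU6 plus AU12, for instance, is the classic marker of a genuine smile):

```python
# Well-documented FACS Action Units and the muscle movements they encode.
ACTION_UNITS = {
    "AU1":  "Inner brow raiser",
    "AU2":  "Outer brow raiser",
    "AU4":  "Brow lowerer",
    "AU6":  "Cheek raiser",
    "AU12": "Lip corner puller (smile)",
    "AU15": "Lip corner depressor",
}

def active_aus(scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Return the AUs whose detector score exceeds the threshold.
    The 0.5 threshold is an arbitrary illustrative choice."""
    return [au for au, s in scores.items() if s >= threshold]

# Usage: active_aus({"AU6": 0.8, "AU12": 0.7, "AU4": 0.1}) -> ["AU6", "AU12"]
```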

By integrating these AUs into its technology, Fujitsu has pioneered a new approach to detecting even subtle changes in facial expression. To detect AUs more accurately, the underlying deep learning techniques require large amounts of data. In real-world situations, however, cameras capture faces at various angles, sizes, and positions, making it difficult to prepare large-scale training data for every visual and spatial state; this variability in the captured images adversely affects detection accuracy.

And this is precisely what the Carnegie Mellon University School of Computer Science, Fujitsu Laboratories, Ltd. and Fujitsu Laboratories of America, Inc. have accomplished: detecting AUs with high precision, even with limited training data. With the new technology, images of the face taken at various angles, sizes, and positions are rotated, enlarged or reduced, and otherwise adjusted to make them more like a frontal view of the face. This makes it possible to detect AUs with a small amount of training data, based on the frontal view of the subject’s face.

In the normalization process, multiple feature points of the face in the image are transformed to approximate the positions of those feature points in the frontal image. However, the amount of rotation, enlargement, or reduction required changes depending on which points on the face are selected. For example, if the feature points are selected around the eyes for the rotation process, the area around the eyes will closely match the reference image, but parts such as the mouth will be misaligned.

To address this issue, the areas of the captured face image that have a significant influence on the detection of each AU are analyzed, and the degree of rotation, enlargement, and reduction is adjusted accordingly. By using a different normalization process for each individual AU, the technology, which relies on artificial intelligence, can detect AUs more accurately.
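A sketch of how that per-AU normalization might look, assuming landmarks in the common 68-point scheme: each AU is aligned using only the feature points around the region that drives it, and each AU detector then runs on its own normalized image. The landmark subsets here are assumptions for illustration; Fujitsu’s actual region analysis has not been published:

```python
import cv2
import numpy as np

# Assumed landmark subsets per AU (68-point scheme: indices 17-26 are the
# brows, 48-67 the mouth). The real influential regions would come from
# analyzing each AU's detection behavior.
AU_LANDMARK_SUBSETS = {
    "AU4":  [17, 19, 21, 22, 24, 26],  # brow-related AU -> brow landmarks
    "AU12": [48, 51, 54, 57],          # mouth-related AU -> mouth landmarks
}

def normalize_for_au(image, landmarks, frontal_landmarks, au):
    """Align the image for one specific AU, using only the landmarks
    relevant to that AU. `landmarks` and `frontal_landmarks` are
    (68, 2) arrays of detected and frontal-reference points."""
    idx = AU_LANDMARK_SUBSETS[au]
    src = landmarks[idx].astype(np.float32)
    dst = frontal_landmarks[idx].astype(np.float32)
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, matrix, (w, h))

# Each AU detector then sees the alignment that best preserves its region.
```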

This technology has already achieved a high detection accuracy rate of 81%, even with limited training data, outperforming existing approaches. Fujitsu aims to bring its development into practical applications for various use cases, including teleconferencing support, employee engagement measurement, and driver supervision. And, as I explained at the beginning, hopefully also medical care.
