How to prevent deepfake attacks, according to experts

As experts warn, deepfakes are becoming more sophisticated thanks to the growing power of AI “learning”, forcing researchers to refine their detection tools.

Manipulated speeches, falsified videos or images… There is no doubt that deepfakes are spreading at full speed through social networks. The term refers to a technique, based on artificial intelligence, in which one video is superimposed on another, making it possible to replace faces and achieve convincing special effects.

Sometimes this technology is used for a “noble” cause (like the fake Trump video produced some time ago by Solidarité AIDS), or simply to make users smile, but more often it is used to discredit a particular public figure, manipulate public opinion or even spread false information.

Given all this, it is not surprising that, in recent years, some of the main social media platforms, themselves increasingly under scrutiny, have rolled out various anti-deepfake measures. This is the case, for example, of Facebook, which in December 2019 backed the development of detection technologies, or Twitter, which announced a few months later a label for “misleading” media.

But is there a way to prevent deepfake attacks? And if so, how can we detect them in our day-to-day lives?

Starting at the beginning: how is a deepfake made?

There are basically two techniques commonly used to create deepfake content. The first is to synchronize the lips (specifically, their movement) with the speech of another person. It is a particularly pernicious attack, since only a small part of the original video is modified.

In the second, the facial expressions of an actor (whom we could call the puppeteer) are applied to the original face of the target (the puppet); the face and the movements of the head as a whole are then modified.

But in recent months, another technique has also emerged that both surprises and alarms experts: although it moves somewhat away from deepfakes as we usually understand them, it consists of producing completely artificial images, in particular entirely new, synthetic faces.

Bear in mind that the “deep” in deepfake comes from deep learning, an artificial intelligence method based on learning from data: fed an enormous number of diverse examples, the machine automatically learns to perform a task.

For example, in the typical case of lip syncing, the AI is fed recorded speeches, with the main goal of learning how the lips actually move with the audio. The trained AI is then asked to carry out the same work from a new audio track for which no matching images exist; those images are then created.

As you can imagine, this method requires a lot of training data, which means that, initially, real videos of real speeches are used, with their corresponding lip movements. The same principle is commonly followed for the puppet technique, so that a whole range of recorded facial expressions becomes the starting point.
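
To make the idea concrete, here is a minimal sketch, in PyTorch, of the kind of training described above. It is not any real deepfake tool: the audio features and mouth-landmark targets below are random placeholders standing in for an aligned speech/video dataset, and the network architecture is arbitrary.

```python
# Minimal sketch (not a real deepfake pipeline): a small network that learns to map
# audio features to mouth-landmark positions, mirroring the idea that lip-sync models
# are trained on real speech paired with real lip movements.
import torch
import torch.nn as nn

N_FRAMES, AUDIO_DIM, MOUTH_POINTS = 2048, 80, 20  # e.g. 80 mel bands -> 20 (x, y) lip landmarks

# Placeholder "dataset": a real system would use aligned audio/video of real speeches.
audio_features = torch.randn(N_FRAMES, AUDIO_DIM)
mouth_landmarks = torch.randn(N_FRAMES, MOUTH_POINTS * 2)

model = nn.Sequential(
    nn.Linear(AUDIO_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, MOUTH_POINTS * 2),  # predicted lip positions for one frame
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training: the network learns the audio -> lip-shape mapping from many examples.
for epoch in range(5):
    pred = model(audio_features)
    loss = loss_fn(pred, mouth_landmarks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: given a *new* audio track, the trained model proposes lip shapes for
# frames that never existed, which a renderer would then turn into video.
new_audio = torch.randn(1, AUDIO_DIM)
predicted_lips = model(new_audio).reshape(MOUTH_POINTS, 2)
print(predicted_lips.shape)  # torch.Size([20, 2])
```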

A good example is a video of Facebook CEO Mark Zuckerberg in which he talks about how Facebook “controls the future” thanks to stolen user data. It is actually a deepfake: the source material came from a speech he gave on Russian interference in the US elections, and a clip just 21 seconds long was enough to synthesize the new video.

How can we protect ourselves against a deepfake attack?

At the moment, the laws of different countries are beginning to address the threats posed by deepfake content (both videos and images). In the state of California, for example, two bills have already been proposed that make certain uses of deepfakes illegal: one prohibits the use of synthesized human images to make pornography without the consent of the person depicted, and the other bans the manipulation of images of political candidates within 60 days of an election.

Fortunately, cybersecurity companies keep offering increasingly efficient detection algorithms, capable of analyzing video frames and spotting the small distortions that typically appear during the “manipulation” process.

For example, for now, deepfake synthesizers generate a 2D face, which they then distort to fit the 3D perspective of the original video. And, as specialists point out, the direction in which the nose is pointing is usually a very revealing clue.
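
As an illustration, the sketch below estimates the 3D head pose implied by a handful of facial landmarks using OpenCV's solvePnP. Detectors built on this idea compare the pose obtained from different parts of the face (for instance, the swapped central region versus the whole face) and flag videos where the two estimates disagree. The 2D landmark coordinates here are made-up placeholders, not output from a real landmark detector.

```python
# Hedged sketch of the "nose direction" clue: estimate the head pose implied by six
# facial landmarks. In practice the landmarks would come from a face-landmark detector
# run on each video frame; the values below are illustrative placeholders only.
import cv2
import numpy as np

# Generic 3D face model (millimetres, nose tip at the origin) - a common approximation.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),          # nose tip
    (0.0, -330.0, -65.0),     # chin
    (-225.0, 170.0, -135.0),  # left eye, left corner
    (225.0, 170.0, -135.0),   # right eye, right corner
    (-150.0, -150.0, -125.0), # left mouth corner
    (150.0, -150.0, -125.0),  # right mouth corner
], dtype=np.float64)

def head_pose_degrees(image_points, frame_w, frame_h):
    """Return approximate (pitch, yaw, roll) in degrees from six 2D landmarks."""
    focal = frame_w  # rough pinhole-camera approximation
    camera_matrix = np.array([[focal, 0, frame_w / 2],
                              [0, focal, frame_h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, camera_matrix, dist_coeffs)
    rot_matrix, _ = cv2.Rodrigues(rvec)
    # Decompose the [R|t] projection matrix to recover Euler angles in degrees.
    euler = cv2.decomposeProjectionMatrix(np.hstack((rot_matrix, tvec)))[6]
    return euler.flatten()

# Placeholder landmarks for a 640x480 frame (same order as MODEL_POINTS).
landmarks = np.array([(320, 240), (325, 380), (230, 180),
                      (410, 180), (260, 300), (380, 300)], dtype=np.float64)
print(head_pose_degrees(landmarks, 640, 480))
# A detector would repeat this with landmarks from the central face region only and
# compare the two poses: a large mismatch hints that a 2D face was pasted onto the video.
```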

In turn, since deepfake videos are still at an early stage, we can look for some specific characteristics to identify them as fake ourselves:

  • Lighting variations from one shot to the next
  • Variations in the skin color of the original person
  • An absence of blinking, or strange blinking patterns (a simple way to check this is sketched after the list)
  • Lips that are poorly synchronized with speech
  • Jerky movement
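
The blinking clue, in particular, can be quantified with a simple geometric measure: the eye aspect ratio (EAR), which drops sharply whenever the eye closes. The sketch below uses placeholder per-frame values instead of real landmark-detector output; the idea is simply that a clip in which the subject never blinks is suspicious.

```python
# Hedged sketch of a blink check based on the eye aspect ratio (EAR).
# Real usage would compute the EAR from eye landmarks detected on every frame;
# here both the example eye and the per-frame series are synthetic placeholders.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmarks around one eye, ordered corner-to-corner."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2):
    """Count contiguous dips of the EAR below the threshold (each dip = one blink)."""
    closed = np.asarray(ear_per_frame) < closed_threshold
    return int(np.sum(closed[1:] & ~closed[:-1]) + closed[0])

# A single open eye: the EAR sits around 0.3-0.5 and falls toward 0 when it closes.
open_eye = np.array([(0, 0), (2, -1.5), (4, -1.5), (6, 0), (4, 1.5), (2, 1.5)], dtype=float)
print(round(eye_aspect_ratio(open_eye), 2))  # 0.5

# Placeholder EAR series for ~10 seconds of 30 fps video: mostly open eyes,
# with two brief dips that would correspond to blinks in real footage.
ears = np.full(300, 0.3)
ears[90:95] = 0.1
ears[210:214] = 0.12
blinks = count_blinks(ears)
print(f"{blinks} blinks detected")  # humans blink roughly 15-20 times per minute
if blinks == 0:
    print("No blinking at all - a possible sign of a synthesized face")
```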

In Mark Zuckerberg’s video, for example, it is clear that this is a deepfake because some facial movements are not natural; at the beginning of the video the left ear also moves strangely, and the nose does not look quite right.

However, it is clear that we will soon need basic tools capable of easily identifying this type of content, since most users do not tend to look for the details that could warn them they are facing fake content.
