|
How is this used and what are its dangers?
|
Although the ability to automatically swap faces and create manipulated videos that look real and trustworthy has genuinely interesting applications (such as film and gaming), this technology obviously brings problems as well.
The use of deepfakes to manipulate images of politicians and celebrities was among the first and most controversial applications of this technology. In 2018, for instance, a Belgian political party circulated a video in which Trump gave a speech asking Belgium to withdraw from the Paris Agreement. Trump never gave that speech; it was a deepfake. This illustrates one of the dangers of the technology: spreading false content as if it were authentic.
In a corporate setting, deepfakes amplify social engineering: they are used to manipulate people into granting unauthorized access to systems, infrastructure, or information. Attackers can, for example, combine email phishing with fake audio messages; by using multiple vectors, they increase the chances of deceiving users.
|
|
|
How to detect a deepfake?
|
As deepfakes become more common, society is becoming more aware of these techniques and their uses, and better able to detect possible fraud attempts.
|
|
|
To protect yourself, we recommend paying attention to the following indicators of a deepfake:
|
|
|
Blinking: Current deepfakes struggle to animate faces realistically, which results in videos where the eyes never blink, or blink too often or in an unnatural way. However, after researchers at the University at Albany published a study on detecting this anomaly, new deepfakes appeared that no longer have this problem.
|
|
Face: Look for skin or hair problems, or faces that appear blurrier than their surroundings. Is the skin too smooth or too wrinkled? Do facial expressions seem uncanny? Does the facial hair (beard, moustache, eyebrows) look real? Sometimes the changes are minor and barely noticeable.
|
|
Lighting: Does the lighting seem off? Are shadows in the right place? Deepfake algorithms usually retain the lighting of the source videos used as models, which may not match the lighting of the forged video.
|
|
Audio: The audio may not match the person, especially if the video was forged and the original audio was not manipulated as carefully.
|
|
However, the most important rule is: if content seems odd, suspicious, or in any way surprising or dubious, check with the sender to confirm that they actually sent it.
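The blinking cue described above is often quantified in detection research with the eye-aspect-ratio (EAR) heuristic: the eye's height collapses relative to its width during a blink, so the ratio drops sharply. The sketch below is a minimal illustration only; the landmark coordinates are made up, and the 0.2 blink threshold is a commonly used assumption, not a value from this article or the cited study.

```python
# Minimal sketch of the eye-aspect-ratio (EAR) blink heuristic.
# Landmarks p1..p6 are six points around one eye: p1/p4 are the
# horizontal corners, p2/p3 the upper lid, p5/p6 the lower lid.
from math import dist

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    Roughly constant while the eye is open; drops toward 0 in a blink."""
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def blink_frames(ear_series, threshold=0.2):
    """Indices of frames whose EAR falls below the blink threshold
    (0.2 is an assumed, commonly used cutoff)."""
    return [i for i, ear in enumerate(ear_series) if ear < threshold]

# Illustrative (made-up) landmarks for an open and a nearly closed eye.
open_eye   = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
```

A detector would feed per-frame landmarks from a face-tracking library into `eye_aspect_ratio` and flag videos whose EAR series never (or implausibly often) crosses the blink threshold.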
|
|
|
|
|
Detecting deepfakes is a challenge. In homemade videos, flaws can still be spotted with the naked eye, but AI technology is evolving so quickly that deepfakes are becoming increasingly difficult to detect. Some believe we will soon depend on forensic specialists to determine whether such content is genuine.
|
|