13.10.2023 10:00


Deepfake videos are a threat to the general public

Artificial intelligence (AI) can help us, but it can also be a threat, as evidenced by a recent deepfake video of the well-known YouTuber MrBeast, which promised unsuspecting viewers the new iPhone 15 for just two dollars.

How many people realized only too late that it was a scam is an open question, but we fear the number is very high.

Deepfakes existed even before artificial intelligence took the form we know today (ChatGPT, Bard, Bing, Midjourney …). But with the rise of modern generative models, these recordings have become frighteningly realistic. It was only a matter of time before miscreants put this advanced technology to malicious use.

They chose the best possible target. MrBeast is known for his extravagant YouTube videos in which he surprises fans and passers-by with expensive cars, private jets, phones, houses and similar gifts. A scenario in which MrBeast gives away nearly free iPhone 15 phones is therefore not too far-fetched, and the scammers counted on exactly that. The fake MrBeast urged viewers to click a link in the clip, behind which lay an attempt to steal their data.

The MrBeast deepfake recently appeared on TikTok, where it likely did plenty of damage, not only to users but also to the platform, which must now grapple with how to prevent such content in the future.

MrBeast is not the only victim of artificial intelligence scams. Putin's image has also been abused in the past to spread false information.

Although celebrities are more attractive targets, ordinary mortals are also at risk. BBC journalists Matthew Amroliwala and Sally Bundock experienced this first hand when their likenesses were used to promote a financial scam. In the video, one of the journalists appears to interview Elon Musk, the world's richest man, about what is supposedly the most profitable investment opportunity of the moment. Musk himself has been a victim of deepfake videos before, in clips where he allegedly gave away his wealth and offered cryptocurrency advice.

Until now, Meta (formerly Facebook) has attached warnings to such videos flagging possible false information identified by the organization FullFact, which has made it its mission to check questionable claims that appear in the news and on social media. A Meta representative also responded: "We do not allow such content on our platforms and have removed it immediately. We're constantly working to improve our systems and encourage anyone who sees content they believe violates our policies to report it using the in-app tools. Then we can examine it and take action."

TikTok also responded, removing the fake MrBeast video a few hours after it was posted and blocking the account that had violated its terms of use. TikTok's policy explicitly lists such "synthetic" recordings as prohibited content.

Meanwhile, the real MrBeast has publicly asked X (formerly Twitter) and other platforms whether they are prepared for a rise in such scams.

How to identify deepfake recordings?

Although classic phishing and ransomware attacks remain the most common and most successful forms of attack for now, a period is in all likelihood coming when deepfakes and AI-assisted attacks will become far more frequent.

Recently, even Tom Hanks had to speak out against the misuse of artificial intelligence: his likeness was stolen to promote a dubious dental plan.

AI systems will only become more advanced, and with that development, concerns about how to recognize this type of scam will grow. The golden rule is to be suspicious whenever you come across a video, post or message in which someone offers you something for free, especially if it is a product or service that normally costs a significant amount.

MrBeast's public image undermines this rule somewhat. Precisely because his genuine videos feature participants competing for attractive prizes, the deepfake was difficult to recognize as a scam.

We have no choice but to stay alert and verify information before acting on it. The most observant users will also notice other suspicious signs that can give a scam away.

In the MrBeast video (you can watch it at bit.ly/DeepfakeRN), the attackers overlaid the YouTuber's name together with the blue verification badge familiar from official profiles. Regular TikTok users know that this overlay is redundant, since every video already shows the uploader's name below the TikTok logo.

In the case of the BBC journalists, the grounds for suspicion were more linguistic. The presenter mispronounces the number 15 and gives the word "project" an unusual intonation. There are also a few minor grammatical slips that are easy to miss (for example, the incorrect use of "was" and "were"). Since these recordings will only grow more realistic, errors of this kind will remain important clues for spotting scams. In the same clip it is very hard to notice that Elon Musk has an extra eye above his left eye. Eyes, fingers and similar details are often distorted in AI-generated footage, which gives us a chance to avoid disaster.

Legal experts have also weighed in, and their opinions on banning such recordings are divided. An outright ban on deepfake technology would also harm legitimate content in films and series created with special effects.

Cover image: Image by rawpixel.com on Freepik