
How Underground Groups Use Stolen Identities and Deepfakes

by Lottar


These deepfake videos are already being used to cause problems for public figures. Celebrities, high-ranking government officials, well-known corporate figures, and other people with many high-resolution images and videos of themselves online are the easiest targets. Social engineering scams using their faces and voices are already spreading.

Given the tools and deepfake technology available, we can expect to see even more attacks and scams aimed at manipulating victims through voice and video spoofing.

How deepfakes can affect existing attacks, scams and monetization schemes

Criminal actors can adapt deepfakes to their current malicious activities, and we are already seeing the first wave of these attacks. The following is a list of both existing attacks and attacks we can expect in the near future:

Messenger scams. Impersonating a money manager and calling about a money transfer has been a popular scam for years, and now criminals can use deepfakes in video calls. For example, they can impersonate someone and contact their friends and family to request a money transfer or a simple top-up of their phone balance.

Business email compromise (BEC). This attack was already quite successful even without deepfakes. Now attackers can use deepfake video in calls, impersonate managers or business partners, and request money transfers.

Account creation. Criminals can use deepfakes to bypass identity verification services and create accounts in banks and financial institutions, possibly even government services, on behalf of other people, using copies of stolen identity documents. These criminals can use a victim’s identity to bypass the verification process, which is often done through video calls. Such accounts can later be used in money laundering and other malicious activities.

Account hijacking. Criminals can take over accounts that require identification via video calls. They can hijack a financial account and simply withdraw or transfer funds. Some financial institutions require online video verification to enable certain features in online banking applications. It is clear that such verifications can also be a target of deepfake attacks.
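One defence relevant to the video-verification attacks above is an active liveness check: the service asks the caller to perform a short random sequence of actions that a pre-rendered deepfake clip cannot anticipate. The sketch below is illustrative only (the class name, challenge phrases, and verification logic are assumptions, not any particular vendor's implementation):

```python
import secrets
import time

# Hypothetical pool of on-camera actions a verifier might request.
CHALLENGES = ["turn head left", "turn head right", "blink twice", "look up"]

class LivenessSession:
    """Server-side state for one active liveness check.

    The caller must perform a randomly chosen sequence of actions,
    in order, within a short time window. A pre-recorded or replayed
    deepfake video cannot anticipate the sequence."""

    def __init__(self, length: int = 3, ttl: int = 30):
        # Cryptographically random choice so the sequence is unpredictable.
        self.expected = [secrets.choice(CHALLENGES) for _ in range(length)]
        self.issued_at = time.time()
        self.ttl = ttl  # seconds the session stays valid

    def verify(self, observed: list[str]) -> bool:
        # Reject late responses and any mismatch in the action sequence.
        if time.time() - self.issued_at > self.ttl:
            return False
        return observed == self.expected
```

In practice the `observed` list would come from a video-analysis pipeline; the point of the design is that the challenge is generated server-side, per session, and expires quickly.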

Blackmail. By using deepfake videos, malicious actors can mount more powerful extortion and related attacks. They can even plant fake evidence created with deepfake technology.

Disinformation campaigns. Deepfake videos also enable more effective disinformation campaigns and can be used to manipulate public opinion. Certain attacks, such as pump-and-dump schemes, rely on messages from well-known persons, and these messages can now be created with deepfake technology. Such schemes can have financial, political and even reputational repercussions.

Tech support scams. Deepfake actors can use fake identities to socially engineer unsuspecting users into sharing payment credentials or granting access to IT assets.

Social engineering attacks. Malicious actors can use deepfakes to manipulate the friends, family or colleagues of an impersonated person. Social engineering attacks like the ones Kevin Mitnick was known for could thus take a new turn.

Hijacking Internet of Things (IoT) devices. Devices that use voice or facial recognition, such as Amazon’s Alexa and many smartphone models, will be on the target list of deepfake criminals.

Conclusion and safety recommendations

We are already seeing the first wave of criminal and malicious activity using deepfakes. However, more serious attacks are likely in the future due to the following issues:

  1. There is enough content exposed on social media to create deepfake models for millions of people. People in every country and city, and from virtually every social group, have their social media content exposed to the world.
  2. All the technological pillars are in place. Attack implementation does not require significant investment and attacks can be launched not only by nation states and corporations, but also by individuals and small criminal groups.
  3. Actors can already impersonate and steal the identities of politicians, C-level executives and celebrities. This can significantly increase the success rate of certain attacks such as financial schemes, short-term disinformation campaigns, manipulation of public opinion and extortion.
  4. The identities of ordinary people are available to be stolen or recreated from public media. Cybercriminals can steal from the impersonated victims or use their identities for malicious activities.
  5. Modified deepfake models can lead to the mass creation of identities of people who never existed. These identities can be used in various fraud schemes. Indicators of this phenomenon have already been spotted in the wild.

What can individuals and organizations do to address and mitigate the impact of deepfake attacks? We have some recommendations for regular users, as well as for organizations that use biometric patterns for validation and authentication. Some of these validation methods can also be automated and deployed broadly.

  • A multi-factor authentication approach should be standard for any authentication of sensitive or critical accounts.
  • Organizations must authenticate a user with three basic factors: something the user has, something the user knows, and something the user is. Make sure the “something” items are selected wisely.
  • Staff awareness training, conducted with relevant samples, and the know-your-customer (KYC) principle are necessary for financial organizations. Deepfake technology is not perfect, and there are certain red flags that an organization’s staff should look out for.
  • Social media users should minimize the exposure of high quality personal images.
  • For authentication of sensitive accounts (for example, bank or corporate profiles), users should prioritize the use of the biometric patterns that are less exposed to the public, such as irises and fingerprints.
  • Significant policy changes are needed to address the problem on a larger scale. These policies should address the use of current and previously exposed biometric data. They must also consider the state of cybercriminal activities now as well as prepare for the future.
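The multi-factor recommendations above can be made concrete with a standard second factor such as a time-based one-time password (TOTP, RFC 6238), which ties authentication to "something the user has" rather than to a spoofable face or voice. A minimal sketch using only the Python standard library (function and parameter names are ours, not from any specific product):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                       # which 30-second window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test vector (ASCII secret `12345678901234567890`, Unix time 59), this yields the 6-digit code `287082`. Even if an attacker deepfakes a victim's face on a verification call, they still cannot produce the current code without the shared secret.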

The security implications of deepfake technology and the attacks that use it are real and damaging. As we have shown, it is not only organizations and C-level executives who are potential victims of these attacks, but also ordinary individuals. Given the wide availability of the necessary tools and services, these techniques are accessible to less technically sophisticated attackers and groups, meaning that malicious actions can be carried out at scale.


