Artificial Intelligence and Fraud:
An Unstoppable Force?

Online merchants are more vulnerable than ever to payment fraud.

Artificial intelligence presents unique challenges and advantages.

We find ourselves in a technological arms race, where the same tools used by fraudsters are also our greatest defence against them. By staying informed, proactive, and adaptable, we can ensure that the positive potential of AI far outweighs its risks.

Introduction

Preventing online payment fraud is a never-ending game of Whack-A-Mole. Whenever a new technology or system is introduced, fraudsters will most likely find a flaw in it and exploit it to commit fraud. Fraud prevention therefore becomes an ongoing effort to stay one step ahead of the opposing party.

Online payment fraud is a stubborn problem. Despite years of research and development, fraud prevention measures have either fallen short or been quickly countered by fraudsters. Across Europe, an astonishing number of consumers and companies are falling victim to fraud. In 2022, 2.2 million Dutch residents over the age of 15 became victims of cybercrime such as online fraud, roughly 15% of that age group.

Nearly one in six Dutch individuals has been a victim of fraud, a sobering figure.

Source: Statistics Netherlands, 2.2 million cybercrime victims in 2022

Many tools and solutions have been introduced to combat this, but one development stands out above the rest. As the title of this whitepaper suggests, artificial intelligence has surged into the mainstream. Although the aim of this trend was to bridge the gap between people and technology and to make our everyday lives easier, fraud was not left behind. AI has become a helping hand for many of us in both our personal and professional lives, undoubtedly boosting our efficiency and inventiveness; yet most people are unaware that the same AI is being used to swindle us.

The rise of artificial intelligence

Although AI tools have been in development since the nineties, the language model ChatGPT, which appeared at the end of 2022, reignited interest in the field and triggered a race to develop comparable solutions. This has become a worldwide trend, with speech, visual, and similar branches of AI following in ChatGPT's wake. AI has grown to the point where 73% of consumers report engaging with artificial intelligence on a regular basis. On the commercial side, recent research found that 35% of enterprises are already adopting AI and a further 42% are exploring its use in the near future. The global AI market is also projected to grow dramatically over the coming years (see chart below). AI has undeniably become an important part of people's and companies' lives.

In the grand scheme of things, AI may appear to be a revolutionary development, and it is, but it has significant drawbacks. Aside from ethical concerns such as unemployment, loss of control, and bias, AI is widely employed by fraudsters, and in the end this technology is turned against ordinary people. Several law enforcement agencies, including the FBI and Interpol, have warned businesses and consumers about the expanding use of AI among fraudsters. A former FBI agent, for example, cautions individuals against AI phone scams, while Europol highlights ChatGPT's use by fraudsters and the potential consequences for customers and businesses. There are plenty of alerts about fraudsters exploiting AI, but what are the specific techniques?

How fraudsters employ artificial intelligence in their schemes

#1 Language AI: Language models, such as the well-known ChatGPT, are ideal tools for fraudsters, who can now focus on other tasks while the model creates the necessary content for them. Unsurprisingly, these tools have progressed to the point where generated messages read naturally and are hard to distinguish from those written by a human. Beyond phishing messages, fraudsters can generate scam ideas, scam structures, and other fraudulent material (see graph below).

#2 Voice AI: One of the most recent breakthroughs is voice artificial intelligence, which can imitate any voice and read out any chosen text. The only resource required is a sample of the voice to be imitated. This is an ideal tool for fraudsters, who can now mimic whomever they want as long as they have a voice sample. The frequency of such scam attempts reflects fraudsters' interest in the technique: according to one report, AI is fuelling an increase in online voice fraud, and a voice can be cloned from as little as three seconds of audio.

#3 Visual AI: Visual artificial intelligence has also attracted enormous attention since its sudden rise to prominence. What began as a tool to boost people's creativity and idea development was quickly discovered by fraudsters and drawn fully into scamming. What fascinates people about visual AI is its precision and boundless possibilities; anybody can produce unique and convincing images or videos. Scammers use these technologies to impersonate CEOs, managers, or similar figures in order to persuade someone to divulge sensitive information. For example, fraudsters may create a video of a CEO urging employees to submit their credit card details in order to receive their wages.

#4 Coding AI: Coding artificial intelligence has not received as much attention as the preceding technologies, but it should not be disregarded, because it can become the heart of fraudulent operations. Code-generating AI is now sophisticated enough to produce working code in seconds or minutes, with few practical limits. This means criminals can use it to build websites, programs, or malware to lure unwary users and perpetrate fraud. The United States Federal Trade Commission (FTC) has cautioned companies that build AI code-generation platforms that they may be held liable if fraudsters use their tools to produce code and commit fraud, but no major enforcement efforts have followed.

#5 Others: Apart from the most widely used AI technologies above, there are dozens of other AI-based solutions that fraudsters favour. These may combine the tools listed above or be entirely distinct, and they are occasionally modified for fraudulent purposes and sold on the dark web.

How easy is it to use ChatGPT to commit fraud? A fraudulent use case

There are many ways fraudsters can employ AI tools, and the best way to appreciate how easy (and dangerous) they are is to try one yourself. In the example below, we asked ChatGPT to write a brief email informing the recipient that the security of their VISA account has been compromised and that they should take immediate action. ChatGPT produced a professional-looking, natural-sounding email in under 30 seconds, virtually indistinguishable from a genuine one:

As demonstrated, a fraudster can produce a flawlessly convincing email and send it to every potential victim in under a minute. If that takes only a minute, one can only wonder what complex, genuine-looking material fraudsters can produce given more time. ChatGPT is a clever and effective technology that assists both legitimate consumers and, sadly, criminals. By adopting a large language model like ChatGPT, fraudsters improve their efficiency and effectiveness, freeing up their "valuable" time to develop additional schemes.

Future trends and predictions

With some experts anticipating that artificial intelligence will continue to evolve and may surpass human intelligence as early as 2030, it is apparent that the quality of AI-produced work will improve by leaps and bounds. Such a development would surely benefit society; yet if the same capabilities remain available to scammers, difficult times lie ahead.

As fraudsters make use of these ever-improving AI tools, society will be subjected to even more powerful social engineering attacks, including relentless phishing and scamming. Beyond the sheer volume of attacks, the quality of each attempt will improve markedly, making it significantly harder to distinguish fraudulent messages from genuine ones.

Expert insights

Lisa de Vreede – Product Owner Protectmaxx

“AI’s rapid advancement offers a double-edged sword. It revolutionizes industries, yet fraudsters leverage it to amplify their deceptive tactics. From mimicking human behaviour through intelligent algorithms to employing chatbots for automated phishing, identity theft, and social engineering attacks, the creative misuse of AI is on the rise.

This escalating threat calls for robust security measures across organizations and individuals. However, it is crucial to remember that AI, despite its efficiency and proficiency in big data handling, analysis, and processing, is not infallible. It stumbles over unknown patterns or nuanced issues in fraud prevention. Therefore, while AI is a powerful ally, it is not a standalone solution.

At this juncture, humans outpace AI in accuracy, particularly in identifying unique or irregular patterns. Human intervention is still a necessary component in fraud prevention. Hence, AI’s potential in this field remains tied to human oversight, at least until it evolves further in recognising and understanding unique patterns.”

How should businesses fight back?

Given that the predictions and trends do not look promising, particularly for enterprises, what can be done to avoid being caught up in this snowball effect? The situation is not as desperate as it may first appear, since anti-fraud innovation keeps pace with the sector. Fraud protection vendors are working tirelessly to keep up with the rapid evolution of fraudsters and gain a long-term edge.

As the quality and severity of fraud attempts rise, most experts see no option but to address the problem with the help of seasoned specialists and artificial intelligence itself. By leveraging AI, fraud detection tools are achieving previously unattainable levels of precision.
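To make this concrete, below is a minimal sketch, in Python with scikit-learn, of how a machine-learning model can be trained to score transactions for fraud risk. The features, synthetic data, and review threshold are illustrative assumptions only; they do not describe any particular vendor's implementation.

# Minimal sketch: training a gradient-boosting classifier to score
# transactions for fraud risk. All features, data, and thresholds are
# illustrative assumptions, not a real vendor's implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(42)
n = 5000

# Synthetic transactions: amount, account age in days, orders in the last
# 24 hours, and whether the shipping country matches the card country.
amount = rng.lognormal(mean=3.5, sigma=1.0, size=n)
account_age = rng.integers(0, 2000, size=n)
orders_24h = rng.poisson(1.0, size=n)
country_match = rng.integers(0, 2, size=n)

# Toy labelling rule: new accounts, high order velocity, and country
# mismatches raise the probability that a transaction is fraudulent.
risk = 0.02 + 0.25 * (account_age < 30) + 0.20 * (orders_24h > 3) + 0.20 * (country_match == 0)
is_fraud = rng.random(n) < risk

X = np.column_stack([amount, account_age, orders_24h, country_match])
X_train, X_test, y_train, y_test = train_test_split(X, is_fraud, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Score unseen transactions; anything above the threshold goes to review.
scores = model.predict_proba(X_test)[:, 1]
flagged = scores > 0.5
print(f"precision={precision_score(y_test, flagged):.2f}, recall={recall_score(y_test, flagged):.2f}")

In practice, such a model would be trained on historical transaction and chargeback data and combined with rules and human review, rather than used as a standalone decision-maker.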

Protectmaxx

Protectmaxx, a next-generation fraud detection API, is an innovative and capable solution that has proven its worth in safeguarding organizations over the years. Protectmaxx is greatly strengthened by artificial intelligence and machine learning, resulting in excellent precision, high card acceptance rates, and low chargeback rates.
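Purely as an illustration of how a fraud detection API is typically wired into a checkout flow, the hypothetical Python snippet below posts transaction details and acts on the returned decision. The endpoint, field names, and response format are assumptions for the sake of the example, not Protectmaxx's actual interface.

# Hypothetical checkout integration with a fraud-detection API. The URL,
# request fields, and response schema are illustrative assumptions, not
# the actual Protectmaxx interface.
import requests

def assess_transaction(order: dict) -> str:
    response = requests.post(
        "https://api.example.com/fraud/assess",  # placeholder endpoint
        json={
            "order_id": order["id"],
            "amount": order["amount"],
            "currency": order["currency"],
            "card_bin": order["card_bin"],
            "ip_address": order["ip_address"],
        },
        timeout=5,
    )
    response.raise_for_status()
    verdict = response.json()  # e.g. {"decision": "review", "score": 0.87}
    if verdict["decision"] == "deny":
        return "block"            # refuse the payment outright
    if verdict["decision"] == "review":
        return "manual_review"    # route to a fraud analyst
    return "accept"               # low risk: let the payment proceed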

If you found this whitepaper useful

If you found this whitepaper informative, please share it with your colleagues. If you'd like to chat with one of our experts about your payment fraud concerns, please give us a call on +31 10 798 9501.

About us

We believe buying and selling digital goods online should be effortless.
With over 25 years of experience, we know all about fraud, payments and selling digital goods online. Our team of 85 revenue geeks is working 24/7 to make it simple and safe to buy and sell digital goods.
Headquarters in Rotterdam
5 Offices across Europe

Let's make it happen.
Say hello!

Contact us and one of our Revenue Geeks will get back to you within 24 hours.