A lot has happened in the field of artificial intelligence (AI) in recent months. The chatbot ChatGPT from OpenAI, which has been available since last November, is causing a stir. The text-based dialogue system is based on machine learning and answers questions in natural language.
The program promises advantages in many areas – such as automated customer support that answers end users' questions quickly and efficiently, fast access to information, and a simpler path from prototypes to full conversational applications.
In January, Microsoft also presented its new AI “Vall-E”. The speech synthesis model can imitate human voices: a recording of just three seconds of the original voice is sufficient. The AI simulates the human voice very accurately and can even mimic the speaker’s emotional emphasis.
However, the developers of both systems are aware that their AI models do not only offer advantages. As the popularity of such programs increases, so does the potential for fraud. Chatbots in particular can be misused to launch malware attacks, perfect phishing attempts or steal identities. The possibilities are manifold.
BioCatch has analyzed which fraud scenarios the use of AI models could make possible in the future.
Phishing and social engineering using AI
ChatGPT uses natural language processing (NLP). Cyber criminals could exploit this for phishing and social engineering campaigns: e-mail conversations can be convincingly imitated, free of the telltale grammatical and spelling errors. A natural flow of language builds trust with potential victims – the supposed bank employee who asks the customer by e-mail to enter their account details for verification appears authentic thanks to the natural language. In this way, fraudsters can easily steal data or take over entire accounts.
Such forms of fraud are difficult for banks to detect, because the cases described involve “real” customers who initiate the transfer themselves. But these diverse fraud scenarios are not the only problem that the use of AI poses for banks and financial institutions.
To protect against such risks, organizations must have solid security measures in place. These include, above all, regular security updates and multi-factor authentication. They should also take further steps to protect their chatbots from malicious actors.
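As a minimal sketch of what a multi-factor authentication check can look like in practice, the following Python example verifies a time-based one-time password (TOTP) with the open-source pyotp library. The user store, issuer name, and secret handling are simplified assumptions for illustration, not a production design.

```python
import pyotp

# One secret per user, generated once at enrollment; stored here in a plain
# dict purely for illustration (a real system would use a secure store).
user_secrets = {"alice": pyotp.random_base32()}

def provisioning_uri(username: str) -> str:
    """Return an otpauth:// URI the user scans with an authenticator app."""
    totp = pyotp.TOTP(user_secrets[username])
    return totp.provisioning_uri(name=username, issuer_name="ExampleBank")

def verify_second_factor(username: str, code: str) -> bool:
    """Check the six-digit code the user enters after their password."""
    totp = pyotp.TOTP(user_secrets[username])
    # valid_window=1 tolerates small clock drift (one 30-second step).
    return totp.verify(code, valid_window=1)
```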
Another challenge: combating money laundering
Combating money laundering is also a challenge, and often an expensive one. Here, the first transfer usually goes undetected: either the monitoring system overlooks it, or the “customer” or the anti-money-laundering (AML) analyst confirms the transaction as unsuspicious. With the help of AI-supported chatbots like ChatGPT, money launderers can generate conversations that appear to concern legitimate business activities but in reality serve to disguise money transfers. This makes it increasingly difficult for financial institutions to identify common patterns of money laundering activity.
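To make the detection side more concrete, here is a deliberately simple, hypothetical rule in Python that flags one classic money laundering pattern – “structuring”, i.e. several deposits kept just below a reporting threshold within a short period. Real AML monitoring is far more sophisticated; the threshold, window, and count here are illustrative assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical parameters - real reporting thresholds vary by jurisdiction.
REPORTING_THRESHOLD = 10_000.0
WINDOW = timedelta(days=7)

def flags_structuring(deposits: list[tuple[datetime, float]],
                      min_count: int = 3) -> bool:
    """Flag an account if at least min_count deposits just below the
    reporting threshold fall within a single rolling time window."""
    near_threshold = sorted(
        ts for ts, amount in deposits
        if 0.8 * REPORTING_THRESHOLD <= amount < REPORTING_THRESHOLD
    )
    for i in range(len(near_threshold) - min_count + 1):
        if near_threshold[i + min_count - 1] - near_threshold[i] <= WINDOW:
            return True
    return False

# Example: three 9,500 deposits within five days trigger the rule.
d = datetime(2023, 3, 1)
print(flags_structuring([(d, 9_500.0),
                         (d + timedelta(days=2), 9_500.0),
                         (d + timedelta(days=5), 9_500.0)]))  # True
```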
Another problem is the recruitment of unsuspecting people as money mules. Many of these accounts are opened by individuals who believe they have found a lucrative part-time job. Those affected often do not know that they are acting as money launderers, using their accounts for criminal activities or making them available for that purpose, because the scammers pose as legitimate companies and promise quick money. And with ChatGPT, the supposed job advertisement and the subsequent recruitment process can be made even more convincing.
Behavioral biometrics can help
Behavioral biometrics can play an important role in detecting fraud attempts and money laundering. By analyzing user behavior such as typing speed, keystroke patterns, and mouse movements, a user’s normal behavior can be profiled. Based on this profile, the software can recognize whether it is actually the registered user at work or a scammer. Many other fraud attempts can be detected in the same way – as can accounts that are being set up for later use in money laundering.
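As a rough sketch of the idea behind behavioral biometrics (not any vendor’s actual method), the following Python example builds a per-user baseline from the intervals between keystrokes and flags a session whose typing rhythm deviates strongly from that baseline. The single feature and the z-score threshold are illustrative assumptions; real systems combine many more signals.

```python
from statistics import mean, stdev

def build_profile(enrollment_sessions: list[list[float]]) -> tuple[float, float]:
    """Baseline from known-good sessions: mean and spread of the
    inter-keystroke intervals (in seconds)."""
    intervals = [iv for session in enrollment_sessions for iv in session]
    return mean(intervals), stdev(intervals)

def is_anomalous(session: list[float],
                 profile: tuple[float, float],
                 z_threshold: float = 3.0) -> bool:
    """Flag a session whose average typing rhythm deviates strongly
    from the user's baseline."""
    baseline_mean, baseline_std = profile
    z = abs(mean(session) - baseline_mean) / baseline_std
    return z > z_threshold

# Example: this user typically types with ~120 ms between keys ...
profile = build_profile([[0.11, 0.13, 0.12, 0.10],
                         [0.12, 0.14, 0.11, 0.13]])
# ... so a much slower, hesitant session (e.g. someone copying in
# unfamiliar account details) stands out clearly.
print(is_anomalous([0.45, 0.50, 0.48, 0.52], profile))  # True
```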