Passwords might seem like a relatively recent, internet-age phenomenon, but the first digital password dates all the way back to 1961. Other significant events that year: Soviet cosmonaut Yuri Gagarin became the first person to orbit the earth, construction began on the Berlin Wall in East Germany, and the Beatles played their very first show at the Cavern Club in Liverpool. The world has come a long way since 1961. And yet, after more than half a century of technological and societal progress, the humble password remains our go-to, frontline defense against cybercriminals.
Passwords have never offered very reliable protection from family, nosy colleagues, or – least of all – ambitious fraudsters. But the advent of readily available, easily usable artificial intelligence (AI) tools has made the digital password as we know it all but obsolete. Though it was created to accelerate creativity and innovation, generative AI also allows bad actors to circumvent password-based security, social-engineering their way into our digital banking accounts via deepfake videos, voice clones, and hyper-personalized scams.
A new survey of 600 fraud-management, anti-money laundering, and risk and compliance officials around the world found nearly 70% of respondents believed criminals were more adept at using artificial intelligence to commit financial crime than banks were at using the technology to stop it.
To combat this threat, financial institutions and banks must innovate.
The state of fraud and financial crime in 2024
The UK government currently estimates the cost of cybercrime at £27 billion per year. A new report from BioCatch, meanwhile, found more than half (58%) of businesses surveyed spent between $5 million and $25 million battling AI-powered threats in 2023, and 56% of the finance and security professionals surveyed said they saw increased financial crime activity last year. Worse still, nearly half expect financial crime to rise in 2024 and anticipate the total value of fraud losses to increase as well.
With the cybercrime threat landscape evolving by the day, it’s no surprise fraud-fighting professionals expect tougher challenges on the horizon. Already, we see cybercriminals launching sophisticated attacks on businesses, crafting convincing phishing emails, deepfake videos for social engineering, and fraudulent documents. They impersonate officials and our loved ones with chatbots and voice clones. And they create fake content to manipulate public opinion.
AI has rendered the senses we’ve relied on for thousands of years nearly obsolete when it comes to distinguishing the legitimate from the fraudulent. Financial institutions must develop new approaches to both keep up and fight back.
Zeroing in on zero trust
More than 70% of financial services and banking businesses identified the use of fake identities while onboarding new clients last year. In fact, 91% are already rethinking the use of voice verification given the risks of AI voice cloning. In this new age, even if something looks and sounds right, we can no longer guarantee it is.
The first step to verification in the age of AI is greater internal cooperation. More than 40% of professionals say their company handles fraud and financial crime in separate departments that do not collaborate. Nearly 90% also say financial institutions and government authorities need to share more information to combat fraud and financial crime. But the simple sharing of information is unlikely to be enough. This new age of AI-powered cybercrime requires protective measures able to distinguish between humanity and technology, legitimate and fraudulent.
Enter behavioral biometric intelligence.
The difference is human
Behavioral biometric intelligence uses machine learning and artificial intelligence to analyze both physical behavior patterns (mouse movements and typing speed, for example) and cognitive signals (hesitation, segmented typing, etc.) in search of anomalies. A deviation in user behavior – especially one that matches known patterns of criminal activity – is often a very good indication the online session is fraudulent. Once detected, these solutions can block the transaction and alert the appropriate bank officials in real time.
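To make the concept concrete, here is a minimal sketch of anomaly detection over behavioral session features, written in Python with scikit-learn. The feature set, sample values, and thresholds below are illustrative assumptions for this article, not BioCatch’s production model:

    # Illustrative sketch only: anomaly detection over hypothetical
    # behavioral features. Not a production fraud model.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Hypothetical per-session features: mean inter-keystroke interval (ms),
    # mean mouse velocity (px/s), hesitation pauses per minute, paste events.
    legitimate_sessions = np.column_stack([
        rng.normal(180, 25, 500),
        rng.normal(420, 60, 500),
        rng.normal(2.0, 0.8, 500),
        rng.poisson(0.2, 500),
    ])

    # Learn what "normal" looks like across this population of sessions.
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(legitimate_sessions)

    # A bot-like session: machine-steady typing, no hesitation, heavy pasting.
    suspect_session = np.array([[40.0, 900.0, 0.0, 6.0]])
    print(model.predict(suspect_session))            # -1 means anomalous
    print(model.decision_function(suspect_session))  # lower = more suspicious

In practice, a bank would score live sessions continuously and feed the anomaly score into a broader risk engine rather than blocking on any single signal.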
Behavioral biometric intelligence can also identify money mule accounts used in money laundering by monitoring behavioral anomalies and changes in activity trends. Research shows a 78% increase in money mule activity among people under the age of 21, while a third of financial institutions cite a lack of resources to control mule activity.
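The same logic applies at the account level. As a simplified illustration (the window sizes and threshold here are assumptions, not an industry standard), a long-dormant account that suddenly begins moving money every day can be flagged by comparing its recent transaction velocity against its own historical baseline:

    # Illustrative mule-detection heuristic: flag accounts whose recent
    # activity deviates sharply from their own historical baseline.
    import numpy as np

    def mule_risk_flag(daily_tx_counts, baseline_days=60, recent_days=7,
                       z_threshold=3.0):
        """True if recent transaction velocity is anomalous vs. baseline."""
        history = np.asarray(daily_tx_counts, dtype=float)
        baseline = history[-(baseline_days + recent_days):-recent_days]
        recent = history[-recent_days:]
        mu, sigma = baseline.mean(), baseline.std()
        sigma = max(sigma, 1e-9)  # dormant accounts have zero variance
        return (recent.mean() - mu) / sigma > z_threshold

    # A long-dormant account that abruptly starts transacting daily:
    activity = [0] * 67 + [5, 8, 7, 9, 6, 8, 10]
    print(mule_risk_flag(activity))  # True

A real system would layer this kind of trend signal with the session-level behavioral data described above rather than relying on it in isolation.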
Best of all, behavioral biometric intelligence is a non-intrusive and continuous method of risk assessment. It doesn’t slow or interrupt a user’s experience. It simply enhances security by reviewing the distinct ways people perform everyday actions. Traditional controls will still be required to fight fraud and financial crime, but layering in behavioral biometric intelligence can help banks to achieve both their fraud-prevention and digital-business objectives more effectively.
It seems unlikely we’ll ever fully abandon our trusty passwords, but by themselves they’re already dusty relics of the past. It’s imperative we add new solutions to our online banking security stack to ensure the protection of our personal information and digital interactions. Behavioral biometric intelligence needs to be one of those solutions, helping to keep us safe in this unpredictable new age.
This article was produced as part of TechRadar Pro’s Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc.