What is New Account Fraud, and how do you prevent it?
Everyone is excited. Over the previous week the business has seen massive growth in new account registrations. Marketing has smashed all its quarterly OKR targets. The CEO is taking the credit, again. The CFO is waiting for these new accounts to convert into paid accounts, and has the upsell ratios all set up and ready to go in Excel. Investors have been informed in hushed tones. Even the founder is smiling, for once.
However, the tech team is suspicious. Something doesn't look quite right, but they aren't sure, and keep their heads down. The tech team has put in a sophisticated CAPTCHA - one of the new puzzles, uniquely generated each time, that only humans are supposed to be able to solve. The accounts seem unique; they have valid IDs and even mobile numbers. What could possibly go wrong?
It turns out that the marketing department can't put their finger on the referral source of the new registrations. They thought it must have been the new TikTok video, but that's not what the data shows. "Word of mouth," mumbles the Chief Marketing Officer.
New account fraud, a sophisticated form of cybercrime, involves the creation of fake accounts with the intent to deceive and exploit. These deceptive practices compromise the integrity of online platforms, leading to financial losses and reputational damage for businesses. They often start with a significant spike in registrations, or, more subtly, with a sudden re-activation of dormant accounts.
The company isn't large enough to support a dedicated fraud team, but the tech team's suspicions proved to be correct. The surge in registrations was caused by automated attacks from sophisticated bots capable of bypassing their swanky new puzzle CAPTCHA protection. Although the tech team still couldn't be sure, they took the precaution of performing a manual audit on a sample of the new batch of registrations. They couldn't connect with a single real customer during the audit.
New account fraud is committed for a whole variety of reasons depending on the nature of the accounts and the business. Here are some of the major ones that we have seen at VerifiedVisitors.
Bots operate at massive scale and are now inexpensive to run, even at very large volumes. Most people can't comprehend the scale of the bot traffic. The growth of Bots-as-a-Service (BaaS) platforms means that anyone with no experience or programming skills can create and deploy bots using millions of residential proxies that are very hard to detect.
For the hackers it's essentially a numbers game. Some of the dating app scandals have seen wealthy widows and older people scammed out of their entire life savings. Even if the attackers only get a 0.001% return on their bot farms, they are still making a massive profit. The cost of launching attacks is close to zero and the scale is massive, so the hackers have a good return on investment (ROI). The attacks carry on.
Simple bots are fairly easy to stop. However, the latest generation of bots, using Generative AI and sophisticated proxy platforms, is making life much harder. Common bots are stopped at the WAF layer, using IP reputation analysis or signature fingerprints from previous attacks. This means that the vast majority of bots can be easily prevented. The problem is the custom bots that are targeted at your website or domain. Although they make up a small percentage, these are the ones that cause all the damage, so it's misleading to look at numbers alone.
We can see in the diagram how simple bots using generic scripts are easily detectable. However, as we proceed up the Y axis to highly customisable bots, they become much more human-like and hide the traces of their digital provenance much more effectively. Here we can see advanced bots passing CAPTCHA and mimicking real user mouse trails to avoid detection. Using mobile proxies makes them harder still to spot: blocking a mobile gateway risks blocking potentially hundreds of thousands of legitimate mobile users, and the mobile farm devices are often real, which means they will pass a simple fingerprint test.
❌ Traditional IP reputation services - these fail for all but the most persistent dumb bots, known botnets, or rogue data centers. Blocking by country, e.g. Russia and China, is futile - the bots just rotate to a non-blocked country of origin.
❌ Old-school fingerprint detection - again, these techniques worked well to detect most bots, but fail against botnets, mobile and residential proxies using actual devices, or very sophisticated emulation of the fingerprint parameters.
❌ CAPTCHA, annoying puzzles, and other challenges - again, these old-school methods are easily bypassed by human CAPTCHA farms, and in some cases by using AI and image recognition. The latest puzzles, which get harder if you fail them, are perhaps the most annoying UX aberration since Microsoft Clippy. However, CAPTCHA will defeat most bots, so it's definitely worth having as part of your overall bot prevention strategy.
✅ Adding 2FA, such as mobile verification, is a significant deterrent. If your new account creation process can sustain the drop in registrations from legitimate signups who can't be bothered, this is always a wise thing to do. Just bear in mind that hackers using mobile proxies can automate passing the 2FA as well, without even resorting to SMS spoofing or other more advanced techniques. Adding more sophisticated MFA, such as a mobile authentication app, will provide a comprehensive level of account security, and if your user base can support it, it's an obvious way to go. However, for most B2C businesses it's too much of a user burden. Bear in mind that by enforcing rigid user verification, you are pushing that burden onto all of your users.
✅ An old-school audit is still one of the most effective ways to discover potentially fraudulent accounts and should definitely be done on a regular basis. However, this is time-consuming, manual, and happens AFTER the accounts are created. Sorting out the fake accounts can be a huge administrative burden.
✅ Monitoring your key login and registration statistics is critical. Detailed information from audit logs can greatly help to find and trace anomalous patterns manually. For example, recording last login dates, identifying dormant accounts, or tracking other suspicious cohorts will all help. The hackers won't know whether you get two new registrations a day or 2,000, so anomalies in new account registrations are likely to stand out. Identifying new account fraud begins with recognising irregularities in account creation patterns. Rapid, mass registrations from a single IP address or the use of disposable email addresses are red flags that demand immediate attention.
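As a minimal sketch of this kind of monitoring, the snippet below flags a sudden spike in daily registration counts and checks for disposable email domains. All the counts, thresholds, function names, and the (necessarily incomplete) domain blocklist are illustrative assumptions, not real data:

```python
from statistics import mean, stdev

# Hypothetical daily new-registration counts; the final day is the spike.
daily_signups = [42, 38, 51, 45, 40, 47, 39, 44, 41, 48, 43, 46, 40, 1250]

def is_anomalous(counts, z_threshold=3.0):
    """Flag the latest day if it deviates sharply from the trailing baseline."""
    baseline, latest = counts[:-1], counts[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    z = (latest - mu) / sigma if sigma else 0.0
    return z > z_threshold

# A simple disposable-email check against an illustrative blocklist.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

def suspicious_email(address):
    """True if the address uses a known disposable email domain."""
    return address.rsplit("@", 1)[-1].lower() in DISPOSABLE_DOMAINS
```

A real pipeline would run checks like these against audit logs on a schedule; the point is that a bot campaign that looks plausible account-by-account still leaves an obvious statistical footprint in aggregate.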
✅ Effective detection also hinges on monitoring user behaviour post-registration. Often the bots will never log in again, or they log in excessively at random times in the middle of the night. What is likely is that their behaviour post-registration will be markedly different from that of your legitimate new registrations.
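A sketch of those two post-registration signals - accounts that never return, and accounts that only log in during the small hours - assuming you record creation and login timestamps (the function names, grace period, and thresholds are hypothetical):

```python
from datetime import datetime, timedelta, timezone

def dormant_since_signup(created_at, login_times, now, grace_days=7):
    """Flag an account that has never logged in after a grace period."""
    if now - created_at < timedelta(days=grace_days):
        return False  # too new to judge either way
    return len(login_times) == 0

def mostly_night_logins(login_times, night_hours=range(1, 5), ratio=0.8):
    """Flag an account whose logins cluster in the small hours (server time)."""
    if not login_times:
        return False
    night = sum(1 for t in login_times if t.hour in night_hours)
    return night / len(login_times) >= ratio

# Hypothetical example: account created a month ago, never logged in since.
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
created = now - timedelta(days=30)
print(dormant_since_signup(created, [], now))  # -> True
```

Neither signal is conclusive on its own - plenty of real users sign up and drift away - but combined with registration anomalies and source data they narrow the audit down considerably.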
VerifiedVisitors uses AI to help identify and build a dynamic cohort risk model based on learning from your website traffic. We use zero trust at the network edge to identify bot traffic BEFORE it has a chance to cause damage.
On the far left we have the high-risk areas: known automated traffic hitting potentially vulnerable paths such as your user logins and registrations. On the far right we have the Verified Visitors - your legitimate repeat visitors - and the traffic that is managed or blocked.
Dividing the traffic dynamically into cohorts means that we can treat each potential risk in a different way. This has major benefits. Known risky behaviour is identified and acted on earlier. Known legitimate customers are trusted, but verified on a constant basis. Meanwhile, as new rules are created, our AI learns more and more about the nature of your traffic.
✅ Known users we've seen before, who have a unique virtual ID, are trusted but constantly verified. This means we don't inconvenience legitimate users unless their behavioural signature significantly changes. This is where additional 2FA can be applied intelligently. For example, if a user changes their country of login, or another major factor, the additional security checks won't be seen as arbitrary and random, but may actually be welcomed.
✅ Setting a managed good-bot list enables VerifiedVisitors to manage all the good bots and verify that they are genuine. Setting the good-bot list also defines the bots that aren't on the list - they are either fakes or unwelcome, and global rules can be easily applied accordingly.
✅ Known vulnerable paths and exploits targeted by bot traffic are subject to dynamic rules to block or prevent access.
✅ Finally, we have a cohort of "likely automated" traffic, which sits in the grey area. Our AI analysis is not conclusive; for example, the traffic may have failed some behavioural checks and have a slightly defective signature. This cohort of traffic can then be subject to direct targeting to determine its true origin. For example, VerifiedVisitors has a challenge page, which gives us a couple of seconds to collect additional telemetry; this will often resolve the bot-or-not question very quickly, or the site can fall back on a CAPTCHA or other validation method. The percentage of "likely automated" traffic varies heavily between sites, and the site owner usually has a very clear idea of what is acceptable. Over time the AI learns from each interaction and takes labelled data from the active verifications, which dramatically reduces the grey zone.
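To make the cohort idea concrete, here is an illustrative bucketing function - not VerifiedVisitors' actual model - that maps a per-visitor risk score to one of the cohorts described above. The thresholds and labels are hypothetical:

```python
def assign_cohort(risk_score, is_known_user=False, on_good_bot_list=False):
    """Map a visitor to a traffic cohort; thresholds are illustrative only."""
    if on_good_bot_list:
        return "verified good bot"
    if is_known_user and risk_score < 0.2:
        return "trusted (continuously verified)"
    if risk_score >= 0.8:
        return "known automated - block or manage"
    if risk_score >= 0.4:
        return "likely automated - challenge for more telemetry"
    return "allowed - keep learning"
```

In a real system the score itself would come from behavioural and network signals, and each challenge outcome would feed back as labelled training data, shrinking the "likely automated" band over time.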
Our AI platform continuously learns and applies adaptive security measures for each cohort. This ensures that any deviations from normal user behaviour trigger immediate responses, thwarting potential fraud attempts before they can do damage.
To see how our AI platform safeguards against new account fraud using a multi-faceted approach, see our demo here:
To see how our AI platform protects you from Account Takeover (ATO) attempts, see our article on Account Takeover here:
To see how the VerifiedVisitors AI platform protects against dormant sleeper accounts being reactivated, please see our article here on fake account creation.
To see how the VerifiedVisitors AI platform can protect your accounts, please head to our free trial.
Hackers use stolen personal details - names, addresses, contacts, and even credit card and banking details - to create new accounts, seemingly from a legitimate identity. They then seek to monetise the new accounts in a variety of ways. For example, they may use an account to log in to a retail site and use up bonus points that have a significant retail value. Consumer-based businesses can't afford to disrupt the user flow too much, and are thus subject to these types of attack.
VerifiedVisitors has a unique AI platform for classifying visitors according to their risk profile. Combining zero trust at the network edge with strong 2FA and other authentication for just that tiny percentage of suspicious users stops the fraud while letting the verified visitors pass with no additional friction.
New Account bot fraud is on the rise, with a significant increase in automated account creation across social media, dating, gaming, and sports betting sites, as well as financial services / loan specialist sites.
Businesses without a CISO or a fraud team have it tough. VerifiedVisitors offers an AI platform and a Virtual CISO tool that uses AI to enforce the best practices a full-time dedicated team would implement.