BOT MANAGEMENT TOOLS TO PROTECT AGAINST FAKE ACCOUNT CREATION AND FRAUD

HOW EFFECTIVE BOT MANAGEMENT TOOLS CAN PREVENT FAKE ACCOUNT PROBLEMS

Preventing Identity Theft, Fake Registrations and Fraudulent Accounts

WHAT ARE FAKE ACCOUNTS?

“A fake account is a profile or identity, usually created by bots with malicious intent to deceive others, that registers and gains access to your customer domain under a false identity.”

Preventing Fake Accounts in the Cloud

Fake accounts, also known as impostor accounts, are not just deceptive in themselves: they are often a tell-tale sign of imminent fraud, such as a bot-activated account takeover attack poised to commit fraud or steal IP.


The rise of fake accounts on websites and APIs has become a significant concern for businesses and individuals alike. These malicious accounts are often overlooked or dismissed as harmless. In reality, fake accounts can act like sleeper cells, lying dormant until they are activated by bots for a whole range of nefarious cybercrimes.


VerifiedVisitors is committed to helping you protect your platform and users from the risks posed by fake account creation and activation. In this comprehensive guide, we will present you with advanced techniques and strategies to detect and prevent the damage from fake accounts effectively.


Fake Account Creation


Why you can't just ignore fake accounts

WHY FAKE ACCOUNTS?

A fake account is a profile or identity, usually created by bots with malicious intent to deceive others, that registers and gains access to your customer domain under a false identity. These accounts imitate genuine users: they can pass CAPTCHA fields and even two-factor authentication, and are used to spread misinformation, engage in fraud, or carry out cybercrimes. Once they successfully gain customer status in your protected service, they exploit their new-found status to commit cybercrimes.

Identity Theft, Fake Registrations and Fraudulent Activities

PROBLEMS CAUSED BY FAKE ACCOUNTS

Fake accounts, once registered and trusted, act like sleeper cells, staying under the radar until activated by bots.


Malicious actors use fake accounts to impersonate real users, leading to identity theft and fraudulent activities within your platform.


Fake user accounts used to be easy to spot. Automated bots would register for your service, but would leave tell-tale signs that they were fake. For example, they would use Gmail or Hotmail addresses padded with numbers, e.g. 2234208080@hotmail.com, and they would all register in a very short space of time, at very regular intervals, in a way that is obviously automated. The hackers rely on the fact that many sites don't vet each and every registration, and they know they can simply hide in the volume of daily registrations. Today the bots are much smarter and the registrations tend to look a lot more organic: the volume of registrations is spread out, and they use more natural email addresses, or even - see below - fake emails built from stolen IDs or fullz.
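To make the old-school pattern concrete, here is a minimal illustrative sketch in Python, with entirely hypothetical registration data, of the kind of heuristic that used to catch these sign-ups: numeric free-mail addresses and suspiciously regular registration intervals. Modern bots, as noted above, rarely trip checks this simple.

```python
from datetime import datetime
from statistics import pstdev

# Hypothetical registration records: (email address, ISO timestamp).
REGISTRATIONS = [
    ("2234208080@hotmail.com", "2024-05-01T10:00:00"),
    ("2234208081@hotmail.com", "2024-05-01T10:00:30"),
    ("2234208082@hotmail.com", "2024-05-01T10:01:00"),
    ("jane.doe@example.com",   "2024-05-01T14:37:12"),
]

FREE_MAIL = {"hotmail.com", "gmail.com", "outlook.com", "yahoo.com"}

def looks_machine_generated(email: str) -> bool:
    """Flag free-mail addresses whose local part is mostly digits."""
    local, _, domain = email.lower().partition("@")
    digit_share = sum(ch.isdigit() for ch in local) / max(len(local), 1)
    return domain in FREE_MAIL and digit_share > 0.7

def interval_spread(timestamps: list[datetime]) -> float:
    """Standard deviation of the gaps between registrations, in seconds.
    A value near zero means suspiciously regular, automated sign-ups."""
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) if gaps else float("inf")

times = sorted(datetime.fromisoformat(ts) for _, ts in REGISTRATIONS)
print("suspicious emails:", [e for e, _ in REGISTRATIONS if looks_machine_generated(e)])
print("gap std-dev (s):  ", interval_spread(times))
```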


How do cybercriminals make money from fake accounts?

One of the most common methods is to trigger the dormant fake accounts into action on a specific event that exploits gaps or inconsistencies in the value chain. For example, high-value items with a likely resale value several times the purchase price can be exploited for commercial gain: Taylor Swift tickets and Nike sneakers have both seen high-profile bot attacks. Sports betting accounts are also riddled with fake accounts. Criminals trigger each of the fake accounts to place relatively small bets from the actual game-side - knowing that the TV coverage arrives a few seconds later - giving a tiny window to exploit the odds in the cybercriminals' favour. In both cases the bots are using an account that transacts; they do actually purchase the items, but the purchase itself is triggered by a bot, similar to the popular sniping software for online auction sites such as eBay that automates last-second bids.
Fake accounts on media sites hide behind the paywall as legitimate users. If a single account were created and simply scraped the online content, it would be quickly discovered. Instead, the hackers use many accounts to distribute the content load amongst them, so each one looks like a normal, albeit dedicated and conscientious, reader. In sum, however, each and every day the entire contents behind the paywall are being scraped by the bot army of fake accounts. While the actual commercial loss may not be massive, the brand damage and loss of trust can be catastrophic.
How fake accounts create breaches of trust, and lead to a host of issues, from content theft to sophisticated fraud.

FAKE ACCOUNTS PROBLEMS

Social Media Impostors

One prevalent form of fake account, as Elon Musk found out with X (formerly Twitter), is the fake social media impostor. These bot-activated impostors mimic well-known personalities, celebrities, or public figures to attract followers, spread false information, or scam unsuspecting users. The accounts often use bots to transmit fake messages - e.g. touting the latest crypto schemes.

Spam and Phishing Attacks from Fake Emails

Fake email accounts can be used to send spam messages or phishing emails to genuine users, compromising their personal information and data security. These accounts are created to trick recipients into believing that the sender is someone they know or a legitimate organization. They can also be used to create fake accounts, register for a service, and then use bots to take advantage of the registered service. For example, a bot uses the fake ID, or fullz data, to register and then, once inside the protected domain, scrapes data from behind a firewall, steals data about other users, or buys high-value tickets and other items for re-sale.

Distorted Analytics and Vanity Metrics

Fake accounts can skew user engagement metrics and analytics, making it challenging to obtain accurate data for decision-making. This is a real problem in a corporate setting: if one of the company OKRs sets marketing goals for registrations, engagement or user growth, removing the fake accounts can hit the reported metrics hard.

Investing in bot detection software offers several invaluable benefits for your online business:


1. Improved Website Security


Bot detection software acts as a digital shield, protecting your website from unauthorized access and potential cyberattacks. It identifies and blocks harmful bots, ensuring that your sensitive data remains secure.


2. Enhanced User Experience


Malicious bots can slow down your website, leading to a poor user experience. Many sites punish legitimate users by putting in place extensive two-factor authentication, hard-to-use CAPTCHAs, pointless rate limits across the entire website, and other obstacles for all users, regardless of who they are. By employing bot detection software, you can optimize your website's performance and provide a seamless browsing experience for all your visitors.


3. Decreased Server Maintenance and Lower Costs


Bot spikes can cause all sorts of maintenance issues. The sudden volumes of traffic can overwhelm servers, or cause issues as your elastic compute auto-scales and fires up new nodes, adding extra expense and maintenance. Verifying bots using log data is time-consuming and fraught with identity issues. Malicious bots often impersonate the common, legitimate bots that you do want to allow on your site. Many sites have whitelisted bots disguised as legitimate services, only to have their entire website crawled, or worse.

Malicious bots are at the heart of fake account creation.

HOW FAKE ACCOUNTS ARE CREATED

Bot Automated Account Creation

Most fake accounts are generated through automated bots. Hackers and cybercriminals use sophisticated bots to create many fake profiles rapidly.

Use of Bots

Bots are programmed to register online for a service, and the fake account creation is made to look like a real user signing up. Bots can use CAPTCHA farms - outsourced human CAPTCHA solvers - or are programmed to pass the CAPTCHA themselves. Many sites offer an alternative audio CAPTCHA for accessibility reasons; bots can easily use AI voice recognition to listen to and complete the audio CAPTCHA. They can also be programmed to respond to SMS authentication requests using a range of mobile numbers assigned to the fake registrations.

Identity Theft

In some cases, fake accounts are established through identity theft. Stolen personal information is used to create convincing profiles that can be leveraged for nefarious purposes. See the blog article on how hackers obtain millions of fake ID packages containing comprehensive stolen Personally Identifiable Information (PII).

Traditional Methods for Identifying Fake Accounts mostly fail - witness Elon Musk and Twitter's rate limiting of users.

HOW COMPANIES IDENTIFY FAKE ACCOUNTS TODAY

Email Verification and Validation


✅ Implement a robust email verification process during account registration to ensure that users provide valid and active email addresses.
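For reference, a minimal sketch of what that verification step typically involves - a syntax check, a disposable-domain blocklist, and a signed confirmation link - is shown below in Python. The blocklist and addresses are hypothetical, and only the standard library is assumed.

```python
import hashlib
import hmac
import re
import secrets

SECRET_KEY = secrets.token_bytes(32)                         # server-side signing key
DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.example"}  # hypothetical blocklist
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def basic_checks(email: str) -> bool:
    """Cheap syntax check plus a disposable-domain blocklist lookup."""
    if not EMAIL_RE.match(email):
        return False
    domain = email.rsplit("@", 1)[1].lower()
    return domain not in DISPOSABLE_DOMAINS

def confirmation_token(email: str) -> str:
    """HMAC-signed token emailed to the address; clicking the link proves the inbox exists."""
    return hmac.new(SECRET_KEY, email.lower().encode(), hashlib.sha256).hexdigest()

def verify_token(email: str, token: str) -> bool:
    return hmac.compare_digest(confirmation_token(email), token)

email = "new.user@example.com"
if basic_checks(email):
    token = confirmation_token(email)      # embed this in the confirmation link
    print("verified:", verify_token(email, token))
```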

Why email verification FAILS

❌ The email passes verification; it's just a fake address.


✅ Phone Number Verification with OTP: Utilize phone number verification through a One-Time Password (OTP) to add an extra layer of security and prevent automated fake account creation.
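A minimal illustrative OTP flow looks something like the following Python sketch; the in-memory store and phone number are hypothetical, and a real deployment would use a persistent store plus an SMS gateway.

```python
import hashlib
import secrets
import time

OTP_TTL_SECONDS = 300
_pending: dict[str, tuple[str, float]] = {}   # phone -> (hash of code, expiry time)

def send_otp(phone: str) -> str:
    """Generate a 6-digit code, store only its hash, and hand the code to the SMS gateway."""
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[phone] = (hashlib.sha256(code.encode()).hexdigest(),
                       time.time() + OTP_TTL_SECONDS)
    return code   # in production this goes to the SMS provider, never back to the client

def verify_otp(phone: str, code: str) -> bool:
    digest, expires = _pending.get(phone, ("", 0.0))
    if time.time() > expires:
        return False
    return hashlib.sha256(code.encode()).hexdigest() == digest

code = send_otp("+15555550100")               # hypothetical test number
print("accepted:", verify_otp("+15555550100", code))
```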

Why OTP verification FAILS

❌ The cybercriminals use burner phones and respond to the OTP.


✅ CAPTCHA Challenges: Integrate CAPTCHA challenges at critical junctures to deter bots and automated scripts from registering fake accounts.

Why CAPTCHA FAILS

❌ The bots pass CAPTCHA.
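For context, the server-side check that the bots and CAPTCHA farms defeat is usually just a token verification call. Here is a minimal sketch, assuming Google reCAPTCHA v2 and the requests library; the secret key is a placeholder.

```python
import requests

RECAPTCHA_SECRET = "replace-with-your-secret-key"   # issued by the CAPTCHA provider

def captcha_passed(client_token: str, client_ip: str) -> bool:
    """Ask the reCAPTCHA siteverify endpoint whether the token posted by the browser is valid."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": client_token, "remoteip": client_ip},
        timeout=5,
    )
    return resp.json().get("success", False)
```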


✅ Authenticator Two-Factor Authentication: Using a robust authenticator is probably the most effective way of managing and authenticating users.
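As an illustration, authenticator-based TOTP enrolment and verification can be sketched in a few lines of Python using the pyotp library; the user name and issuer below are hypothetical.

```python
import pyotp

# Enrolment: generate a per-user secret and show it to the user as a provisioning URI / QR code.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(name="user@example.com", issuer_name="ExampleShop")
print("scan this in the authenticator app:", uri)

# Login: check the 6-digit code the user types against the stored secret.
totp = pyotp.TOTP(secret)

def totp_valid(user_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift
    return totp.verify(user_code, valid_window=1)

print("accepted:", totp_valid(totp.now()))   # simulates a user with a synced authenticator
```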

Why Authentication FAILS

❌ Forcing authenticator registration on all users may be a real barrier to access. User acceptance may be strong for banking, financial services, and security use cases, but for general e-commerce user resistance is high and conversions may well be severely affected. Don't forget, bots can also be programmed to pass the authenticators! Setting up fake authenticated accounts with multiple emails is more work, but still easy to do.


✅ Manually Inspecting Logs and User Profiles: It’s true that fake accounts may have incomplete or inconsistent profile information, raising suspicions about their authenticity.

Why Manually Inspecting Logs FAILS

❌ However, inspecting these manually when you have more than a few thousand registrations is a herculean task. Real humans don't enjoy registering for services either, and their profiles will likely contain all sorts of inconsistencies.


✅ IP Address Monitoring: Manually monitoring IP addresses for suspicious activity, such as multiple account registrations from the same IP, can reveal potential bot involvement.
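A minimal sketch of that kind of per-IP check, using a hypothetical registration log, might look like this:

```python
from collections import Counter

# Hypothetical registration log: (account id, source IP address)
registrations = [
    ("acct_1001", "203.0.113.7"),
    ("acct_1002", "203.0.113.7"),
    ("acct_1003", "203.0.113.7"),
    ("acct_1004", "198.51.100.23"),
]

MAX_ACCOUNTS_PER_IP = 2   # arbitrary example threshold

per_ip = Counter(ip for _, ip in registrations)
suspicious = [ip for ip, count in per_ip.items() if count > MAX_ACCOUNTS_PER_IP]
print("IPs over threshold:", suspicious)    # ['203.0.113.7']
```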

Why IP monitoring FAILS

❌ However, doing this manually with many thousands of accounts is very hard, and most sophisticated bots rotate IPs constantly.


✅ Unusual Behaviour: Accounts that exhibit unusual patterns of activity are a good tell-tale sign of fakes. However, this is again very difficult to spot, unless the account is massively abusing your service.

Why manual detection FAILS

❌ Account usage genuinely does vary enormously across our human population. People use their accounts in all sorts of different ways. Without bot management tools, this is a really complex problem to solve manually.


✅ Rate Limiting: Enforce rate limiting for account registration attempts to prevent bots from overwhelming your system with numerous requests.
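For illustration, a simple sliding-window limiter on registration attempts per IP could be sketched as follows; the thresholds are arbitrary examples, and the in-memory store stands in for whatever your stack provides.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # look-back window
MAX_SIGNUPS = 5         # registration attempts allowed per IP per window

_attempts: dict[str, deque] = defaultdict(deque)

def allow_signup(ip: str) -> bool:
    """Sliding-window limit on registration attempts from a single IP."""
    now = time.time()
    window = _attempts[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                # drop attempts that fell outside the window
    if len(window) >= MAX_SIGNUPS:
        return False
    window.append(now)
    return True

print([allow_signup("203.0.113.7") for _ in range(7)])
# [True, True, True, True, True, False, False]
```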

Why rate limiting is a total FAIL

❌ Punishing all users because you can't identify the bots is a total fail. The rate limiting doesn't stop the bots, it just slows down all your traffic.


Auditing a sample of accounts to understand the scope of fake account creation

VERIFICATION CHECKS AND REGISTRATION AUDITS

Auditing and verifying the identity of individuals manually, or performing spot audits on a sample of your registrations, can definitely help.

The manual audit may reveal that fake accounts do exist in the sample, and from there, it’s possible to size and scope the extent of the issue based on the total registrations and sample size.
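As a worked example with hypothetical numbers: if 32 of a 400-account sample turn out to be fake, the proportion can be extrapolated across the full registration base, with a rough confidence margin.

```python
import math

total_registrations = 100_000    # hypothetical figures, purely for illustration
sample_size = 400
fakes_in_sample = 32

p = fakes_in_sample / sample_size                      # 8% of the sample is fake
margin = 1.96 * math.sqrt(p * (1 - p) / sample_size)   # approximate 95% confidence margin

estimate = p * total_registrations
low, high = (p - margin) * total_registrations, (p + margin) * total_registrations
print(f"estimated fake accounts: {estimate:,.0f} (roughly {low:,.0f} to {high:,.0f})")
# estimated fake accounts: 8,000 (roughly 5,341 to 10,659)
```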

Strict user identity verification for sensitive transactions or activities that require higher security measures is certainly possible. However, manual methods, such as instigating a phone call with each new registration to verify each and every account, are a lot of work. They only make sense if you have a relatively small number of high-value accounts.

A manual check may reveal that the registration mobile credentials are real, but the customer never picks up the phone. In all likelihood you suspect the mobile is just a burner phone used for fake account creation. The problem is that many legitimate customers don't respond to mobile messages either.

The good news is that a thorough audit really should pick up some fakes. Positively identifying a ‘genuine’ fake is very helpful, as you can then understand more about the fake account creation process and uncover a pattern that may identify other accounts. Manual verification can absolutely make business sense, depending on the context.

Zero Tolerance at the Edge of Network is invaluable

PREVENTING FAKE ACCOUNTS WITH MODERN TECHNIQUES

We’ve seen how most of the traditional methods fail. How can VerifiedVisitors and bot management software help to fight the bots and prevent fake accounts?

Zero Trust at the Edge of Network

Stopping bots at the edge of network greatly reduces the cybercriminals' ability to create and then activate bot-based attacks.

Reconnaissance Bots

Creating fake accounts and then activating them automatically takes some time, even with bot automation. Hackers will typically send out some test scripts to see if their automation bots are picked up and stopped on the victim's target site. Zero tolerance on core login paths acts as a strong deterrent: with no obvious vulnerabilities, the cybercriminal may well decide to move on to an easier target. So many sites are wide open, so why swim against the tide?



PROTECTING ACCOUNT TAKE-OVER PATHS

If cybercriminals get through the first line of defence because your company doesn't have a zero-trust-at-the-edge policy, they can move on to automating the actual fake account creation. Making it impossible to automate account creation in the first place makes life very difficult for the hackers and may well encourage them to try elsewhere.


VerifiedVisitors has an account takeover rule that specifically looks for automated activity on login paths. These detectors compare the actual hardware and software platform, using a device fingerprint, against the stated user agent type, and look at behavioural factors as well as the digital provenance of the traffic. All these factors are then taken into account by the machine learning to produce an overall threat score. As the bots try to hit the accounts page they are blocked at the network edge, before they even have a chance to create an account.
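Purely as an illustration of the idea - not VerifiedVisitors' actual model - combining such signals into a score might be sketched like this, with made-up weights and features:

```python
from dataclasses import dataclass

@dataclass
class VisitorSignals:
    ua_matches_fingerprint: bool   # stated user agent consistent with the device fingerprint?
    datacenter_asn: bool           # does the traffic originate from a hosting provider?
    has_human_input: bool          # any mouse, touch or keyboard telemetry observed?
    requests_per_minute: float     # request rate on the login path

def threat_score(s: VisitorSignals) -> float:
    """Toy weighted score in [0, 1]; a trained model would learn weights from real traffic."""
    score = 0.0
    score += 0.35 if not s.ua_matches_fingerprint else 0.0
    score += 0.25 if s.datacenter_asn else 0.0
    score += 0.25 if not s.has_human_input else 0.0
    score += min(s.requests_per_minute / 120.0, 1.0) * 0.15
    return round(score, 2)

bot = VisitorSignals(ua_matches_fingerprint=False, datacenter_asn=True,
                     has_human_input=False, requests_per_minute=90.0)
human = VisitorSignals(ua_matches_fingerprint=True, datacenter_asn=False,
                       has_human_input=True, requests_per_minute=3.0)
print(threat_score(bot), threat_score(human))   # roughly 0.96 vs 0.0
```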


At this stage, the cybercriminals will know they are up against a sophisticated adaptive defence that specifically prevents automated login access.


The cybercriminal is now faced with a tough choice. They are forced to create the fake accounts entirely manually, one by one. However, they also now know that when it comes to logging in and triggering the fake accounts to commit the fraud events, they are likely to be stopped.


It's an arms race as the bots fight back

TRIGGERING FAKE ACCOUNTS

If the cybercriminals decide to go ahead and manually create fake accounts, they will also probably bypass all the traditional two-factor authentication methods on your site. Typically fraudsters use a few hundred fake accounts. They use these to disguise their logins as they attempt brute-force attacks, and also to launch their own attacks from the fake accounts.


It's an arms race as the bots fight back

FAKE ACCOUNT BEHAVIOURAL ANALYSIS

Adopting a zero-tolerance mindset means presuming that our accounts may at some point suffer from breaches. Behavioural analysis of the common paths and navigation routes across websites uses machine learning at massive scale to understand the regular user distributions, and then looks for the tell-tale signs of automated bot activity. Simple bots give the game away by navigating too quickly from one destination to the next. They may lack mouse trails, or even fake the mouse trails in an unconvincing way. Behavioural analysis is a very powerful way of detecting anomalous activity within a mass of traffic, or even on an API service that is plagued with data mining. Once an audit discovers a fake account, the behavioural analysis can be used to look for similar behavioural patterns from the known fakes.
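A very small illustration of one such signal - inter-page timing that is too fast to be human - is sketched below with hypothetical session data; real behavioural models combine many more features.

```python
# Hypothetical page-view timestamps (seconds from session start) for two sessions.
sessions = {
    "session_a": [0.0, 0.4, 0.9, 1.3, 1.8],       # sub-second hops between pages
    "session_b": [0.0, 12.5, 41.0, 75.2, 140.8],  # human-paced reading
}

MIN_HUMAN_GAP_SECONDS = 2.0   # arbitrary example floor

def too_fast(timestamps: list[float]) -> bool:
    """Flag sessions where the median gap between page views is below a human floor."""
    gaps = sorted(b - a for a, b in zip(timestamps, timestamps[1:]))
    median = gaps[len(gaps) // 2]
    return median < MIN_HUMAN_GAP_SECONDS

for name, ts in sessions.items():
    print(name, "automated?", too_fast(ts))
# session_a automated? True
# session_b automated? False
```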


Audits

REGULAR SECURITY AUDITS

At VerifiedVisitors we do love an audit. Time and time again, we've seen how a simple bot audit can pick up fake account creation, leading to exposure of fraudulent accounts. Conduct regular security audits of your accounts to identify vulnerabilities and potential weaknesses in your platform's security measures. Adopting a zero-trust mentality and audit-testing each of your protection layers is a tried and trusted formula for success.

Conclusion

CONCLUSION - ZERO TRUST FOR BOTS IN THE CLOUD

Fake accounts are a real problem for many companies. Adopting a zero-trust mindset and undertaking audits at each stage of your security layers will help you identify fake accounts. VerifiedVisitors offers a free 30-day bot audit, and has sophisticated, state-of-the-art bot management tools to stop bot-based fake account activity at the edge of network, before any harm can be done.


HOW VERIFIEDVISITORS PROTECTS YOU

VerifiedVisitors protects all your endpoints - APIs and websites across the hybrid cloud - with no software to install, in milliseconds. Adding zero tolerance at the network edge greatly increases your overall security footprint, preventing bot attacks and fraud before they can do harm.


FAKE ACCOUNTS: FAQ

How can I detect fake accounts on my company website?

Fake accounts are best detected by undertaking a comprehensive audit, combined with a capable bot detection platform to identify and remove the threats.

How can we spot fake accounts on our company web site?

Suspicious profiles, unusual behavior, and verification checks can all be used to look for fake accounts. However, this is difficult if not impossible at large scale. We suggest an initial audit to verify and scope the size of the problem, followed by the integration of a bot detection platform that automatically picks up anomalous patterns of fake behaviour at scale.

Can fake accounts be used for cyber-attacks?

Yes, fake accounts can be leveraged to launch cyber-attacks, compromising the security and privacy of individuals and organizations.

How can AI and machine learning help combat fake accounts?

AI and machine learning algorithms can efficiently identify and disable fake accounts by analyzing patterns and behaviors. Many vendors think they can just sprinkle AI and ML like fairy dust on top of a basic fingerprint detection system. In reality, the machine learning needs to learn from your traffic to know what is anomalous for your visitors. Ensure the ML is optimised and learns from your traffic rather than just applying global behavioural rules.
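As a simple illustration of learning what is normal from your own traffic, an unsupervised anomaly detector such as scikit-learn's IsolationForest can be fitted to per-account features; the features and figures below are hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-account features derived from *your* traffic:
# [logins per day, pages per session, mean seconds between requests, distinct IPs used]
accounts = np.array([
    [1,   8,  45.0,  1],
    [2,  12,  60.0,  1],
    [1,   6,  38.0,  2],
    [3,  10,  52.0,  1],
    [40, 400,  0.8, 25],   # scripted account hammering the paywalled content
])

model = IsolationForest(contamination=0.2, random_state=0).fit(accounts)
print(model.predict(accounts))   # -1 marks anomalies; expect the last account to be flagged
```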