Zero Trust Bot Protection at the Edge
Early and proactive threat detection is the cornerstone of managing risk. Most CISOs will have a robust set of proactive practices in place, including continuous manual review of network activity and logs for suspicious behaviour, responding to potential security alerts, and understanding the signals of a potential data breach. Typical early detection methods can be split into three phases:
Phase 1 - Unusual Network Traffic
In this phase we look for IPs or ASN ranges we've never seen before, unexplained traffic spikes, anomalous usage patterns, bursts in bandwidth utilisation, and other changes in traffic behaviour we can't account for.
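To make this concrete, here's a minimal sketch of the kind of Phase 1 checks an analyst might script by hand: flagging ASNs that have never been seen before and hourly request counts that sit far above the historical baseline. The request records, ASN list and thresholds are hypothetical placeholders, not output from any particular tool.

```python
# Phase 1 sketch: unseen ASNs and unexplained traffic spikes.
from collections import Counter
from statistics import mean, stdev

known_asns = {13335, 15169, 16509}                       # ASNs seen in past traffic (hypothetical)
hourly_request_counts = [1200, 1350, 1280, 1310, 9800]   # most recent hour last

def unseen_asns(requests, known):
    """Return ASNs in the current window that have never been seen before."""
    current = Counter(r["asn"] for r in requests)
    return {asn: n for asn, n in current.items() if asn not in known}

def is_traffic_spike(counts, z_threshold=3.0):
    """Flag the latest hourly count if it sits more than z_threshold
    standard deviations above the historical mean."""
    history, current = counts[:-1], counts[-1]
    baseline, spread = mean(history), stdev(history)
    return current > baseline + z_threshold * spread

requests = [{"ip": "203.0.113.7", "asn": 64512}, {"ip": "198.51.100.9", "asn": 13335}]
print(unseen_asns(requests, known_asns))         # {64512: 1}
print(is_traffic_spike(hourly_request_counts))   # True
```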
Phase 2 - Anomalous System Behaviour
Phase 2 looks for performance or service degradation, system crashes, CPU or bandwidth bursts, and changes in the normal pattern of system resource usage.
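A rough sketch of a Phase 2 check might look like the following: comparing recent CPU samples against a quiet-period baseline and flagging only sustained bursts rather than one-off blips. The sample values and thresholds are illustrative; in practice they would come from your monitoring stack.

```python
# Phase 2 sketch: sustained CPU (or bandwidth) bursts against a baseline.
from statistics import mean

def sustained_burst(samples, baseline, factor=2.0, min_consecutive=3):
    """Return True if at least min_consecutive samples in a row exceed
    factor times the baseline, i.e. a burst rather than a one-off blip."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > factor * baseline else 0
        if streak >= min_consecutive:
            return True
    return False

cpu_samples = [22, 25, 24, 71, 78, 83, 80]    # percent utilisation, most recent last
cpu_baseline = mean(cpu_samples[:3])           # quiet-period average (~23.7%)
print(sustained_burst(cpu_samples, cpu_baseline))  # True
```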
Phase 3 - Log Data Abnormality
In phase 3 we look for changes in the ratio of failed to successful login attempts, atypical logins from new IPs, locations, or timezones, and unusual use of admin and other privileged accounts.
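As a sketch, the Phase 3 checks reduce to a couple of simple aggregations over login events: the ratio of failed to successful attempts, and successful logins from countries never seen before for an account. The event records and field names below are hypothetical.

```python
# Phase 3 sketch: failed-to-success login ratio and logins from new locations.
def failed_login_ratio(events):
    """Ratio of failed to successful login events; None if no successes yet."""
    failed = sum(1 for e in events if e["outcome"] == "failure")
    success = sum(1 for e in events if e["outcome"] == "success")
    return failed / success if success else None

def logins_from_new_locations(events, known_countries):
    """Successful logins originating from countries never seen for this account."""
    return [e for e in events
            if e["outcome"] == "success" and e["country"] not in known_countries]

events = [
    {"user": "admin", "outcome": "failure", "country": "US"},
    {"user": "admin", "outcome": "failure", "country": "US"},
    {"user": "admin", "outcome": "success", "country": "RO"},
]
print(failed_login_ratio(events))                       # 2.0
print(logins_from_new_locations(events, {"US", "GB"}))  # the RO login
```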
Why this approach fails if you're doing it manually using logs
The problem is that if you're poring over logs, trying to read patterns in the tea leaves of network abnormalities and anomalous behaviour, it's probably already too late. The webserver will have crashed, and data may have been breached. Some of these attacks are also very hard to spot in logs, even with the best SIEM tools and analytics. Many account takeover (ATO) attacks use bots to disguise and hide the data breach itself.
What is needed instead is to move these practices to the edge of the network, where we can adopt a zero-tolerance policy and proactively prevent these attacks before they ever hit our infrastructure.
How does VerifiedVisitors work?
Edge of Network Detectors
Our detectors go to work on all the edge-of-network traffic in the cloud, before it hits your website. We examine hundreds of threat signals, spanning behavioural, digital provenance, user agent, browser and fingerprint data, to provide a comprehensive platform for detecting potential account takeover and malicious activity. Our detectors work in the background to categorise threats into cohorts, from high risk all the way to known and verified threats. The platform generates automatic rules for each traffic type that are updated dynamically over time according to the risk. The total threat risk surface is presented in the console, as shown below.
When you flip the chart icon at the top right, the view changes to the actual behavioural tracking of the user cohorts, so you can clearly see the anomalous data, without having to dig into the logs and read the tea leaves.
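To illustrate the cohort idea, here's a rough sketch of how individual detector signals might be rolled up into a risk cohort. This is purely illustrative and is not VerifiedVisitors' actual scoring model; the signal names, weights and thresholds are invented for the example.

```python
# Illustrative roll-up of detector signals into named risk cohorts.
SIGNAL_WEIGHTS = {
    "datacentre_ip": 0.3,
    "headless_fingerprint": 0.4,
    "impossible_travel": 0.2,
    "spoofed_user_agent": 0.3,
}

def risk_cohort(signals, verified=False):
    """Map a set of fired detector signals to a named cohort."""
    if verified:                       # e.g. a known, verified crawler
        return "verified"
    score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)
    if score >= 0.7:
        return "high_risk"
    if score >= 0.3:
        return "suspect"
    return "low_risk"

print(risk_cohort({"datacentre_ip", "headless_fingerprint"}))  # high_risk
print(risk_cohort(set(), verified=True))                       # verified
```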
Once you've identified the potentially malicious traffic and the paths you really want to protect, VerifiedVisitors suggests rules you can put in place to mitigate the threats. These dynamic rules change according to the threat type, which means that if the IP ranges, user agents, fingerprints or other attributes of the attack change, it makes no difference: the rule adapts to the new threat conditions. For example, if you set zero tolerance on your login, admin and shopping-cart paths, all malicious bot activity on those paths is blocked at the network edge.
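In pseudocode terms, a zero-tolerance rule on protected paths keys off the visitor's risk cohort rather than static attributes such as IPs or user agents, so the decision holds even when those attributes change. The rule format and the decide() helper below are hypothetical sketches, not VerifiedVisitors' actual rule syntax.

```python
# Sketch of a cohort-based, zero-tolerance rule on sensitive paths.
ZERO_TOLERANCE_PATHS = ("/login", "/admin", "/cart")

def decide(request_path, cohort):
    """Zero tolerance on protected paths: block any suspect or high-risk
    cohort there; elsewhere block only high-risk traffic."""
    protected = request_path.startswith(ZERO_TOLERANCE_PATHS)
    if protected and cohort in ("suspect", "high_risk"):
        return "block"
    if cohort == "high_risk":
        return "block"
    return "allow"

print(decide("/login", "suspect"))      # block
print(decide("/products", "suspect"))   # allow
```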
Stopping the automated attacks has another major benefit: it removes the spikes and abnormalities, leaving only real, verified visitor traffic. Anomalous data is much easier to spot in this clean, verified data set.