AI-Driven Prevention and Detection Mechanisms
As social engineering attacks grow more sophisticated, defenders have (sensibly) turned to Artificial Intelligence (AI) and Machine Learning (ML) to counter them. AI-based tools promise to sift through enormous data sets, find faint patterns, and do so far faster than any human analyst. This section examines how AI/ML can help identify, prevent, and mitigate social engineering attacks, covering practical tools and frameworks as well as work published between 2020 and 2025.
Machine Learning for Phishing Email Detection: One of the earliest AI applications in this area is email security. Contemporary email providers and secure email gateway vendors use machine learning models to detect phishing and spam. These models consider attributes of a message such as headers and sender reputation, as well as content features (such as language patterns), to identify malicious messages that might slip past traditional rule-based filters. Google's Gmail, for instance, employs AI-based filters that scan all inbound mail; according to Google, its AI-led defenses block 99.9% of spam, phishing, and malware emails, diverting around 15 billion unwanted messages per day. This illustrates the sheer scale of the phishing problem, which would be all but unmanageable for human analysts, and how effective AI can be at combating it. Natural language processing (NLP) lets detectors recognize phishing emails by the phrases they contain (for example, identifying what makes a "phishing lure" in text). Machine learning can also identify phishing URLs by dissecting features of the link (length, presence of abnormal tokens, and so on) or even rendering the page in a sandbox to check whether it is a fake login page. Academic research in recent years has investigated deep learning models (such as recurrent neural networks or transformers) for phishing email classification, including detection of spear phishing attacks whose language is more contextual and targeted. These models learn subtle differences between, say, an email that appears to come from a CEO but does not (differences in writing style or metadata) and a genuine communication. The result is far more adaptive detection that can spot new phishing tactics a signature-based approach would miss.
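To make the URL-analysis idea above concrete, here is a minimal sketch of lexical feature extraction for phishing URLs. The specific features, token list, and additive weights are illustrative assumptions, not a published model; a real detector would feed such features into a trained classifier rather than a hand-tuned score.

```python
import re
from urllib.parse import urlparse

# Tokens commonly abused in phishing links (illustrative list, not exhaustive).
SUSPICIOUS_TOKENS = {"login", "verify", "secure", "account", "update", "confirm"}

def url_features(url: str) -> dict:
    """Extract simple lexical features of the kind phishing-URL classifiers use."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "length": len(url),
        "num_subdomains": max(host.count(".") - 1, 0),
        "has_ip_host": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "has_at_symbol": "@" in url,
        "suspicious_tokens": sum(t in url.lower() for t in SUSPICIOUS_TOKENS),
        "uses_https": parsed.scheme == "https",
    }

def risk_score(url: str) -> int:
    """Toy additive score; a real system would use a trained model instead."""
    f = url_features(url)
    score = 0
    score += 2 if f["has_ip_host"] else 0      # raw IP instead of a domain name
    score += 2 if f["has_at_symbol"] else 0    # '@' can hide the real host
    score += 1 if f["length"] > 75 else 0      # very long URLs are suspicious
    score += min(f["num_subdomains"], 3)       # deep subdomain nesting
    score += f["suspicious_tokens"]            # credential-themed keywords
    score += 0 if f["uses_https"] else 1       # plain HTTP
    return score
```

In practice these features (and many more, such as domain age and hosting reputation) would be inputs to a model trained on labeled phishing and benign URLs.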
Some enterprise AI systems also perform anomaly detection on email behavior: for example, if an employee's account abruptly begins emailing unfamiliar recipients at odd hours (a hint that the account is compromised and being operated by a phisher), the system can alert or intervene automatically.
User Behavior Analytics and Anomaly Detection: Rather than just scanning communication content, AI is also being applied to model normal user behavior and flag anomalies that could signal a social-engineering-led compromise. This falls within User and Entity Behavior Analytics (UEBA). Machine learning models train on logs of user activity (logins, access patterns, transactions) and profile what a typical day looks like for individual users or roles. If an account that is usually quiet begins moving 10 GB of data at 2 a.m., or if the credentials of an employee who normally works on-site are used to access the network from another country, these systems raise a red flag. Such anomalies can indicate an attacker leveraging credentials stolen through phishing or another social engineering attack. By detecting misuse early, AI-powered monitoring can limit the damage (for example, by automatically freezing the account or prompting for re-authentication when suspicious activity is detected). Identity Threat Detection and Response (ITDR) takes a similar approach: the system observes how credentials are normally used and flags activity that deviates from that baseline. Unit 42 now advises organizations to incorporate behavioral analytics and ITDR proactively to catch the credential misuse and lateral movement that frequently follow a social engineering compromise. The early warning these technologies give defenders can be enough to respond before stolen credentials are fully exploited, effectively containing the attack.
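The baselining idea behind UEBA can be sketched in a few lines. This toy profiler learns each user's usual login hours and countries and flags deviations; real UEBA products build far richer statistical models over many signals, so treat the class name, thresholds, and reason labels here as invented for illustration.

```python
from collections import defaultdict
from datetime import datetime

class LoginProfiler:
    """Toy UEBA sketch: learn each user's usual login hours and countries,
    then report why a new login deviates from the learned profile."""

    def __init__(self):
        self.hours = defaultdict(set)      # user -> set of observed login hours
        self.countries = defaultdict(set)  # user -> set of observed countries

    def observe(self, user: str, ts: datetime, country: str) -> None:
        """Record one benign login during the training/baselining period."""
        self.hours[user].add(ts.hour)
        self.countries[user].add(country)

    def is_anomalous(self, user: str, ts: datetime, country: str) -> list:
        """Return a list of reasons the login looks anomalous (empty = normal)."""
        reasons = []
        if country not in self.countries[user]:
            reasons.append("new_country")
        # Allow +/- 1 hour around any previously seen login hour (with midnight wrap).
        seen = self.hours[user]
        if not any(abs(ts.hour - h) <= 1 or abs(ts.hour - h) >= 23 for h in seen):
            reasons.append("unusual_hour")
        return reasons
```

A 2 a.m. login from a never-seen country would return both reasons, the kind of event that could trigger an automatic account freeze or re-authentication prompt as described above.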
Malicious Call and Chat Detection: Another frontier for AI/ML is detecting social engineering in voice calls and chat conversations. As vishing and deepfake audio attacks rise, researchers have proposed voice analysis algorithms to identify voice spoofing or anomalous speech. AI can analyze vocal tone and sentence structure during a phone call to catch likely scam attempts. For example, an experimental solution called the Anti-Social Engineering Tool (ASsET) detects telephone scams by examining the semantic content of a conversation. ASsET and similar methods use NLP to identify "scam signatures": particular patterns of dialogue or sets of phrases distinctive to social engineering calls (analogous to how antivirus products use malware signatures). Given enough dialogue from phone scams (audio or transcripts) and clustering/classification algorithms over the transcripts, such systems can flag when a call follows a known scam script (e.g., fake IRS agents or tech support). This is a fertile research area, and commercial call-filtering apps are starting to employ AI to block robocalls and suspected vishing attempts. Similarly, chatbots and messaging apps have begun using AI content filters to detect suspicious or fraudulent messages in live chats. Some banking chat services, for instance, incorporate AI that can sense when a user appears to be reading from a script or following a scammer's instructions (flagging coercion cues or highly unusual requests).
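The scam-signature idea can be illustrated with a minimal phrase-matching sketch over a call transcript. The signature names, phrase lists, and threshold below are invented for illustration; ASsET and research systems use richer semantic NLP rather than literal substring matches.

```python
# Hypothetical scam signatures: each maps a scam family to telltale phrases.
SCAM_SIGNATURES = {
    "irs_impersonation": ["arrest warrant", "back taxes", "legal action"],
    "tech_support": ["remote access", "your computer is infected", "install this tool"],
    "gift_card": ["gift card", "read me the code", "keep this confidential"],
}

def match_signatures(transcript: str, threshold: int = 2) -> list:
    """Return the scam families whose phrase count in the transcript
    meets the threshold (a crude stand-in for semantic matching)."""
    text = transcript.lower()
    hits = {
        name: sum(phrase in text for phrase in phrases)
        for name, phrases in SCAM_SIGNATURES.items()
    }
    return [name for name, count in hits.items() if count >= threshold]
```

A call-filtering app built on this idea would run speech-to-text first, then score the rolling transcript and warn the user mid-call when a known script is detected.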
AI for Social Engineering Prevention: Detection and prevention go together, and AI is also being used to make users more aware and empowered. One interesting trend is that security awareness training itself is becoming AI-based: anti-phishing training services can use machine learning to devise targeted phishing email tests, or generate more realistic phishing simulations outright. Organizations can build resilience by inoculating users with AI-generated fake phishing emails, coupled with feedback. Email clients with AI assistants can also proactively warn users within a message. For example, an AI might read an email on a user's behalf and display a notice such as: "This email requests a password and exhibits characteristics of known phishing; use caution." (As discussed later, attackers are also attempting to manipulate AI assistants directly, a trend called prompt injection, to evade these defenses.) At a larger scale, AI can maintain blacklists and reputation systems more dynamically: services use machine learning to scour the web for phishing sites, fake social media profiles, and malicious content, then take them down or warn users before harm is done.
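A phishing-simulation service of the kind described above boils down to templating plus per-user tracking. The sketch below shows the skeleton; the templates, the `training.example.com` tracking URL, and the campaign scheme are all hypothetical (commercial platforms, and the generative variants mentioned above, produce far more tailored lures).

```python
# Hypothetical simulation templates with placeholders for personalization.
TEMPLATES = [
    ("Password expiry notice",
     "Hi {name}, your {company} password expires today. "
     "Visit {link} within 24 hours to keep your account active."),
    ("Shared document",
     "Hi {name}, a document was shared with you by the {company} finance team. "
     "Open it here: {link}"),
]

def generate_simulation(name: str, company: str, campaign_id: str,
                        template_idx: int = 0) -> dict:
    """Fill a template and embed a per-user tracking link so the training
    platform can record who clicked (hypothetical URL scheme)."""
    subject, body = TEMPLATES[template_idx % len(TEMPLATES)]
    link = f"https://training.example.com/t/{campaign_id}/{name.lower()}"
    return {"subject": subject, "body": body.format(name=name, company=company, link=link)}
```

Employees who click the unique link get immediate feedback and follow-up training, which is the inoculation loop the paragraph above describes.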
Tools, Frameworks, and Developments: The academic literature contains many research frameworks and prototypes. In 2021, a hybrid attention-based deep learning model combining convolutional and bidirectional LSTM networks (CNN-BiLSTM) with a random forest (RF-CNN-BiLSTM) was used to detect voice phishing in Korean, achieving over 99% accuracy in experiments. Another emerging trend poised to change defensive strategies is the use of large language models (LLMs) and generative AI. The same class of technology attackers use to create fake content is also available to defenders, making it possible to analyze and screen such fabricated material more effectively. Researchers are exploring LLMs that can understand the intent behind an email or message; an AI might "comprehend" that a piece of text is meant to scare the recipient into clicking a link and label it as likely phishing. AI is also being used to monitor transactions: banks have started using AI algorithms to detect and block suspicious transfers in real time (based on patterns of known scam techniques and unusual amounts, for instance), including cases where fraudsters socially engineer their victims into moving money.
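The transaction-monitoring idea can be sketched with a simple statistical check. This toy function flags a payment that is both far outside the account's historical amounts and headed to a never-seen payee, a common fingerprint of a socially engineered transfer; the threshold and reason labels are assumptions, and production systems use trained fraud models over many more signals.

```python
from statistics import mean, stdev

def flag_transaction(history: list, amount: float, payee: str,
                     known_payees: set) -> list:
    """Return reasons a payment looks suspicious (empty list = normal).

    history: past transaction amounts for this account.
    known_payees: payees this account has paid before.
    """
    reasons = []
    if payee not in known_payees:
        reasons.append("new_payee")
    if len(history) >= 2:
        mu, sigma = mean(history), stdev(history)
        sigma = sigma or 1.0  # avoid division by zero for flat histories
        if (amount - mu) / sigma > 3:  # more than 3 standard deviations above normal
            reasons.append("unusual_amount")
    return reasons
```

A bank could hold such a flagged transfer and contact the customer out-of-band, interrupting the scam while it is happening.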
Some specific applications and products include: email security AI (such as the machine learning models in Microsoft 365 Defender that spot BEC attempts by comparing an incoming email's writing style and metadata against historical messages), browser-based AI protections (browsers using machine learning to flag deceptive sites or fake login forms), and AI-assisted authentication (systems that assign risk scores to logins and, upon spotting a login from an unrecognized device, trigger extra verification when an ML model judges the activity risky). In industry, vendors such as Abnormal Security and Barracuda Sentinel advertise AI-based detection of social engineering (particularly BEC) through communication pattern modeling. Meanwhile, open-source projects and academic tools (e.g., ASsET) are emerging in areas such as telephone scam detection.
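The risk-scored authentication pattern mentioned above amounts to mapping a score to an action. This sketch uses fixed weights and invented thresholds purely for illustration; real products derive the score from a trained model over device, network, and behavioral signals.

```python
def login_action(known_device: bool, known_location: bool,
                 failed_attempts: int) -> str:
    """Toy risk-based authentication policy: map a simple risk score
    to allow / step-up / block (weights and cutoffs are illustrative)."""
    score = 0
    score += 0 if known_device else 2    # unrecognized device
    score += 0 if known_location else 2  # unfamiliar location
    score += min(failed_attempts, 3)     # recent failed logins, capped
    if score >= 5:
        return "block"
    if score >= 2:
        return "step_up_mfa"
    return "allow"
```

The middle tier is the key design choice: rather than hard-blocking on uncertainty, the system asks for additional verification, which defeats an attacker holding only a phished password.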
Lastly, AI is being tapped for incident response to social engineering. Upon detecting an attack, security orchestration platforms use AI-driven playbooks to automate actions such as quarantining phishing emails across all mailboxes, resetting compromised accounts, or even conversing with the attacker (via an AI chatbot) to stall them and gather intelligence. Such responses, automated thanks to AI's analytical speed, can contain a social engineering threat before it becomes a larger breach. At a high level, AI and ML have become key pillars of defense against social engineering: filtering phishing communications at scale, watching user behavior for unusual activity, and assisting in user training and awareness. With attacks rising in both volume and complexity, AI's ability to adapt and learn gives defenders a significant advantage in outmaneuvering social engineers.
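The playbook idea above can be made concrete as a function that turns one phishing alert into an ordered list of response actions. The alert fields and action names here are hypothetical stand-ins for the API calls a SOAR (security orchestration, automation, and response) platform would actually issue.

```python
def phishing_playbook(alert: dict) -> list:
    """Sketch of an automated phishing-response playbook: returns
    (action, target) pairs a SOAR platform would execute in order."""
    actions = []
    # Always pull the malicious message from every mailbox that received it.
    actions.append(("quarantine_email", alert["message_id"]))
    # If the victim entered credentials, treat the account as compromised.
    if alert.get("credentials_entered"):
        actions.append(("force_password_reset", alert["user"]))
        actions.append(("revoke_sessions", alert["user"]))
    # Block the sending domain at the gateway to stop follow-up waves.
    if alert.get("sender_domain"):
        actions.append(("block_domain", alert["sender_domain"]))
    return actions
```

Encoding the response this way is what lets the containment steps run in seconds rather than waiting for a human analyst to work through a checklist.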