The rapid emergence of OpenAI’s ChatGPT has been one of the biggest stories of the year, with the potential impact of generative AI chatbots and large language models (LLMs) on cybersecurity a key area of discussion.
There’s been a lot of chatter about the security risks these new technologies could introduce — from concerns about sharing sensitive business information with advanced self-learning algorithms to malicious actors using them to significantly enhance attacks.
Some countries, US states, and enterprises have ordered bans on the use of generative AI technology such as ChatGPT on data security, protection, and privacy grounds. Clearly, the security risks introduced by generative AI chatbots and LLMs are considerable.
However, generative AI chatbots can enhance cybersecurity for businesses in multiple ways, giving security teams a much-needed boost in the fight against cybercriminal activity.
Here are six ways generative AI chatbots and LLMs can improve security.
Vulnerability scanning and filtering
“We can anticipate that LLMs, like those in the Codex family, will become a standard component of future vulnerability scanners,” reads an AWS paper on the topic.
For example, a scanner could be developed to detect and flag insecure code patterns in various languages, helping developers address potential vulnerabilities before they become critical security risks.
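To make the idea concrete, here is a minimal sketch of what such a pattern-based scanner might look like. The rule names and regular expressions below are illustrative assumptions, not any vendor's actual rule set; a real LLM-assisted scanner would go well beyond simple pattern matching.

```python
import re

# Illustrative rules for a few insecure Python patterns.
# These names and regexes are assumptions for the sketch.
INSECURE_PATTERNS = {
    "use of eval()": re.compile(r"\beval\s*\("),
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    "shell=True subprocess": re.compile(r"shell\s*=\s*True"),
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for lines matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for finding, pattern in INSECURE_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, finding))
    return findings
```

Findings like these would be surfaced to developers during review, before the flagged code reaches production.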
As for filtering, generative AI models can explain and add valuable context to threat identifiers that might otherwise go unnoticed by human security personnel. For example, T1059.001 — a technique identifier within the MITRE ATT&CK framework — may be reported but unfamiliar to some cybersecurity professionals, prompting the need for a concise explanation.
ChatGPT can accurately recognise the code as a MITRE ATT&CK identifier and provide an explanation of the specific issue associated with it, which involves the use of malicious PowerShell scripts, the AWS paper read. It also elaborates on the nature of PowerShell and its potential use in cybersecurity attacks, offering relevant examples.
In May, OX Security announced the launch of OX-GPT, a ChatGPT integration designed to help developers with customised code fix recommendations and cut-and-paste code fixes, including explanations of how the code could be exploited by hackers, the possible impact of an attack, and the potential damage to the organisation.
Reversing add-ons, analysing APIs of PE files
Generative AI/LLM technology can be used to help build rules and reverse popular add-ons based on reverse engineering frameworks like IDA and Ghidra, says Matt Fulmer, manager of cyber intelligence engineering at Deep Instinct. “If you’re specific in the ask of what you need and compare it against MITRE ATT&CK tactics, you can then take the result offline and refine it for use as a defense.”
LLMs can also help analyse how applications communicate, with the ability to analyse the APIs of portable executable (PE) files and indicate what they may be used for, he adds. “This can reduce the time security researchers spend looking through PE files and analysing API communication within them.”
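A hedged sketch of how that triage might look in practice: given the API names a PE file imports (as extracted by a parsing tool such as pefile), group those commonly associated with process-injection tradecraft and draft a question for an LLM. The API grouping and function names below are illustrative assumptions, not an exhaustive or authoritative list.

```python
# Windows APIs often seen together in process-injection tradecraft.
# This set is illustrative, not exhaustive.
INJECTION_APIS = {"VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"}

def triage_imports(api_names: list[str]) -> dict[str, list[str]]:
    """Split a PE file's imported APIs into 'suspicious' and 'other' buckets."""
    suspicious = sorted(set(api_names) & INJECTION_APIS)
    other = sorted(set(api_names) - INJECTION_APIS)
    return {"suspicious": suspicious, "other": other}

def build_prompt(api_names: list[str]) -> str:
    """Draft a question a researcher could send to an LLM for context."""
    buckets = triage_imports(api_names)
    apis = ", ".join(buckets["suspicious"] + buckets["other"])
    return (
        "Explain what a Windows executable importing the following "
        f"APIs may be used for: {apis}"
    )
```

The deterministic bucketing does the cheap filtering up front; the LLM is only asked to explain and contextualise, which is where it adds the most value.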
Threat hunting queries
Security defenders can enhance efficiency and expedite response times by leveraging ChatGPT and other LLMs to create threat-hunting queries, according to AWS.
By generating queries for malware research and detection tools like YARA, ChatGPT assists in swiftly identifying and mitigating potential threats, allowing defenders to focus on critical aspects of their cybersecurity efforts.
This capability proves invaluable in maintaining a robust security posture in an ever-evolving threat landscape. Rules can be tailored based on specific requirements and the threats an organisation wishes to detect or monitor in its environment.
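As a rough illustration of the kind of scaffold an analyst might ask an LLM to generate and then refine by hand, here is a minimal helper that assembles a YARA rule from a list of indicator strings. The rule name and structure are assumptions for the sketch; a production rule would add metadata, tighter conditions, and testing against benign samples.

```python
def build_yara_rule(name: str, indicators: list[str]) -> str:
    """Assemble a minimal YARA rule matching any of the given strings."""
    lines = [f"rule {name}", "{", "    strings:"]
    for i, indicator in enumerate(indicators):
        lines.append(f'        $s{i} = "{indicator}"')
    lines += ["    condition:", "        any of them", "}"]
    return "\n".join(lines)
```

Generated rules like this are a starting point: the analyst still validates the match logic and tunes the condition before deploying the rule to a detection pipeline.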
AI can improve supply chain security
Generative AI models can be used to address supply chain security risks by identifying potential vulnerabilities of vendors. In April, SecurityScorecard announced the launch of a new security ratings platform to do just this through integration with OpenAI’s GPT-4 system and natural language global search.
Customers can ask open-ended questions about their business ecosystem, including details about their vendors, and quickly obtain answers to drive risk management decisions, according to the firm.
Examples include “find my 10 lowest-rated vendors” or “show me which of my critical vendors were breached in the past year” — questions that SecurityScorecard claims will yield results that allow teams to quickly make risk management decisions.
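Behind a natural-language question like “find my 10 lowest-rated vendors” sits a simple structured query. A hedged sketch of that underlying logic, with vendor records and field names assumed purely for illustration:

```python
def lowest_rated_vendors(vendors: list[dict], n: int = 10) -> list[str]:
    """Return the names of the n vendors with the lowest security score."""
    ranked = sorted(vendors, key=lambda v: v["score"])
    return [v["name"] for v in ranked[:n]]
```

The value of the natural-language layer is that risk teams get this answer without writing the query themselves.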
Detecting generative AI text in attacks
LLMs not only generate text; work is also underway on detecting and watermarking AI-generated text, a capability that could become a common function of email protection software, according to AWS.
Identifying AI-generated text in attacks can help to detect phishing emails and polymorphic code, and it’s realistic to assume that LLMs could easily detect atypical sender addresses or their corresponding domains, along with being able to check whether underlying links in text lead to known malicious websites, AWS said.
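The link-checking part of that pipeline is a deterministic pre-filter that can run before any model is consulted. A minimal sketch, assuming a hypothetical in-memory blocklist (a real gateway would query a threat-intelligence feed):

```python
from urllib.parse import urlparse

# Hypothetical blocklist for the sketch; real systems would consult
# a continuously updated threat-intelligence feed instead.
KNOWN_MALICIOUS_DOMAINS = {"evil.example.com", "phish.example.net"}

def flag_malicious_links(urls: list[str]) -> list[str]:
    """Return the URLs whose host appears on the blocklist."""
    return [u for u in urls if urlparse(u).hostname in KNOWN_MALICIOUS_DOMAINS]
```

Messages containing flagged links can then be quarantined or passed to the LLM layer for a closer look at the surrounding text.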
Security code generation and transfer
LLMs like ChatGPT can be used to both generate and transfer security code. AWS cites the example of a phishing campaign that has successfully targeted several employees within a company, potentially exposing their credentials.
While it is known which employees have opened the phishing email, it is unclear whether they inadvertently executed the malicious code designed to steal their credentials.
“To investigate this, a Microsoft 365 Defender Advanced Hunting query can be utilised to find the 10 most recent logon events performed by email recipients within 30 minutes after receiving known malicious emails. The query helps to identify any suspicious login activity that may be related to compromised credentials.”
Here, ChatGPT can provide a Microsoft 365 Defender hunting query to check for login attempts by the compromised email accounts, which helps to block attackers from the system and clarifies whether the user needs to change their password. It is a good example of reducing time to action during cyber incident response.
Based on the same example, suppose you face the same problem and find the Microsoft 365 Defender hunting query, but your system does not work with the KQL query language. Instead of searching for the right example in your desired language, you can perform a programming-language style transfer.
“This example illustrates that the underlying Codex models of ChatGPT can take a source code example and generate the example in another programming language. It also simplifies the process for the end user by adding key details to its provided answer and the methodology behind the new creation,” said AWS.
Leaders must ensure the secure use of generative AI chatbots
Like many modern-day advancements, AI and LLMs can amount to a double-edged sword from a risk perspective, so it’s important for leaders to ensure their teams are using offerings safely and securely, says Chaim Mazal, CSO at Gigamon.
“Security and legal teams should be collaborating to find the best path forward for their organisations to tap into the capabilities of these technologies without compromising intellectual property or security.”
Generative AI is based on outdated, structured data, so take it as a starting point only when evaluating its use for security and defense, says Fulmer. “For example, if using it for any of the benefits mentioned above, you have it justify its output. Take the output offline and have humans make it better, more accurate, and more actionable.”
Generative AI chatbots/LLMs will ultimately enhance security and defenses naturally over time, but utilising AI/LLMs to help, not hurt, cybersecurity postures will all come down to internal communications and response, Mazal says.
“Generative AI/LLMs can be a means for engaging stakeholders to address security issues across the board in a faster, more efficient way. Leaders must communicate ways to leverage tools to support organisational goals while educating them about the potential threats.”
AI-powered chatbots also need regular updates to remain effective against threats, and human oversight is essential to ensure LLMs function correctly, says Joshua Kaiser, AI technology executive and CEO at Tovie AI.
“Additionally, LLMs need contextual understanding to provide accurate responses and catch any security issues and should be tested and evaluated regularly to identify potential weaknesses or vulnerabilities.”