The Cyber Security landscape is being transformed by artificial intelligence (AI), with both the industry and its adversaries adopting the technology to sharpen the tools of their trade.
So says Hilbert Long, General Manager Sales Europe at CYBER1 Solutions. “Concurrently, the democratisation of AI is now in full swing, thanks to the emergence of cutting-edge generative AI tools such as OpenAI’s ChatGPT and DALL-E. These tools empower everyday users with the capabilities of artificial intelligence.”
Within just five days of its November 2022 launch, ChatGPT attracted over a million users eager to put its AI prowess to the test, he explains. People are exploring the potential of these generative AI tools across various domains, including coding, essay writing, artistic creation, blueprint design, package artwork, virtual world and avatar creation in the metaverse, and even troubleshooting production errors.
They are also iteratively refining their prompts and instructions to extract ever better results.
“However, while the positive applications of generative AI are incredibly promising, there is also the sobering reality of potential misuse and harm,” adds Long. “As users delved into this innovative tool, some discovered its capacity to generate malicious software, craft phishing emails, and propagate propaganda. These same tools could also produce false information and push viewpoints that are linked to misinformation campaigns.”
No planning or management
And as generative AI gains popularity and adoption, the question of who bears responsibility for addressing the associated risks is becoming a pressing concern, Long explains. In fact, over 1,100 signatories, including prominent figures such as Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology, recently published an open letter calling for an immediate pause, of at least six months, on the training of AI systems more powerful than GPT-4 at all AI labs.
The letter argues that the necessary planning and management are notably absent. Instead, it says, AI labs have become embroiled in a reckless race to develop and deploy increasingly powerful digital intelligences that no one, not even their creators, can fully comprehend, predict, or reliably control.
To address these risks, the letter advocates developing powerful AI systems only once there is confidence that their impact will be positive and their risks manageable.
Nevertheless, while regulation is considered a crucial step, there is no guarantee that even rapid and bold regulatory action will effectively guard against AI misuse. Comparable situations, such as the drug trade and cryptocurrencies, show that legislation alone may not be enough to halt illicit activity.
“Furthermore, while many within the industry are working on regulations, malicious actors remain unconcerned about or unbound by these regulations,” Long says. “They seize every opportunity to exploit the potential of AI for malicious purposes. This underscores the fact that AI is not only altering the landscape of the cyber arms race but also elevating it to a nuclear level of risk and competition.”
AI-enhanced malware
To begin with, attackers armed with AI can now automate their malicious tools and activities, including identity theft, phishing, data exfiltration, fraud, and more, at a pace and with a precision beyond human capability.
“AI-enabled attacks happen when bad actors leverage AI as a tool to aid in the development of malware or to execute cyberattacks. These attacks have become increasingly common and encompass activities such as malware creation, data poisoning, and reverse engineering,” he adds.
In addition, advanced conversational chatbots such as ChatGPT, powered by large language models (LLMs) with natural language understanding (NLU), are significantly amplifying the potential to automate AI-facilitated malware attacks and make them more effective.
The imitation game
As an illustration, Long says an attacker might employ a chatbot to compose more convincing phishing messages that lack the typical indicators of deception, such as the grammar, syntax, and spelling errors that are easily detected.
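As a simplified illustration of why such error-based heuristics break down, consider the minimal sketch below: a naive, rule-based filter that scores a message purely on surface slips. The misspelling list and threshold are hypothetical, chosen for illustration only, not a filter Long describes; the point is that a fluent, LLM-drafted message sails straight past this kind of check.

    # Naive phishing heuristic: flag messages by counting surface errors.
    # The misspelling list and threshold are hypothetical, for illustration.
    COMMON_MISSPELLINGS = {"recieve", "acount", "verfiy", "passwrd", "urgen"}

    def surface_error_score(message: str) -> int:
        """Count crude error indicators in a message."""
        words = (w.strip(".,:!?") for w in message.lower().split())
        misspellings = sum(1 for w in words if w in COMMON_MISSPELLINGS)
        return misspellings + message.count("  ")  # add double-space slips

    def looks_like_phishing(message: str, threshold: int = 2) -> bool:
        return surface_error_score(message) >= threshold

    # A clumsy, human-written lure trips the filter...
    print(looks_like_phishing("Urgen: verfiy your acount to recieve funds"))  # True
    # ...while fluent, machine-drafted text passes as legitimate.
    print(looks_like_phishing("Please verify your account details via the secure link below."))  # False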
“In the context of ChatGPT specifically, its ability to generate code underscores the growing menace posed by AI-driven malware. In April this year, a security researcher at Forcepoint demonstrated zero-day malware capable of covert data exfiltration, built solely through ChatGPT prompts.”
He stresses that although ChatGPT has demonstrated the capability to generate working functions, it currently lacks robust mechanisms for error checking and prevention in production-style environments. “Right now, ChatGPT lacks the adversarial reasoning malware developers need, such as anticipating the countermeasures defenders might employ to thwart an attack while advancing their own objectives. However, this could change overnight.”
This is why Long says companies must remain vigilant about the threats stemming from AI-driven hacking tools and implement the measures necessary to fortify their networks against them.
Another major concern is the rapid advancement of deepfake technology, which is becoming ever more convincing in its mimicry of reality. Almost anyone can now produce counterfeit images, videos, audio, and text that appear deceptively genuine.
Updating protocols
“Given the formidable capabilities of these AI-fuelled tools, it is crucial for entities across all sectors to arm themselves against these dangers,” Long adds. “This highlights the urgent need for businesses to update their security protocols to stay one step ahead of evolving threats.”
He says organisations must first be made aware of the perils posed by AI hacking tools, and then take measures to safeguard their networks against these emerging threats. One approach is to leverage AI tools of their own to augment their security strategies.
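As a hedged sketch of what that could look like in practice, the example below applies unsupervised anomaly detection to login telemetry. scikit-learn’s IsolationForest is an assumed tooling choice, not one Long names, and the features and figures are synthetic and illustrative only.

    # A minimal sketch of AI-augmented monitoring: learn a baseline of
    # normal login behaviour, then flag events that deviate from it.
    # Assumed stack: numpy + scikit-learn; all data here is synthetic.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Hypothetical features per login: [hour of day, failed attempts, MB downloaded]
    baseline = np.column_stack([
        rng.normal(10, 2, 500),   # logins cluster around business hours
        rng.poisson(0.2, 500),    # failed attempts are rare
        rng.normal(50, 15, 500),  # typical download volumes
    ])

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(baseline)

    # A 3am login with repeated failures and a bulk download should stand
    # out against the learned baseline; a routine login should not.
    print(model.predict([[3, 8, 900]]))  # [-1] -> flagged as anomalous
    print(model.predict([[11, 0, 45]]))  # [ 1] -> treated as normal

Paired with analyst review, the same pattern extends naturally to richer telemetry, such as endpoint, email, and network-flow data.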
“This means choosing a Cyber Security partner that possesses the necessary expertise to strike a balance between these intelligent tools, streamlined processes, and the invaluable human experience and knowledge that have proven key to mitigating security incidents and expediting the identification and handling of threats,” Long concludes.