A language-generating AI model called ChatGPT, available for free, has taken the internet by storm. While AI has the potential to help IT and security teams become more efficient, it also enables threat actors to develop malware.

In this interview with Help Net Security, Daniel Spicer, Chief Security Officer of Ivanti, explains what this technology means for cybersecurity.

What are the reasons to be concerned about the application of AI to cybersecurity?

The tech industry has focused on creating generative AI that responds to a command or query to produce text, video, or audio content. This type of AI cannot analyze and contextualize data and information to provide understanding.

Currently, the value of generative AI, like ChatGPT and DALL-E, is skewed in favor of threat actors. Generative AI gives you exactly what you asked for, which is useful when writing phishing emails. Information security needs an AI that can take information, enhance it with additional context, and draw a conclusion based on its understanding.

This is why the use cases addressed by generative AI are extremely attractive to threat actors. An obvious concern with the application of AI by threat actors is social engineering. AI makes it possible to create a huge volume of sophisticated phishing emails with minimal effort. It can also create incredibly realistic fake profiles. Even places that were once considered reasonably bot-free, like LinkedIn, now have compelling profiles that even include profile pictures. And we’re just starting to see the impacts.

We have also already seen threat actors use ChatGPT to develop malware. Although assessments of the quality of ChatGPT’s code-writing ability are mixed, generative AI that specializes in code development can speed up malware development. Eventually, we’ll see it help exploit vulnerabilities faster – hours after a vulnerability is disclosed instead of days.

On the other hand, AI has the potential to help IT and security teams become more efficient and effective, enabling automated or semi-automated detection and remediation of vulnerabilities, as well as prioritization based on risk. This makes AI capable of analyzing data very promising for IT and security teams facing resource constraints. Unfortunately, this type of tool does not yet exist, and when it does, it may be complicated to implement because of the training needed for it to understand what “normal” looks like in a particular environment.
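To make the risk-based prioritization idea concrete, here is a minimal sketch in Python. The field names, weights, and CVE IDs are illustrative assumptions, not any particular product’s scoring model.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str              # placeholder IDs below, not real CVEs
    cvss_score: float        # base severity, 0.0-10.0
    exploit_available: bool  # is a public exploit known?
    asset_criticality: int   # assumed scale: 1 (low) to 5 (business-critical)

def risk_score(v: Vulnerability) -> float:
    """Combine severity, exploitability, and asset value into one score.

    The weighting here is an illustrative assumption; a real risk-based
    model would be tuned to the environment it protects.
    """
    score = v.cvss_score * v.asset_criticality
    if v.exploit_available:
        score *= 1.5  # actively exploitable issues jump the queue
    return score

vulns = [
    Vulnerability("CVE-0000-0001", cvss_score=9.8, exploit_available=False, asset_criticality=2),
    Vulnerability("CVE-0000-0002", cvss_score=7.5, exploit_available=True, asset_criticality=5),
]

# Highest risk first: the lower-CVSS flaw on a critical, exploitable
# asset outranks the "critical" CVSS score on a low-value host.
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v.cve_id, round(risk_score(v), 1))
```

The design point is that exploitability and asset value, not raw severity alone, should drive the remediation queue.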

The industry must now focus on building AI to help defenders analyze and interpret massive amounts of data. Until we see huge improvements in the ability of AI tools to understand, attackers will continue to have the advantage as today’s tools meet their requirements.

ChatGPT has controls in place to prevent misuse. Are these checks enough to keep cybercriminals at bay?

In a word, no. So far, the controls ChatGPT has put in place are ineffective. For example, researchers have found that the way you phrase a question to ChatGPT significantly alters its response, including its willingness to reject malicious requests. By splitting a request into smaller prompts, it is possible to get a generative AI tool to produce each stage of a malware attack piece by piece. A positive in this scenario is that ChatGPT does not currently write good code – but that will change.

The current deployment of ChatGPT should be considered a large-scale demo experiment – this is its beta launch. Generative AI will continue to improve – it has already lowered the bar for phishing attacks – and in the future we will have better versions of the technology that are capable of developing malware.

This will change the arms race between malware developers and AV/EDR vendors, as code-focused AI can manipulate code to change its design far more substantially than traditional packers.
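To illustrate why code-level rewriting is harder to catch than repacking, here is a small, purely illustrative Python sketch: two functionally identical routines hash differently, so a static signature keyed to one misses the other.

```python
import hashlib
import inspect

def checksum_v1(data: bytes) -> int:
    # Original routine: byte-by-byte accumulation.
    total = 0
    for b in data:
        total = (total + b) % 256
    return total

def checksum_v2(data: bytes) -> int:
    # Semantically identical rewrite: same result, different structure.
    return sum(data) % 256

# A static signature derived from the first routine's source (or its
# compiled bytes) matches nothing in the rewrite, even though the
# observable behavior is unchanged.
sig_v1 = hashlib.sha256(inspect.getsource(checksum_v1).encode()).hexdigest()
sig_v2 = hashlib.sha256(inspect.getsource(checksum_v2).encode()).hexdigest()

assert checksum_v1(b"hello") == checksum_v2(b"hello")
print(sig_v1 == sig_v2)  # False: identical behavior, disjoint signatures
```

A packer wraps the same underlying code, so an unpacked sample still matches known signatures; a structural rewrite like the one above leaves nothing stable to match.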

You can’t put the genie back in the bottle. Threat actors adding generative AI to their arsenal of attacks is already happening, and now we need to focus on how we will defend against this new threat.

Can ChatGPT be abused by attackers without technical knowledge?

It’s safe to say that ChatGPT will significantly reduce the skills-based cost of entry for threat actors. Currently, the sophistication of a threat is more or less tied to the sophistication of the threat actor, but ChatGPT has opened up the malware space to a whole new cohort of rookie threat actors who will one day be able to punch well above their weight.

This is alarming, as it not only increases the volume of potential threats and the number of potential threat actors, but also makes it more likely that people who have no idea what they are doing will join the melee. It introduces an unprecedented level of recklessness, even by the standards of the malware world.

But the technology still has its challenges. And since we know it is a success, we will see iterations and improvements.

What happens with a future version of ChatGPT – one that users will be able to connect to tools that find vulnerabilities?

The cybersecurity world is already struggling to keep up with the vast number of code vulnerabilities. AI will push those numbers even higher because it is both faster and smarter at finding vulnerabilities. Combined with AI coders, we could see newly discovered vulnerabilities weaponized in minutes, not days.

To be clear, AI isn’t better or faster at this yet, but we expect that to happen. One day we will see vulnerability detection, vulnerability weaponization, and payload delivery all done by AI, without human intervention.

Check Point researchers demonstrated how ChatGPT could create a plausible phishing email. How do you think threats will evolve once attackers start using AI?

Again, AI has made it faster and easier for people with limited technical knowledge to produce large volumes of realistic phishing attacks and fake profiles. Since this entry point is relatively new, it is mostly rookie threat actors experimenting with it at this point. But as threat actors become more comfortable with AI, we will see sophistication increase as they compete for access and prominence.

In the future, we will have AI capable of completing a whole attack chain, starting with writing a phishing email. It’s not far-fetched that AI could use readily available tools to scope out an environment and quickly identify the best path into an organization for ransomware. It will be able to map the network, layout, and architecture, then manipulate its toolchain to obfuscate the payloads and avoid detection by defenders. All without anyone having to press another button.

This is not good news for those on the other side.
