ChatGPT is the hottest artificial intelligence (AI) application of the moment. In case you’re one of the few who haven’t encountered it yet, it’s a highly sophisticated generative AI chatbot powered by OpenAI’s GPT-3 large language model (LLM). In essence, that means it’s a computer program that can understand us and “talk” to us in a way that’s very much like talking to a real human – a highly intelligent and knowledgeable one, whose underlying model was trained with around 175 billion parameters and can draw on what it has learned almost instantaneously.

The power and capabilities of ChatGPT have fueled the public imagination about what might be possible with AI. Already, there’s a lot of speculation about how it will affect a slew of human job roles, from customer service to computer programming. Here, however, I want to take a quick look at what it could mean for the field of cybersecurity. Is it likely to accelerate the already growing number of cyberattacks targeting businesses and individuals? Or does it put more power in the hands of those whose job it is to counter those attacks?

How can ChatGPT and its successor technology be used in cyberattacks?

The truth is that ChatGPT – and, more importantly, future iterations of the technology – has applications in both cyberattack and cyberdefense. This is because the underlying technology, known as natural language processing or natural language generation (NLP/NLG), can convincingly mimic written or spoken human language and can also be used to generate computer code.

First, we need to cover an important caveat. OpenAI, creator of GPT-3 and ChatGPT, has included some fairly rigorous safeguards that, in theory, prevent the tool from being used for malicious purposes. It does this by filtering content for phrases that suggest someone is attempting such a use.

For example, ask it to create a ransomware application (software that encrypts a target’s data and demands money to make it accessible again), and it will politely refuse.

“I’m sorry, I can’t write code for a ransomware app…my goal is to provide information and help users…not to promote harmful activities,” it told me when I asked it to do so as an experiment.

However, some researchers claim to have already found workarounds for these restrictions. Furthermore, there is no guarantee that future iterations of LLM/NLG/NLP technology will include such safeguards.

Some of the possibilities that a malicious party may have at their disposal include the following:

Writing more convincing, official-sounding scam and phishing emails – for example, messages encouraging users to share passwords or sensitive personal data such as bank account information. It could also automate the creation of many such emails, each personalized to target a different group or even individual.

Automated communication with scam victims – If a cyber thief uses ransomware to extort money from victims, a sophisticated chatbot could scale up the thief’s ability to communicate with victims and guide them through the ransom payment process.

Malware Creation – As ChatGPT demonstrates, NLG/NLP algorithms can now be used to proficiently write computer code, and this could be exploited to let almost anyone create their own custom malware designed to spy on user activity and steal data, infect systems with ransomware, or build any other piece of malicious software.

Building language capabilities into the malware itself – This would potentially allow for the creation of a whole new breed of malware that could, for example, read and understand the entire contents of a target’s computer system or email account in order to determine what is valuable and what should be stolen. Such malware might even be able to ‘listen in’ on the victim’s attempts to counter it – for example, a conversation with helpline staff – and adapt its own defenses accordingly.

How can ChatGPT and its successor technology be used in cyber defense?

AI, in general, has potential applications in both offense and defense, and luckily it’s no different for natural language-based AI. Some of the ways it could be put to work on the defensive side include the following:

Identifying Phishing Scams – By analyzing the content of emails and text messages, it can predict whether they are likely attempts to trick the user into providing personal or otherwise exploitable information (the first code sketch after this list shows the idea).

Coding anti-malware software – Because it can write computer code in a number of popular languages, including Python, JavaScript, and C, it can potentially be used to aid in the creation of software for detecting and eradicating viruses and other malware.

Spotting vulnerabilities in existing code – Hackers often take advantage of poorly written code, hunting for exploits such as buffer overflows that can crash a system and potentially leak data. NLP/NLG algorithms can potentially spot these exploitable flaws and generate alerts (the second sketch after this list illustrates this).

Authentication – This type of AI can potentially be used to authenticate users by analyzing how they speak, write, and type (the third sketch after this list illustrates the typing angle).

Automated Reporting and Summarization – It could be used to automatically create plain-language summaries of the attacks and threats that have been detected or countered, or of those an organization is most likely to fall victim to. These reports can be customized for different audiences, such as IT departments or executives, with recommendations tailored to each.
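
To make the phishing-triage idea concrete, here is a minimal sketch of how an email could be handed to a large language model for a first-pass verdict. It assumes the openai Python package (version 1 or later) and an OPENAI_API_KEY environment variable; the model name, prompt wording, and classify_email helper are illustrative choices of mine rather than a prescribed recipe, and the verdict should feed human review, not replace it.

```python
# Hedged sketch: LLM-assisted phishing triage.
# Assumptions: openai package v1+, OPENAI_API_KEY set in the environment,
# and "gpt-3.5-turbo" standing in for any capable chat model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_email(body: str) -> str:
    """Ask the model whether an email looks like a phishing attempt."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep triage output as repeatable as possible
        messages=[
            {"role": "system",
             "content": "You are a security assistant. Reply with PHISHING "
                        "or LEGITIMATE, followed by a one-sentence reason."},
            {"role": "user", "content": body},
        ],
    )
    return response.choices[0].message.content


suspicious = ("Dear customer, your account has been locked. Confirm your "
              "password at http://verify.example.com within 24 hours.")
print(classify_email(suspicious))
```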
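
The same pattern can be pointed at source code instead of email. The second sketch asks the model to flag memory-safety problems in a deliberately vulnerable C snippet (an unchecked strcpy into a fixed-size buffer, the textbook overflow mentioned above); the model name and prompts are again my own assumptions, and the output is best treated as a hint for a human reviewer.

```python
# Hedged sketch: asking a chat model to review C code for buffer overflows.
from openai import OpenAI

client = OpenAI()

# Deliberately vulnerable example: strcpy() into a 16-byte buffer with no
# bounds check, so any longer input overflows the stack buffer.
C_SNIPPET = """
#include <string.h>

void copy_name(const char *input) {
    char buf[16];
    strcpy(buf, input);   /* no bounds check */
}
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {"role": "system",
         "content": "You review C code. List any likely memory-safety "
                    "flaws, such as buffer overflows, one per line."},
        {"role": "user", "content": C_SNIPPET},
    ],
)
print(response.choices[0].message.content)  # a hint for review, not a verdict
```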
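
Finally, the ‘how they type’ angle can be illustrated without any language model at all: keystroke dynamics compares the rhythm of a login attempt against an enrolled typing profile. Everything in the third sketch – the single timing feature, the tolerance threshold, and the sample data – is invented for illustration; real systems use far richer features and trained models.

```python
# Hedged sketch: toy keystroke-dynamics check. Feature, threshold, and
# timings are illustrative inventions, not a production technique.
from statistics import mean


def intervals(timestamps: list[float]) -> list[float]:
    """Gaps in seconds between consecutive key presses."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]


def matches_profile(sample: list[float], profile: list[float],
                    tolerance: float = 0.05) -> bool:
    """Accept the attempt if its mean typing gap is within `tolerance`
    seconds of the enrolled mean - a crude stand-in for a trained model."""
    return abs(mean(intervals(sample)) - mean(intervals(profile))) <= tolerance


enrolled = [0.00, 0.14, 0.30, 0.43, 0.58]  # key-press times at enrollment
attempt = [0.00, 0.15, 0.29, 0.45, 0.57]   # key-press times at login
print(matches_profile(attempt, enrolled))  # True: the cadence is similar
```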

I work in cybersecurity – is this a threat to my job?

There is currently a raging debate over whether AI is likely to lead to widespread job losses and layoffs among humans. My opinion is that while it is inevitable that some jobs will disappear, it is likely that others will be created to replace them. More importantly, the jobs lost are likely to be those that primarily require routine and repetitive tasks, such as installing and updating email filters and anti-malware software.

Those that remain or are newly created, on the other hand, will be those that require more creative, imaginative, and distinctly human skills. This will include developing expertise in machine learning engineering to create new solutions, but also developing and building cultures of cybersecurity awareness within organizations, mentoring workforces on the threats that AI cannot stop (such as the dangers of writing login credentials on Post-it notes), and developing strategic approaches to cybersecurity.

It is clear that, through AI, we are entering a world where machines will take over some of the more routine “thinking” work that needs to be done. Just as previous technological revolutions saw routine manual labor replaced by machines while skilled manual trades such as carpentry and plumbing remained in human hands, the AI revolution is likely, in my view, to have a similar impact. This means that information and knowledge workers in potentially affected fields – such as cybersecurity – should develop the ability to use AI to augment their own skills while further developing the “soft” human skill sets that probably won’t be replaced anytime soon.

To stay up to date with the latest new and emerging business and technology trends, be sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books “Future Skills: The 20 Skills And Competencies Everyone Needs To Succeed In A Digital World” and “Business Trends in Practice,” which won the 2022 Business Book of the Year award.


