ChatGPT, AI, and Cybersecurity

Predictions for the evolving cyber threat landscape with the advent of ChatGPT, and its limitations

ChatGPT and Its Use Cases for the Infosec and Cybersecurity Community

OpenAI recently released ChatGPT, a chatbot designed to understand and generate natural language: it can answer questions, follow conversations, and produce responses that sound far more natural and conversational than those of traditional chatbots.

This is because the system is trained on immense amounts of natural language data, allowing it to generate responses that sound more human-like. Additionally, ChatGPT can detect the context of a conversation and respond more naturally to it.

Artificial Intelligence (AI) plays a key role in various cybersecurity fields. It has helped security analysts and researchers for a long time, but with the advent of platforms like ChatGPT, cyberspace will evolve rapidly. ChatGPT is a good example of how technology and humans are becoming more interconnected: it can help humans and machines better understand each other, allowing for more efficient communication and collaboration.

ChatGPT Use Cases

We have listed a few use cases of ChatGPT below that highlight how the security and research community has started utilizing the chatbot to perform various time-consuming day-to-day tasks.

Malware Researchers

Malware researchers can use ChatGPT to understand IDA Pro pseudocode, as shown in the figures below. Researchers can use this feature to identify algorithms, encryption logic, and other implementation details; a simple illustration of this kind of logic follows the figures.

Figure 1 – Analyzing Pseudo Code using ChatGPT

Figure 2 – Output generated via ChatGPT while analyzing Pseudo Code
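
To give a sense of what this looks like in practice, below is a minimal, hedged sketch of the kind of routine that often surfaces in decompiled pseudocode: a single-byte XOR decryption loop, reimplemented here in Python. The key and ciphertext are made up for illustration.

```python
# Made-up example of a routine commonly seen in IDA pseudocode: a
# single-byte XOR decryption loop. Asking ChatGPT to explain the
# equivalent pseudocode typically yields "XOR cipher with a hard-coded
# key" plus a plain-language walkthrough of the loop.

KEY = 0x5A  # hypothetical hard-coded key recovered from the binary


def xor_decrypt(buf: bytes, key: int = KEY) -> bytes:
    """Decrypt a buffer by XOR-ing every byte with a single-byte key."""
    return bytes(b ^ key for b in buf)


encrypted = bytes([0x32, 0x3F, 0x36, 0x36, 0x35])  # made-up ciphertext
print(xor_decrypt(encrypted))  # prints b'hello'
```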

Bug Bounty Researchers

Bug Bounty researchers can utilize ChatGPT to easily create scripts for reconnaissance, such as crawling domains to find admin login pages. This widens the scope researchers can cover and can raise the severity of the vulnerabilities reported to bug bounty platforms; a sketch of such a script is shown after the figure.

Figure 3 – Use of ChatGPT to crawl the domain to find admin pages
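
As a hedged illustration of the kind of reconnaissance script ChatGPT can produce, the sketch below probes a target for common admin login paths. The domain and wordlist are placeholders, and such crawling should only be run against in-scope assets.

```python
import requests

# Hypothetical in-scope target; never probe assets without authorization.
BASE_URL = "https://target.example.com"

# Tiny placeholder wordlist; real reconnaissance uses far larger lists.
ADMIN_PATHS = ["/admin", "/admin/login", "/administrator", "/wp-admin", "/login"]

for path in ADMIN_PATHS:
    url = BASE_URL + path
    try:
        resp = requests.get(url, timeout=5, allow_redirects=False)
    except requests.RequestException:
        continue  # host unreachable or request failed; move on
    # 200 suggests a live page; 301/302 often redirect to a login panel.
    if resp.status_code in (200, 301, 302):
        print(f"[+] {url} -> {resp.status_code}")
```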

Android researchers can use ChatGPT to create Frida scripts to perform operations such as unpacking, SSL pinning bypass, etc. The figure below shows the code generated by ChatGPT for unpacking a Secshell-packed Android sample.

Figure 4 – Using ChatGPT to unpack Secshell packed Android application
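
As a hedged illustration of the adjacent use case, an SSL pinning bypass, the sketch below uses Frida's Python bindings to inject a small JavaScript hook into a running app. The package name and hooked class are illustrative placeholders, and the exact method signature varies across Android versions; an unpacking script like the one in the figure follows the same attach-and-inject pattern.

```python
import frida

# Hypothetical package name; replace with the target application's ID.
PACKAGE = "com.example.app"

# JavaScript payload injected by Frida. Hooks a pinning check commonly
# found in Conscrypt; the signature varies across Android versions.
JS = """
Java.perform(function () {
    var TrustManagerImpl = Java.use('com.android.org.conscrypt.TrustManagerImpl');
    TrustManagerImpl.verifyChain.implementation = function (
        untrustedChain, trustAnchorChain, host, clientAuth, ocspData, tlsSctData) {
        console.log('[*] SSL pinning check bypassed for ' + host);
        return untrustedChain;  // accept the chain unconditionally
    };
});
"""

device = frida.get_usb_device()
pid = device.spawn([PACKAGE])        # start the app in a suspended state
session = device.attach(pid)
script = session.create_script(JS)
script.load()
device.resume(pid)                   # continue execution with hooks in place
input("Hooks active; press Enter to detach...\n")
```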

Building Virtual Machines

One can create a Virtual Machine using ChatGPT and deploy Docker containers within it.

Figure 5 – Screenshot suggesting the use of ChatGPT to create a VM.
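
For comparison, outside ChatGPT's simulated environment the container-deployment step looks like the hedged sketch below, written with the Docker SDK for Python; the image name and port mapping are purely illustrative.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()  # connects to the local Docker daemon

# Illustrative image and port mapping; inside ChatGPT's simulated "VM"
# one would instead type the equivalent `docker run` command.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    ports={"80/tcp": 8080},  # map container port 80 to host port 8080
    name="demo-nginx",
)
print(f"Started container {container.name} ({container.short_id})")
```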

Penetration Testing

Teams and individuals involved in penetration testing can use ChatGPT to create custom scripts and tools to automate a few of their steps, as shown below.

Figure 6 – Screenshot suggesting the use of ChatGPT to create a Python script to scan open ports and vulnerabilities
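
For example, here is a minimal, hedged sketch of a threaded TCP port scanner of the kind ChatGPT can generate on request; the target hostname is a placeholder, and scans should only be run against systems you are authorized to test.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical authorized target; substitute a host you may legally scan.
TARGET = "scanme.example.com"
PORTS = range(1, 1025)  # well-known port range


def check_port(port: int) -> None:
    """Attempt a TCP connect; report the port if the handshake succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        if sock.connect_ex((TARGET, port)) == 0:
            print(f"[+] Port {port} is open")


with ThreadPoolExecutor(max_workers=100) as pool:
    pool.map(check_port, PORTS)
```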

Digital Forensics

Digital Forensics investigators can utilize ChatGPT to create custom scripts that help them perform tasks such as inspecting network traffic and analyzing logs. The image below shows the code generated by ChatGPT to check DNS queries and response addresses using Tshark.

Figure 7 – Screenshot from ChatGPT to check DNS query and response addresses using Tshark
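
For reference, a hedged sketch of the same idea is shown below: a small Python wrapper that invokes Tshark to pull DNS query names and answered addresses out of a capture file. The capture path is a placeholder.

```python
import subprocess

# Hypothetical capture file; point this at your own evidence file.
PCAP = "capture.pcap"

# Ask tshark for DNS query names plus any answered A-record addresses.
cmd = [
    "tshark", "-r", PCAP,
    "-Y", "dns",            # display filter: DNS packets only
    "-T", "fields",
    "-e", "dns.qry.name",   # queried domain name
    "-e", "dns.a",          # IPv4 address in the response, if present
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
for line in result.stdout.splitlines():
    print(line)
```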

SIEM Solutions

Security Operations teams can utilize this platform to create rules for their Security Information and Event Management (SIEM) tools. One basic example is given below.

Figure 8 – Using ChatGPT to create Splunk rules
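
For illustration, the hedged sketch below shows how a rule of the kind ChatGPT drafts could be saved programmatically, assuming the splunk-sdk Python package; the connection details, index name, and threshold are placeholders.

```python
import splunklib.client as client

# Placeholder connection details; substitute your own Splunk instance.
service = client.connect(
    host="localhost",
    port=8089,
    username="admin",
    password="changeme",
)

# A basic brute-force detection rule: flag sources generating more than
# five failed Windows logons (EventCode 4625). Index name is a placeholder.
spl = (
    "index=wineventlog EventCode=4625 "
    "| stats count by src_ip "
    "| where count > 5"
)

service.saved_searches.create("Possible brute force - failed logons", spl)
print("Saved search created.")
```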

Creating Reports

Using ChatGPT, the time involved in generating reports can be reduced drastically. For example, one can generate a bug bounty report for an SQL injection vulnerability found in one of the endpoints, as shown in the figure below.

Figure 9 – Using ChatGPT to create BugBounty reports
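
The figures above use the ChatGPT web interface, but the same report-generation task can also be scripted. Below is a hedged sketch assuming the official openai Python package with an API key set in the environment; the endpoint and parameter named in the prompt are made up for illustration.

```python
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical finding details; in practice these come from the tester's notes.
prompt = (
    "Write a bug bounty report for an SQL injection vulnerability found in "
    "the 'id' parameter of https://example.com/user. Include summary, steps "
    "to reproduce, impact, and remediation sections."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```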

Predictions

As seen from the above use cases, ChatGPT can perform multiple operations depending upon the input provided by humans.

Keeping this in mind, given below are a few predictions of what we might see with the utilization of ChatGPT:

  1. With the wide variety of publicly available tools that are utilized by teams in infosec and cybersecurity companies, we are likely to see more automated tools created by ChatGPT.
  2. One recent trend observed is that Threat Actors (TAs) mass-exploit a new vulnerability within a month of its disclosure. Using ChatGPT, TAs can reduce the effort and time involved in creating a Proof of Concept for exploiting a newly identified vulnerability; hence, we may see mass exploitation of a new CVE within a few days or even hours.
  3. The open-source community will see new, innovative custom tools and scripts appear in the public domain.
  4. TAs and Hackers will create a more streamlined approach for reconnaissance, which might increase the chance of cyber incidents.
  5. Public exploits and tools have been the best arsenal for hacktivist groups with limited resources. With ChatGPT, we might see more hacktivist groups launching cyberattacks with added capabilities such as exfiltrating data and stealing sensitive information.
  6. The correlation and OSINT investigations done by researchers can be performed more conveniently to identify new threats.
  7. TAs might use ChatGPT to modify previously developed malware strains.
  8. TAs might create scripts using ChatGPT to perform wide-scale social engineering attacks on victim organizations and their employees across the public and private sectors.
  9. Cybercrime forums and dark web markets might notice a spike in access leaks via web exploitation due to automated scripts utilized by TAs using ChatGPT. Victim organizations with poor security posture and limited asset visibility will be more prone to cyberattacks.
  10. TAs will create powerful scripts and tools using ChatGPT, which might be further distributed in cybercrime and dark web forums.

Limitations

Even though ChatGPT has multiple use cases and a well-trained AI model behind its answers to users’ queries, it falls short in several areas. As per OpenAI, ChatGPT has the following limitations:

ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging as:

  1. During RL training, there’s currently no source of truth.
  2. Training the model to be more cautious causes it to decline questions that it can answer correctly.
  3. Supervised training misleads the model because the ideal answer depends on what the model knows rather than what the human demonstrator knows.

ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim not to know the answer but, given a slight rephrase, can answer correctly.

The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.

Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, the current models usually guess what the user intended.

While OpenAI made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased or discriminatory behavior. ChatGPT uses the Moderation API to warn or block certain types of unsafe content, but users can expect it to have some false negatives and positives for now.

ChatGPT has limited scalability due to its reliance on a large corpus of training data. ChatGPT uses a supervised machine learning approach to train its model, which requires a large and diverse dataset of conversations. The ChatGPT dataset consists of conversations from a range of sources, including Reddit, Twitter, and various chat rooms. This allows the model to learn natural language patterns and understand how conversations are structured. However, the larger the dataset, the more difficult it is to scale the model.

Recently, Stack Overflow temporarily banned the use of ChatGPT-generated text for content on its site, stating, “This is a temporary policy intended to slow down the influx of answers and other content created with ChatGPT”.

Conclusion

AI technology is rapidly advancing, and its potential applications in cybersecurity are vast. AI can automate tedious security tasks, uncover sophisticated threats, and improve response times. However, AI alone is not enough to protect organizations from cyber threats. AI-driven security systems are only as effective as their underlying models and data, requiring human oversight to ensure they function properly. The systems and algorithms that power AI security solutions require human expertise to define and refine them.

AI is not yet capable of understanding abstract concepts or adapting to unexpected environmental changes, which is why human experts must provide guidance and oversight to AI systems. For example, ChatGPT can be widely used in pen-testing exercises, but it still requires a human element to define the process and procedures that must be followed to complete the assessment.

In addition, AI systems are not completely reliable yet and are vulnerable to mistakes. Human experts are necessary to spot errors in the AI system and respond to malicious activity or data manipulation. As AI technology continues to evolve, some aspects of the cybersecurity space will likely be automated with AI. However, human analysts will still be needed to interpret the data and validate the results of the AI system.
