
Check Point Research: Securing Your Business from ChatGPT Risks

With ChatGPT, it is possible to produce convincing phishing emails that are difficult to distinguish from genuine ones.


ChatGPT is an advanced AI model that has impressed the tech world with its ability to generate human-like text based on human-engineered prompts. From writing essays to simulating conversations, ChatGPT is versatile and seemingly holds immense potential.
In fact, according to Microsoft's recent Work Trend Index 2023 report, over three quarters of people in India are willing to delegate as much work as possible to AI, and 90% of them agree that new hires must have new skills to be prepared for the growth of AI.
However, like many technological innovations, ChatGPT has its dark side. Unscrupulous users have found ways to misuse this technology for harmful purposes. More than 100,000 ChatGPT users are potentially exposed to fraudulent activities and cyberattacks, according to Group-IB, which reported that hackers have successfully infiltrated 101,134 devices containing saved ChatGPT login details.
Here are some ways ChatGPT can be misused:

Phishing: ChatGPT can be used to create realistic and convincing phishing emails that are difficult to distinguish from legitimate emails. These emails can be used to trick users into providing sensitive information, such as passwords or credit card numbers. This is likely to increase the volume and intensity of cyberattacks worldwide, which, according to research by Check Point Research (CPR), already rose 38% in 2022 compared with 2021.

Malware distribution: ChatGPT can be used to generate malicious code, such as viruses and trojan horses. This code can be embedded in documents, emails or websites and can be used to infect users’ computers.

Social engineering: ChatGPT can be used to impersonate real people in order to manipulate users into taking actions that are harmful to themselves or their organizations. For example, ChatGPT could be used to impersonate a bank employee in order to trick a user into providing their account information.

Disinformation and propaganda: ChatGPT can be used to generate fake news and propaganda that can be used to mislead and manipulate people. This can be used to damage reputations, sow discord, or even incite violence.

Data exfiltration: Generative AI can be used to create fake documents or emails that appear to be legitimate, which can be used to trick users into giving away their credentials or sensitive data.

Insider threats: Generative AI can be used to create fake documents or emails that appear to be from authorized users, which can be used to gain access to sensitive data or systems.

Actions CISOs can take to guard against Generative AI

In today’s fast-moving cyber world, generative AI is a powerful tool that can be used for both good and bad. Here are some actions that CISOs can take to guard against generative AI misuse on both internal and external fronts:

Third-party partnerships. CISOs should work with their vendors to ensure that their generative AI systems are secure and that they have measures in place to protect against misuse.

Supply chain security. CISOs should use security tools to monitor for suspicious activity from external sources, such as unusual traffic patterns or attempts to access sensitive data.

Incident response plan. CISOs should have a plan in place for responding to generative AI misuse incidents. This plan should include steps for identifying, containing, and mitigating the damage caused by an incident.

In addition to the above, CISOs should also consider the following:

  • Generative AI can be used to create realistic training data for machine learning models, which can help improve the accuracy of these models in detecting and preventing cyber attacks.
  • Generative AI can be used to identify malicious content, such as deepfakes or synthetic voice recordings, and to respond to cyber attacks such as phishing or DDoS attacks.
  • The threat landscape is constantly evolving, so it is important for organizations to share information about generative AI threats with each other. This can help improve the overall security posture of the community.

Risk assessment and policy development. One very important step a CISO can take is conducting a thorough risk assessment to understand the potential abuse scenarios and their impacts. Develop clear policies and guidelines for the use of AI systems, including acceptable use, prohibited content, and consequences of misuse.

Content filtering and moderation. It is also important to implement advanced content filtering mechanisms to identify and block inappropriate or abusive content in real time. Set up a monitoring and content moderation system to review and approve AI-generated responses before they are shown to users.
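A minimal sketch of what such a filtering layer might look like, assuming a simple pattern blocklist; a production deployment would rely on a trained classifier or a vendor moderation API, and the patterns below are purely hypothetical examples:

```python
import re

# Hypothetical blocklist of phrases an organization might flag in
# AI-generated output before it reaches users. Illustrative only.
BLOCKED_PATTERNS = [
    r"\bpassword\s*[:=]",        # credential-harvesting phrasing
    r"\bwire\s+transfer\b",      # common phishing lure
    r"\bdisable\s+antivirus\b",  # malware-style instructions
]

def moderate(text: str) -> bool:
    """Return True if the AI-generated text is safe to show to users."""
    return not any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS)
```

Responses that fail the check would be routed to a human moderation queue rather than shown directly.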

Implement strong access controls and user authentication. One of the most important steps a CISO can take is to implement strong access controls, ensuring that only authorized users can interact with the generative AI system. Also implement and monitor a system that can track and manage individual users' interactions.
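As a rough sketch of this idea, the gateway in front of the AI system can check a user's token against an allowlist and log every interaction per user; the token store and role names here are hypothetical placeholders, not a real API:

```python
import time
from collections import defaultdict

# Hypothetical token-to-role store; in practice this would be backed by
# the organization's identity provider.
AUTHORIZED_TOKENS = {"tok-alice": "analyst", "tok-bob": "admin"}

# Per-user audit trail: token -> list of (timestamp, prompt) entries.
interaction_log = defaultdict(list)

def handle_prompt(token: str, prompt: str) -> str:
    """Reject unknown tokens; otherwise record the interaction and proceed."""
    if token not in AUTHORIZED_TOKENS:
        raise PermissionError("unknown or revoked token")
    interaction_log[token].append((time.time(), prompt))
    return f"[{AUTHORIZED_TOKENS[token]}] prompt accepted"
```

The audit trail is what makes the later steps (usage monitoring, regular audits) possible.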

Usage monitoring and anomaly detection. Deploy monitoring tools to track usage patterns and to identify anomalies, such as unusually high levels of activity or suspicious activity.
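One simple form such anomaly detection could take is a rate-based check that flags a user whose request volume in the current window far exceeds their historical baseline; the window size and threshold below are illustrative assumptions, not tuned values:

```python
from collections import deque

class UsageMonitor:
    """Flag request-count spikes relative to a rolling per-user baseline."""

    def __init__(self, window_size: int = 10, threshold: float = 3.0):
        self.history = deque(maxlen=window_size)  # recent per-window counts
        self.threshold = threshold                # multiple of average to flag

    def record_window(self, request_count: int) -> bool:
        """Record one window's request count; return True if anomalous."""
        anomalous = False
        if len(self.history) >= 3:  # need a few windows before judging
            avg = sum(self.history) / len(self.history)
            anomalous = avg > 0 and request_count > self.threshold * avg
        if not anomalous:
            self.history.append(request_count)  # keep the baseline clean
        return anomalous
```

Flagged windows would feed into the incident response process described above rather than blocking users automatically.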

Regular audits and assessments. Conduct regular audits of AI system usage and outputs to ensure compliance with established policies. Periodically assess the effectiveness of abuse mitigation strategies and adjust them as needed.

User education and awareness. CISOs should design training programs and provide users with clear guidelines on how to interact responsibly with the AI system.

Collaboration with legal and compliance teams. Work closely with legal and compliance teams to ensure that the generative AI system adheres to relevant regulations and standards. Develop a plan for addressing legal and regulatory issues related to abuse.

Incident response and contingency planning. An important part of any CISO or security team's set of responsibilities is to develop a comprehensive incident response plan that can address incidents promptly and effectively. Define escalation paths, communication protocols, and actions to mitigate the impact of these kinds of incidents.

Feedback loops and continuous improvement. Establish mechanisms for users to provide feedback on the AI system’s performance, including abuse-related concerns. Use this feedback to continually improve abuse detection and prevention mechanisms.

Vendor collaboration and updates. Stay in close contact with the AI model provider (e.g., OpenAI) to receive updates on abuse mitigation features and best practices. Ensure that the AI system is regularly updated to benefit from the latest security enhancements.

Ethical considerations. Consider the ethical implications of the AI system's outputs and its potential impact on users and society. Engage in discussions around responsible AI use within the organization and the broader community.

As ChatGPT continues to learn from its interactions, it can be continuously trained to recognize and refuse potentially harmful or misleading requests. By taking these proactive measures, CISOs can contribute to the responsible deployment of generative AI tools while minimizing the risks associated with abuse and misuse.


While ChatGPT offers a myriad of advantages in various domains, it isn’t immune to abuses. However, through a combination of third-party partnerships, technological safeguards, user education, and community vigilance, many of the negative implications can be mitigated. OpenAI, alongside its user community, holds the responsibility of ensuring that this potent tool is used ethically and judiciously, maximizing its benefits while minimizing potential harm.
