ChatGPT, AI tools and their security risks
Everyone’s talking about it: ChatGPT, an artificial-intelligence (AI) chatbot. It provides quick, well-formulated answers in the form of a story or code. Other AI tools such as YouChat, Google Bard AI, and Character.AI are catching up. AI tools are powerful, and like any other tools, they can be used for good and for bad. Using ChatGPT as an example, this article describes the risks associated with AI tools and how to prevent leakage of your personal and business data.
In summary
- Everything you share with ChatGPT is open and not protected. Do not share sensitive information with ChatGPT.
- Don’t trust the answers blindly.
- Do not share documents created in ChatGPT with third parties.
Should I worry about using ChatGPT?
There are two main areas of concern when using tools such as ChatGPT:
- The data you provide to ChatGPT or other open AI tools is open to everyone. That is why you should ensure that your personal or proprietary data is not disclosed by copying or typing it into external tools such as ChatGPT.
- You cannot trust the correctness, security, and reliability of the answers or generated code. That is why we advise against its use in business emails, processes, or applications.
If you choose to use it in (business) emails, processes, or applications:
- Take full ownership of and responsibility for the text and code. Preferably, also mention that you used ChatGPT.
- Don’t trust the answers provided by ChatGPT blindly: always review the text or code. ChatGPT is not familiar with your standards and values.
- Do not share documents created in ChatGPT with third parties. Intellectual property and copyright issues might arise.
- Be aware that vulnerabilities or backdoors may be introduced into your applications through ChatGPT-generated code.
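To illustrate the last point, a common flaw in generated code is building database queries by string concatenation, which opens the door to SQL injection. The sketch below (a hypothetical `users` table, for illustration only) contrasts that pattern with the parameterized query a review should insist on:

```python
import sqlite3

# Hypothetical in-memory database, used only for this illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Pattern sometimes seen in generated code: concatenating user input
    # into the query string. Input like "' OR '1'='1" matches every row.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver handles the input safely.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

malicious = "' OR '1'='1"
print(len(find_user_unsafe(malicious)))  # returns every row: injection succeeded
print(len(find_user_safe(malicious)))    # returns 0 rows: no user has that name
```

This is exactly the kind of issue a careful review catches: the two functions look similar and behave identically for ordinary input, but only one is safe.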
Criminals also use ChatGPT
Unfortunately, new AI technology also helps cyber criminals create more sophisticated phishing texts. Be aware of ChatGPT-related phishing emails and always verify the authenticity of emails or texts. Remember that spelling mistakes are not the only red flags you should watch out for. We also recommend educating yourself and your colleagues about phishing risks.
For regulatory frameworks on the development and deployment of AI systems, you can refer to the regulators’ sites (EU/UK).