Guidance on ChatGPT (or other AI language models) For Regulated Firms
First, some background. ChatGPT is a conversational AI tool developed by OpenAI. It is built on a large language model trained on vast amounts of text, which allows it to interpret written prompts and respond in fluent, human-like language in real time.
Why regulated firms should avoid using ChatGPT
At this time, we do not recommend that regulated firms use ChatGPT or similar tools in a professional capacity. While some security measures are in place, they are not sufficient to guarantee the privacy and security of your data. Furthermore, ChatGPT’s terms of use give OpenAI, the company behind the service, access to the information you submit. Although you retain ownership of the content you create, OpenAI may use and retain the data you submit, potentially even after a user has deleted it.
Additionally, ChatGPT offers no guarantees about the accuracy or quality of its output. Although the technology continues to improve, inaccuracy is likely to remain an issue for AI language models: they generate responses by predicting plausible text rather than by verifying facts, and no amount of training data can cover every situation that arises when communicating with another human being.
Recommended use cases for ChatGPT
Your staff may use ChatGPT at home for any of the following:
- general interest in AI language models
- experimentation with how AI language models work
- education about how AI language models work.
When ChatGPT is used outside the firm, care should be taken not to expose details about the company. This includes:
- confidential information (e.g. financials)
- sensitive topics (e.g. board meetings)
- personal data (e.g. employee records)
- company policies and procedures (e.g. HR practices).
It is also important that staff do not discuss company resources or assets, such as equipment or real estate, since these belong to the firm and not to the employee.
Information security policies and technical controls
The firm’s current Information Security Policy should have provisions against Shadow IT, which will cover ChatGPT. Shadow IT is the use of unauthorized applications, hardware, or software by employees. You may wish to remind staff that the use of unapproved applications is a violation of policy and that this includes the use of ChatGPT. If you do not have provisions against the use of Shadow IT, now is the time to include it in your policy.
In addition to a policy, your firm may implement technical controls to block access to ChatGPT. IT can block the site by forcing web browser settings through a policy and by using a content filtering system (CFS). A CFS is software or hardware that filters websites based on categories, keyword lists or URLs.
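To illustrate the principle behind URL-based filtering, the sketch below shows a simplified blocklist check of the kind a CFS applies to outbound web requests. This is a minimal illustration only, not a production control: real filtering systems rely on vendor-maintained category databases, and the specific domains and category labels here are assumptions for the example.

```python
# Minimal sketch of a URL blocklist check, as performed by a content
# filtering system (CFS). Domains and category names are illustrative.
from urllib.parse import urlparse

# Hypothetical blocked-domain list mapping domain -> filter category.
BLOCKED_DOMAINS = {
    "chat.openai.com": "AI language models",
    "chatgpt.com": "AI language models",
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host matches a blocked domain or subdomain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://chat.openai.com/"))  # True
print(is_blocked("https://example.com/"))      # False
```

In practice, this kind of check is enforced at the network perimeter or on managed endpoints rather than in application code, but the matching logic is the same: requests to listed domains (and their subdomains) are denied by category or URL.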
ChatGPT and other AI language models are an interesting development, and the use of chatbots and conversational interfaces is clearly growing. As the technology matures, we expect safeguards to be introduced that address the necessary security and privacy considerations.
To learn more, get in touch with our Cyber & Data Protection team today.