There has been a rush of excitement in recent weeks over OpenAI’s chatbot, ChatGPT. It’s hard not to share in the excitement when you can ask a question as unusual as “write me a poem about patents in the style of Shakespeare” and receive a coherent and original reply on your screen (as, admittedly, I did).
ChatGPT, in its own words, is a language model developed by OpenAI that can understand and respond to human language. It can answer questions, write creative pieces, and even write computer code. Because it was trained on a large dataset of text from the internet, it can generate human-like responses to the prompts given to it – think of it as a computer program that can understand and respond to you as if it were a person.
The possibilities appear to be endless for a language model capable of processing vast amounts of data and synthesizing human questions into concise, and often useful, answers. As a patent attorney, my mind was jumping around considering how this might change my career – or end it! Do you take 20 minutes to read and summarize an examination report for your client? ChatGPT can do it in (less than) 20 seconds. Spend hours reviewing prior art documents looking for points of novelty in your clients’ claims? ChatGPT could do that in seconds. Drafting entire patent specifications may not be far down the line, either.
But the purpose of this article is not to speculate on the moral utility of artificial intelligence and how it might replace part of the human workforce.
Rather, my chief concern in recent days has been privacy.
No doubt many patent attorneys will have pondered the value of plugging an invention disclosure into ChatGPT and asking it to draft them an independent claim or three. But do our inputs become part of the training data? If they do, that would suggest something of our inputs could be outputted in response to another user at a later date. No need to elaborate on why that would be a problem for a patent attorney!
I asked ChatGPT “do my inputs to ChatGPT become part of the training data?”, and received the following reply:
“While the model is able to generate new text based on the patterns it has learned, it does not retain any specific input from individual users like yourself.
Also, OpenAI is committed to protecting the privacy of its users and does not share user input data with any third parties. It is possible that some of your inputs may have been used in the training data, but it’s not possible to know this specifically.
It is important to note that the inputs provided to the model are not stored, and will not be used to train the model further. They are only used to generate a response for the specific input you provided.”
All hope is not lost, however. The possibility of sandboxing a customer’s language model still exists. Microsoft’s announcement that it will soon begin offering ChatGPT as a service brings its own cloud platform, Azure, into focus. Azure provides guarantees that training data submitted to a customer’s model will be used only to train that customer’s model, and not any Microsoft models. No doubt, with the passage of time, this will become a more widely available feature offered by other players in an industry that is inevitably marching towards full-scale use of AI.
Attorneys would do well to get on the bandwagon of AI early – it’s coming to our industry, like it or not.
Written by Gearoid Moran, Patent Attorney, FRKelly