ChatGPT: Love It or Hate It; Legal Considerations for Use

ChatGPT (Chat Generative Pre-trained Transformer) is a large language model chatbot developed by OpenAI and launched in November 2022. It is a revolutionary technology capable of answering users' questions and performing requested tasks, from writing and translating accurately and to a high standard, to solving various problems, writing computer programs and even generating complex text.

It has been hailed as the most ground-breaking development in technology since Apple launched the iPhone. For context, ChatGPT set the record for the fastest app to reach 100 million users. Thousands of businesses have also jumped on this train to streamline their work processes, increase productivity and generate new and exciting ideas.

As a language model, ChatGPT has revolutionised the way we process and receive information. It has been touted as the hottest thing since sliced bread, and thousands of LinkedIn and Twitter handles have sprouted claiming to teach the hacks for writing the best ChatGPT prompts. With its advanced natural language processing abilities, ChatGPT has the potential to automate a wide range of tasks, from web development and content writing to academic research.

It has truly been a writer's and content creator's dream. Yet while ChatGPT offers various benefits and advancements in conversational AI technology, like any new technology it has raised multiple alarms about data protection, intellectual property and copyright infringement, and liability for errors among professionals, academics, law schools, medical schools, the judiciary, educational institutions, the legal profession, business owners and many more. We will consider five main legal considerations to be aware of when you use ChatGPT.

  1. Permissions and data collection: As a language model tool, it is important to understand first and foremost that ChatGPT processes and stores vast amounts of data, including personal information and sensitive data of data subjects, in order to generate its responses. ChatGPT interacts with users, collecting and processing personal data. When a user makes use of the platform, the provider obtains user information, which includes personal data, login details, usage information, analytics, and cookies. This data is used to upgrade and analyse the chatbot, conduct tests, and develop new products and services. Moreover, it can be disclosed or transferred to third parties. There has been major contention over the fact that ChatGPT does not seek the appropriate permissions to do so, which makes the protection of user privacy and compliance with data protection laws a crucial consideration. Italy's data protection authority has already moved to block ChatGPT over privacy concerns, and other jurisdictions may follow suit; the question for determination is whether the platform complies with, or falls foul of, the various data protection laws around the world. Potential risks to be wary of include unauthorised data access, insufficient consent mechanisms, inadequate data security, and unclear data ownership.
  2. Copyright infringement: For ChatGPT to produce new and unique content, the artificial intelligence that powers the platform is trained on vast amounts of data and designed to leverage existing knowledge and information on the internet. Since it uses existing information to create its responses, that data may include copyrighted materials, and its output may therefore reproduce another author's copyright-protected content. The generation of content that infringes intellectual property rights poses a significant legal risk, and users have to bear in mind the challenges of avoiding plagiarism and copyright violations. This risk highlights the importance of obtaining the necessary licences, implementing content filtering mechanisms, and respecting intellectual property rights during the training and operation of ChatGPT.
  3. Liability for errors: ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. A few days ago, I asked ChatGPT for the facts of a case. While the primary principle it stated was correct, ChatGPT gave the wrong facts, and anyone who relied on this information would have laboured under a dangerous misconception. In a similar instance, a lawyer prepared a pleading using ChatGPT only to learn the hard way, to the annoyance of the presiding judge, that many of the cases cited did not exist. The reputational damage to the lawyer is of course immeasurable, not to mention the embarrassment. This raises the question of responsibility for errors: who should assume liability? As AI systems like ChatGPT gain autonomy and interact directly with users, questions of liability and legal responsibility become paramount, and determining accountability when ChatGPT produces erroneous or harmful responses must be resolved. Notably, OpenAI points out that ChatGPT cannot provide accurate information on topics and events after its knowledge cut-off in 2021. This highlights the need for clear terms of service, user agreements, and disclaimers to mitigate potential legal liabilities.
  4. Legal frameworks and safeguards: To mitigate the legal risks associated with ChatGPT and similar AI systems, there is a need for comprehensive legal frameworks. Legislators around the world are in the process of setting up regulatory frameworks in the form of sector-specific guidelines, standards for data protection and privacy, and mechanisms to address liability and accountability. For this to be implemented speedily, policymakers, AI developers and legal experts should collaborate to adapt regulations to the evolving AI landscape.
  5. Bias and academic integrity: Another consideration, seemingly more ethical than legal, is that as an AI language model, ChatGPT is only as good as the data it is trained on. If the data used to train ChatGPT is biased or incomplete, the output can be biased or incomplete too, which inadvertently raises the question of liability. There have also been some interesting reports that ChatGPT can potentially pass bar exams and medical exams. Naturally, the academic world is worried about the effect this might have on students, and the debate rages on about students becoming too lazy to research thoroughly or asking ChatGPT to write papers and dissertations on their behalf. One good thing about innovation is that it spawns support industries, and there is now significant demand for platforms that can check whether articles are AI-generated. These are all major issues that can potentially generate lawsuits and liability.

Organisations, business owners, professionals, and academics using ChatGPT should generally be mindful of the following:

  • Ensure that any report or content generated is edited to suit your needs; identical output attributable to other people can raise plagiarism and intellectual property infringement concerns.
  • Companies and organisations must ensure that they have the appropriate licences or permissions to use any copyrighted material or trademarks in the text generated by ChatGPT.
  • Companies and organisations that use ChatGPT must also conduct a risk assessment to ensure that they are not exposing themselves to potential legal liability for any errors or biases in the content ChatGPT generates.
  • Companies and business owners must ensure that they are complying with relevant data protection laws and regulations such as the Nigeria Data Protection Regulation (NDPR), the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States. They must also ensure that they have obtained appropriate consent from individuals for the collection and use of their data.

In conclusion, ChatGPT holds a lot of potential to improve user experiences, but implementing it comes with serious legal risks that need to be carefully considered. These legal issues range from data ownership to intellectual property infringement, and from liability for errors to privacy and data protection. We can manage these risks and build an environment that maximises the advantages of AI by establishing comprehensive legal frameworks while maintaining compliance with relevant laws and regulations.

Authors

Beverley Agbakoba-Onyejianya
Beverley@oal.law
Esther Odunze
Nkechinyere@oal.law