ChatGPT and the Law: Navigating the Legal Implications of Artificial Intelligence
By Sarah Malik and Amal Erywan
Artificial Intelligence (“AI”) has significantly impacted various industries by introducing new technologies that automate manual tasks and streamline processes. The concept of AI was first introduced by Alan Turing in 1950, when he speculated about “thinking machines” that could reason at the level of human beings. Despite recent developments, AI remains widely misunderstood: a common misconception is that AI means a super-powered robot or a hyper-intelligent system.
What is AI?
AI uses algorithms and machine learning to perform tasks that usually require human intelligence, such as decision-making, pattern recognition, and problem-solving. Building an AI system typically involves identifying the problem, collecting the right data, choosing a platform and a programming language, training the model, and then deploying and monitoring the system in operation.
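The build steps above can be sketched in code. The following is a minimal, hypothetical illustration in plain Python, using an invented dataset and a toy threshold “model” rather than any real machine learning library:

```python
# Toy end-to-end sketch of the AI-building workflow described above.
# 1) Problem: predict whether a claim is "small" (label 1) from its value.
# 2) Collect data: hypothetical (claim value, label) pairs.
data = [(500, 1), (800, 1), (1200, 1), (6000, 0), (9000, 0), (15000, 0)]

# 3) "Train" a trivial model: place a threshold midway between class means.
small = [v for v, y in data if y == 1]
large = [v for v, y in data if y == 0]
threshold = (sum(small) / len(small) + sum(large) / len(large)) / 2

# 4) Deploy: wrap the learned parameter in a prediction function.
def predict(value):
    return 1 if value < threshold else 0

# 5) Monitor: measure accuracy on the collected data.
accuracy = sum(predict(v) == y for v, y in data) / len(data)
print(threshold, accuracy)
```

A real system would use a proper model, held-out test data, and ongoing monitoring, but the pipeline shape is the same.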
The use of AI is becoming increasingly popular, even in the legal industry. As a starting point, several parties are experimenting with AI-powered chatbots to resolve conflicts and settle disputes. For example, the UK’s Online Court Project is creating a chatbot-powered platform that will enable parties to settle small claims disputes online. Allen & Overy has also integrated Harvey, an “innovative artificial intelligence platform built on a version of Open AI’s latest models enhanced for legal work”, into its global practice.
Other examples include LawBot, which provides legal advice for landlord-tenant conflicts and employment law; DoNotPay, which assists users in contesting parking tickets and small claims court cases; and LexIQ, which points out potential compliance, liability, and indemnity problems.
One of the main benefits of AI in legal disputes is its ability to process large amounts of data quickly and accurately. It can automate tasks such as document review and handle routine matters, for example reducing backlogs in family, employment, landlord-tenant, property, and petty-crime tribunals. AI systems can also assist lawyers in analysing data and making predictions that inform strategy and decision-making in disputes, and that can reduce or even avoid the initiation of formal proceedings, e.g. by predicting the likelihood of success in court or the prospects of settlement.
In 2016, an AI algorithm was used to predict the outcomes of European Court of Human Rights cases with a 79% accuracy rate. This suggests that AI can be a useful tool in legal disputes; however, it should be used with caution.
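Outcome prediction of this kind is essentially text classification. As a hedged, purely illustrative sketch (the invented case snippets and word-counting approach below are far simpler than the models used in the 2016 study), a classifier can count which words appear in past judgments for each outcome and score a new case accordingly:

```python
# Hypothetical sketch of case-outcome prediction as text classification:
# count how often each word appears in past "violation" vs "no violation"
# judgments, then score a new case by which side its words favour.
from collections import Counter

past_cases = [
    ("applicant detained without judicial review", "violation"),
    ("prolonged detention no effective remedy", "violation"),
    ("complaint examined in fair public hearing", "no violation"),
    ("domestic courts gave reasoned judgment", "no violation"),
]

counts = {"violation": Counter(), "no violation": Counter()}
for text, outcome in past_cases:
    counts[outcome].update(text.split())

def predict_outcome(text):
    # Score each outcome by how often the case's words were seen for it.
    scores = {o: sum(c[w] for w in text.split()) for o, c in counts.items()}
    return max(scores, key=scores.get)

print(predict_outcome("applicant held in detention without remedy"))
```

Even this toy version shows why caution is needed: the prediction depends entirely on what the training texts happen to contain.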
Like many new technologies, AI has its flaws. One concern is the potential for bias in AI algorithms. AI algorithms are only as good as the data they are trained on, and if the data contains biases, these biases can be reflected in the algorithm’s decision-making. This can be particularly problematic in legal disputes that involve sensitive issues such as race, gender, or religion.
Another concern is the lack of transparency in AI decision-making. Transparency is essential to ensure fairness and justice, and its absence can be problematic in legal disputes, where it may be difficult to understand how an AI system arrived at a decision.
To address the concerns, several organisations have developed guidelines for the ethical use of AI in the legal industry. In 2019, the European Commission published guidelines on the ethical use of AI, which included specific recommendations for the legal industry. These guidelines emphasise the importance of transparency in AI decision-making and the need to ensure that AI algorithms are free from bias. They also suggest that AI should be used to enhance human decision-making, rather than replace it entirely.
DoNotPay, the “world’s first robot lawyer”, which uses AI and automated processes to fight parking tickets, is facing a class action lawsuit for unauthorised practice of law in California. The plaintiff, Jonathan Faridian, claims that DoNotPay’s description of itself as the “world’s first robot lawyer” is flawed. He alleges that the company misrepresented its services, leading clients to believe they were receiving high-quality legal advice and documents when they were not.
The lawsuit, among other things, refers to a DoNotPay customer review where a customer attempted to utilise the service to contest two parking tickets but ultimately had to pay extra money because the firm did not respond to a summons. Further, despite attempting to terminate their account, the customer was nevertheless charged the subscription fee by DoNotPay. Joshua Browder, the founder of DoNotPay, has taken to Twitter to respond to the lawsuit and said that the ‘claims have no merit’.
ChatGPT is a powerful chatbot built on a large language model trained by OpenAI. It has been trained to respond in a human-like way by predicting the most likely next words, which makes it sound less artificial, and even more intelligent, than earlier AI chatbots.
The bot has taken the media by storm with its impressive responses, and one can only imagine how this tool would be useful in a legal context. Some use cases include drafting a contract, summarising complex cases, providing simple legal advice and creating legal content.
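To make one of those use cases concrete, here is a hypothetical example of how a case-summarisation request to a chat model could be structured. The payload shape follows OpenAI’s chat completions API, but the model name, prompts, and case text are all invented, and no network call is made:

```python
# Hypothetical case-summarisation request payload (not sent anywhere).
case_text = "The claimant alleges breach of contract by the defendant."

payload = {
    "model": "gpt-3.5-turbo",  # model name is an assumption
    "messages": [
        {"role": "system",
         "content": "You are a legal assistant. Summarise cases in plain "
                    "English and flag any passages needing human review."},
        {"role": "user",
         "content": f"Summarise this case in three bullet points:\n{case_text}"},
    ],
    "temperature": 0.2,  # lower temperature for more conservative output
}

print(sorted(payload.keys()))
```

In practice, the confidentiality concerns discussed below mean that client documents should not be sent to an external service without careful review.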
Whilst it could be a good starting point for lawyers and non-lawyers looking to automate or reduce their legal tasks, ChatGPT has been shown to be inaccurate in some instances. OpenAI itself notes on its website that ChatGPT “sometimes writes plausible sounding but incorrect or nonsensical answers”.
ChatGPT also poses a number of issues because legal work involves high volumes of confidential documents and information. This could lead to disputes involving intellectual property infringement, privacy, data breaches, and the like. For example, Italy has temporarily banned ChatGPT, a move that is attracting the attention of EU privacy regulators.
The world’s first defamation lawsuit against ChatGPT is being filed by an Australian mayor who claims that ChatGPT generated false and defamatory statements about him, causing damage to his reputation and career. This lawsuit is expected to raise important legal questions about the liability of AI models for the content they generate.
There is growing concern about the potential for this content to spread false information or even defamatory statements.
We must remember that ChatGPT is simply a machine learning system built from source code and human input. It should never be regarded as a replacement for human interaction or for the human ability to interpret legal principles. It can never match the understanding and judgment of a human lawyer, and it is only as good as its source code and training data, which must be constantly monitored to ensure boundaries are maintained. It is also dangerous in that it has no ethical considerations, nor any regard for consequences, in the way a human lawyer does.
The latest version of ChatGPT, known as GPT-4, was officially announced on 13 March 2023; however, it is only available as a paid subscription under ChatGPT Plus. The current free version of ChatGPT runs on GPT-3.5, a less accurate and less capable model by comparison. GPT-4 is a significant advance: for example, it passed a simulated law school bar exam with a score around the top 10% of test takers, whereas GPT-3.5 scored around the bottom 10%.
While ChatGPT is far from perfect, it is a powerful and useful tool for improving the efficacy and efficiency of legal disputes.
As AI continues to advance, it is likely that its use in legal disputes will become more widespread. McKinsey has predicted AI could deliver “an additional global economic output of about $13 trillion by 2030, boosting global GDP by about 1.2 percent a year”. AI may well prove to have significant impact on the legal industry with the general aim of improving access to justice, reducing time and costs, increasing accuracy, and automating tasks. However, as with any new technology, it is essential to approach the use of it with caution. It is a work in progress.
The chances of AI replacing lawyers entirely are slim to none, because of the personal touch and understanding of clients that only lawyers can provide. Lawyers need to, and should, work with AI instead of against it.
This publication is not intended to offer legal advice and is solely for informational purposes.
Also published by London International Disputes Week 2023.