In response to a furious judge in Manhattan federal court, two remorseful lawyers blamed ChatGPT for duping them into including bogus legal research in a court filing. The attorneys now face possible sanctions over a filing in a lawsuit against an airline that cited earlier court cases they believed were legitimate but that were actually fabricated by the artificial intelligence chatbot.
If you or someone you care about was harmed in a personal injury accident in Oregon, it is crucial that you have factual information to support your case. Rizk Law personal injury lawyers will conduct a thorough, independent investigation into your case, gathering honest data and authentic evidence to support your claims.
What Is ChatGPT?
ChatGPT is a chatbot powered by artificial intelligence (AI). Designed to engage in interactive conversations with users, ChatGPT has been trained on a large corpus of text data to generate human-like responses to given prompts or questions. It can understand and generate natural language responses, provide information, answer questions, and can even engage in back-and-forth exchanges with the user.
Chatbots such as ChatGPT learn patterns and associations in the text data they are trained on, allowing them to generate coherent and contextually relevant responses. However, ChatGPT and similar language models do not possess true understanding, consciousness, or human-like intelligence. Their responses are generated by analyzing statistical patterns and correlations in data rather than actual comprehension or knowledge.
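That pattern-matching process can be illustrated with a deliberately simplified sketch: a toy "bigram" generator that learns which word tends to follow which in its training text, then strings words together based purely on those statistical associations. This is only an illustration of the general idea, not how ChatGPT actually works internally (real chatbots use large neural networks trained on vastly more data), but it shows why such a system can produce fluent-sounding output with no regard for whether that output is true.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=8):
    """Generate text by repeatedly picking a statistically plausible next word."""
    word = start
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # no observed continuation; stop
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Tiny, hypothetical training text for demonstration only.
corpus = "the court ruled for the plaintiff and the court awarded damages"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The generator's output reads like plausible legal English because it mimics word patterns it has seen, yet it never "knows" any facts; scaled up enormously, the same basic limitation is what allows a chatbot to confidently cite cases that do not exist.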
ChatGPT Fooled Two Lawyers Into Citing Made-up Case Law
The lawyers said they used ChatGPT to help them search for legal precedents supporting their client’s claims against a Colombian airline over an injury the client sustained on a flight in 2019.
ChatGPT, which has wowed the world with its essay-like responses to user prompts, suggested several cases involving aviation accidents that the attorneys had not been able to find through the firm’s standard research methods. Unfortunately, a number of those cases were either made up entirely or involved fictitious airlines.
One of the attorneys told the judge that he believed ChatGPT was obtaining these cases from sources he did not have access to. He said that he “failed miserably” at conducting additional research to verify that the citations were accurate and that he did not know ChatGPT was capable of inventing cases.
The judge appeared perplexed and worried by the incident, displeased with the lawyers for failing to act swiftly in correcting the erroneous legal citations after the court first alerted them to the situation months earlier.
Later, the attorney would go on to say that he had suffered emotionally and professionally as a result of the blunder and felt “embarrassed, humiliated, and extremely remorseful.” He claimed that he and his employer had put precautions in place to ensure that nothing similar would happen again in the future.
According to a representative for the law firm, the submission resulted from carelessness, not bad faith, and should not result in sanctions. He said that lawyers have historically struggled with technology, particularly new technology, and that it is not getting any easier. When the attorney chose to use ChatGPT, he had mistakenly assumed he was working with an ordinary search engine.
The judge ultimately said he would rule on sanctions at a later date, noting that the episode highlights the hazards of relying on AI chatbots without fully understanding their risks.
Partner With a Trusted Personal Injury Attorney in Oregon Today
Did you suffer an injury due to someone else’s negligence? If so, the Rizk Law team is here to help you fight for the compensation you deserve. Contact us to schedule a free consultation by calling (503) 245-5677 or completing our convenient contact form.