Jun 12th: Does AI Produce Legal Gibberish?

AI in legal research demands knowledge of its limitations and use within acceptable parameters; plus, a Big Law COO hire and a European legal tech conference.

Welcome to your Law Tech briefing for June 12th, 2023, covering what happened in legal tech recently.

Have less than a minute? Three takeaways from yesterday:

1. Legal knowledge workers should fully comprehend the capabilities of AI tools and ensure their use remains within acceptable parameters to avoid mishaps or legal consequences. Keep the human in the loop.

2. Embrace technology in the practice of law rather than shying away from it. Increased education and awareness of AI capabilities are essential for legal professionals to keep up with the changing landscape.

3. Non-compete agreements may influence worker behavior without employers having to take any action, highlighting the need for reform in these agreements and their impact on job mobility and wage growth.

Legal tech news over the weekend was dominated by the use of AI in the legal industry - specifically, the dangers of using generative AI without sufficient knowledge of its capabilities. An attorney in New York found himself in hot water after using ChatGPT, an AI language generator, to supplement his legal research. Unfortunately, the tool generated fake case citations, which were submitted to the court without verification, leading to a two-hour grilling by the presiding judge. Does this mean attorneys shouldn't use technology or AI-enabled tools? Absolutely not. We discuss this further in the key stories below.

Trends

The use of AI in legal work is becoming more prevalent, but this comes with the need for increased education and awareness of its capabilities. The Lexpo Conference, set to take place in Amsterdam this week, will feature discussions on key themes beyond technology, including the legal workforce and the alignment of tools and talent. This demonstrates the legal industry's commitment to keeping up with the changing landscape and embracing new ways of working.

Analysis

The dangers of using AI in legal research without sufficient knowledge of its capabilities cannot be overstated (admittedly, this applies to any work in any profession). Lawyers must fully comprehend the capabilities of AI tools and ensure their use remains within acceptable parameters. That said, AI language generators such as ChatGPT do have potential benefits for the industry, including increased productivity and efficiency. Understanding these tools' limitations and verifying the information they generate is crucial to avoiding mishaps such as the fake citations incident.

On a separate note, the Government Accountability Office's report reveals that only 6% of private sector employers who use non-compete agreements have enforced the contracts frequently or very frequently in the past five years. This suggests that non-competes may influence worker behaviour without employers having to take any action, and that workers with non-competes may be less likely to search for or take a job with a rival employer. It's worth bearing in mind given the focus on lateral hires across the legal profession.


📌
THIS BRIEFING IS SUPPORTED BY

Lupl. Manage your matters, deals, and cases without the chaos. Lupl brings together tasks, documents, and knowledge so legal professionals can focus on what matters most - delivering positive outcomes.
See for yourself.

Key stories

ChatGPT and Legal Gibberish (The Time Blawg)

The ChatGPT lawyer case (Mata v Avianca) has been in the news this week, as the judge suggested "legal gibberish" might better describe what the lawyers produced using the tool. Lawyers acting on behalf of Steven Schwartz lodged a Memorandum of Law ahead of the hearing scheduled for 8 June, defending the filing as an honest mistake made while dabbling with new technology. They argue that Schwartz should not receive any sanctions because he did not act in bad faith. Schwartz had used ChatGPT to produce fake citations, which he later lodged with the court without accuracy checks. The case has put a spotlight on the generative AI tool's potential to make some legal research tasks obsolete. Courts are now issuing guidance on using such technology in legal research.

Editor's note: In light of recent events, it is crucial to promote continuous experimentation with new technology in the legal profession while also exercising due caution. The recent incident involving AI in legal research serves as a reminder that while AI such as ChatGPT can offer vast potential benefits in optimising work efficiency and making legal processes more accessible, it is not an infallible oracle. It is a tool designed to assist, not replace, the diligence and expertise of legal professionals. As we navigate the intersection of law and technology, we must remember that technology is only as smart, savvy, and ethical as the humans who use it. Cross-verifying AI-suggested cases with human expertise is therefore a valuable practice. Despite the challenges, we must continue to incorporate AI into legal work, learn from mistakes, and explore its potential. This boldness paves the way for a future of AI-enhanced legal service delivery and work product, supported by a human in the loop.

Pinsent Masons hires Matt Peers as global COO (Legal IT Insider)

Pinsent Masons has recruited Matt Peers as its global chief operating officer, replacing Alastair Mitchell. The appointment comes as Pinsent Masons seeks to expand and improve its business operations in support of its "Vision 26" strategic plan. Peers will report to the firm's new managing partner, Laura Cameron, and sit on the operations committee. He will be expected to lead the way in ensuring that the firm's business support is "fit for the future" and as supportive as possible of its strategy. Peers has held senior roles at Linklaters, Deloitte, and Carphone Warehouse. His appointment is part of the firm's ongoing expansion strategy.

All stories

Opinion: The New York fake ChatGPT citations case is a gift to law firms globally (Legal IT Insider)

A New York lawyer, Steven Schwartz, used an AI language generator called ChatGPT to supplement his legal research but ended up including fake case citations in a court filing. Avianca's lawyers wrote to the judge, saying that they were unable to find the cases in court dockets or legal databases. Schwartz was grilled for two hours by Judge Castel, highlighting the need for education around generative AI and the importance of using such tools sensibly and within acceptable parameters.

Microsoft Office Tricks: 10 Easy Ways to Save 10 Minutes a Day (Attorney at Work)

An article has highlighted several quick tricks available in Microsoft Office that can help save time and increase productivity. These include using the Dictate feature in Word and Outlook to dictate documents and emails, using the Format Painter to copy formatting to other parts of a document, and turning off Word's automatic selection of entire words and spaces. Other tips include using AutoText and AutoCorrect to reduce keystrokes, dragging and dropping emails in Outlook to create calendar appointments and tasks, using AutoSum in Excel, and customising start-up programmes to reduce loading times.

Promoting Innovation and Digital Transformation in Europe at Lexpo (Attorney at Work)

The Lexpo Conference, founded by Rob Ameerun, takes place on June 12-13 in Amsterdam. The conference aims to promote innovation and digital transformation in the legal industry across Europe and will feature discussions on key themes beyond technology, including the legal workforce and the alignment of tools and talent. It also seeks to broaden exposure to thought leaders worldwide and enhance the conversation around innovation in the legal community. The programming reflects the legal industry's rapidly changing landscape and a growing level of interest in, and understanding of, technology's impact.

OpenAI Sued For Defamation Over ChatGPT ‘Hallucination’; But Who Should Actually Be Liable? (Technology Archives - Above the Law)

A journalist named Fred Riehl interacted with ChatGPT regarding a lawsuit he was reporting on in federal court in the Western District of Washington. ChatGPT provided Riehl with false information about Mark Walters, a third party who is not involved in the lawsuit. The output included an erroneous case number and a wholly fabricated summary of the complaint; none of the statements ChatGPT made about Walters appear in the actual complaint. Walters has now sued OpenAI for defamation over the fabricated output.

Employers rarely seek to enforce noncompetes, GAO survey finds (Legal Dive - Latest News)

A new report by the Government Accountability Office (GAO) has revealed that only 6% of private sector employers who use non-compete agreements have enforced the contracts frequently or very frequently in the past five years. The report suggests that noncompetes may influence worker behaviour without employers having to take any action, and that workers with noncompetes may be less likely to search for or take a job with a rival employer. The findings come amid growing scrutiny of non-compete agreements, which have been criticised for limiting workers' job mobility and wage growth.