Summer Intern Spotlight: AI and the Legal System
Author: Ardra Nair (Summer 2024 Intern)
Publish Date: September 3, 2024
Artificial intelligence (AI) is growing at a rapid pace and reshaping the professional world, with advances such as predictive models forecasting stock prices, tools that streamline and improve workflows, and much more. But at what point does AI cross the line? This question can be asked in many environments and industries and is a particularly hot topic in the legal field. Evidence tampering, AI-generated photographs, and similar manipulations could complicate legal proceedings and undermine crucial forms of evidence such as pictures, video, and voice recordings in the courtroom.
Overall, Americans are very skeptical about the use of AI tools in the legal and judicial systems. When asked whether the positives (such as aiding in document analysis and risk assessment) outweigh the negatives (such as deepfakes and false information), average agreement sits in the lower half of the scale at 33 out of 100. The distribution of opinion, which shows the percentage of people answering at each point along the 0-100 scale (Figure 1), reveals that 2 in 5 Americans (40%) strongly disagree that the positives outweigh the negatives. Still, more than one-quarter (28%) are on the fence (40-59 points), and nearly 1 in 5 (18%) agree with this statement.
Regardless of how Americans feel about AI and its relationship to the judicial system, these types of tools are already making their way into legal proceedings. That reality has done little to sway public opinion: Americans are more likely to say that there are too many unknowns when it comes to AI and that it should be severely limited in legal environments, allocating an average of 69 points to this answer option and just 26 points to the view that AI is a new reality and should be used with the right rules in place (Figure 2). This varies somewhat, particularly by age and party affiliation; younger Americans and those who affiliate with the Democratic Party are the least likely to be wary of the unknowns surrounding AI. There are also differences by gender, with men allocating 9 more points than women toward saying AI should be used (31 vs. 22 points).
The distribution of opinion for this question is another blow to the perception of AI in legal and judicial settings, at least among the general public (Figure 3). About two-thirds of Americans (67%) allocate 60 or more of their 100 allocation points toward saying AI should be severely limited in this field, compared with just 1 in 5 (19%) who advocate for its use to the same degree. With only about 1 in 10 Americans (10-11%) on the fence (40-59 points), much improvement is needed, both to the tools themselves and to their public image, before the American public will feel comfortable with their regular use.
So what exactly about the use of AI concerns Americans? Asking respondents to allocate points among several possibilities indicates that concern over poor-quality work, such as reviewing litigation documents and conducting research, ranks highest, at an average of 29 points (Figure 4). Many of the concerns across all three of these graphs reflect worries about the legitimacy of the work and a lack of emotional intelligence. These concerns range from juries needing to weigh the testimony of a witness on the stand to finding the appropriate language for a lawyer's closing statement.
Looking at the distribution of opinion for the top three concerns shows that deepfakes draw the most passionate levels of concern, with one-quarter of Americans (25%) allocating 40 or more points to this answer option, compared with 13% who feel the same about the next two highest-ranked concerns (Figure 5). The 20-59-point range suggests that most Americans have concerns about all three of these issues, including the absence of human emotional intelligence in the courtroom, such as empathy, kindness, and understanding, qualities vital to tasks like a jury weighing the facts of a case.
Breaking down demographic differences, most groups are concerned about deepfakes and false information, but at varying levels: Democrats allocate the highest number of points (35), while Republicans allocate an average of 26, putting more points toward other concerns instead (Table 1). Meanwhile, voters who identify as Independents are more concerned than Democrats about the authentication and integrity of evidence (25 vs. 15 points). There are also differences by age: concern about AI writing closing arguments is twice as high among those under 45 as among those 65 and older (12 vs. 6 points).
With opinions varying so widely across backgrounds and experiences, it can be hard to determine whether AI will actually become a threat. Growing rapidly, AI is making its mark across a variety of environments and industries and will eventually affect us all. Clearly, there are real concerns about its use in the field of law, and they will be interesting to monitor as AI becomes more prevalent in the months and years to come.