
Lawyer Discernment Is Critical In The World Of AI

Law360
April 24, 2023

By Jennifer L. Gibbs

Influential preacher Charles Spurgeon said, "Discernment is not simply a matter of telling the difference between what is right and wrong; rather it is the difference between right and almost right."

Artificial intelligence has made incredible advancements in recent years, with many industries adopting it for various applications. Growing practical concerns among artificial intelligence experts, however, include the potential for biased responses, the ability to spread misinformation, privacy concerns and deeper questions regarding how artificial intelligence might upend professions and displace human judgment and discernment.

With every technological advancement comes risks and challenges, and artificial intelligence is no exception. Lawyers who have mastered the legal skill of discernment should have a seat at every table with those addressing the ethical and moral dilemmas to ensure that artificial intelligence is developed and deployed in a way that benefits society as a whole.

Artificial Intelligence 

Artificial intelligence is a hot topic of conversation — not only as it relates to the tech industry, but in nearly every sector of our economy. The term "artificial intelligence" was first coined by John McCarthy in 1956, and the field has since evolved from early general-purpose mobile robots to self-driving vehicles and speech recognition applications like Siri and Google Assistant.[1]

The three most important technologies that make up AI are machine learning, deep learning and natural language processing.[2]

Machine learning is a process where machines learn to optimize responses based upon structured big data sets and ongoing feedback from humans and algorithms.

Deep learning is considered a more advanced kind of machine learning in that it learns through layered representations; unlike traditional machine learning, it does not require the data to be structured.

Natural language processing is a linguistic computer science tool enabling machines to read and interpret human language and translate it into computer inputs.
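The feedback loop described in the machine learning definition above can be made concrete with a minimal sketch (purely illustrative, not drawn from any cited source): a toy model that learns the relationship y = 2x + 1 from example data by repeatedly adjusting its parameters to reduce prediction error.

```python
# Minimal illustration of machine learning: a model "learns" the mapping
# y = 2x + 1 from example data by iteratively reducing its prediction error.
def train_linear_model(xs, ys, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0  # model parameters, initially uninformed
    n = len(xs)
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]  # generated by y = 2x + 1
w, b = train_linear_model(xs, ys)
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

The point of the sketch is simply that the machine is never told the rule; it infers the rule from examples, which is what distinguishes machine learning from conventionally programmed software.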

Artificial intelligence experts have become increasingly concerned, however, about the potential drawbacks and ethical concerns regarding AI. For example, AI systems can replicate and amplify human biases, leading to discrimination against certain groups of people.

AI also has the potential to replace certain jobs, leading to economic and social issues. Additionally, AI systems can manipulate personal information, leading to identity theft and security breaches.

AI systems can also be programmed to make lethal decisions, raising concerns about the ethics of using such weapons.

And finally, as AI systems become more complex, it becomes difficult to hold humans accountable for their actions.[3] Awareness of these concerns is important in ensuring that AI is developed and deployed in a way that benefits society as a whole.

When Worlds Collide — Technology Versus Morality

Currently, there is no legislation specifically designed to regulate the use of AI.[4] Policymakers and regulators are continually working on developing rules to ensure that the public is protected while promoting innovation. These regulations may draw upon the reasoning employed in Isaac Asimov's three laws of robotics, which provide:

  1. A robot must not harm humans, or, via inaction, allow a human to come to harm.
  2. Robots must obey orders given to them by humans, unless an order would violate the first law.
  3. Finally, a robot must protect its own existence, as long as doing so would not put it into conflict with the other two laws.[5]

In Frank Pasquale's book, "New Laws of Robotics: Defending Human Expertise in the Age of AI," the Brooklyn Law School professor proposes adding the following four new principles to Asimov's original three:

  4. Digital technologies ought to complement professionals, not replace them.
  5. AI and robotic systems should not counterfeit humanity.
  6. AI should be prevented from intensifying zero-sum arms races.
  7. Robotic and AI systems must always indicate the identity of their creators, controllers and owners.[6]

Pasquale argues that the role robotics and AI play in society should not be left to Silicon Valley alone to dictate in that "we need to ensure lasting and democratized human control of the development of technology."[7]

Some opine that the computer revolution has brought about the malaise of "swollen heads" and "shrunken hearts" — leaving many of us unable to draw on a wisdom of discernment greater than our own.[8]

Similarly, Joseph Weizenbaum, a pioneer of AI, recognized that we must not let computers make important decisions for us because AI as a machine will never possess human qualities such as compassion and wisdom to morally discern and judge.[9]

Discernment

Discernment is defined as "the ability to judge people and things well"[10] or, in Christian contexts, "perception in the absence of judgment with a view to obtaining spiritual guidance and understanding."[11] The word derives from the Latin cernere, which means to sift.

Discernment includes the capacity for self-reflection and awareness and empathy toward others — abilities that are critical skills in allowing one to make sense of the world and of oneself, and to make judgments, including judgments about who one should be and how one should act.[12]

Discernment in the legal context can refer to the ability of a judge to evaluate the evidence presented in court, assess the credibility of witnesses, and make a sound judgment or decision regarding the case at hand.

Discernment involves careful consideration of all relevant facts and legal principles in order to arrive at a fair and just outcome.

Stated more simply, discernment is the exercise of both judgment and skill in weighing evidence and making legal determinations. U.S. Supreme Court Justice Potter Stewart's "I know it when I see it" test for obscenity is a prime example of the use and need for legal discernment when facing difficult and complex issues.

But discernment does not begin and end on the bench. Rule 2.1 of the American Bar Association Model Rules of Professional Conduct provides:

In representing a client, a lawyer shall exercise independent professional judgment and render candid advice. In rendering advice, a lawyer may refer not only to law but to other considerations such as moral, economic, social and political factors, that may be relevant to the client's situation.[13]

In his paper titled "On Lawyers and Moral Discernment," Robert E. Rodes recognizes that lawyers should not advise their clients to do wrong, but in exercising discernment, they must also refrain from helping their clients do anything wrong that clients might think up for themselves:

Lawyers' moral discernment must extend not only to their clients' agendas but also to how far their service to their clients makes them complicit in whatever wrong the clients do.[14]

Rodes also recognizes that advocacy calls for moral dialogue, noting that a lawyer will make a more effective argument if he or she can lead the court or jury to a moral discernment consistent with the claim the lawyer is presenting.[15]

The process of enacting laws reflecting both public policy and the so-called golden rule also requires discernment, although according to Rodes, we "cannot effectively make people do good and avoid evil through the force of the enacted law, but we certainly do not want to deploy that force in favor of having them do evil and avoid good."[16]

Because the practice of law inherently refines the skills necessary for discernment — which include the capacity to listen, awareness of ethical principles, a sense of purpose and discipline — the legal community can offer a tremendous benefit in considering and weighing the benefits versus risks of AI, and lawyers should be at the ready to act as advocates and counselors as artificial intelligence advances.

For example, litigators regularly use their skills of discernment in crafting the theory of the case — carefully evaluating the intersection between the application of the law and human psychology.

Similarly, criminal defense attorneys utilize discernment in advising clients regarding the pros and cons of testifying at trial — giving careful consideration not only to the specific facts of the case, but the credibility of the defendant, their composure under pressure and how convincing the testimony is likely to be to the jury.[17]

Additionally, in sentencing the convicted, judges review sentencing guidelines, but typically consider other factors such as whether the defendant has cooperated with the prosecution, and whether the crime is a one-time departure on the record of an otherwise well-intentioned person.[18]

Legislators, who are often attorneys, should employ similar discernment skills in making decisions on how to vote, whom to support and what causes to champion.[19]

They must also be cognizant not only of the political climate, but of how certain votes will affect reelection. When facing a difficult decision, legislators will draw on their own beliefs and value system and listen to their conscience.[20]

Similar discernment skills should be utilized when determining the appropriateness of developing and implementing artificial intelligence: asking the hard questions, such as how much privacy we are willing to trade for efficiency, and whether we value human communication and interaction over automation.

One concrete example where discernment plays an important role is in regard to self-driving vehicles — a technology with the potential for several ethical and moral dilemmas.

Debates continue as to whether the engineers who worked on the car technology should decide the ethics of self-driving cars or whether that task should be assigned to the government of the country where the vehicle will be driven.[21]

A team of experts at the Technical University of Munich has pioneered the world's first ethical algorithm for self-driving vehicles, which purportedly distributes levels of risk fairly instead of operating on an either-or principle.[22]

The ethical questions the TUM experts were asked to address include how the software should handle unforeseeable events in order to make the necessary decisions in the event of an impending accident.

According to Maximilian Geisslinger, a scientist at the TUM Chair of Automotive Technology:

Until now, autonomous vehicles were always faced with an either/or choice when encountering an ethical decision. But street traffic can't necessarily be divided into clear-cut, black-and-white situations; much more, the countless grey shades in between have to be considered as well. Our algorithm weighs various risks and makes an ethical choice from among thousands of possible behaviors — and does so in a matter of only a fraction of a second.[23]

The TUM researchers explained, however, that even though algorithms using risk ethics can make decisions based on the ethical principles of each traffic situation, they cannot ensure accident-free street traffic. Therefore, moving forward, additional differentiations, such as cultural differences in ethical decision making, will need to be considered.
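The risk-weighting idea Geisslinger describes can be sketched in miniature. The example below is a hypothetical illustration only, not TUM's actual algorithm: each candidate maneuver carries estimated collision probabilities and harm severities for the road users it could affect, and the planner selects the maneuver with the lowest aggregate risk rather than making a binary either/or choice.

```python
# Hypothetical sketch of risk-ethics-style maneuver selection (illustrative
# only, not TUM's algorithm). Each maneuver lists (probability, severity)
# pairs for every road user it could affect; risk = probability x severity.
def total_risk(maneuver):
    return sum(p * severity for p, severity in maneuver["risks"])

maneuvers = [
    {"name": "brake_hard",  "risks": [(0.30, 2), (0.01, 8)]},  # total 0.68
    {"name": "swerve_left", "risks": [(0.10, 9), (0.05, 3)]},  # total 1.05
    {"name": "hold_course", "risks": [(0.60, 7)]},             # total 4.20
]

# Instead of an either/or rule, pick the lowest aggregate risk.
best = min(maneuvers, key=total_risk)
print(best["name"])  # prints "brake_hard"
```

Even this toy version shows the shift in framing: no option is risk-free, so the algorithm distributes and minimizes risk across everyone affected rather than choosing between two absolutes.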

Notably, Georgia Tech's research into object recognition in self-driving cars found that pedestrians with dark skin were hit about 5% more often than people with light skin. They found that the data used to train the AI model was likely the source of the injustice: The data set contained about 3.5 times as many examples of people with lighter skin, so the AI model could recognize them better.

According to a G2.com article, "That seemingly small difference could have had deadly consequences when it comes to something as potentially dangerous as self-driving cars hitting people."[24]
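One common mitigation for the kind of training-data imbalance described above is to reweight examples inversely to their group's frequency, so under-represented groups contribute as much to the training loss as over-represented ones. The sketch below is purely illustrative; the group labels and counts mirror the roughly 3.5-to-1 ratio reported, not the study's actual data set.

```python
# Illustrative reweighting for an imbalanced data set (not the Georgia Tech
# study itself). With weight = n / (k * count), every group's examples sum
# to the same total weight, n / k, regardless of how many there are.
from collections import Counter

labels = ["light"] * 3500 + ["dark"] * 1000  # ~3.5x imbalance, as reported
counts = Counter(labels)
n, k = len(labels), len(counts)

weights = {group: n / (k * count) for group, count in counts.items()}
print(weights)  # under-represented "dark" examples get the larger weight
```

Reweighting does not cure biased data collection, but it illustrates that the imbalance is detectable and at least partially correctable before a model is ever deployed.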

Ford Motor Co. recently collaborated with Chris Gerdes, professor emeritus of mechanical engineering and co-director of the Center for Automotive Research at Stanford University, to address the ethical problems involved in self-driving vehicles.

According to Gerdes, the solution is apparent and is built into the social contract we already have with other drivers, as set out in established traffic laws and interpreted by courts.[25]

Gerdes believes that if self-driving vehicles can be programmed to uphold the legal duty of care owed to all road users, then collisions will only occur when somebody else violates their duty of care to the self-driving vehicle — or there's some sort of mechanical failure, or a tree falls on the road, or a sinkhole opens.

However, if another road user violates their duty of care to the self-driving vehicle by running a red light, the principles we've articulated state that the self-driving vehicle nevertheless owes that person a duty of care and should do whatever it can — up to the physical limits of the vehicle — to avoid a collision.[26]

Notably, Gerdes' proposed solution follows years of research and lengthy discussions with teams of philosophers, engineers and lawyers.

Lawyers who have mastered the skill of discernment must be at the ready to address difficult questions as technology advances, such as: Do we, as a society, feel comfortable with a computer algorithm deciding if a self-driving vehicle will swerve when a child runs out in traffic?

Similarly, what are the ethical implications related to law enforcement's use of facial recognition databases to identify participants of anti-government protests and create so-called strategic subject lists of likely future criminals?[27]

What are the ethical implications of software programs evaluating certain genetic markers to determine a person's likelihood of developing cancer without any human-to-human interaction?

Other difficult questions that are likely to be addressed as AI advances include whether AI machines might be entitled to certain rights, such as the right to be free from destruction and the right to be protected by the legal system.[28]

To that end, because it is unlikely that technological advancements will cease, the search for true discernment accelerates and deepens. We, the legal professionals of a global society, need to do our part in exercising discernment in forming a partnership between spirit and mind as the frontier of human progress shapes the future.

Jennifer Gibbs is a partner at Zelle LLP.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

[1] https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/ai-vs-machine-learning-vs-deep-learning (last visited April 5, 2023).

[2] https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence (last visited April 6, 2023).

[3] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/ (last visited April 6, 2023).

[4] https://www.liberties.eu/en/stories/ai-regulation/43740 (last visited April 6, 2023).

[5] Isaac Asimov, "Runaround" (1942).

[6] https://onezero.medium.com/its-time-to-add-4-new-laws-of-robotics-8791139cdb11 (last visited April 5, 2023).

[7] Id.

[8] Skills of Discernment, Dr. Charles Waddy.

[9] Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W.H. Freeman; 1976.

[10] https://dictionary.cambridge.org/us/dictionary/english/discernment.

[11] Google.com (last visited February 27, 2023).

[12] Gosselin, "Cultivating Discernment." Jesuit Higher Education: A Journal 1, 1 (2012).

[13] https://www.americanbar.org/groups/professional_responsibility/policy/ethics_2000_commission/e2k_rule21/ (last visited February 27, 2023).

[14] Robert Rodes, On Lawyers and Moral Discernment, 46 J. Cath. Leg. Stud. 259 (2007).

[15] Id.

[16] Id.

[17] https://www.protasslaw.com/factors-to-consider-in-deciding-whether-to-testify-at-trial/ (last visited April 8, 2023).

[18] https://www.mololamken.com/knowledge-How-Does-a-Judge-Decide-What-Sentence-To-Impose-on-a-Defendant (last visited April 8, 2023).

[19] https://www.socialstudies.org/advocacy/how-legislators-make-decisions (last visited April 8, 2023).

[20] Id.

[21] https://www.forbes.com/sites/naveenjoshi/2022/08/05/5-moral-dilemmas-that-self-driving-cars-face-today/?sh=7a0ca236630d (last visited April 13, 2023).

[22] https://www.innovationnewsnetwork.com/autonomous-vehicles-made-safe-with-the-worlds-first-ethical-algorithm/29569 (last visited April 20, 2023).

[23] Id.

[24] https://learn.g2.com/ai-ethics.

[25] https://hai.stanford.edu/news/designing-ethical-self-driving-cars (last visited April 13, 2023).

[26] Id.

[27] Aziz Huq, "Constitutional Rights in the Machine Learning State," Public Law and Legal Theory Working Paper Series, No. 752, page 18 (2020).

[28] https://www.jdsupra.com/legalnews/should-ai-machines-have-rights-4583419/ (last visited April 18, 2023).
