Practice Innovations  —  Managing in a changing legal environment
March 2018 | VOLUME 19, NUMBER 2
Skill Fade: The Ethics of Lawyer Dependence on Algorithms and Technology


By Brian Sheppard, Professor of Law, Seton Hall University School of Law, Newark, NJ
Artificial intelligence, particularly the kind that uses algorithm-powered machine learning, can evaluate, sort, and cull information outside the lawyer's view. But automation might lead to a phenomenon known as skill fade. Clients could be harmed when AI ceases to perform as well as lawyers would have performed in its absence.

Changes to legal ethics codes could push lawyers to embrace legal technology, but will the technology make lawyers less ethical? In the last five years, over fifty percent of states have added language to their ethics codes that calls for lawyers to familiarize themselves with the technological tools of the profession.

Technologists have declared this addition a victory against Luddite lawyers, one that will hasten the adoption of legal technology and ultimately benefit clients. There is some evidence that the change could make a difference. In James v. National Financial, LLC, the Delaware Court of Chancery invoked the technological competency provision when it rejected the argument that a lawyer should be excused from sanctions for filing inaccurate spreadsheets because he could not turn on a computer without help.

Still, this ethical "sea change" is a moderate one for now. States have generally not altered their ethical rules regarding competency. Instead, they have opted to alter the advisory commentary to the rules. These changes mirror the equivocal wording suggested by the American Bar Association: lawyers must maintain knowledge of "the benefits and risks associated with relevant technology" (emphasis mine, on "relevant").

It is possible, however, that technological competency could someday become a basis for discipline. If so, we should consider how difficult it will be for lawyers to assess the benefits and risks of relevant technology. If history is any guide, the assessment will be easy. Many innovations, from typewriters to photocopiers to personal computers, have been straightforward performance enhancements. There was little need for a lawyer to understand precisely how these devices worked, because operating them did not remove anything of legal significance from the lawyer's view. This made it simple for the lawyer to assess reliably whether the technology was producing better outputs. For example, the typewriter produced results that were the same as or better than professional printing or handwritten work, but at a fraction of the cost. Boolean search of digitized textual databases gave lawyers greater access to primary and secondary sources, which empowered lawyers to produce briefs and opinions with more citations and deeper analyses of cited cases. There was little mystery; the value provided by the technology was often evident throughout the process of using it.
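
To make the point concrete, the following is a minimal sketch, in Python, of the kind of Boolean search described above. The cases and the query are invented for illustration. Nothing is hidden from the user: the selection rule is exactly the query the lawyer wrote.

    # A minimal Boolean search over a toy set of digitized documents.
    # The cases and their text are invented for illustration.
    documents = {
        "Smith v. Jones": "negligence duty of care breach damages",
        "Doe v. Acme Corp.": "strict liability product defect damages",
        "State v. Roe": "criminal intent mens rea burden of proof",
    }

    def matches(text, query):
        """Apply a simple query of (operator, term) pairs,
        e.g. [("AND", "damages"), ("NOT", "criminal")]."""
        words = set(text.lower().split())
        ok = True
        for op, term in query:
            present = term.lower() in words
            if op == "AND":
                ok = ok and present
            elif op == "OR":
                ok = ok or present
            elif op == "NOT":
                ok = ok and not present
        return ok

    query = [("AND", "damages"), ("NOT", "criminal")]
    hits = [name for name, text in documents.items() if matches(text, query)]
    print(hits)  # ['Smith v. Jones', 'Doe v. Acme Corp.']

If a case is missed or wrongly included, the lawyer can trace the result back to a specific term in the query.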

But forthcoming technology is different. Artificial intelligence, particularly the sort that uses algorithm-powered machine learning, can evaluate, sort, and cull information outside the lawyer's view. For example, existing technology allows cases or other resources to be selected, processed, and used as citations in machine-generated memoranda or contracts. The automated process of creation is almost always hidden. Companies wall it off in the name of intellectual property. But even if there were no walls, lawyers would be unable to understand the decision-making process that led an algorithm to sort information, and states could not realistically set ethical duties at so demanding a level.
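
For contrast with the Boolean sketch above, here is a toy illustration of machine-scored relevance ranking, again in Python with invented cases. The feature weights stand in for parameters learned from training data; this is a sketch of the general technique, not of any vendor's product.

    import math

    # Hypothetical weights standing in for learned parameters. A real
    # model would have thousands, and none would carry a legal meaning.
    learned_weights = {
        "negligence": 1.3, "duty": 0.9, "damages": 0.4,
        "proof": 0.2, "defect": -0.2, "criminal": -1.1,
    }

    cases = {
        "Smith v. Jones": "negligence duty of care breach damages",
        "Doe v. Acme Corp.": "strict liability product defect damages",
        "State v. Roe": "criminal intent mens rea burden of proof",
    }

    def relevance(text):
        # Score = weighted sum of word features, squashed to (0, 1).
        score = sum(learned_weights.get(w, 0.0) for w in text.lower().split())
        return 1 / (1 + math.exp(-score))

    for name in sorted(cases, key=lambda n: relevance(cases[n]), reverse=True):
        print(f"{name}: {relevance(cases[name]):.2f}")

Even in this tiny example, the ranking emerges from numerical weights rather than from a rule a lawyer could state. In a real system the weights number in the thousands or millions, and they are walled off besides.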

At first glance, however, it is not easy to see how the inscrutability of algorithms creates an ethical problem. Won't lawyers be able to assess whether the technology is producing better outputs than they could have produced without it? Won't they be able to review memoranda and check for errors?

Perhaps not. Automation may lead to a phenomenon known as skill fade.

Skill fade has been observed in occupational fields in which automation has become widespread. For example, numerous empirical studies have shown that autopilot can lead to a decline in pilot skill. Calvin L. Scovel III, the Inspector General of the US Department of Transportation, became so concerned about skill fade that his office reprimanded the Federal Aviation Administration, claiming that the agency no longer knows how many pilots are still capable of manually operating planes. Unfortunately, human error after autopilot failure was a likely cause in the recent crashes of Turkish Airlines Flight 1951 and Asiana Airlines Flight 214.

Skill fade becomes problematic when an automated system fails. If we conceive of system failure as an inability to access or otherwise use automated programs, then system failure in legal practice would be incredibly rare. It might happen during a long-term power outage, internet failure, or something similarly dramatic. While these developments are not impossible (Puerto Rico's prolonged blackout stands as a continuing example), they are highly improbable. I have been doing legal research since the turn of the millennium, and I haven't looked at pocket parts since my first year of law school.

However, the inability to access technology is not the only type of system failure. Clients could also be harmed when automation reaches a point at which it ceases to perform as well as lawyers would have performed in its absence. As with the skill fade from widespread use of autopilot, unnoticed degradation is a bigger risk than the system breaking outright. As legal skills fade, we might be unable to gauge whether the outputs of the system are as good as the outputs that we would have created in the pre-automation period.

One way that this could occur is if we become less connected to the legal resources that inform our work. In a sense, we fall out of the loop. Out-of-the-loop problems are real. Following psychologists like Christopher Wickens, scholars have long identified contexts in which automation decreases worker vigilance. The worker grows confident that the system is doing its job. Eventually, the worker becomes unmotivated to keep up with changes in its operation. So-called "OOTL" problems can even impair a worker's ability to recognize that he or she can no longer evaluate the quality of system outputs.

One can draw a familiar, non-technological parallel to law firm practice. Many junior litigators have toiled for days or weeks on a brief or motion, only to have an out-of-the-loop partner swoop in at the eleventh hour, read the fruits of their labor, and make a mediocre but successful oral argument. The partner, of course, thinks he or she knocked it out of the park, blissfully unaware of the near failure. Now imagine that automation is the junior litigator and the user is the senior partner.

Skeptics might doubt that algorithms create a risk of system degradation. They might believe that any risk would disappear or become insignificant as technology progresses. They might be right; it is difficult to predict the future, of course. However, the algorithmic programs that lawyers adopt will likely learn from lawyer conduct. Consequently, skill fade or out-of-the-loop problems could actually hasten the decline of system quality without raising any alarms. Machine learning can adopt the worst behaviors of those who interact with it. Tay, the Microsoft Twitter bot that began to spew racist bile in less than a day, is a cautionary example (albeit one that was rather easy to spot).
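
The feedback loop described above can be made concrete with a crude simulation. Everything in it, the drift rates, the vigilance decay, the notion of "quality" as a single number, is invented for illustration; it sketches the dynamic, not any real system.

    import random

    random.seed(0)
    quality = 1.00    # 1.00 = the pre-automation standard of work
    vigilance = 0.90  # probability the user catches a flawed output

    for cycle in range(1, 11):
        flaw = random.uniform(0.00, 0.05)  # each cycle risks a small error
        if random.random() >= vigilance:   # flaw slips past the user...
            quality -= flaw                # ...and the system learns from it
        vigilance *= 0.93                  # out of the loop: vigilance fades
        print(f"cycle {cycle}: quality={quality:.3f} vigilance={vigilance:.2f}")

No single cycle's drop would trigger an alarm; the decline is visible only against a baseline that the out-of-the-loop user no longer remembers.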

Of course, many software companies will give domain experts, such as experienced lawyers, access to the inner workings of their algorithms in an effort to keep the programs from going off the rails. But even today's domain experts struggle to understand what is happening inside algorithmic technology, and there is little reason to believe that they will be immune to the degradation problems other lawyers face.

It could nevertheless be argued that a drop in quality is not necessarily problematic, so long as the drop is more or less uniform across the profession and is paired with a fair drop in prices. If lawyers on both sides of a dispute in 2024 are relying on the same popular software, their clients are not necessarily victimized by the fact that the software fails to meet, say, 2018 performance expectations. Moreover, some scholars believe that lawyers now overshoot client needs by a significant margin. On this thinking, skill fade would have to be severe before outputs fell below client demand. Clients may also accept a drop in quality if it is matched by a corresponding drop in price. Proponents of technology argue that lawyers will be able to produce client work faster and that competition will then lead lawyers to lower costs. While the profit margins on individual cases or projects might shrink, lawyers will be able to take on more clients to maintain their lifestyles.

This reasoning assumes that clients will be satisfied when the results of legal representation meet their price and performance expectations. Ethical discipline cases for lack of competence are uncommon, and they are rarer still when lawyers' clients are satisfied with the outputs of representation. Maybe the ethical problem is illusory.

We should resist the urge to adopt such an anemic view of ethical responsibility, though. If we understand lawyers less as hired hands and more as custodians of the integrity of the system of laws, then ethical risks might increase in a system of automation. This is not simply waxing poetic. The preamble to the ABA's Model Rules of Professional Conduct states in its first sentence that a lawyer is "an officer of the legal system and a public citizen having special responsibility for the quality of justice" and later explains that this role involves "cultivat[ing] knowledge of the law beyond its use for clients." While preambles to ethics codes are seldom used as bases for discipline, the same could be said of the commentary to the rules, including the comment regarding technological competency.

As we begin to rely on algorithms, we should be mindful that our ability to assess them could be threatened unless we also adopt approaches that preserve our ability and motivation to monitor and assess the justice system itself.
