AI Accelerates Operational Intelligence, Not Wisdom: The Legal Profession Spent Decades Mistaking One for the Other
5.12.2026

The debate over artificial intelligence in law misses the fundamental issue: the profession’s decades-long erosion of, and underinvestment in, the human capacities that technology cannot (and will not) replace. What appears to be a technology problem is an institutional one.
I have spent decades inside the wellbeing reform movement, hosting the Lawyer-to-Lawyer Wellbeing Roundtable, speaking at conferences, and leading workshops on the structural sources of the ills plaguing the profession. That vantage point reveals that AI has brought every underlying weakness into focus at once. As automation accelerates operational intelligence, it makes human judgment the limiting factor in legal legitimacy. The future of law will belong to professionals capable of deciding under pressure, taking responsibility without procedural cover, and exercising ethical stewardship when systems cannot resolve value conflicts. What must be restored is not a better toolset but the human disciplines that technology cannot supply and institutions neglected to cultivate: the willingness to decide, to take responsibility, and to hold the line when systems cannot.
Law evolved to manage risk through precedent, process, and delay – structures that are legitimate and necessary features of any institution. Precedent preserves continuity; process enables coordination; structured deliberation allows for reflection. But over time, law has increasingly used these structures defensively, as shields against accountability rather than frameworks for judgment. Precedent, when divorced from judgment, reduces exposure by anchoring decisions in what “had always been done” rather than serving as a framework for interpretation. Process diffuses responsibility across institutions in ways that could enable collaboration but increasingly serve to let individuals avoid accountability. Delay – which can enable reflection – is used to soften hard choices or make them disappear altogether. These structures worked when information was scarce and time was the constraint. Over time, they have also ensured that judgment is no longer institutionalized as a measure of value.
AI strips away that scaffolding – good and bad alike. It eliminates delay, compresses precedent, and automates pattern recognition at a scale no human can match. What once required teams, time, and billable hours now happens in seconds. This is not necessarily a threat to law as a societal function but rather the logical conclusion of how the business of law commodified itself. The profession spent decades mimicking machine logic – scale, speed, volume – while abandoning the human disciplines that gave precedent, process, and deliberation their meaning. Now AI does the machine part better than we ever could. But we optimized for operational intelligence and hollowed out the judgment that legitimacy requires. Once the scaffolding falls away, what remains becomes clear: sound judgment, ethical stewardship, and the willingness to take responsibility for outcomes when process, precedent, and delay no longer provide cover. These capacities have never been institutional strengths, because they were never systematically trained, reliably rewarded, or structurally reinforced. They have always been individual ones – and now they may be the only ones that ever really mattered.
The prevailing assumption in legal discourse is that artificial intelligence will reduce the need for human involvement by making legal work faster, cheaper, and more accurate. This assumption is backwards. As automation takes over the tasks that once passed for judgment (e.g., precedent matching, risk flagging, pattern recognition), it does not eliminate the need for human judgment. It concentrates responsibility in the decisions that remain: the ones involving ambiguity, competing values, incomplete information, and unpredictable consequences. Fewer lawyers will touch more consequential choices. The margin for error will narrow, not expand. In this environment, the absence of cultivated judgment is not merely inefficient; it is dangerous.
In practice, judgment is what remains when automated outputs conflict, when data is incomplete, when values collide, and when consequences cannot be predicted. It is the capacity to decide which risks are tolerable, which outcomes are unacceptable, and which principles must govern when no rule clearly applies. These decisions cannot be optimized or outsourced. They must be owned. Until recently, the lawyers who exercised real judgment – telling uncomfortable truths, making unpopular calls, accepting accountability – were tolerated, sometimes admired, and often penalized. Risk minimization was rewarded. Moral clarity was not. Judgment existed, but it lived at the margins.
The cracks were visible long before AI forced a reckoning. Wellbeing initiatives and calls for reform revealed a profession trained to execute process – often at superhuman scale – while neglecting the cultivation of judgment. Precedent, meant to anchor judgment and preserve continuity, became a shield against responsibility; process and delay absorbed work that judgment once carried.
The problem, then, is not that law failed to adopt technology responsibly. It is that the profession has failed to treat judgment as a trainable, accountable discipline. Legal education emphasizes analysis without consequence. Professional advancement rewards risk avoidance over decision-making. Institutions optimize for defensibility rather than discernment, and AI now simply exposes what the system deprioritized for decades.
That exposure brings the profession to a convergence point where the stakes extend beyond practice and into legitimacy itself. The rule of law does not rest on speed, scale, or predictive accuracy. It rests on human judgment exercised openly and with accountability, especially when precedent is thin and pressure is high. When judgment is replaced by process, law becomes procedural rather than principled. When responsibility is endlessly diffused, legitimacy erodes.
Yet much of the profession’s response to AI reflects mimicry rather than adaptation: ethics frameworks without ownership, AI policies without judgment, systems still designed to avoid responsibility. The profession is using AI to double down on the structures that hollowed out judgment in the first place.
What the debates over AI, lawyer wellbeing, the billable hour, and the rule of law all reveal is the same underlying loss. The profession did not simply become inefficient, overworked, or technologically exposed. It lost its sense of itself as the “system architect”: the designer and steward of accountability, judgment, and institutional integrity. Lawyers should be the professionals who build intentional structures that protect space for human judgment – containers that preserve slowness, deliberation, and reflection where those serve human values, not where they serve gatekeeping, bureaucracy, or professional monopoly. Instead, our profession has used delay to protect turf rather than to protect judgment. It has defended process as a barrier to entry rather than as a framework for accountability.
Autonomy became risk. Judgment became inefficiency. Ethics became compliance. Counsel became output. Lawyers were trained to execute process and manage exposure, rather than to exercise judgment and responsibility inside living systems. The machinery grew more sophisticated even as the vocation hollowed out.
Recent scholarship has begun to identify these structural vulnerabilities. Woodrow Hartzog and Jessica Silbey argue in “How AI Destroys Institutions” that AI’s core affordances – undermining expertise, short-circuiting decision making, and isolating humans – are inherently destructive to civic institutions, including law. The analysis is both necessary and illuminating, but the legal profession’s particular vulnerability to these dynamics is neither accidental nor the result of AI alone. It results from decades of failing to treat judgment as a trainable, accountable discipline. Human-centered language, absent structural accountability, reproduces the very dynamics that hollowed out judgment in the first place. Structural accountability means that the institutions that cultivate judgment also bear responsibility for its exercise – that consequences flow back to the source. Without it, training is merely content produced by “content creators,” rather than discipline intentionally implemented and reinforced by the professional institutions themselves.
AI did not cause this reckoning, but it has brought every underlying weakness into focus at once.
Libby Clark writes about the impact of emerging technology on the legal profession. She is the first chair of the New York State Bar Association’s Standing Committee on Attorney Well-Being and co-chair of its Attorney Well-Being Task Force. In 2023, she co-founded LikeWell.Org with Dr. Kerry Murray O’Hara, PsyD, an education and training platform for the legal profession built on the premise that “the future of law is human.” Clark is a former general counsel and chief operating officer who now acts as outside general counsel and strategic advisor to leaders and organizations on high-stakes judgment under pressure.