
Are boards ready for AI?


Issue № 2 | April 2025

Welcome back!

This month we are looking at the future of boards and AI. Artificial intelligence is no longer a futuristic dream tucked away in research labs. It's already here, moving into boardrooms at remarkable speed. The trend is undeniable, and companies, along with their directors, are increasingly aware of it.

In a 2024 Deloitte survey, corporate leaders were asked to select their three main priorities for the next 12 months: 43% chose “increasing the use of AI across the organization”, making it the number one priority.

It is no wonder that regulators are also paying attention. Take the new EU AI Act. It does not directly assign duties to boards — no explicit language about directors or fiduciary responsibilities. Yet, by mandating compliance frameworks around risk assessment, it necessitates board-level engagement.

Investors are alert too. Some of the largest U.S. pension funds recently announced they would oppose the re-election of directors at companies that failed to adequately oversee AI-related risks. In her featured interview for this newsletter (see below), Prof. Katja Langenbucher points out that ‘Shareholders might be just as critical of a board’s overreliance on AI as of a board that fails to use AI.’
 
The Question of Ownership

When it comes to AI and boardrooms, one immediate challenge is responsibility. If AI helps directors to make decisions — or worse, makes them independently — who owns these decisions?

Katja Langenbucher's paper — yesterday awarded the 2025 ECGI Law Prize (Best paper in the Law Working Paper series) — offers a useful framework. She proposes a matrix based on two variables, "ownership" and "trust", which helps identify the appropriate level of judicial scrutiny when boards use AI to inform their decisions.

Directors are expected to "own" their decisions, meaning that they cannot abdicate their authority. In return, corporate law places “trust” in directors to make business judgments, free from judicial second-guessing. 

When both ownership and trust are low — for example, when boards rely heavily on AI and the decisions don’t concern business judgment — courts, she argues, should apply more intensive judicial review.
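
Laid out as a simple two-by-two, the framework might be sketched as follows. This is a rough gloss on our part: the diagonal cells follow from the paper as described above, while the mixed cells are our inference rather than the paper's wording.

                                   High trust (business judgment)   Low trust
  High ownership (directors decide)   deferential review            closer scrutiny
  Low ownership (heavy AI reliance)   closer scrutiny               intensive judicial review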
 
The Myth of Perfect Tech

It's tempting to think that AI will fix what humans get wrong. That old corporate governance issues — bias, agency problems, information asymmetries — might just fade away under the cool logic of machines. But Profs. Enriques and Zetzsche warn against this optimism. They call it the “tech nirvana fallacy”: the false hope that technology will solve all corporate governance problems.

Certainly, if well deployed, technology can increase the transparency of board decisions, speed up processes, and enhance accuracy. But despite these advantages, AI could actually make governance harder.

Heavy reliance on AI could increase the risk of paralysis: machines tend to surface long lists of red flags to tackle and goals to prioritise. More data isn’t always helpful. Boards may become overwhelmed, pressured by the weight of heightened expectations and the burden of deeper oversight.

Tech is not perfect — not yet, at least. Apart from its now-famous struggle to generate realistic hands, AI may replicate or even amplify misalignments between directors and shareholders if not trained carefully. Like human directors, machines can pursue their own objectives — or more likely, reflect the biases of those who designed them.

Not to mention that excessive use of AI may increase cybersecurity threats, making companies, and their boards, more vulnerable to attack and to the reputational fallout that follows. There is evidence that markets punish companies that fail to manage cybersecurity properly.

Beyond Mere Cosmetic Solutions

Several companies, including Boeing, eBay, and Dell Technologies, have announced the appointment of a Chief AI Officer (CAIO). Should this role be extended to the board? Some boards may think that bringing in an “AI expert” solves the governance problem. It certainly looks good in the annual report and might reassure investors in the short term.

However, research by Profs. Shapira and Nili suggests scepticism is warranted. Specialist directors (AI experts included) often serve a symbolic rather than structural purpose. Their presence may distort board dynamics and enlarge the board without improving oversight.
 
Where This Leaves Us

The more we look at it, the clearer it becomes: most boards are likely not ready, at least not fully. AI has its place, and deep familiarity with its capabilities is increasingly necessary. Boards must adapt to fit a world where machines no longer quietly execute in the background but play a visible role in strategic choices.

Directors must remain vigilant, able to detect AI-related risks and mitigate errors. Ultimately, they must retain ownership and accountability over decisions, regardless of how intelligent the systems become. 
 
Learn More

For a deeper exploration of how boards can navigate AI governance challenges, read our exclusive interview with Prof. Katja Langenbucher (available below). She shares valuable and practical insights on black-box AI, board accountability and regulatory developments in this disruptive field.
 
Ciao for now,
~ Riccardo

Riccardo Rao

Riccardo Rao is a PhD candidate in business law at the Universities of Udine and Trieste, Italy. His PhD research focuses on benefit corporations, with comparative analysis across Europe and North America.

✉️ Please feel free to get in touch, share your thoughts, and let us know how we're doing: email [email protected].

Featured Interview

with Katja C. Langenbucher

This interview is based on Prof. Langenbucher's paper "Ownership and Trust – A Corporate Law Framework for Board Decision-Making in the Age of AI", which explores how boards can responsibly integrate AI into their decision-making processes.

🏆 The paper has been named the winner of the 2025 ECGI Law Series Prize.


Additional related content from ECGI

Working papers:

📄 AI in Corporate Governance: Can Machines Recover Corporate Purpose? by Boris Nikolov, Norman Schuerhoff, and Sam Wagner (Mar 2025)

📄 Corporate Governance Meets Data and Technology by Wei Jiang and Tao Li (Mar 2024)

📄 Using Artificial Intelligence to Measure the Family Control of Companies by Mario Daniele Amore, Valentino D’Angelo, Isabelle Le Breton-Miller, Danny Miller, Valerio Pelucco, and Marc Van Essen (Jan 2024)

📄 Specialist Directors by Roy Shapira and Yaron Nili (Dec 2023)

📄 Use of AI by Financial Players: The Emerging Evidence by Gerard Hertig (Jan 2022)

📄 The Political Economy of AI-Driven Financial Supervision by Gerard Hertig (Jan 2022)

📄 Viewing Artificial Persons in the AI Age Through the Lens of History by Susan Watson (2021)

📄 Augmented Lawyering by John Armour, Richard Parnham, and Mari Sako (2020)

📄 The End of the Corporation by Mark Fenwick and Erik Vermeulen (2019)

📄 Corporate Technologies and the Tech Nirvana Fallacy by Luca Enriques and Dirk Zetzsche (Jun 2019)

ECGI Blog:

📘 The Shifting Tide of Board Expertise? by Roy Shapira (Jul 2024)

📘 A historical perspective on corporate officer accountability by Steve Kourabas, Nick Sinanis, and Timothy Peters (Feb 2024)

📘 Regulating for “humans-in-the-loop” by Talia Gillis (2022)

📘 Personhood for AI—Coming to a Jurisdiction Near You? by Carla L. Reyes (2022)

📘 Artificial Intelligence and the “S” in ESG by Katja Langenbucher (Sep 2022)

More from the Blog...

Videos:

The Future of Boards of Directors Around the World (Jun 2022)

AI in Corporate Law and Practice (Apr 2021)
