The charade of “self-regulation”
“Self-regulation” is the EBITDA of corporate policy. It’s meaningless.
I confess … I’ve talked a lot about “self-regulation” over the years. I’ve mentioned it in online posts, in course curricula and class lectures for my students, and in various talks I’ve given about what companies are doing in the absence of real regulation. But I’ve come to the conclusion that it’s a meaningless term.
I read a piece in Lawfare this week asserting that last month’s OpenAI fiasco signaled the “death knell for AI self-regulation”, which raises the question: what exactly is this “self-regulation” we’re talking about? Is there a single consequential example in recent history of a company or industry “regulating itself” because Congress or regulatory agencies failed to enact a law or a proper rule? Imagine a company saying:
“You know, I really want to dump this toxic waste into the nearby river, and no one would ever know we did it. Even if the fish end up with three eyes and everyone in the neighboring town gets cancer, they wouldn’t be able to link it to us. But we’re going to ‘self-regulate’ and not do that, even though every other chemical company we compete against saves opex and boosts profits by doing this exact thing.”
This conversation never happens, and if that sounds cynical, do a little research on toxic torts to get a taste of the kinds of things companies attempt to get away with. Or go to Netflix and watch *A Civil Action* or *Erin Brockovich* or *Dark Waters*.
I am a believer in the free enterprise system and its role in making America’s economy the most successful in the world. I am not a believer, however, in unrestrained deregulation as some sort of magic innovation engine. Government regulation is necessary to balance economic growth against protection of the public interest. As it relates to AI, most government actors (administrative agencies and lawmakers from both parties) seem to be in basic agreement on the need for a comprehensive regulatory regime. But before talking about tech, let’s start with the most basic notion of regulation.
What is a regulation?
A regulation is a rule—with the force and effect of law—that compels firms to do things that they otherwise would not do. By this seemingly non-controversial definition, the whole concept of a firm forcing itself to do something that it wouldn’t do unless it forced itself to do it (do I have that right?) is a contradiction in terms. The idea makes zero sense.
Companies don’t “self-regulate.” They make promises to the market that they may or may not keep. Business conditions, after all, are subject to change, and without regulatory constraints, companies can renege if they want. Customers may get mad or take their money elsewhere. Industry watchdogs might lay into them. They might end up getting skewered on Last Week Tonight or profiled on 60 Minutes. But they don’t have to worry about being subjected to government enforcement actions.

There are certain scenarios, however, in which regulatory regimes do allow for some degree of what looks a lot like “self-regulation”. One such scenario is the regulatory agency practice of delegation, a well-intentioned transfer-of-authority arrangement in which a government regulator can delegate its authority to the company it regulates. The practice is grounded in the pursuit of public goals and is codified in law. Here’s an example from the Federal Aviation Administration (FAA):
Under Title 49 of the United States Code (49 USC) 44702(d), the FAA may delegate to a qualified private person a matter related to issuing certificates, or related to the examination, testing, and inspection necessary to issue a certificate on behalf of the FAA Administrator as authorized by statute to issue under 49 USC 44702(a).
Corporations are private persons, and the FAA can lawfully grant authority to organizations or companies under the Organization Designation Authorization (ODA) program, codified in Title 14 of the Code of Federal Regulations (14 CFR) part 183. If this all sounds bizarre, it isn’t necessarily a bad thing when confined to trivial, non-consequential decisions that shouldn’t reasonably require government regulators’ approval. Do we really want aviation safety experts at the FAA spending time approving things like lavatory signs or fuselage paint colors? No, we don’t, and delegation provides for this. But when deregulation efforts use delegation scope creep as a lever, the results can be disastrous, in the literal sense.
Delegation as self-regulation: a cautionary tale
I recently visited my alma mater, the Purdue University School of Aeronautics and Astronautics, to give an engineering ethics lecture that was basically the presentation of a case study: the Boeing 737-MAX. You probably remember it from the news … Lion Air 610 crashes in Indonesia in October 2018, followed by news reports of a previously unknown automated flight control system possibly contributing to the crash, followed by Boeing’s insistence that the MAX was still safe to fly, followed by the crash of Ethiopian Airlines 302 less than five months later under eerily similar circumstances, followed by the worldwide grounding of the MAX. This story is told in detail through Peter Robison’s excellent book *Flying Blind*, New York Times and Seattle Times reporting, and a PBS Frontline documentary. One of the more pivotal storylines underpinning the scandal was Boeing’s relationship with the FAA and the role of delegation in empowering the company to make consequential flight safety decisions that were at odds with the public interest.
Long story short, the automated flight control system on the MAX was new to the 737 platform, and decisions about whether to include it in the operations manual and whether to require pilots to undergo training on the system as a condition of MAX certification were effectively delegated to Boeing. In a competitive market battle between the 737-MAX and the Airbus A320neo, any requirement that takes a customer’s pilots out of a revenue-producing airplane and puts them into a simulator is an impediment to making a sale. Because there was no such requirement for the A320neo, Boeing had a strong business incentive to avoid additional training requirements for MAX customers, leading the company to downplay the importance of the new flight control system. And by “downplay” I mean “conceal”. When the investigation into Lion Air 610 commenced in the immediate aftermath of the crash, the FAA didn’t even know the system was operational during low-speed phases of flight, such as takeoff and climb. The airlines didn’t know, either, and pilots were livid.

This story involves a lot of other dynamics relating to corporate culture and failures of leadership, but germane to the topic of “self-regulation”, delegation was the key enabler of this catastrophic lack of oversight. In the US House Transportation Committee report on the 737-MAX, the phrase “self-regulation” does not appear (nor should it, as the failure was, sadly, part of the regulatory regime at the time), but “delegation” is everywhere, and its role in these twin disasters serves as tragic evidence of what idealistic notions of corporate “self-regulation” would have in store for the public interest.
In the end, the FAA’s permissiveness with regard to delegation was partially clawed back with the passage of the Consolidated Appropriations Act signed into law in December 2020, which requires regulated entities to “submit safety critical information … as the [FAA] may require”. The section titled “Limitation on Delegation” states that the FAA “may not delegate any finding of compliance … until the [FAA] has reviewed and validated any underlying assumptions related to human factors.” The section titled “Oversight of ODA Unit Members” contains several provisions to reinsert the FAA into decision-making by delegates working for regulated entities. Boeing was never described as “regulating itself”, as it was operating under the lawful regulatory regime at the time, but that is essentially the effect that delegation produced in this case. This sad story provides an unmistakable lesson in what business pressures can do to a firm left to make product safety decisions on its own. In this case, 346 people died as a result.
The tech industry
There’s been a lot of activity in the last year aimed at mitigating the potential for algorithmic and AI harms through various readily available means, while lawmakers continue the slow march toward something with the force and effect of law. These efforts are often (incorrectly) characterized as “self-regulation”, so let’s talk about them.
Risk management is not self-regulation. It’s simply … risk management … a longstanding, widely adopted business discipline that virtually every company above a certain size uses to mitigate operational, financial, and market risks to which the firm (or the industry sector) is exposed. Companies don’t engage in risk management for the good of society. They do it because it’s how operationally mature companies are run. For example, if, as a long-term, risk-averse, buy-and-hold investor, you bought shares in a blue-chip firm and found out later that it had no risk management function because senior leadership felt it was “just overhead” and a waste of opex, would you feel better or worse about your holdings? Don’t you want the management teams of companies you invest in to take the time to look around corners? Risk management is part of how professionals run a company.
Voluntary commitments are not self-regulation. When companies state they’re voluntarily taking an action without being compelled to do so by the government, they’re simply stating company policy, and that’s a good thing. Actions like voluntary commitments and public proclamations of principles and values are good because they provide the market with a means to hold firms accountable. As any free-marketer will tell you, the market has the ability to punish companies that engage in misconduct or lie to their customers about what they’re actually doing.
Free market doctrine asserts that markets are self-correcting in this sense. I think that’s partially true, but rules with the force and effect of law are also necessary to protect against the kind of misconduct that isn’t necessarily obvious or detectable by the market. How would an airline passenger know whether the pilot of their plane got a good night’s sleep or hadn’t consumed alcohol in the last 24 hours? How would a customer in a restaurant know whether there is a rodent problem in the kitchen? How do we know our drinking water won’t kill us? Regulators are responsible for looking after these sorts of things, and the penalties for breaking the rules are generally proportional to the harm done. The notion that any of these businesses can be unilaterally trusted to regulate themselves in the face of immense pressures on revenue, profit, market share, and the like is folly.

The good news for regulated firms is that the cost of an individual firm’s compliance becomes a cost of doing business in the sector, leveling the playing field for all participants. Yes, it’s true that more heavyweight regulation tends to favor well-funded, entrenched incumbents at the expense of competition from smaller upstarts, but the effect on competition is a separate issue. Generally, everyone plays by the same set of government-enforced rules.
What should companies do?
Statements of AI and privacy policy, declarations of company principles and values, voluntary commitments to ensure responsible AI, and management of AI risk are practices that constitute accountability mechanisms for stakeholders (e.g., customers, partners, employees, and even policymakers). Companies should continue to do these things. Certain companies with brand strategies anchored on trust will think long and hard about how far they’re willing to go here, as changing course or reneging on commitments will erode trust as a brand attribute. This is a good thing: in the absence of real regulation, accountability to the market is the best game in town, and in a free market economy it’s a workable (albeit suboptimal) instrument.
Companies with opinions on regulation are obviously free to lobby the government in an effort to shape the final outcome in a way that protects their ability to pursue innovation, economic growth, competition, and advancement of shareholder interests. Regardless of how anyone feels about corporate lobbying, it’s protected by the First Amendment and very much a part of the democratic process. But companies should *not* describe their voluntary actions as “self-regulation”, because there is no such thing. More importantly, trade groups, lobbyists, law- and policymakers, and every other stakeholder should also steer clear of characterizing “do the right thing” proclamations of company policy as acts of self-regulation, as doing so misleads the public into thinking there are protections in place that actually have some bite.
In a free market economy, company values, principles, and mission statements do indeed mean something. Maybe I’m biased, having had the good fortune of working for companies that behaved ethically and took their public statements seriously, but firms making claims about responsible AI are inviting us to hold them accountable. If they’re bending the truth, we’ll all find out soon enough. But in the end, safeguarding the public is a job for the government, and that is the only context in which “regulation” is real.
/end