Digital Business, Regulated Business
If every business is a digital business, then every business will be a regulated business.
The promise of digital transformation is clear: modernize and streamline legacy analog systems and processes with digital technology for greater growth, higher productivity, and better business performance. But it comes with a twist: regulation of that same digital technology is a top priority of lawmakers and governments around the world. If every business is a digital business, then every business will be a regulated business.
##
It’s hard to pinpoint the exact date at which digital transformation took the enterprise technology business by storm, but in the context of tech industry marketing waves, it was fairly recent. Researchers estimate that the digital transformation market will grow from US$521 billion today to US$1.2 trillion by 2026, a rate of roughly 19% per year. It’s become clear that “every business is a digital business.” Even the metaverse is in on the action, with AR/VR and virtual worlds making their way into future visions of how we work together.
Meanwhile, the commercial viability of AI, the growth of social media, and tech’s role in social discord and algorithmic harms have given rise to a renewed interest among lawmakers and governments in stricter regulation of the tech industry, which has historically crossed paths with regulators in the context of things like intellectual property, antitrust, and more recently, data protection. But now everything is on the table: censorship of online speech, liability for defamatory utterances on tech platforms, algorithmic discrimination, and a slew of other digital maladies affecting people and organizations on the internet every day. Most of this activity takes place under the default umbrella of “big tech regulation.”
This “tech regulation” umbrella, however, can be misleading to business leaders outside of the tech industry, who we couldn’t blame for blithely assuming that all this discussion and debate on new regulations for algorithms and speech and data will be the tech industry’s problem. It won’t. Regulation of digital technology will cut far and wide across nearly every vertical industry that embraced digital transformation as a strategic imperative to automate and streamline business processes and core functions.
In other words, if every business is a digital business, then every business will be affected by new regulation of digital technology. As Stanford Cyber Policy Center director Marietje Schaake said recently, “We cannot think of technology as a sector anymore; it is a layer of almost everything.” Harms already observed across industry verticals show how flawed & harmful implementations of technology have attracted, and will continue to attract, the attention of regulatory agencies. Some examples …
In the healthcare vertical, researchers published findings in 2019 revealing that an algorithm widely used in US hospitals for patient care referral recommendations was less likely to refer Black patients than white patients who were equally sick. The algorithm assigned risk categories based on healthcare costs as a proxy for healthcare needs, thereby encoding socioeconomic inequality into the system. As a result, Black patients needed to be sicker than white patients before being referred for additional care.
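To make the mechanism concrete, here’s a minimal sketch (in Python) of how a cost-based proxy can produce exactly this disparity. The patient data, scoring function, and threshold are invented for illustration; this is not the actual hospital algorithm.

```python
# Hypothetical sketch: using past healthcare *cost* as a proxy for healthcare *need*.
# All names, numbers, and the threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int   # crude stand-in for actual medical need
    prior_year_cost: float    # historical spend, shaped by access to care

def referral_score(p: Patient) -> float:
    # The proxy model: "risk" is inferred from past spending, not from need.
    return p.prior_year_cost

REFERRAL_THRESHOLD = 5_000.0  # arbitrary cutoff for this sketch

patients = [
    Patient("A", chronic_conditions=4, prior_year_cost=3_200.0),  # equally sick, lower historical spend
    Patient("B", chronic_conditions=4, prior_year_cost=7_800.0),  # equally sick, higher historical spend
]

for p in patients:
    referred = referral_score(p) >= REFERRAL_THRESHOLD
    print(f"{p.name}: conditions={p.chronic_conditions}, "
          f"score={referral_score(p):.0f}, referred={referred}")

# Both patients are equally sick, but only B clears the threshold, because the
# proxy rewards past spending rather than present need.
```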
Retail examples abound: the infamous 2012 Target story about a pregnant 16-year-old who was outed to her family by a pregnancy-predicting data model, which resulted in personalized, targeted marketing mailers sent to her home; and Amazon.com’s 2016 introduction of Prime Same Day delivery, which was optimized for ZIP codes with concentrations of existing Prime members and consequently left predominantly Black neighborhoods without the service. Retail was also among the sectors most affected after GDPR took effect in 2018, as retailers moved an increasing volume of commerce online. Since that time, we’ve all been inundated with incoherent notice & consent mechanisms while the collection of personal data has continued unabated, raising further concerns about manipulation, dark patterns, the sale of data to brokers, and a litany of more contemporary privacy harms.
In banking, discriminatory mortgage lending has been illegal since 1968, but the increasing use of algorithms to automate determination of creditworthiness has pushed decision-making deeper into opaque systems, beyond the view of regulators and law enforcement. Recent reporting has revealed that in the US, “loan applicants of color were 40%–80% more likely to be denied than their white counterparts. In certain metro areas, the disparity was greater than 250%.” The US Department of Justice recently launched a new initiative to combat this practice.
Digital transformation in education has also had unintended consequences. In 2020, students in the UK did not sit for exams due to the COVID-19 lockdown and were instead given grades by the government’s official exam regulator, Ofqual, which relied on teachers to estimate grades for each student by subject and provide a ranking compared with every other student. This data was fed into an algorithm to decide which students would receive the top marks in their school. When the grades were announced, nearly 40% were lower than teachers’ estimates, and the downgrades hit state schools harder than private schools. The furor over the results and the lack of transparency compelled the UK government to abandon the algorithmic results and revert to the original teachers’ estimates.
How should businesses think about tech-focused regulatory proposals currently under consideration against this backdrop of digital transformation gone wrong? The recent proposals put forth by US lawmakers tend to focus on things like revising Section 230 of the Communications Decency Act (the 1996 law that limits online platforms’ liability stemming from third-party content), mitigating misinformation and its amplification (despite the near-impenetrability of First Amendment speech protections), and accountability for a slew of other algorithmically-enabled harms. There are too many proposals to list here, but we can look at a couple of them from US lawmakers and consider their likely applicability to traditional industry verticals, even though they have a low likelihood of passage. Most of these bills are well-intentioned but unrealistic and laden with unintended effects, not to mention partisan opposition.
But regardless … the idea here isn’t to plan for these specific provisions, but rather to go through the exercise of applying a small sampling of them to industry verticals to understand the hypothetical breadth of the overall regulatory effect. Critique of these proposals is a topic for another essay.
##
The US House Committee on Energy and Commerce (which holds primary jurisdiction over Section 230) has proposed the Justice Against Malicious Algorithms Act, which would amend Section 230 to create liability for personalized recommendations that cause physical or emotional injury. At its core, this bill is aimed at Facebook, YouTube, et al.: it targets malicious algorithms while excluding those that don’t rely on personalization, those that generate content in response to a user-specified search query, and infrastructure providers like web hosts. But what about algorithmically personalized recommendations of products or services in which user reviews contain offensive content? What about personalization based on location that does the same thing? As legal scholar Eric Goldman has noted, the bill’s key provision “applies equally to personalized content and personalized ad targeting, so this bill would potentially wreak havoc on the entire advertising ecosystem.”
One could make an argument to check all vertical boxes here, but retail would certainly be the most affected by this particular bill. To avoid liability, any retailer with a website comment section would have to divert delivery of that content to search results and/or replace personalized recommendations with ones based on general product/service popularity, which assumes personalization algorithms can be untangled from non-personalization algorithms. In any event, a materially downgraded customer experience is likely.
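To give a rough sense of what that untangling might involve, here is a hypothetical sketch of a retailer falling back from personalized recommendations to popularity-based ones behind a single flag. The functions, catalog, and data are invented; real recommendation pipelines are considerably harder to separate than this toggle implies.

```python
# Hypothetical sketch: swapping personalized recommendations for a
# non-personalized, popularity-based fallback. Everything here is invented.
from collections import Counter
from typing import List

def personalized_recs(user_history: List[str], catalog: List[str], k: int = 3) -> List[str]:
    # Stand-in for a personalization model: naively recommend catalog items
    # sharing a first letter with items in the user's history.
    seen_initials = {h[0] for h in user_history}
    return [item for item in catalog if item[0] in seen_initials][:k]

def popularity_recs(purchase_log: List[str], k: int = 3) -> List[str]:
    # Non-personalized fallback: most-purchased items across all users.
    return [item for item, _ in Counter(purchase_log).most_common(k)]

catalog = ["blender", "boots", "camera", "coffee maker", "desk lamp"]
purchase_log = ["coffee maker", "coffee maker", "desk lamp", "boots", "coffee maker"]
user_history = ["backpack", "bike light"]

PERSONALIZATION_ENABLED = False  # flip to True to restore personalized delivery

recs = (personalized_recs(user_history, catalog)
        if PERSONALIZATION_ENABLED
        else popularity_recs(purchase_log))
print(recs)  # popularity mode: ['coffee maker', 'desk lamp', 'boots']
```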
##
Another US proposal comes from Senator Ed Markey (D-MA) and Representative Doris Matsui (D-CA) in the form of a bill called the Algorithmic Justice and Online Platform Transparency Act. The focus of this bill is to introduce prohibitions on discriminatory content and add requirements for transparency into how algorithms extract and use data, adroitly steering clear of direct carve-outs to Section 230. To be clear, this bill is not about discriminatory conduct (which is already illegal) but rather the delivery of discriminatory content, which is prevalent on the internet today. It is common for algorithmic systems to infer race, ethnicity, gender, age, ability, and sexual orientation as a means of classifying users to deliver content such as advertisements for things like housing, jobs, and products in a way that stereotypes and discriminates. While the bill targets online platforms, it’s hard to imagine any business that engages in ad targeting not being affected were it to become law, and that’s a good thing. Discriminatory algorithms (in addition to lacking transparency) are bad for the world, but I digress. Generally speaking, algorithmically grouping people for purposes of deciding who sees what is fraught with peril if there is not a deep understanding of how these systems work.
It’s not clear where liability would begin & end for businesses buying ads, for example, versus the digital ad platforms delivering those ads to various websites. But were liability to extend up the value chain, algorithmically posting job ads for recruiting & hiring alone, which virtually all businesses do, would be enough to pull every vertical industry into scope. Any business with an online “jobs” page could theoretically be required to provide transparency into how the system decides which jobs are recommended to a user it knows something about. Lawmakers are thinking broadly about this, with Sen. Markey’s office calling for “a comprehensive review of algorithms and their potential discriminatory impact on everything from healthcare to financial services to employment and higher education.”
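As a rough illustration of what that transparency might entail, here is a hypothetical sketch of a jobs page that records, alongside each recommendation, which user signals drove it. The signal names, scoring, and audit-record format are invented, not drawn from the bill.

```python
# Hypothetical sketch: a jobs page that logs which user signals drove each
# recommendation, so the decision can be explained later. All names invented.
import json
from datetime import datetime, timezone

def recommend_jobs(user_signals: dict, jobs: list, k: int = 2):
    scored = []
    for job in jobs:
        # Toy scoring: overlap between declared user skills and job keywords.
        matched = sorted(set(user_signals.get("skills", [])) & set(job["keywords"]))
        scored.append((len(matched), job, matched))
    scored.sort(key=lambda t: t[0], reverse=True)

    recommendations, audit_log = [], []
    for score, job, matched in scored[:k]:
        recommendations.append(job["title"])
        audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "job": job["title"],
            "signals_used": ["skills"],     # disclose which user data was used
            "matched_keywords": matched,    # disclose why this job ranked highly
            "score": score,
        })
    return recommendations, audit_log

jobs = [
    {"title": "Data Analyst", "keywords": ["sql", "python"]},
    {"title": "Store Manager", "keywords": ["retail", "scheduling"]},
]
recs, log = recommend_jobs({"skills": ["python", "sql"]}, jobs)
print(recs)
print(json.dumps(log, indent=2))
```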
##
Looking outside the US, we don’t have to go far: in April of 2021, the European Commission released its long-awaited regulatory framework proposal for artificial intelligence, dubbed the Artificial Intelligence Act (AIA). At over 100 pages, it’s too much to cover in detail, but it describes itself as “horizontal [in] nature” (Article 1.2). Its applicability is spelled out more explicitly in Article 2, applying the regulation to:
(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;
(b) users of AI systems located within the Union; and
(c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union.
The proposal takes a four-tiered risk-based approach (minimal, limited, high, and unacceptable) that classifies current and proposed applications of AI to partition use cases that are banned from those that would be allowed but with defined safeguards. Included in the “unacceptable” (banned) category are AI systems that cause physical or psychological harm through manipulation of behaviors or exploitation of vulnerabilities, along with biometric surveillance and social scoring used to justify denial of a consequential service (e.g., the right to travel).
The “high-risk” category (Articles 6 and 7) includes uses of AI for “safety components of regulated products” such as medical devices and machinery that are already subject to third-party oversight. In addition, the proposal designates as high-risk certain standalone AI systems in a number of areas, including biometric identification, critical infrastructure, education, access to employment, access to essential private services, law enforcement, and the justice system.
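Purely as an illustration of the tiered structure (and emphatically not a legal classification), the scheme might be sketched as follows, with example use cases paraphrased from the proposal and the Commission’s accompanying explanations:

```python
# Illustrative sketch of the AIA's four risk tiers. The use-case mapping below
# paraphrases examples from the proposal and the Commission's explanatory
# materials for illustration only; it is not a legal classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "allowed, subject to conformity assessment, auditing, and monitoring"
    LIMITED = "allowed, subject to transparency obligations"
    MINIMAL = "allowed, no new obligations"

EXAMPLE_CLASSIFICATION = {
    "social scoring used to deny a consequential service": RiskTier.UNACCEPTABLE,
    "behavioral manipulation causing physical or psychological harm": RiskTier.UNACCEPTABLE,
    "safety component of a medical device": RiskTier.HIGH,
    "algorithmic screening for access to employment": RiskTier.HIGH,
    "chatbot that must disclose it is not a human": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case:<60} -> {tier.name}: {tier.value}")
```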
The high-risk category received a lot of attention during the comment period prior to the AIA’s release, specifically relating to insufficient clarity on what constitutes a high-risk AI system. As CSIS’s Meredith Broadbent writes, “Stakeholders expressed concern that the ambiguity of the classification of high-risk sectors and applications would lead to uncertainty among businesses over whether they would be subject to regulations.” In response, the Commission established a framework for determination of risk level that entails conformity assessments, auditing requirements, and post-deployment monitoring, evaluation, and reporting. Imposing the determination framework is an impact unto itself, as it carries costs and changes to business processes to ensure compliance.
As a final note on the AIA, the proposal’s definition of AI in Annex I suggests that any system using “statistical approaches” is considered AI for regulatory purposes. This is broad in scope and would include many traditional application software implementations that are not technically considered AI, described by Deloitte as including “expert systems and statistical models long in place”. This casts an even wider net for any business operating such systems either in the EU or outside the EU but serving the EU market, as defined by Article 2.
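To see how wide that net is, consider a hand-fitted logistic regression used to score loan applicants: colloquially, nobody would call it AI, yet as a “statistical approach” it would plausibly fall within the Annex I definition. The model, features, and weights below are invented for illustration.

```python
# Hypothetical sketch: a plain logistic-regression scorer, the kind of
# "statistical approach" long embedded in ordinary business software.
# Features, weights, and the applicant are invented for illustration.
import math

WEIGHTS = {"income_k": 0.03, "debt_ratio": -2.0, "years_employed": 0.15}
BIAS = -1.0

def approval_probability(applicant: dict) -> float:
    # Standard logistic regression: sigmoid of a weighted sum of features.
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

applicant = {"income_k": 55, "debt_ratio": 0.4, "years_employed": 6}
print(f"approval probability: {approval_probability(applicant):.2f}")

# If a scorer like this gates access to an essential private service for users
# in the EU, the Annex I definition and the Article 2 scoping described above
# would plausibly apply, regardless of where the business is established.
```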
As a practical matter, it will likely be years before the proposed rules become law in any form, but it’s hard to imagine an organization doing business in the European Union that uses digital technology for core operations remaining untouched by some version of these regulations, if/when this process is complete. Enforceability in the EU’s member states remains a source of debate, but there is no longer any mystery about the breadth of the Commission’s regulatory aspirations for AI.
##
What should businesses do? First and foremost, recognize tech as no longer a vertical industry unto itself, but rather a horizontal industry that is foundational to nearly everything. Technology, devices, data, algorithmic decision systems, and advanced analytics are essentially tools to conduct modern business, regardless of vertical industry.
Second, plan for what’s coming by identifying existing and planned digital systems that are likely to fall under new regulations. At the most basic level, regulatory proposals in the US and EU are rooted in the pursuit of ethical, responsible use of digital technology to ensure advancement of the public interest. Yes, that is an oversimplification of some complicated policy issues, but the emergence of regulatory interest that’s coincident with broad interest in tech ethics is not an accident. They are inextricably connected, and businesses that have already started to bring consideration of potential harms forward to the formative stage of design and development will fare better in the long run, both from the standpoint of societal responsibility and regulatory compliance.
Third, and most importantly, businesses need to invest in the culture change necessary to support the inevitable implementation of new compliance and governance mechanisms. As lawyer and sociologist Ari Waldman has written in the context of privacy and big tech, regulatory compliance in the wake of GDPR proved largely performative: “[c]ompanies like to tell us that they ‘care’ about our privacy or that our ‘privacy is important’ to them, but the truth is that tech companies systematically co-opt both their employees and the law …” to serve business interests. I believe this is, in part, a lesson we can apply from GDPR: relying on regulatory requirements to change behavior is not a recipe for culture change.
If every business is now a digital business, businesses that don’t invest in culture change now can expect to see the same inclination toward ineffective, check-box compliance that Waldman observed in his four-year study of how tech companies reacted to privacy legislation. Except this time around, regulators might well be on to it. Future regulation of technology is becoming easier to predict, at least directionally, while the lessons of the recent past are there for the taking. Businesses investing in digital transformation would be wise to take a new approach that brings a culture of responsibility along for the ride.
/end