The White House issued an executive order that aims to lay the groundwork for the “safe, secure and trustworthy development and use of AI”.
With privacy and consumer protection among its driving principles, the question arises: how will the EO impact the procurement of AI-based products and services, whether immediately or down the road?
Zeroing in: In its section on consumer protection, the EO calls on independent regulatory agencies to “consider using their full range of authorities to protect American consumers from fraud, discrimination and threats to privacy”.
The document also pushes agencies to identify where existing regulations already apply to AI, “including clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use.”
Impact: Compliance consultant and former government official Julie Myers Wood sees the EO having “significant implications for businesses engaging with AI technology, particularly in the realm of compliance and risk management.”
“This new directive creates a regulatory landscape where inadequate AI risk management can lead to legal, reputational and operational repercussions for businesses,” she said in a written response. “This heightened environment of regulatory scrutiny and accountability is why businesses must now be extra vigilant when purchasing AI-based products or services.
“Additionally, businesses must ensure third-party AI services adhere to relevant federal laws and policies,” she added. “These standards extend to international collaborations, potentially complicating transactions, and contract negotiations, with offshore or nearshore AI providers.”
Things to come: The Office of Management and Budget (OMB), following the EO’s instructions, issued draft guidance for federal agencies that use AI.
The memo instructs agencies, among other things, to establish the role of Chief AI Officer (CAIO) and to create AI governance bodies within their structures.
An interagency council helmed by the OMB Director will provide a list of “recommended documentation” required from businesses fulfilling federal AI contracts.
Following the leader: Although the EO’s directives are mostly limited to federal agencies, many of its guidelines are expected to help define standards and practices in business.
“At least here in the US, a lot of larger enterprises look to government agencies to set the standard,” commented Arti Raman, Founder and CEO of GenAI implementation and governance firm Portal26. “While they might not be required by law to make it a certain way, they will use that as a standard, and that will flow downhill from the government”.
“We are seeing AI being baked into everything […] Just like in information technology, just like in security, I think it makes a lot of sense to have someone whose job is to understand and wrap their arms around how AI is being used in the enterprise,” she added. “I’m seeing more and more companies appoint a Chief AI Officer. This is different from the Chief Security Officer and the Chief Information Officer. It’s definitely happening, and I think it will happen more and more.”
Counterpoint: The Biden administration is clearly aiming to set the stage for comprehensive regulation of AI development and use. However, the EO’s immediate effect is mostly limited to federal agencies, and much of it only defines future guidelines.
Legal experts, industry analysts, academic researchers and activists have expressed support for the attempt at regulation, though some contend that the EO lacks focus.
A tech lawyer consulted by NSAM characterized the document as “very much aspirational”, pointing out that actual regulatory action will depend on Congress.
“As a political maneuver, I think it indicates a growing interest from the government in AI,” commented Colin Levy, tech lawyer and Director of Legal at CLM solutions firm Malbek. “It certainly indicates where, at least the President, would like to see government move with respect to potentially regulating private enterprises who are developing or using AI in terms of there being standards and risk assessments.”
“The real challenge, however, is not only that it [the EO] lacks specifics, but that now the onus is on Congress to do something. And Congress, as we’ve seen, is fairly dysfunctional at the moment,” Levy added. “There’s likely to be quite a delay between the time in which they finally get their act together and do some sort of regulatory effort.”
NSAM’s Take: The general consensus characterizes the EO as a well-intentioned and ambitious document, but also one that lacks focus and force. In that sense, neither side of the B2B equation needs to worry about an immediate impact on how they handle contracts for AI products and services.
Nevertheless, businesses should watch how regulators apply existing legal frameworks to AI services. The technology remains a hot topic in the public sphere, and cases exploring its legal gray areas are already sprouting. Google, for example, has been taken to court for allegedly recording customer calls without consent through its AI cloud platform, which many brands use for customer service.
Some of the more interesting developments will come from the OMB. Chief AI Officer is the hottest new position in business, and the US government’s take on the job could end up molding it in significant ways. The same goes for the documentation required in AI procurement, a topic that is more developed in Europe and the UK.
As our legal sources have stated in the past, the law is always trying to catch up with technology, and rarely does so successfully. Nevertheless, even while regulation in the US has yet to crystallize, providers and purchasers of AI products and services should tread carefully.