The Utah state Legislature passed an artificial intelligence law that will hold companies accountable if their product is used to deceive consumers, but does little to regulate the technology itself.

Senate Bill 149, the Artificial Intelligence Policy Act, cleared the legislature 28 Feb. following overwhelming majority votes in the House and Senate. It is among the first U.S. state-level AI bills introduced this year to go beyond public-sector requirements and address private-sector AI deployments. Barring a veto from Gov. Spencer Cox, R-Utah, the bill will be enacted by 21 March.

The bill's main enforcement focus is on bringing generative AI usage into Utah's current consumer protection laws. If a business uses generative AI and the system deceives a consumer, the business, not the AI product itself, will be held responsible.

The law is on the lighter end of the spectrum of bills being considered in state legislatures across the U.S. During a 20 Feb. Utah House committee hearing on the bill, state Sen. Kirk Cullimore, R-Utah, indicated the less-stringent approach was intentional, noting the state does not want to get in the way of innovation. Instead, the bill seeks to expand existing consumer protection, health care and financial services laws to cover AI usage.

"In Utah, we believe government should have a light touch, and we want to encourage innovation in Utah," Cullimore said.

SB 149 also imposes transparency obligations on those deploying AI. Certain licensed professionals, such as mental health providers, must disclose upfront when a person is interacting with AI technology or looking at material created by generative AI. Other businesses, such as telemarketers, must disclose their use of chatbots as well, but only if asked to do so.

Those who violate the law could face an administrative fine of up to USD2,500 and a civil penalty of up to USD5,000, according to the proposed statute. A person who uses a generative AI model to commit a crime, such as using AI to create a telephone solicitation, could be subject to criminal charges.

The bill also establishes the Office of AI Policy within Utah's consumer protection division, which is tasked with overseeing the AI Learning Lab Program.

The lab will function similarly to others set up by states: it will study the risks and benefits of current AI technology and make recommendations to lawmakers on how to respond to them. Utah Department of Commerce Director Margaret Woolley Busse said the lab will also look at proactive ways to use AI across various industries by inviting stakeholders to give opinions and test out new technology.

"What this allows us to do as a state is to be agile, to be on top of all the things that are changing very, very, very rapidly in this space, that this lab can be proactively looking at and making recommendations to you," Busse said at the 20 Feb. hearing.

But the lab also acts as a safe harbor for developers. A business interested in creating an AI product but worried about running afoul of regulations could apply for a regulatory mitigation agreement. The developer would be required to demonstrate it has the financial resources to create the product, along with a red-teaming plan to limit potential risks and monitor the product's performance.

Under the program, the developer has 12 months to test the product, Busse said — and if it deceives a customer or runs into another issue, the developer can work with regulators to get a reduced penalty and make the consumer whole. The idea is to encourage small AI developers to innovate without fear, she said.

The lab's purpose and functions are similar to what Utah did in its financial tech industry when it created a sandbox for businesses to test new products in 2019, according to Ballard Spahr Of Counsel Jacey Skinner. The state went further two years later, establishing a general regulatory sandbox in 2021.

Skinner said the fintech sandbox was initially slow to see buy-in, but has since played a major role in shaping Utah's regulations. "Utah's always prided themselves of being a place that fosters innovation and is a unique place to do business," she said.

Utah is certainly not the first state to put guardrails around AI usage.

In 2023, Colorado put in place provisions within its comprehensive privacy law allowing consumers to opt out of profiling for automated decisions and requiring assessments for higher-risk activities. Additionally, the California Privacy Protection Agency is mulling extensive rules around how automated decision-making technology can be used.

But Manatt Partner Brandon Reilly characterized Utah's approach as simpler than those risk-based approaches, since it clarifies that existing laws apply to generative AI as well.

"It's really just defining the worst potential uses of AI, which is, of course, the uses of AI that might violate the law," he said. 

States are introducing their own AI bills at a rapid clip, creating the possibility of a patchwork policy landscape with the U.S. Congress tied up on spending bills. But while big players like California and the European Union, which is set to finalize its own AI Act within the coming weeks, are known for setting standards, Reilly said Utah's bill may have a longer reach than anticipated. He pointed to Washington's 2019 privacy bill, which he said set the template for privacy laws in other states.

"Everybody's trying to look for inspiration about what regulation should look like," he said. "And so the fact that Utah lawmakers can point to this successful legislative effort and extol the benefits of it, and everyone can kind of examine that as well as downsides — I think that's going to have a profound effect at the state level."