Salary.com Compensation & Pay Equity Law Review

New State Laws Governing AI are Coming

NEWSLETTER VOLUME 3.4 | January 24, 2025

Editor's Note


While tech companies court Washington, state legislatures are looking at the risks of AI and are starting to follow Colorado's lead in governing how and when AI is used.

Colorado's law borrows heavily from the EU's AI Act, which categorizes AI applications based on their risk of harm to fundamental human rights. The Act is intended to encourage developers to create trustworthy AI while mitigating potential harm to people. As the preface to the AI Act explains:

"The proposal is based on EU values and fundamental rights and aims to give people and other users the confidence to embrace AI-based solutions, while encouraging businesses to develop them. AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being."

The EU Act also creates different obligations depending on a person or entity's role in the chain between creating the system and ultimately using it. The Act tries to categorize the risk of harm and assigns responsibility both for creating those risks and for causing harm, even when the responsible person didn't create the risk.

It's complex and based on fundamental rights and principles that the US doesn't always follow. For example, in the EU, privacy is considered part of human dignity and cannot be taken away. In the US, the first ten amendments to the Constitution arguably create a right of privacy based on the things the government cannot do to individuals, but we're still arguing about that (among other things). There is no express right to privacy in the US Constitution, although some states recognize privacy as a fundamental human right.

Even though the legal foundations of the EU and US don't always match, the framework of sorting out where the risk of harm exists, when it arises, what causes the harm, and who should be legally responsible is consistent with law everywhere. What's new about the EU's approach in the AI Act (and to pay equity in the EU Pay Transparency Directive) is that these laws try to prevent harm and actually solve problems instead of simply creating a remedy after the harm is done.

It's a new way to think about law, and technology has made it possible. Of course, technology has also created the problems. It's like cars. When cars replaced horses, we didn't really worry about accidents the way we do now. Cars are heavy and destructive, especially as they gain speed. But they are also part of our lives now and necessary for many things humans do. So we try to mitigate the damage they cause, whether from design flaws or human error.

With AI, though, it's even more complicated because AI starts with data, and all sorts of things can go wrong with data: inaccuracies, using data for a purpose other than the one it was created for, or using a limited data point to represent something bigger and more complex that it doesn't really capture. Even if you get the data right, size matters. So does what you do with the data in designing the AI system and its functionality, what purpose the system is used for, and how (and whether) a decision maker uses the outputs.

We're going to see a lot more activity by states to address new issues with technology. Already we're seeing different and potentially conflicting approaches. The article below summarizes some of the ways state legislatures are looking at AI.

- Heather Bussing

2025 is set to see an even greater level of activity in the AI governance space. Below are three areas practitioners should focus on.

1. Expanded comprehensive AI laws at the state level

Colorado will not be the only state with a comprehensive AI law by the time 2025 comes to a close. Early proposals for this upcoming legislative session are building on the Colorado model (which itself is on track for a series of updates) but adding their own twists. These include:

  • Massachusetts HD 396 – requires corporations operating in Massachusetts that use AI systems to target specific consumer groups or influence behavior to disclose (i) the purpose for using such tools, (ii) the ways they are designed to influence consumer behavior, and (iii) details on third parties involved in the design, deployment or operation of such tools.
  • New Mexico HB 60 – creates a private right of action against a developer or deployer for declaratory or injunctive relief (as well as attorneys’ fees) for violations.
  • New York A768 – allows companies to rely on a rebuttable presumption that they exercised reasonable care to avoid algorithmic discrimination when they engage third parties approved by the attorney general to complete bias and governance audits.
  • Texas HB 1709 – adds a new category of regulated entity—distributors, which make an AI system available on the market for a commercial purpose—and requires them to use reasonable care to protect consumers from algorithmic discrimination, including withdrawing an AI system from the market if the distributor knows or has reason to know that the system is in non-compliance. The bill also prohibits certain AI uses, including social scoring and categorization based on sensitive attributes.
  • Virginia HB 2094 – adds yet another regulated entity—integrators, which knowingly integrate an AI system into a software application and place such software application on the market—and requires them to adopt an acceptable use policy for the purposes of mitigating known risks of algorithmic discrimination and make certain disclosures to deployers, including information about how the integrator modified the AI system. The bill also requires developers of generative AI systems to “watermark” outputs.

We expect more bills like these (and some that introduce their own curveballs). But it is too soon to tell whether Congress will step in and preempt an emerging patchwork of state laws.

2. Continued regulatory guidance and enforcement

As discussed in our 2024 recap, there was significant activity by regulators enforcing existing rules against companies engaging in discriminatory conduct or unfair or deceptive practices relating to their use of AI. This will undoubtedly continue, as regulators at the state and federal levels have been quick to point out that they don’t need AI-specific legislation to protect consumers. Companies moving to embed AI into their business practices should look to regulators and their enforcement activities for guidance on what constitutes appropriate uses of AI. Based on that activity to date, the three key principles to focus on are transparency, accountability and fairness to individuals.

3. Internal accountability measures

Companies that document their specific AI use cases will be better positioned both to comply with existing requirements and to address new obligations coming down the pipeline. Although the comprehensive laws referred to above focus on high-risk use cases (those with material impacts on consumers in certain situations), other uses may also become subject to regulation. A complete inventory of use cases is therefore the foundation for compliance with both existing and new legal requirements.

This content is licensed and was originally published by JD Supra
