Virtually every major industry has seen the proliferation of artificial intelligence systems in recent years. Despite a lack of comprehensive legislation at the federal level, over thirty states have enacted regulations or restrictions on the use and development of artificial intelligence (“AI”) systems. With few exceptions, state-based AI regulation has been targeted and industry-specific. However, Texas lawmakers are currently considering HB 1709, the “Texas Responsible Artificial Intelligence Governance Act” (“TRAIGA” or “the Act”), which would impose broad, heavy-handed restrictions and obligations across many major industries regarding the use and development of AI systems in Texas. This blog post focuses on the impact TRAIGA would have on the insurance industry.
The primary purpose of TRAIGA is to protect consumers from algorithmic discrimination. “Algorithmic discrimination” is defined as “any condition in which an [AI] system when deployed creates an unlawful discrimination of a protected classification in violation of the laws of this state or federal law.”[1] Insurance rate filings often involve algorithmic calculations that apply rating factors to a wide variety of filed classifications for different types of business. Neither TRAIGA nor the Texas Insurance Code defines “unlawful discrimination,” but the Insurance Code does prohibit unfair discrimination based on a number of protected characteristics.[2] Unlike TRAIGA, the Texas Insurance Code recognizes that insurers consider factors such as age, gender, geographic location, and disability during the underwriting phase, and that doing so is not “unfair discrimination” if based on sound actuarial principles or underwriting standards related to actual or anticipated loss experience.[3] As written, TRAIGA contains no similar exemptions, and the Act would directly conflict with Chapter 544 of the Texas Insurance Code.
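To make the tension concrete, consider a deliberately simplified sketch of the kind of multiplicative rating calculation a filing might describe. The factor names and values below are invented for illustration and are not drawn from any actual filing; real rating plans involve many more classifications and actuarial refinements.

```python
# Hypothetical, simplified illustration of a filed rating algorithm.
# All names and values here are invented for this example.

BASE_RATE = 1000.00  # annual base premium, in dollars

# Actuarially derived relativities keyed to filed rating classifications.
AGE_FACTORS = {"16-24": 1.60, "25-64": 1.00, "65+": 1.15}
TERRITORY_FACTORS = {"urban": 1.25, "suburban": 1.00, "rural": 0.90}

def rate_premium(age_band: str, territory: str) -> float:
    """Apply multiplicative rating factors to the base rate.

    Under Chapter 544, using age or geography this way is not "unfair
    discrimination" if supported by sound actuarial principles; TRAIGA,
    as filed, contains no comparable exemption.
    """
    return BASE_RATE * AGE_FACTORS[age_band] * TERRITORY_FACTORS[territory]

# Example: a 22-year-old urban policyholder.
print(rate_premium("16-24", "urban"))  # 1000.00 * 1.60 * 1.25 = 2000.0
```

Even in this toy form, routine rating math turns on characteristics that the Insurance Code expressly permits insurers to weigh, which is precisely what TRAIGA's undefined standard of "unlawful discrimination" leaves in doubt.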
To guard against algorithmic discrimination, TRAIGA focuses primarily on the use and development of “high-risk artificial intelligence systems,” defined as any AI system that is a “substantial factor” in a “consequential decision.”[4] A “substantial factor” is a factor that is considered when making a consequential decision, is likely to affect the outcome of a consequential decision, and is weighed more heavily than any other factor contributing to a consequential decision.[5] A “consequential decision” means “any decision that has a material, legal, or similarly significant, effect on a consumer’s access to, cost of, or terms or condition of” insurance.[6] These circular definitions are central to the regulatory framework and restrictions the Act seeks to impose. The definitions of “substantial factor” and “consequential decision” are not only broad but also mutually dependent, which creates ambiguity at the outset and could lead to subjective enforcement.
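The circularity is easier to see if the statutory test is sketched as boolean logic. This is only our reading of the bill text, not an official formulation; the field names are our own, and the “weighed more heavily than any other factor” prong is simplified to a comparison of numeric weights.

```python
# Rough formalization of TRAIGA's definitional test, as we read the
# bill. Names and the numeric weighting are our simplifications.

from dataclasses import dataclass

@dataclass
class Factor:
    considered_in_decision: bool   # considered when making the decision
    likely_affects_outcome: bool   # likely to affect the outcome
    weight: float                  # relative weight among all factors

def is_substantial_factor(factor: Factor,
                          other_factor_weights: list[float]) -> bool:
    # All three prongs must hold, including outweighing every other
    # factor contributing to the consequential decision.
    return (factor.considered_in_decision
            and factor.likely_affects_outcome
            and all(factor.weight > w for w in other_factor_weights))

def is_consequential_decision(affects_insurance_terms: bool) -> bool:
    # "Material, legal, or similarly significant effect" on access to,
    # cost of, or terms or conditions of insurance.
    return affects_insurance_terms

def is_high_risk_system(factor: Factor, other_factor_weights: list[float],
                        affects_insurance_terms: bool) -> bool:
    # "High-risk" status attaches only where a substantial factor meets
    # a consequential decision; each label presupposes the other.
    return (is_substantial_factor(factor, other_factor_weights)
            and is_consequential_decision(affects_insurance_terms))
```

Even in this toy form, a “substantial factor” cannot be evaluated without first identifying a “consequential decision,” and a system becomes “high-risk” only when both labels attach at once, which is the mutual dependence described above.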
Despite the Act’s ambiguity, it is clear that TRAIGA would impose stringent regulatory burdens and compliance costs on insurers. The Act’s regulatory obligations would apply to any insurer doing business in Texas that deploys or develops a high-risk AI system. The Act requires deployers of high-risk AI systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination, and as “deployer” is defined, virtually all insurers would qualify.[7] Further, a deployer must immediately suspend the use of an AI system if the deployer knows or has reason to know that it is not complying with its duty to guard against algorithmic discrimination, and must notify the developer of a high-risk AI system if that system is not compliant with TRAIGA.[8]
TRAIGA also requires deployers, or their vendors, to complete annual impact assessments for any high-risk AI system they utilize, as well as an impact assessment within ninety days after any intentional and substantial modification to such a system. Among several other items, an impact assessment must include a description of the measures the deployer has taken to mitigate the risks of algorithmic discrimination.[9] A deployer is also required to create and maintain a risk management policy governing its use of high-risk AI systems.[10]
TRAIGA would also require a deployer that uses a high-risk AI system intended to interact with consumers to issue disclosures to those consumers before or at the time of the interaction. These disclosures must include:
- That the consumer is interacting with an AI system;
- The purpose of the system;
- That the system may or will make a consequential decision affecting the consumer;
- The nature of any consequential decision in which the system is or may be a substantial factor;
- The factors to be used in making any consequential decisions;
- A description of any human components of the system, any automated components of the system, and how human and automated components are used to inform a consequential decision; and
- A declaration of the consumer’s rights under the Act.[11]
The Act’s disclosure requirement could apply to every line and type of insurance in which insurers use their own or third-party data to create risk classifications.
Not only does TRAIGA impose heavy regulatory burdens, but it also prohibits or severely limits the use of AI systems for specific activities:
- Manipulation of Human Behavior to Circumvent Informed Decision Making: The Act prohibits AI systems that use subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques.
- Social Scoring: The Act prohibits the use of an AI system to evaluate or classify natural persons or groups of natural persons based on their social behavior or predicted personal characteristics.
- Biometric Identifiers: The Act prohibits AI systems developed using biometric identifiers of individuals, or through the targeted or untargeted gathering of images or other media from the internet or any other publicly available source, for the purpose of uniquely identifying a specific individual.
- Categorization Based on Sensitive Attributes: The Act prohibits AI systems developed or deployed with the specific purpose of inferring or interpreting sensitive personal attributes of a person or group of persons using biometric identifiers, except for the labeling or filtering of lawfully acquired biometric identifier data.
- Utilization of Personal Attributes for Harm: The Act prohibits the utilization, by an AI system, of a person’s characteristics based on their race, color, disability, religion, sex, national origin, age, or a specific social or economic situation, with the objective or the effect of materially distorting the behavior of that person in a manner that causes or is reasonably likely to cause that person harm.[12]
TRAIGA is the most sweeping piece of AI legislation introduced to date and would have significant impacts on insurers using AI systems. Insurance companies employ AI for a variety of tasks: speeding up the underwriting process, identifying fraudulent claims, preventing losses, creating personalized pricing based on a customer’s risk factors, providing customers more personalized and targeted coverage, and improving internal operational efficiency. That list is not exhaustive, and the use of AI systems in the industry benefits both insurers and insureds. Not only would the Act increase compliance costs; it could also stifle innovation.
If enacted into law, TRAIGA would take effect September 1, 2025.
Various insurance trade associations and other groups have been working diligently to convince the author of this legislation to include some type of carve-out for the business of insurance similar to those enacted in other states. Unless such a carve-out is included, compliance will be extremely difficult for insurance companies doing business in Texas because the standards in the bill as filed potentially conflict with existing provisions of Texas insurance law.
The Between the Lines blog is made available by Mitchell, Williams, Selig, Gates & Woodyard, P.L.L.C. and the law firm publisher. The blog site is for educational purposes only, as well as to give general information and a general understanding of the law. This blog is not intended to provide specific legal advice. Use of this blog site does not create an attorney-client relationship between you and Mitchell Williams or the blog site publisher. The Between the Lines blog site should not be used as a substitute for legal advice from a licensed professional attorney in your state.