December 20, 2024

Senate Democrats Announce Caucus Priority Bill Concerning Artificial Intelligence

HARTFORD – Today, Senate President Pro Tempore Martin Looney (D-New Haven), Senate Majority Leader Bob Duff (D-Norwalk), and Senator James Maroney (D-Milford) announced a Senate Democratic Caucus priority bill concerning Artificial Intelligence. The bill will create regulations for artificial intelligence in Connecticut and will focus on:

- Transparency and accountability;
- Training Connecticut’s workforce to use artificial intelligence;
- Criminalization of non-consensual intimate images.

On May 17, 2024, Colorado passed the first comprehensive artificial intelligence bill in the United States. Colorado’s law imposes obligations on developers and deployers of high-risk AI systems to protect consumers from discriminatory outcomes in consequential decisions made by such systems. It primarily targets AI systems that make significant decisions affecting individuals’ access to services such as education, employment, and health care.

“It is without question that we need to be next in passing legislation to fight digital discrimination,” said Senator James Maroney, Chair of the General Law Committee. “As AI continues to evolve, it’s crucial that we implement thoughtful regulations to ensure its development aligns with ethical standards, safeguards privacy, and minimizes potential harm.”

“Connecticut needs to establish guidelines to ensure decisions are made fairly, accurately, and transparently,” said Senate President Pro Tempore Martin Looney. “Without these regulations, the technology could outpace our ability to manage its risks, creating unintended consequences for our state. Connecticut needs to be next in passing legislation to manage artificial intelligence.”

“Without regulation, AI poses risks such as bias, privacy violations, and unforeseen societal impacts,” said Senate Majority Leader Bob Duff. “We must be proactive so AI does not negatively impact us before it is too late.”

Transparency and Accountability

This legislation will put safety brakes in place where AI is being used to make important decisions about people’s lives, such as housing, lending, employment, and government services. Between 80% and 88% of companies use AI to make employment decisions, and 50-70% of large landlords (depending on the survey) use AI to screen tenants.

Connecticut will build upon legislation passed in 2023 that addresses transparency and accountability surrounding AI so people know when they are interacting with AI. In addition, companies that deploy AI to make decisions affecting consumers’ access to credit, housing, insurance, education, employment, health care, or a government service will be subject to reporting and oversight by the Attorney General. These companies will need to show that proper safeguards are in place to protect consumers from the potential hazards of AI.

Artificial intelligence is fast becoming a regular part of daily life, shaping the way Americans work, play, and receive essential services. A Pew Research Center study finds that many Americans are aware of common ways they might encounter AI, yet only three in ten U.S. adults can correctly identify all six uses of AI asked about in the survey, underscoring how much public understanding is still developing.

Criminalizing Deepfake Porn

This legislation will prohibit the use of AI to create deepfake pornography of people, including AI-generated revenge porn.

In November 2023, an undisclosed number of girls at a New Jersey high school learned that one or more students at their school had used an artificial intelligence tool to generate what appeared to be nude images of them. Those images were being shared among classmates. AI-generated images that superimpose one person’s face or body onto another to make it look like someone else are called deepfake photos.

Not all deepfake photos are pornographic; any time a face is superimposed onto another body, or a person’s likeness is used to attribute spoken words to someone who never said them, it is a deepfake (no nudity required).

Deepfakes can take a real person’s face, voice, or partial image and meld it with other imagery to make it look or sound like a depiction of that person. Under this proposal, the revenge porn statutes will be updated to cover generative AI images and to prohibit the use of AI models to create child pornography or non-consensual images.

Workforce Development and Training

The intersection of workforce development and artificial intelligence (AI) presents both opportunities and challenges. While AI can improve productivity and lead to innovations, its impact on the workforce has raised concerns about potential negative consequences.

Challenges can include automation, skill gaps, and economic inequality. While AI can create new jobs, these roles often require specialized skills, meaning employees may need to reskill, which can be difficult without access to education or training programs. To mitigate these challenges, workforce retraining should be made accessible. This legislation will work to provide training opportunities to Connecticut residents while reaching people where they are.

Fifty percent of gateway jobs are at risk of being automated by generative AI. Under this legislation, we will work to provide Connecticut residents with opportunities to build careers and gain the skills to stay relevant in today’s job market.

Hiring algorithms have been shown to discriminate based on age. Some algorithms have assigned higher interest rates on loans based on race, and algorithms used by governments in other states, from administering SNAP benefits to deciding when to investigate reported incidents of child abuse, have been shown to discriminate based on income.

The online world stores ever-growing amounts of data that can yield unwanted results. AI can pose ethical challenges, including a lack of transparency and decisions that are not neutral. Choices made through AI can be susceptible to inaccuracies, discriminatory outcomes, and embedded bias.

FOR IMMEDIATE RELEASE

Contact: Kevin Coughlin | 203-710-0193 | kevin.coughlin@cga.ct.gov
