More than Alexa: New CT task force tackles AI rules

The 21-member panel includes state agencies and key lawmakers, as well as tech experts, professors and civil liberties groups.

John Craven

Sep 21, 2023, 9:28 PM

From Alexa to self-driving cars, artificial intelligence feels like it’s taking over our lives. A new state task force aims to keep Connecticut ahead of the rapidly changing technology, but it faces significant challenges.
Connecticut’s new Task Force to Study Artificial Intelligence met for the first time on Wednesday.
“DEEP FAKES”
When Sen. Richard Blumenthal opened a congressional hearing on AI recently, he had an ominous warning.
“Too often, we have seen what happens when technology outpaces regulation,” he said.
Except Blumenthal wasn’t speaking at all. Fellow senators heard a computer-simulated “deep fake” of his voice.
“And it sounded eerily – creepily – exactly like I did,” Blumenthal told task force members.
The task force's 21 members include state agency officials and key lawmakers, along with tech experts, professors and civil liberties groups.
The goal? To regulate artificial intelligence without stifling innovation.
“Understanding it, appreciating it, and knowing what its strengths and weaknesses are,” said task force co-chair Nick Donofrio, a nationally recognized tech industry veteran who lives in Ridgefield.
THE PROMISE – AND RISKS – OF AI
“Machine learning” saves time and money by teaching computers to complete routine tasks on their own.
“Most of the time, when you're engaging with Google's products, you're engaging with AI and you probably don't even know about it,” Beth Tsai, Google’s AI policy director, told the panel. “If you have a Pixel phone and you use Magic Eraser to get rid of a random person in the background – or remove the leash from your dog or something like that – those are AI systems.”
But AI also poses major challenges, like “deep fake” scams, potential job losses, risks to health care privacy and even discrimination in housing and jobs.
Blumenthal pointed to hiring as one example: “If resumes are screened by an AI model that eliminates certain candidates from employment.”
Experts warned that crafting specific regulations is difficult, especially as the technology rapidly evolves.
“None of the laws that are related to bias in lending or loan applications are going to consider things like reinforcement learning,” said Erick Aldana with Credo AI, a company that helps organizations develop responsible AI rules.
But other states are finding that existing laws may already cover artificial intelligence.
“Don't reinvent the wheel,” said Susan Frederick, head of the National Conference of State Legislatures’ Task Force on Artificial Intelligence, Cybersecurity and Privacy. “You don't need to know coding or how to create an algorithm as a state legislator. You need to know what the impact of that will be on your people, on state government.”
NEW LAWS COMING?
Lawmakers are taking action – in Hartford and in Washington.
In Congress, Blumenthal and Republican Sen. Josh Hawley are proposing new federal guidelines. Their proposal would create a new government oversight body to license and register artificial intelligence companies. Unlike social media companies, AI vendors could be sued for damages. And users would get clear notice that they're dealing with an automated bot.
In Connecticut, state lawmakers just passed new AI protections, which include the new task force. But the lawmaker behind the legislation admitted AI rules are still the Wild West.
“Technology advances much more rapidly than our ability to regulate it,” said state Sen. James Maroney (D-Milford).
The task force is also examining how state government itself uses artificial intelligence. It will send recommendations to state lawmakers in February.