1 min read

Link: An analysis shows that at least 26 US states have passed or are considering bills regulating the use of generative AI in election-related communications (Ivana Saric/Axios)

As the 2024 election approaches, at least 26 states have passed or are considering legislation to regulate generative AI in political campaigns, creating a complex patchwork of rules. The push reflects growing concern that AI could be used to improperly influence voter behavior.

Senator Mark Warner has voiced concern that generative AI could be misused to confuse or suppress voters, warning that developers and platforms may be underestimating the risk. Recent incidents include AI-generated robocalls impersonating President Biden in New Hampshire and a super PAC using AI to mimic Donald Trump's voice in Florida.

Despite these incidents, the U.S. still lacks robust federal rules governing AI's role in elections, and measures like President Biden's AI executive order offer little enforcement power. States have been more proactive: nineteen have passed laws targeting AI-generated "deepfakes" and "synthetic media."

Texas and California acted early, banning certain deceptive AI-generated content aimed at harming political figures. Minnesota, Michigan, and Washington have since passed laws of their own, and several other states have introduced or are considering similar regulations.

Experts such as Alix Fraser of Issue One advocate broader regulation, suggesting a federal agency nimble enough to keep pace with AI's rapid advances. Without federal intervention, many believe these state-level efforts will remain incomplete.

Ultimately, the patchwork of state laws may not be enough to safeguard elections from AI-driven manipulation, underscoring the need for federal legislation. #

--

Yoooo, this is a quick note on a link that made me go, WTF? Find all past links here.