The View

Lyu calls for regulations for generative AI

By CORY NEALON

Published September 30, 2024

“These growing concerns show the urgent need for action to regulate the misuse of generative AI technologies and deepfakes.”
Siwei Lyu, Empire Innovation Professor
Department of Computer Science and Engineering

Appearing before a panel of New York State lawmakers, UB artificial intelligence expert Siwei Lyu laid bare the benefits and dangers associated with generative AI platforms such as ChatGPT.

“Generative AI has made it much easier, faster and cheaper to create realistic content,” said Lyu, adding that “a user only needs to describe what they want in text prompts, and within minutes, they can generate realistic content using online generative AI tools and services.”

He continued: “These low-cost tools are widely available and require little knowledge of AI.”

Lyu, a SUNY Empire Innovation Professor in the Department of Computer Science and Engineering, is an expert on machine learning and digital media, including the detection of deepfakes and other digital forgeries. He co-directs UB’s Center for Information Integrity, which works to identify, ameliorate and combat unreliable information that pollutes the public sphere.

He appeared virtually on Sept. 20 before representatives who serve on the state assembly panels for consumer affairs and protection, and science and technology.

“AI-generated content created to deceive and mislead is often referred to as deepfakes,” Lyu said. “Deepfakes introduce significant risks to consumers. AI-generated voices are used in scams to impersonate individuals and authorize fraudulent money transfers, as well as in ransomware attacks and identity theft.”

He also discussed how deepfake videos of celebrities and influencers are being used to falsely promote products or services.

Lyu told lawmakers that the “negative impact of deepfakes goes way beyond financial harm,” citing how deepfake pornography and other malicious content can ruin reputations, cause psychological trauma and even normalize domestic abuse.

“These growing concerns show the urgent need for action to regulate the misuse of generative AI technologies and deepfakes,” Lyu said.

To curb misuse of the technology, he said AI companies should build more safeguards into generative AI tools, including mechanisms that verify the authenticity of content. He said social media companies should implement stronger filters to detect and label deepfakes.

Lyu also said individuals creating harmful false content must be held accountable by the justice system, and that there should be increased public support for research and education to counter the negative effects of deepfakes.