FTC Updates Rules to Address AI Deepfake Threats to Consumer Safety

by Cristopher Gerhold


AI Deepfake. Source: Pixabay

The US Federal Trade Commission (FTC) proposed updates to an artificial intelligence (AI) deepfake rule on February 16. The government agency said the proposed rule changes would protect consumers from AI impersonations.

According to the ‘Rule on Impersonation of Government and Businesses’ document, AI deepfakes that impersonate businesses and governments may also face legal action.

No AI Deepfakes Allowed for Businesses and Government Agencies


The FTC said the changes are necessary because of the prevalence of impersonations of businesses, government officials, and government agencies.

The goal is to protect consumers from potential harm caused by generative AI platforms.

The updated rule will come into effect 30 days after its publication in the Federal Register.

For now, public comments are welcome for the next 60 days. Once the rule is enacted, the FTC will be empowered to go after scammers who defraud consumers by impersonating legitimate businesses or government agencies.

The AI industry has come a long way since the launch of ChatGPT by OpenAI in November 2022. The company, led by Sam Altman, recently launched a new product called Sora.

Sora uses AI prompts to generate lifelike videos with highly detailed scenes, complex camera motions, and vivid emotions.

Powerful AI tools like those offered by OpenAI and Google have increased productivity for many people and businesses.

However, they have also become effective tools in the hands of cybercriminals. With them, criminals can easily alter someone’s appearance or voice to deceive a target audience.

The FTC rule change will come down hard on these criminals to ensure they face the full weight of the law.

While there is no concrete rule that makes AI-generated recreations illegal, US Senators Chris Coons, Marsha Blackburn, and Thom Tillis have taken steps to address the issue.

Impersonator Scams Stole $2.7 Billion in 2023


Impersonator scams, though not often making headlines, pose a significant threat in the US.

Speaking on the issue, FTC Chair Lina Khan noted that voice cloning and AI-driven scams have been on the rise.

Khan said that updating the rules would strengthen the agency’s ability to address AI-enabled scams that impersonate individuals.

Putting a figure on the potential danger impersonator scams carry, Khan noted that US citizens lost upwards of $2.7 billion in 2023.

The new rules would also enable the agency to return stolen funds to affected victims.

Meanwhile, the head of the Federal Communications Commission (FCC), Jessica Rosenworcel, has proposed classifying all calls with AI-generated voices as illegal.

The announcement came after reports surfaced that US citizens had been receiving robocalls imitating President Joe Biden.

In the calls, US voters were told not to vote in the US Presidential elections.

Meanwhile, in the crypto industry, AI deepfakes are a threat.

According to Michael Saylor, about 80 deepfake videos of him are taken down daily. Most of these videos show him asking users to send their Bitcoin to a posted wallet address.

New ones emerge every day, however. Saylor, who serves as Chairman of MicroStrategy, has warned crypto investors about the trend.

Source: cryptonews.com
