Cracking Down on Deepfakes: AI Regulation News

Deepfakes, the brainchildren of AI, are swiftly morphing into a colossal issue. Not only do they infringe upon the personal lives of ordinary citizens, they also jeopardize U.S. national security. As federal regulation lags, individual states are stepping up to rein in the rampant misuse of AI.
Indeed, some TikTok users are using AI to produce videos in which deepfake depictions of murdered children gruesomely narrate their violent ends. These digital horrors rake in millions of views and exploit the anguish of real victims and their families.
Examples of images from AI-generated videos. Source: CNN

Deepfakes have the potential to spread misinformation and manipulate public opinion. Users are leveraging their ability to create compelling videos featuring politicians engaging in actions or making statements they never actually did. A provocative deepfake video featuring a world leader could stoke international tensions, even leading to armed conflict.

As the 2024 presidential elections approach, Joe Biden appears to fully grasp these risks. In late June, he met with business leaders and experts to discuss the emerging challenges associated with advanced language models.
The Congress needs to pass bipartisan privacy legislation to impose strict limits on personal data collection, ban targeted advertising to our children, and require companies to put health and safety first,
Biden insisted.
President Biden has already engaged in discussions with industry experts such as Sam Altman from OpenAI, Satya Nadella from Microsoft, Dario Amodei from Anthropic, and Sundar Pichai, who is at the helm of Alphabet and Google. Moreover, both Microsoft and Google have committed to undergoing independent public audits of their systems. Meanwhile, the US Department of Commerce is preparing to certify AI models before they are introduced to the American market.

Another challenge lawmakers must address is the lack of specific regulation that would hold individuals accountable for creating sexually explicit deepfakes. Theoretically, victims could rely on certain provisions from laws regarding intellectual property, privacy invasion, and defamation. The latter legal area guards citizens against the spread of information that tarnishes an individual's honor, dignity, and business reputation, and sets out the repercussions that follow such a violation.

For instance, this summer, a federal court in Los Angeles is hearing a lawsuit from a reality TV celebrity. The unnamed plaintiff alleges that he never consented to users employing AI to superimpose his face onto someone else's body. However, the First Amendment, which guarantees citizens' freedom of speech, comes into play, and its interpretation varies slightly from state to state. Generally, claims over the use of a person's image prevail only when that image is used for commercial purposes. The primary purpose of most deepfakes, however, veers toward revenge, disinformation, or political strife.
Pornographic deepfake was not necessarily a violation of any existing law,
states Professor Matthew Kugler, one of the pioneers behind an anti-deepfake bill presently awaiting the Illinois Governor's signature.
Analogous legislation is in various stages of drafting in three other states. To date, nine states have enacted laws countering deepfakes, aimed at combating artificially generated intimate imagery and election-related disinformation.