EU Calls for Labels on AI-Generated Content

European Union authorities are contemplating additional steps to enhance the transparency of artificial intelligence (AI) tools such as OpenAI's ChatGPT, addressing concerns over potential misuse for disinformation.
Vera Jourova, Vice-President of the European Commission, voiced this intent during a press briefing on June 5. She stressed that companies deploying generative AI tools capable of producing potentially misleading content should label that content explicitly, a move aimed at countering the spread of "fake news".

Companies that incorporate generative AI into their services, such as Microsoft's Bing Chat and Google's Bard, should put "safeguards" in place to prevent their misuse for disinformation, according to Jourova.

The European Union established its "Code of Practice on Disinformation" in 2018, which serves as a guideline and tool for tech industry participants to self-regulate against the spread of disinformation. Major tech corporations including Google, Microsoft, and Meta Platforms have already pledged adherence to this Code of Practice.

In her statement, Jourova urged these companies and others to report on new AI-related safeguards this July. She also noted that Twitter's withdrawal from the Code in the week preceding her press conference would likely draw heightened scrutiny from regulators.

This focus on increased AI transparency comes as the European Union prepares to introduce the EU AI Act, a comprehensive set of rules governing public AI use and the companies that deploy these technologies.

Although these formal regulations are slated to come into effect in the next two to three years, European officials are advocating for a voluntary code of conduct for generative AI developers in the interim.
