AI Must Be Clearly Labeled To Prevent Misinformation, Says EU
Is it real or made by AI?
European regulators, concerned about the risks of artificial intelligence, want tech companies to clearly label AI-generated content to prevent misinformation from spreading. Vera Jourova, deputy head of the European Commission, said companies using AI should warn users when content is machine-made rather than human-created.
Forty-four online platforms have signed on to the EU's voluntary code of conduct against disinformation, but Twitter recently withdrew. Jourova condemned Twitter's move, saying "they chose the hard way."
The EU's upcoming Digital Services Act will require labels on AI content for all companies, including Twitter.
Advances in generative AI that can create realistic images, videos and text raise concerns that the technology could be abused to spread false information. Jourova said signatories of the disinformation code "should build in necessary safeguards" to ensure AI is not used maliciously.
Jourova said the EU wants "online platforms to mark content created by artificial intelligence in such a way that a normal user can clearly see that certain text, image or video content is not human-made."
The EU is also planning an AI Act to regulate high-risk AI applications and potentially ban the most dangerous ones. Officials want companies to align with those rules by the end of 2023, before the law takes effect.
Jourova said, "I don't see any right for the machines to have the freedom of speech."