OpenAI to Tackle Fake News and Manipulated AI Content Threatening Upcoming Elections
With crucial polls around the globe on the horizon, ChatGPT creator OpenAI has announced measures to curb the spread of false and misleading information generated with artificial intelligence. Powerful AI models like OpenAI's own text generator ChatGPT and image maker DALL-E 3 have amplified worries that disinformation could undermine the democratic process in countries holding elections this year.
In a carefully worded statement, OpenAI acknowledged the very real risk posed by AI-powered fakes spreading online undetected. With hundreds of millions set to cast their votes, ensuring truthful and verifiable content is key. OpenAI revealed plans to equip ChatGPT and DALL-E 3 with new safeguards, such as tracing text back to its AI source and flagging computer-generated images.
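Flagging computer-generated images typically works by embedding provenance metadata, such as C2PA manifests, into an image at creation time so that downstream tools can check for it. As a minimal sketch, and not a description of OpenAI's actual tooling, the Python snippet below only scans a file's raw bytes for the "c2pa" JUMBF label; real verification would validate the manifest's cryptographic signatures with a dedicated C2PA library, and the file names used are hypothetical.

# Illustrative sketch only; not OpenAI's actual tooling. C2PA provenance data
# is embedded in images as JUMBF boxes labelled "c2pa". This crude heuristic
# only checks whether such a marker is present; genuine verification would
# validate the manifest's signatures with a full C2PA library.
from pathlib import Path


def has_c2pa_marker(image_path: str) -> bool:
    """Return True if the file's bytes contain a 'c2pa' label."""
    data = Path(image_path).read_bytes()
    return b"c2pa" in data


if __name__ == "__main__":
    for name in ("generated.png", "photo.jpg"):  # hypothetical file names
        try:
            found = has_c2pa_marker(name)
            print(f"{name}: {'provenance marker found' if found else 'no marker'}")
        except FileNotFoundError:
            print(f"{name}: file not found")

A check like this only tells a reader that provenance data exists; whether the claim is trustworthy depends on verifying who signed it, which is the harder part of the attribution problem.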
The moves come as experts voice alarm over deepfakes and manipulated media potentially clouding voters' judgement. Major economies including the US, India and the UK all face polls in 2024, testing security measures against emerging threats. OpenAI stressed it won't license its AI for direct political campaigning. Instead, it aims to provide impartial guidance through ChatGPT on civic participation.
If successful, OpenAI's attribution and detection tools for AI content could help secure upcoming global elections.