(Reuters) – OpenAI is launching a tool that can detect images created by its text-to-image generator DALL-E 3, the Microsoft-backed startup said on Tuesday amid rising worries about the influence of AI-generated content in this year’s global elections.
The company said the tool correctly identified images created by DALL-E 3 about 98% of the time in internal testing, and that it handles common modifications such as compression and cropping well.
The ChatGPT maker also plans to add tamper-resistant watermarking to digital content such as photos and audio, embedding a signal that is intended to be hard to remove.
As part of these efforts, OpenAI has joined an industry group that includes Google, Microsoft and Adobe, and plans to contribute to a standard that would help trace the origin of different types of media.
In April, during the ongoing general election in India, fake videos of two Bollywood actors criticizing Prime Minister Narendra Modi went viral online.
AI-generated content and deepfakes have increasingly appeared in elections in India and around the world, including in the U.S., Pakistan and Indonesia.
OpenAI said it is joining Microsoft in launching a $2 million “societal resilience” fund to support AI education.
The detection tool could prove useful in fields such as journalism, cybersecurity and content moderation, where verifying whether an image is AI-generated is increasingly important.