OpenAI Releases a Detector for Disinformation Researchers

The New York Times

OpenAI is releasing a tool designed to detect content created by its own popular image generator, DALL-E, as experts warn that images, audio and video generated by artificial intelligence could influence the fall elections. But the well-known A.I. start-up acknowledges that this tool is only a small part of what will be needed to combat so-called deepfakes in the months and years ahead.

OpenAI said on Tuesday that it would share its new deepfake detector with a small group of disinformation researchers so they could test it in real-world situations and help pinpoint ways it could be improved.

“This is to kick-start new research,” said Sandhini Agarwal, an OpenAI researcher who focuses on safety and policy. “That is really needed.”

Because this kind of deepfake detector is driven by probabilities, it can never be perfect. So, like many other companies, nonprofits and academic labs, OpenAI is working to fight the problem in other ways.
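To see why a probability-driven detector can never be perfect, consider a toy sketch (this is an illustration of the general statistical problem, not OpenAI's actual model): scores for real and generated images come from overlapping distributions, so any decision threshold trades false positives against false negatives.

```python
import random

random.seed(0)

def classify(score, threshold=0.5):
    """Flag an item as A.I.-generated when its detector score exceeds the threshold."""
    return score >= threshold

# Hypothetical detector scores: real images tend to score low and generated ones
# high, but the two distributions overlap, so no threshold separates them cleanly.
real_scores = [random.gauss(0.3, 0.15) for _ in range(1000)]
fake_scores = [random.gauss(0.7, 0.15) for _ in range(1000)]

for threshold in (0.4, 0.5, 0.6):
    false_positives = sum(classify(s, threshold) for s in real_scores)
    false_negatives = sum(not classify(s, threshold) for s in fake_scores)
    print(threshold, false_positives, false_negatives)
```

Raising the threshold flags fewer real images by mistake but lets more generated ones slip through; lowering it does the reverse. Neither setting eliminates both error types.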

The company is joining the steering committee of the Coalition for Content Provenance and Authenticity, or C2PA, an effort to develop credentials for digital content, alongside industry giants like Google and Meta. The C2PA standard is a kind of “nutrition label” for images, videos, audio clips and other files, showing when and how the content was produced or altered, including with A.I.
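As a heavily simplified illustration of what such a “nutrition label” records, the sketch below uses a plain Python dictionary. The real C2PA standard embeds a cryptographically signed binary manifest in the file itself; the action name `c2pa.created` and source type `trainedAlgorithmicMedia` follow the published C2PA and IPTC vocabularies, but the JSON layout here is invented for this example.

```python
import json

# Illustrative stand-in for a C2PA-style manifest: who made the asset, when,
# and whether it was produced by a trained A.I. model.
manifest = {
    "claim_generator": "DALL-E",  # tool that produced the asset
    "actions": [
        {
            "action": "c2pa.created",
            "when": "2024-05-07T00:00:00+00:00",
            "digitalSourceType": "trainedAlgorithmicMedia",  # i.e. A.I.-generated
        }
    ],
}

def was_ai_generated(m):
    """Check whether any recorded action marks the asset as algorithmically generated."""
    return any(a.get("digitalSourceType") == "trainedAlgorithmicMedia"
               for a in m.get("actions", []))

print(json.dumps(manifest, indent=2))
print(was_ai_generated(manifest))
```

The point of the standard is that this provenance record travels with the file and is tamper-evident, so downstream viewers can check how a piece of content came to be.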

OpenAI also said it was developing ways of “watermarking” A.I.-generated audio so it could easily be identified in the moment. The company aims to make these watermarks difficult to remove.
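The article does not describe OpenAI's watermarking scheme. As a purely illustrative sketch of the general idea, a classic spread-spectrum approach adds a faint pseudorandom signal derived from a secret key; whoever holds the key can later test for the mark by correlation, while the signal stays too quiet to hear.

```python
import random

def watermark(samples, key, strength=0.01):
    """Add a low-amplitude pseudorandom sequence (derived from `key`) to the audio."""
    rng = random.Random(key)
    return [s + strength * (rng.random() * 2 - 1) for s in samples]

def detect(samples, key):
    """Correlate the audio with the key's sequence; a high score suggests the mark."""
    rng = random.Random(key)
    noise = [rng.random() * 2 - 1 for _ in samples]
    return sum(s * n for s, n in zip(samples, noise)) / len(samples)

audio = [0.0] * 50_000                  # silent clip stands in for real audio
marked = watermark(audio, key=1234)
print(detect(marked, key=1234))         # right key: correlates, roughly strength / 3
print(detect(marked, key=9999))         # wrong key: near zero
print(detect(audio, key=1234))          # unmarked audio: no correlation
```

Real systems face the much harder problem the article alludes to: keeping the mark detectable after compression, re-recording and deliberate attempts to strip it out.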

With prominent companies like OpenAI, Google and Meta leading the way, the A.I. industry is under growing pressure to take responsibility for the content its products create. Regulators are urging the industry to prevent users from creating harmful and deceptive material and to provide ways of tracing its origin and spread.

In a year stacked with major elections around the world, calls for ways to monitor the lineage of A.I. content are growing increasingly urgent. In recent months, A.I.-generated audio and imagery have already affected political campaigning and voting in places including Slovakia, Taiwan and India.

OpenAI’s new deepfake detector may help stem the problem, but it won’t solve it. As Ms. Agarwal said, “there is no silver bullet in the fight against deepfakes.”
