Artificial intelligence (AI) is taking the world by storm. Many are astonished at how well computers can now replicate human work, from writing content from scratch to producing high-quality images. As a result, writers, authors, and readers alike have started calling AI an invisible author.
Now, OpenAI’s ChatGPT has taken generative AI to the next level, and it’s surprisingly good at what it does. ChatGPT can produce formulaic articles on almost any subject from human-created prompts and templates, and you often can’t tell whether an AI chatbot or a human wrote the content you’re reading. This presents challenges for schools, professional writers, and publishers alike.
Should we be worried that computers can now write content and design images? Definitely.
The Concern of Authenticity
Let’s talk about deepfakes. Deepfakes are images or audio that depict well-known people doing or saying something they never actually did or said. Deepfake software grafts a person’s likeness or voice onto someone else’s image or recording very convincingly. Advertisers can now save thousands of dollars on human actors or models by using these programs.
DALL-E 2 from OpenAI is one such image-creation app. It can generate images of people that are indistinguishable from photographs of real people. Content like this is called synthetic media.
The question, then, is this: how can you trust that the image you’re seeing or the content you’re reading comes from a real person and hasn’t been manipulated in undetectable ways? This uncertainty is especially insidious for publishers and their customers.
The Concern of Counterfeiting
This is a more urgent issue for many commercial publishers, including some of the world’s leading ones. Counterfeit publishers pass themselves off as legitimate publishers and sell their books online at lower prices than the real publishers offer.
Worse, these fake publishers often appear at the top of results on e-commerce websites, gaining traction precisely because of their lower prices. Most of the books they sell are pirated or of inferior quality, and buyers remain unaware of this until the books arrive. Sometimes buyers never learn they purchased a counterfeit at all.
Now, the Good News – Efforts to Help Publishers Fight Disinformation
To address the issues of authenticity, counterfeiting, and provenance, the media industry is developing a “certificate of authenticity.” This tamper-evident metadata can confirm who created a piece of media, who altered it, how it was manipulated, and the legitimacy of the entity providing it. Although the primary focus so far is on image authenticity, the work may soon extend to video, text, and audio. The metadata is embedded in the content itself, and recipients can access it to document the content’s provenance and to validate or invalidate the authenticity of the media asset. The solution is expected to be freely available, standardized, and global.
Three main organizations are working to make this happen:
- The Coalition for Content Provenance and Authenticity (C2PA) is a Joint Development Foundation project backed by Adobe, Arm, Intel, Microsoft, and Truepic. C2PA is developing the technical standards for the solution.
- The Adobe-led Content Authenticity Initiative (CAI) focuses on developing systems that give digital media context and history. The initiative was founded in 2019 by Adobe in partnership with the New York Times and Twitter. According to CAI’s Verify website, Content Credentials are data attached to an image that help you understand what has been done to it, where it has been, and who did it.
- The Microsoft- and BBC-led Project Origin addresses disinformation in the digital news ecosystem. Also led by CBC/Radio-Canada and the New York Times, Project Origin is creating a framework intended to help publishers maintain the integrity of their content.
Version 1.0 of the specification was launched in February 2022. It allows content creators to digitally sign metadata using C2PA assertions: statements documenting the provenance and authenticity of a media asset. The specification has reportedly reached version 1.2 and is seeing broad support.
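To make the idea concrete, here is a minimal sketch of how signed, tamper-evident provenance metadata works in principle. This is not the actual C2PA implementation (which uses X.509 certificates, public-key signatures, and a standardized manifest format); the function names and fields are illustrative, and an HMAC with a shared key stands in for a real digital signature.

```python
import hashlib
import hmac
import json

# Hypothetical signing key. Real C2PA signing uses certificate-based
# public-key cryptography, not a shared secret like this.
SIGNING_KEY = b"demo-secret-key"

def sign_manifest(asset_bytes: bytes, assertions: dict) -> dict:
    """Build a simplified provenance manifest: a hash of the asset plus
    creator assertions, sealed with a signature over the whole record."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "assertions": assertions,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Recompute the asset hash and the signature. Any edit to the asset
    or to the metadata invalidates the manifest."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    )

image = b"...raw image bytes..."
manifest = sign_manifest(image, {"creator": "Jane Photographer", "tool": "CameraApp 1.0"})
print(verify_manifest(image, manifest))         # True: asset and metadata untouched
print(verify_manifest(image + b"x", manifest))  # False: asset was altered
```

The point of the design is that the metadata travels with the content and cannot be quietly edited: changing either the asset or an assertion breaks the signature, so a recipient can always tell whether the provenance record still matches what they received.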
In this way, the three organizations are collaborating on an open ecosystem that protects publishers and the public against the rising dangers of disinformation, deepfakes, fake news, and counterfeit sellers in a globally standardized, non-commercial way.
Navkiran Dhaliwal is a seasoned content writer with 10+ years of experience. When she's not writing, she can be found cooking up a storm or spending time with her dog, Rain.