New tech is coming to help fight the spread of misinformation in photographs online
The Content Authenticity Initiative (CAI), launched in 2019 as a collaboration between Adobe, Twitter and the New York Times, looks like it might finally be making some visible progress. The initial goals to help provide authenticity in images shared online, particularly when it comes to social media, appear to be coming to some kind of early fruition.
Adobe has already shown some technology for preventing the manipulation of images. But a new article on the NYT R&D website shows how they're fighting misinformation on social media, where photographs are misused. One example: a photograph of a McDonald's that burned down in a 2016 grease fire was passed off as damage caused by rioters in 2020.
The issue isn't just confined to social media, though. Genuine media (and Fox News) have been duped into running either unrelated photographs or digitally manipulated ones alongside stories before, too, misinforming and misleading the public. Based on findings from NYT's News Provenance Project, though, tools are coming to tackle it.
NYT R&D says that their work is centred around making it easier for readers to know the real source of what they see and helping them trust that the information they see is true and accurate. And a new prototype proof-of-concept is helping to make it happen, by presenting the viewer with extra data about the image that's currently on their screen.
It's a bit like the fact-checking we've come to see on sites like Facebook, which lets us know when a news article is misleading, when an extremist site is posing as a news authority, or when people are spreading something that's just a bad joke or a hoax. And, well, occasionally flagging satire and parody sites as "fake news" (to be fair, though, people do that, too).
Except, this is a bit more involved. As Digital Camera World describes it, it's like a "chain of custody" showing who shot the image, where it's been published and when. So, the next time there's some trouble going on in your area and you see a riot photo posted to Facebook claiming that's what's going on in your town right now, you can instantly see that it was from more than a decade ago in a completely different part of the world.
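To give a feel for the "chain of custody" idea, here's a minimal sketch of a tamper-evident history log, where each record's hash commits to everything before it. This is a toy illustration only, not the CAI's actual format or API; the field names and events are made up for the example.

```python
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    """Hash an entry together with the previous link's hash."""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(entries: list) -> list:
    """Link the entries so each record commits to all earlier ones."""
    chain, prev = [], ""
    for entry in entries:
        h = entry_hash(entry, prev)
        chain.append({"entry": entry, "hash": h})
        prev = h
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every hash; editing any entry breaks all later links."""
    prev = ""
    for link in chain:
        if entry_hash(link["entry"], prev) != link["hash"]:
            return False
        prev = link["hash"]
    return True

# Hypothetical history for a news photograph
history = [
    {"event": "captured", "by": "staff photographer", "date": "2016-07-28"},
    {"event": "published", "by": "local news outlet", "date": "2016-07-29"},
    {"event": "shared", "by": "social media account", "date": "2020-05-30"},
]
chain = build_chain(history)
print(verify_chain(chain))   # True: history is intact

chain[0]["entry"]["date"] = "2020-05-30"  # tamper with the capture date
print(verify_chain(chain))   # False: the chain no longer verifies
```

The point of chaining the hashes is that nobody can quietly rewrite an image's origin story: changing when or where it was "captured" invalidates every later link in the record.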
That's the theory, anyway.
The tech is still in the prototype stages, and it appears to rely on metadata (which many social media sites currently strip), so I'm not sure how it will be implemented in the real world, or what will stop somebody from simply stripping the metadata themselves and reposting an image (although, apparently, it will show that the data is missing). But it's a start.
NYT R&D says that they're going to be collaborating with the Partnership on AI, amongst others, to enhance the technology, so perhaps that will help to resolve potential issues around missing metadata, if images can be recognised by their visual data rather than just their metadata.
It will be interesting to see how this tech continues to be developed and if and how it'll be adopted both by the platforms and the general public at large over the next few years. You can read more about it on the NYT R&D website, but the CAI suggests subscribing to their mailing list to stay up to date on all the latest info (it's the form way down at the bottom of the page).
Is this tech going to crush "fake news" completely? Probably not, but given the events of the last five or six years, every little helps!
[Images: New York Times/Content Authenticity Initiative/News Provenance Project]