
New tech is coming to help fight the spread of misinformation in photographs online

The Content Authenticity Initiative (CAI), launched in 2019 as a collaboration between Adobe, Twitter and the New York Times, looks like it might finally be making some visible progress. Its initial goal of helping to verify the authenticity of images shared online, particularly on social media, appears to be reaching some kind of early fruition.

Adobe has already shown some technology for preventing the manipulation of images. But a new article on the NYT R&D website shows how the team is fighting misinformation on social media, where photographs are misused. One example: a photograph of a McDonald’s that burned down in a 2016 grease fire being passed off as damage caused by rioters in 2020.

The issue isn’t just confined to social media, though. Genuine media (and Fox News) have been duped into running either unrelated photographs or digitally manipulated ones alongside stories before, too, misinforming and misleading the public. Based on findings from NYT’s News Provenance Project, though, tools are coming to tackle it.

NYT R&D says that its work is centred on making it easier for readers to know the real source of what they see, and on helping readers trust that the information is true and accurate. A new prototype proof of concept is helping to make it happen by presenting the viewer with extra data about the image currently on their screen.

It’s a bit like the fact-checking we’ve come to see on sites like Facebook, which lets us know when a news article is misleading, when an extremist site is posing as a news authority, or when people are spreading something that’s just a bad joke or a hoax. And, well, it occasionally flags satire and parody sites as “fake news” (to be fair, though, people do that, too).

Except, this is a bit more involved. As Digital Camera World describes it, it’s like a “chain of custody” showing who shot the image, where it’s been published and when. So, the next time there’s some trouble going on in your area and you see a riot photo posted to Facebook claiming that’s what’s going on in your town right now, you can instantly see that it was from more than a decade ago in a completely different part of the world.
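To get a feel for what that might look like, here’s a rough, hypothetical sketch (in Python) of the kind of record such a chain of custody could hold: who shot the image, when and where it was taken, and where it has been published since. The field names and values are my own illustration, not the actual CAI or News Provenance Project format.

```python
# A purely illustrative "chain of custody" record for a photograph.
# Field names and values are hypothetical, not the CAI/NPP data format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PublicationEvent:
    publisher: str   # outlet or platform that published the image
    date: str        # ISO 8601 date it appeared there
    url: str         # where it was published

@dataclass
class ProvenanceRecord:
    photographer: str                 # who shot the image
    capture_date: str                 # when it was taken
    capture_location: str             # where it was taken
    history: List[PublicationEvent] = field(default_factory=list)

# Example: the record a reader might be shown for an old riot photo
# that has resurfaced out of context years later.
record = ProvenanceRecord(
    photographer="A. Photographer",
    capture_date="2011-08-09",
    capture_location="London, UK",
    history=[
        PublicationEvent("Example News", "2011-08-10", "https://example.com/riots-2011"),
    ],
)
print(f"Shot by {record.photographer} on {record.capture_date} in {record.capture_location}")
```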

That’s the theory, anyway.

The tech is still at the prototype stage, and it appears to rely on metadata, which many social media sites currently strip. So I’m not sure how it will be implemented in the real world, or what will stop somebody from simply stripping the metadata themselves and reposting an image (although, apparently, it will show that the data is missing), but it’s a start.
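For what it’s worth, checking whether an image file still carries any metadata at all is trivial. Here’s a minimal Python sketch using Pillow’s standard EXIF reader; the real system would presumably verify signed provenance data rather than plain EXIF tags, so treat this purely as an illustration of the “data is missing” check.

```python
# Minimal sketch: does an image still carry any EXIF metadata after re-sharing?
# Uses Pillow (pip install Pillow). A real provenance check would verify
# cryptographically signed attribution data, not plain EXIF tags.
from PIL import Image

def has_metadata(path: str) -> bool:
    with Image.open(path) as img:
        exif = img.getexif()
        return len(exif) > 0

# Hypothetical file name, for illustration only.
if not has_metadata("reposted_photo.jpg"):
    print("No metadata found: provenance information may have been stripped.")
```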

NYT R&D says that it’s going to be collaborating with the Partnership on AI, amongst others, to enhance the technology. Perhaps that will help to resolve the potential issues around missing metadata, if images can be recognised by their visual content rather than just their metadata.
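If that happens, the matching might look something like perceptual hashing, where an image is identified by a compact fingerprint of its pixels rather than by anything embedded in the file. Here’s a small, hypothetical sketch using the open-source imagehash library; it’s my own illustration of the general idea, not whatever NYT R&D and the Partnership on AI end up building.

```python
# Illustrative sketch: match two files by visual content rather than metadata.
# Uses the third-party "imagehash" library (pip install imagehash Pillow).
from PIL import Image
import imagehash

# Hypothetical file names, for illustration only.
original = imagehash.phash(Image.open("archive_original.jpg"))
reposted = imagehash.phash(Image.open("stripped_repost.jpg"))

# A small Hamming distance between the two hashes suggests the reposted file
# is the same photo, even if all of its metadata has been removed.
if original - reposted <= 5:
    print("Likely the same image: link it back to the archived provenance record.")
```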

It will be interesting to see how this tech continues to be developed, and if and how it’ll be adopted by both the platforms and the public at large over the next few years. You can read more about it on the NYT R&D website, but the CAI suggests subscribing to their mailing list to stay up to date on all the latest info (it’s the form way down at the bottom of the page).

Is this tech going to crush “fake news” completely? Probably not, but given the events of the last five or six years, every little helps!

[Images: New York Times/Content Authenticity Initiative/News Provenance Project]
