Clearview AI under fire for “breaking privacy laws”

Ever since it appeared, Clearview AI has been surrounded by controversy. Privacy groups in Europe recently accused it of breaking privacy laws, and groups from several countries have even taken legal action against the company.

Privacy International, Noyb, and other campaigners filed complaints with data watchdogs in Austria, France, Greece, Italy, and the U.K. They claim that Clearview AI and its practices “have no place in Europe,” and urge the regulators to officially declare as much.

Speaking with Bloomberg, Privacy International’s Ioannis Kouvakas said that “extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users.”

To remind you, Clearview AI scrapes billions of publicly available photos from Facebook, Instagram, Twitter, YouTube, and Venmo. It feeds them into a facial recognition app and has allegedly been selling the results to law enforcement organizations with no governmental oversight. Similar apps, such as PimEyes, raise the same concerns.

Clearview AI operates in a legal gray area, as Engadget explains. Even so, thousands of public law enforcement agencies have reportedly been using the tool. Concerns about the technology and how it might be used have already prompted multiple actions against the company.

Engadget sums them up, starting with the UK and Australia and their joint case, opened last year to investigate Clearview AI’s facial recognition technology and how it uses the data it scrapes. Then the US Senate introduced a bill that would block government agencies from buying Clearview data. Major platforms like Twitter, Google, and YouTube sent Clearview AI cease-and-desist letters, claiming that it violates their terms of service. And most recently, an EU representative said they are “particularly concerned” by certain developments in facial recognition technology and by the “unprecedented” issues raised for data protection, according to Bloomberg.

There definitely are privacy concerns regarding Clearview AI; reading about it, I feel like I’m in Orwell’s Nineteen Eighty-Four. But despite all the concerns, this kind of technology does have some good sides, provided it’s used properly and ethically.

Clearview says that it “has helped thousands of law enforcement agencies across America save children from sexual predators, protect the elderly from financial criminals, and keep communities safe.” The company points out that its technology “can help investigate crimes like money laundering and human trafficking, which know no borders,” so national agencies have expressed “a dire need” for it.

[via Engadget]


