
Clearview AI Tried to Buy Social Security Numbers, Because Apparently 10 Billion Faces Wasn't Enough


The company that already has your face now wants your Social Security number. Facial recognition giant Clearview AI reportedly attempted to purchase Social Security numbers and mugshots to expand its already massive biometric database, pushing the boundaries of facial recognition privacy ethics into even more troubling territory.

In an industry already plagued with ethical concerns, Clearview AI stands out for its aggressive data acquisition strategies. The company has built a database of over 10 billion facial images scraped from social media and public websites without consent, selling access to law enforcement agencies across the United States. Now, this latest revelation adds a new layer to the company’s controversial practices.

From Social Media to Social Security

Clearview AI’s approach to data collection has always been audacious. The company built its massive facial recognition database by scraping billions of images from Facebook, Instagram, LinkedIn, and other platforms – often in direct violation of these services’ terms of use. This tactic allowed Clearview to create a system where police could upload a photo and potentially identify almost anyone.

But apparently, 10 billion faces weren't enough. The company reportedly approached various data brokers in an attempt to purchase Social Security numbers and mugshot databases to enhance its identification capabilities. This move would have linked biometric data with highly sensitive personal identifiers, creating unprecedented surveillance capabilities in private hands.

This aggressive data acquisition strategy reflects a troubling pattern in the facial recognition industry, where the race to build larger databases often outpaces ethical considerations and regulatory frameworks. As one industry watchdog noted, when companies treat personal data as merely raw material to be harvested, privacy becomes collateral damage.

The Wild West of Biometric Surveillance

The facial recognition landscape is a regulatory patchwork, with dramatically different approaches across jurisdictions. In the European Union, the General Data Protection Regulation (GDPR) provides some of the strongest protections for biometric data, classifying it as sensitive information that requires explicit consent.

Meanwhile, in the United States, regulation varies wildly by state. Illinois stands out with its Biometric Information Privacy Act (BIPA), which requires companies to obtain written consent before collecting biometric data. This law became the basis for a lawsuit the American Civil Liberties Union brought against Clearview AI, which argued that the company collected faceprints without consent.

Other states have followed with their own legislation, but most Americans live in places with little to no protection against facial recognition overreach. This regulatory vacuum allows companies like Clearview to operate with minimal oversight, pushing the boundaries of data collection practices that would be illegal in other jurisdictions.

When Law Enforcement Meets Big Brother

Clearview AI’s primary customers are law enforcement agencies, with the company supplying its technology to more than 600 police departments across the United States. This relationship between private surveillance companies and public law enforcement raises serious questions about accountability and civil liberties.

The use of facial recognition by police creates a troubling feedback loop. As departments become dependent on the technology, they develop vested interests in expanding its use, despite growing evidence of bias and misidentification issues that disproportionately affect communities of color. Several studies have shown that facial recognition algorithms often have higher error rates when identifying women and people with darker skin tones.

The attempted acquisition of Social Security numbers would have taken these concerns to a new level, potentially allowing police to bypass traditional investigative safeguards. By connecting biometric data with personally identifiable information like Social Security numbers, the system could enable unprecedented levels of surveillance with minimal oversight.

The Privacy Rebellion Grows

As facial recognition technology expands, so does the resistance. Privacy advocates have increasingly pushed back against the unchecked growth of biometric surveillance, with some notable successes. The European Commission recently received an open letter from 51 organizations calling for a blanket ban on facial recognition tools used for mass surveillance.

Several U.S. cities, including San Francisco, Boston, and Portland, have banned government use of facial recognition technology. These bans reflect growing public concern about the implications of widespread facial surveillance for civil liberties and democratic values.

The tech industry itself shows signs of internal conflict over facial recognition ethics. While companies like Clearview push aggressive data collection strategies, internal AI ethics teams at companies like Meta have advocated for more responsible approaches to collecting and using biometric data.

As Clearview AI continues expanding its surveillance capabilities, the attempted purchase of Social Security numbers marks a critical inflection point in the debate over facial recognition privacy ethics. Without comprehensive federal regulation, the boundaries between legitimate security applications and dystopian surveillance will continue to blur, leaving individuals’ biometric privacy at the mercy of corporate data appetites.

