By Alex Vakulov
Wed | Oct 2, 2024 | 4:24 AM PDT

The invisible hands of artificial intelligence are reaching deeper into our lives than ever before. Have you ever scrolled through Facebook and noticed ads that seem eerily tailored to your interests? Or written a private email, only to later see similar phrases appearing elsewhere online? Perhaps you have even found content from your personal blog replicated in Google AI summaries. These are not mere coincidences; this is the hidden world of AI training at work.

Our personal photos, private messages, and sensitive data are being used without our knowledge or consent to train AI systems. This unsettling reality is no longer a distant future but a rapidly encroaching present, and it is about to get even worse.

Get ready to uncover how technology companies are harvesting your data to fuel the next generation of artificial intelligence.

Meta

Advocacy group NOYB has called on European privacy enforcers to stop Meta's plan to use personal data to train AI models without user consent. Starting June 26, 2024, Meta's updated privacy policy may allow years of personal posts, private images, and online tracking data to be used for AI technology. NOYB has filed 11 complaints in various European countries, urging immediate action because of the imminent policy change. Meta claims its approach complies with privacy laws, citing its use of publicly available and licensed information. NOYB, however, argues that this violates the EU's GDPR, which protects users' personal data against unauthorized use.

Google

NotebookLM, Google's new AI research assistant, has recently become widely available. It not only provides access to paywalled content but can also help republish it for the general public.

At the same time, Raiza Martin, senior product manager for AI at Google Labs, recently told TechCrunch that Google does not use any of the data users upload to NotebookLM to train its algorithms: "Your data stays private to you." Yet the fact that NotebookLM allows human reviewers to access user-uploaded files raises further privacy concerns.

Slack

It was discovered that Slack had been quietly using data from private corporate channels, including messages, files, and usage information, to train its AI. This data may contain not only corporate secrets but also personal information. To make matters worse, you cannot opt out yourself: you have to ask your organization's Slack admin (someone in HR or IT, for example) to email the company on your behalf.

Microsoft

Previously, Microsoft pushed Copilot onto Windows machines as an app that cannot be uninstalled, only disabled. This unannounced change, much like the "UCPD driver" that blocks Registry-based workarounds, raised privacy concerns. Microsoft confirmed that the app does not run any background code or capture user data, and explained that the update, intended to prepare devices for future Copilot enablement, mistakenly appeared on all devices without fully installing Copilot.

Adobe

Recently, many popular programs have updated their terms of use, causing concern among users. The latest scandal involves Adobe: you can no longer use Photoshop and its other programs unless you grant the company rights to your content, including for training its AI. This has prompted some users to consider quitting Adobe in protest. To complicate matters further, users bound by NDAs cannot grant such rights to a third party at all.

Clearview AI

Last year, a U.K. judge ruled that the Information Commissioner's Office (ICO) cannot sanction Clearview AI, an American firm that harvested billions of social media images without users' consent for its facial recognition software. Despite being fined £7.5 million and ordered to delete U.K. data, Clearview appealed, arguing that the data was used only by foreign law enforcement agencies. The tribunal agreed, citing a lack of jurisdiction. The decision raises significant privacy concerns because it limits the U.K.'s ability to regulate companies that process its citizens' data.

Protecting your privacy

It is important to take proactive steps to protect your privacy. Here are some tips to help you stay safe:

  • Regularly review and update the privacy settings on social platforms and other online services.
  • Request to see the data companies have collected about you, and ask them to delete it.
  • Where possible, opt out of data collection for AI training.
  • Avoid uploading sensitive photos or sharing personal information on platforms that do not guarantee privacy.
  • Share information about data privacy issues within your community to promote collective action.

Conclusion

The examples above illustrate the various ways in which our data is being exploited. Whether through updated privacy policies, hidden data collection practices, or legal loopholes, the protection of our personal information is under constant threat.

The quest for data by AI developers is intensifying, and the types of data they seek are becoming more personal and sensitive. We must remain vigilant and advocate for stronger privacy protections to ensure our digital lives are not exploited without our consent. In a world where "everyone else is doing it" becomes a justification for privacy violations, it is up to us to push back and demand better standards.
