UK Regulator Questions Meta Over AI Glasses Privacy Concerns
The UK's data protection regulator has contacted Meta after reports about how footage from its AI smart glasses is reviewed. The investigation behind the inquiry found that outsourced workers may have been able to view sensitive videos captured by users' devices. Regulators want to know whether Meta's handling of this personal data complies with UK data protection rules.
The Information Commissioner's Office said that devices collecting personal information must operate transparently and give users clear control. Authorities said the allegations raised serious concerns about how captured content is handled, and the regulator said it would seek detailed answers from Meta about its data practices.

Source: BBC
Investigation Claims Contractors Viewed Sensitive User Footage
Swedish newspapers reported that subcontracted workers sometimes viewed videos recorded by Meta's AI glasses. According to the report, workers in Kenya reviewed recordings showing highly private situations, including footage of people in bathrooms and other intimate moments.
Workers reportedly described seeing a wide range of material captured by the smart glasses' cameras. One reviewer said they regularly encountered recordings of everyday life as well as deeply private scenes. The claims have raised concerns about the privacy of people filmed by the devices.
Meta Says Human Review Helps Improve AI Experience
Meta said that contractors may review user-shared data in some cases to improve its AI systems. According to the company, analyzing images or transcripts helps its AI tools learn, allowing engineers to refine how the AI interprets visual information and answers questions.
The company said this happens only when users choose to share captured content with Meta's AI systems. Meta says the data passes through filtering processes before human moderators see it; these filters are designed to reduce privacy risks by obscuring information that could identify someone.
Privacy Filters Designed To Protect User Identities
Meta said its systems attempt to blur faces and strip identifying information from recorded content, and that these protections reduce the risk of personal data being exposed during review. But sources cited in the investigation said the safeguards did not always work as intended.
Some reviewers said faces and other sensitive details could still be seen in the footage, raising doubts about whether privacy filtering reliably protects people who appear in recordings. As wearable cameras become more common, some argue that stronger safeguards may be needed.
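To illustrate the general idea behind this kind of filtering: a minimal, hypothetical sketch in Python of redacting a detected face region before footage reaches a human reviewer. Real systems use machine-learning face detection and more sophisticated blurring; here the bounding-box coordinates are assumed inputs, and the "blur" is simply flattening the region to its average pixel value. This is not Meta's actual implementation.

```python
def redact_region(frame, box):
    """Return a copy of `frame` (a 2D grid of grayscale pixel values)
    with the `box` region (x0, y0, x1, y1) flattened to its mean value,
    so whatever it contained is no longer recognisable."""
    x0, y0, x1, y1 = box
    pixels = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    mean = sum(pixels) // len(pixels)
    redacted = [row[:] for row in frame]  # copy; original frame untouched
    for y in range(y0, y1):
        for x in range(x0, x1):
            redacted[y][x] = mean
    return redacted

# Toy 4x4 frame; redact the 2x2 centre as if a face were detected there.
frame = [[10, 20, 30, 40],
         [50, 60, 70, 80],
         [90, 100, 110, 120],
         [130, 140, 150, 160]]
out = redact_region(frame, (1, 1, 3, 3))
```

The investigation's point is that any such filter is only as good as its detection step: a face the detector misses is never redacted, which matches reviewers' reports of identifiable footage slipping through.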
Workers In Kenya Train AI Through Data Annotation
Those questioned in the investigation worked as data annotators, responsible for training Meta's AI models. Their job is to label images and review transcripts so the AI can better interpret visual scenes; these tasks help AI systems recognize objects, actions, and context in their surroundings.
The annotators reportedly worked for Sama, an outsourcing company based in Nairobi. Sama has previously provided data services to major tech companies building AI tools, and such outsourcing arrangements are common across the global AI industry.
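For readers unfamiliar with annotation work, a hypothetical sketch of what a single labeled record for smart-glasses footage might look like: scene and object labels attached to a frame so a model can learn to recognize them. The field names and structure are assumed for illustration only, not Meta's or Sama's actual schema.

```python
import json

# One hypothetical annotation record for a single video frame.
record = {
    "frame_id": "clip_0042_frame_117",   # assumed identifier format
    "scene": "kitchen",                   # annotator's overall scene label
    "objects": [                          # labeled bounding boxes [x0, y0, x1, y1]
        {"label": "mug", "bbox": [34, 80, 96, 150]},
        {"label": "kettle", "bbox": [120, 60, 210, 170]},
    ],
    "transcript": "what temperature should I brew green tea at",
}

# Records like this are typically serialized for training pipelines.
encoded = json.dumps(record)
decoded = json.loads(encoded)
```

Annotators produce thousands of such records; the privacy question is what appears in the frames and transcripts they must view to create them.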
Smart Glasses Technology Raises Growing Privacy Debate
AI-powered smart glasses have grown in popularity as companies build wearable technology that can observe and analyze its surroundings. Users can ask questions about objects, translate text, or get real-time assistance, features that can be especially helpful for people with visual impairments.
But wider adoption has also brought concerns about misuse and unauthorized recording. Some people have reported being filmed by smart-glasses wearers without their consent, and privacy advocates warn that wearable cameras could reshape expectations of privacy in public.
Meta Defends Data Practices Amid Regulatory Questions
Meta says that most of the content the glasses capture stays on the user's device, and that automated systems and human moderators can only access content deliberately shared with Meta's AI services. The company argues such reviews are necessary to make AI features more reliable.
The outsourcing company Sama also said it follows strict data security rules. According to the company, employees work in secure facilities where personal devices are not allowed, and staff receive training in data protection and responsible AI use.
