Meta hired Kenya‑based Sama to label data for its smart glasses, but workers reported seeing private footage, including bathroom scenes, revealing a major privacy lapse. The incident shows how outsourced data labeling can expose sensitive user content and underscores the need for tighter oversight and safeguards.
Meta’s subcontracted data annotators are reportedly watching private footage, exposing a critical flaw in the company’s privacy safeguards.
Ars Technica reports that employees of Kenya‑based Sama, a subcontractor hired to annotate data for Ray‑Ban Meta smart glasses, have watched footage that captured users in highly sensitive contexts, including bathroom use. The February report, a collaboration between Svenska Dagbladet, Göteborgs‑Posten, and freelance journalist Naipanoi Lepapa, draws on interviews with more than 30 Sama staff at various levels, some of whom work on video, image, and speech annotation for Meta’s AI systems. According to Ars Technica, the reporters did not view the footage themselves but relied on firsthand accounts from workers who described annotating “live” recordings of private moments.
This incident is not an isolated glitch; it reflects a broader pattern in AI training pipelines, where raw user data is outsourced to gig workers with minimal training or oversight. These workers are often paid a fraction of what their corporate counterparts earn, yet they hold the keys to sensitive content that can be leveraged for model refinement or, worse, commercial exploitation. That they can view footage of users in vulnerable situations raises questions about the adequacy of existing privacy safeguards and the ethical boundaries of data monetization.
If Meta’s partners can access or analyze private moments, the trust that underpins the market for wearable AI devices erodes. The episode forces a reckoning over whether current regulatory frameworks can keep pace with the distributed nature of data‑annotation labor, and whether companies must adopt stricter internal controls and transparent auditing mechanisms to protect users.
The implications are clear: without tighter oversight, the promise of AI will be shadowed by a growing culture of surveillance that extends beyond corporate walls.