gorbachev
SlinkyOnStairs
"Fun" bonus fact: This isn't the first time Sama (the outsourcing company) has had these problems.
OpenAI had them classify CSAM, so Sama fired them as a client back in 2022. https://time.com/6247678/openai-chatgpt-kenya-workers/
We're 4 years on, 3 years since that report broke. Not a single thing has improved about how tech companies operate.
prepend
How else do you want companies to remove and prevent CSAM? It seems like you must have some human involvement to train and monitor.
It’s a terrible job, I wouldn’t want to do it, but someone needs to. Perhaps one day, AI will be accurate enough to not need it, but even then you need someone to process complaints and waivers (like someone’s home photos being inaccurately flagged).
SlinkyOnStairs
> How else do you want companies to remove and prevent CSAM?
Different situation.
Facebook has to do CSAM moderation because it's a publishing platform. People will post CSAM on facebook, so they must do moderation.
And "just don't have facebook" isn't a solution because every publication of any sort has to deal with this problem; Any newspaper accepting mail has this problem. (Albeit to a much more scaled down version) People were nailing obscene things to bulletin boards for all recorded history.
---
In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.
The downside would be "worse LLMs" or "LLMs being created later", which is a perfectly acceptable compromise.
---
This is not to say that genuine content flagging firms have no reason to curate such data & build tools to automatically flag content before human moderators have to. (But then they also shouldn't be outsourcing this and traumatizing contract workers for $2-3 an hour)
But OpenAI is not such a firm. It's a general AI company.
abdullahkhalids
CSAM exists on social media because the platforms are so large that it's not possible to moderate them effectively. To me this is a no-go. If a business is so large that it cannot respect laws, it needs to be shut down.
The correct way to organize social media is in federated way. Each server only holds on average a few hundred or few thousand people. Server moderators should be legally responsible for content on their server. CSAM on social media will be 100x suppressed because banning people is way easier on small servers.
Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.
Yokohiii
These workers prepare data for AI. I don't think the need for them will go away anytime soon.
Westerners are too expensive and unwilling to do it. AI is a business model that requires poverty and extreme inequality to function. Yes, other businesses do that too, but they don't claim to be a solution to everything while actually having very special human requirements.
frm88
This is the Swedish newspaper report quoted in the submitted article: https://www.svd.se/a/K8nrV4/metas-ai-smart-glasses-and-data-...
There are more reasons why these jobs are located in developing countries, it's not only the price of labour. Imagine for a second, these annotations would have to be done in the US. The public outrage would probably be audible across the Atlantic. This is another form of imperialism.
duxup
I agree that there’s no good way to do this other than, like… no user-generated content ever, or just banning everyone for their baby pics etc., so nobody can post them.
Granted, the latter is kinda happening distantly on YouTube, where you can’t talk about “suicide”, so everyone self-censors…
freejazz
I don't understand why their size is an excuse for them to not remove and prevent CSAM.
IncreasePosts
Couldn't you just use multiple classifiers? Like "is a minor" classifier coupled with "is sexual content" classifier?
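For concreteness, here is a minimal sketch of that composed-classifier idea in Python. The two classifier functions are stand-in stubs and the threshold is an assumption; a real system would plug in trained models, and no platform's actual pipeline is being described here.

    from dataclasses import dataclass

    @dataclass
    class TriageResult:
        escalate: bool      # send to a human review queue?
        minor_score: float
        sexual_score: float

    def minor_classifier(image_bytes: bytes) -> float:
        # Stand-in for an age-estimation model returning P(subject is a minor).
        return 0.0

    def sexual_content_classifier(image_bytes: bytes) -> float:
        # Stand-in for an NSFW model returning P(content is sexual).
        return 0.0

    def triage(image_bytes: bytes, threshold: float = 0.8) -> TriageResult:
        m = minor_classifier(image_bytes)
        s = sexual_content_classifier(image_bytes)
        # Only the conjunction of both signals is escalated, shrinking (but
        # not eliminating) the volume of material human moderators must see.
        return TriageResult(escalate=(m >= threshold and s >= threshold),
                            minor_score=m, sexual_score=s)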
deaux
> Sama (the outsourcing company)
If script writers gave the company this name in a fictionalization it would be rejected as too on the nose.
cyanydeez
Isn't it more that tech companies are just more high-profile and integral to the political and social landscape than older companies? But reviewing the current political zeitgeist, they're in lockstep with what some, if not all, would just call fascism.
2ndorderthought
They are literal defense and offense contractors. They hang out at the Pentagon. They sell political data to sway elections. They give gifts to leaders for favors. It is technofascism.
intended
Yes and no.
Safety and user pain are a part of tech that seems largely ignored, even on sites like HN.
I really have no idea why this ignorance prevails; commenters seem to genuinely be unaware of what goes on in Trust and Safety processes.
I mean, most users would complain about content moderation, but their experience would be miles ahead of what most of humanity enjoys when it comes to responsiveness.
I believe this lack of knowledge, examples, and case history is causing a blind spot in tech centric conversations when it comes to the causes of the Techlash.
Unfortunately this backlash is also the perfect cover for authoritarian government action - they come across as responsive to voters while also reining in firms that are more responsive to American citizens and government officers than their own.
SlinkyOnStairs
Companies of the 20th century certainly weren't more ethical. (Though a few select tech companies seem to be intent on proving the opposite.)
But it's not really a fascism thing. While fascism does love the oppression of women, and the current crop of fascists have a notable connection to the Epstein case, this is a lot more boring.
Sam Altman's not a fascist, he's a wet noodle who sucks up to the Trump administration for money. He's not even good at it. The way his company handled CSAM does reflect badly on Altman, especially given the accusations from his sister, but all other evidence suggests he's just a moron acting recklessly: not identifying the problem ahead of time, and acting poorly in response.
In the case of Meta, we know who Zuckerberg is. The company got its start as, in crude terms, a sex-pest website: the original "Facemash" site was forcibly taken down by Harvard. This is not some new consequence of the turn to fascism; Zuckerberg's always been like this, and the actions taken against him were clearly not enough to stop the company culture from following his precedent.
inquirerGeneral
[dead]
everdrive
Sounds about right. If you know someone who uses these smart glasses, it's important not to tolerate them whatsoever. Don't speak with them or interact with them. I wouldn't even recommend being in their presence.
elevation
> I wouldn't even recommend being in their presence.
Great! Now do people with smart TVs and people with smartphones.
AlexandrB
I'll grant you smartphones, but smart TVs usually don't have cameras/microphones. The problem with smart glasses is that they constantly capture video and upload it to $VENDOR like in this case.
intended
Don’t we already hate the invasive ad tech industry?
Aren’t there already posts and articles on how to ensure that TVs don’t farm information from us?
HotGarbage
[flagged]
paulddraper
Are GoPros acceptable?
I went to the beach, jet skiing. One of the guys had Meta glasses.
I liked the footage.
red_admiral
The problem is there are places where you'd get noticed and probably removed for filming with a GoPro, or even a smartphone. My local "wellness center" and pools have you deposit your smartphone before you exit the changing area into the showers.
The danger with creep glasses is that many people don't know what they are, they can be used with the LED disabled so they're perfect for filming people without their knowledge, and "these are prescription glasses" has a good chance of working. In a place with a "no recording devices" policy, "could you put that GoPro away" has wide social acceptance/support; "take those glasses off", less so.
everdrive
> I liked the footage.
So did Meta's LLM training model, as well as the contractor across the globe reviewing your footage.
cosmicgadget
People with GoPros are more likely to send it, resulting in entertainment value.
divan
[flagged]
jmholla
You're aware of the privacy implications but think people talking about avoiding people who use them are proposing dumb arguments? I don't follow your logic.
HotGarbage
[flagged]
Aaronstotle
I want to get the Oakley Meta ones so I can record bike rides easier, should I not be tolerated?
bombcar
Wear a GoPro on your helmet like the rest so you can be shunned.
If you insist on the glasses, wear a fake GoPro.
bee_rider
A mostly-solitary sporting event (or one where you know all the other participants and can get their consent to record beforehand) seems like a reasonable use of these sorts of glasses. I wouldn’t personally give consent just as a sort of privacy reflex, but it really depends on your social circle.
HotGarbage
[flagged]
mplewis
No. Fuck off
arowthway
Also make sure to avoid people with smartphones and places with video surveillance.
powvans
Don't let perfect be the enemy of good.
There's also nothing stopping us from stigmatizing the use of smartphones in public. Even a slight discouragement of it would be progress. It doesn't have to be all or nothing.
HumblyTossed
Is this an honest argument? Surely you can think of how glasses might be ... in a different league than the two items you mention?
freehorse
If somebody was pointing a camera at me all the time? I would definitely avoid them.
stackghost
Mark Zuckerberg and disrespect for user privacy.
Name a more iconic duo.
Frieren
Whistleblower protection is key for any working society. Only dictatorships and oligarchies protect criminals while shaming whistleblowers.
I do not care which country the outsourcing company is in. When criminals go global, protection for whistleblowers should go global too.
ignoramous
> the content they were paid to classify
A Kenyan workers' organisation alleges Meta's decision was caused by the staff speaking out.
Meta says it's because Sama did not meet its standards, a criticism Sama rejects.
getnormality
Well, yeah. If I went straight to the press to trash the reputation of my client's product, rather than communicating internally first to help them proactively address the issues, I would expect to get fired.
Not that I am remotely interested in defending Meta, or optimistic that they would proactively address privacy issues. But I don't feel that sympathetic to the outsourcing company here either.
I don't know what happened behind the scenes. I'm just going off what is said and not said in the article. If I were whistleblowing about something like this, I would take pains to describe what measures I took internally before going public. I didn't see any of that here.
EDIT: Look, to be clear, I think it's bad that naive or uninformed people are buying video recorders from Meta and unintentionally having their private lives intruded on by a company that, based on its history, clearly can't be trusted to be a helpful, transparent partner to customers on privacy. I think it's good that the media is giving people a reminder of this. I think it's good that the sources said something, even though the consequences they suffered seem inevitable. But to me, there is nothing essentially new to be learned here, and I don't know what can or should be done to improve the situation. I think for now, the best thing for people to do is not buy Meta hardware if they have any desire for privacy. Maybe there are laws that could help, but what should be in the laws exactly? It's not obvious to me what would work. I suspect that some of the reason people buy these products is for data capture, and that will sometimes lead to sensitive stuff being recorded. What should the rules be around this and who should decide? Personally I don't know.
elphinstone
What makes you think the outsourcing firm didn't raise these concerns in email or meetings? You think these people wanted to lose jobs and income? That's irrational.
Why reflexively defend a massive tech corporation caught repeatedly violating the law?
Tangurena2
> Why reflexively defend a massive tech corporation caught repeatedly violating the law?
Because it is the natural expansion of the quote attributed to Upton Sinclair:
> Socialism never took root in America because the poor see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires
giraffe_lady
There are transgressions severe enough that your duty to stop them is heavier than your responsibility to "the reputation of your client's product." Amazing this needs to be stated, frankly.
noir_lord
Beautifully and succinctly put.
ImPostingOnHN
You would help conceal a crime against the people just because it's good business??
Congratulations, you have a bright future in politics and/or tech CEOing.
Bridged7756
More like a bright future as someone's fall guy. Being ignorant enough to think a tech giant like Facebook would give a crap about any of those concerns makes a person too politically inept to make it anywhere.
OutOfHere
Proactively address the issues? Are you kidding me? This is not an issue that just happened to slip by; it is 100% by design. You're fooling no one.
getnormality
What specifically do you mean? It is by design that smart glasses see the things happening in front of their users? Yes, it is. That is why people buy them.
redbell
> "We see everything - from living rooms to naked bodies," one worker reportedly said.
> Meta said this was for the purpose of improving the customer experience, and was a common practice among other companies.
Am I reading this correctly?! This is probably the weirdest statement I've read on the internet in twenty years.
ryandrake
> > Meta said this was for the purpose of improving the customer experience, and was a common practice among other companies.
> Am I reading this correctly?! This is probably the weirdest statement I've read on the internet in twenty years.
It's total fantasy. I've worked in big tech. Casually uploading and providing company/contractor access to non-redacted intimate photos or pictures of the insides of people's homes vaguely "for the purpose of improving the customer experience" would not pass even a surface-level privacy or data-protection review anywhere I've ever worked. Do Meta even read what they are saying?
intended
I’ve worked in trust and safety - for me this is stupid, but well below the threshold of impossible.
Hell, I know of a major firm that decided QA was not needed for their trust and safety process.
Another common issue will be SEA Arabic speakers tasked with labelling Middle Eastern Arabic content, because accents and cultural dialects are not a thing.
I’ve had people at FAANG firms cry on my shoulder, because they couldn’t get access to engineering resources at their own firms.
There was the famous case of Meta executives overriding T&S policy and telling the team what content was newsworthy during the Boston bombing. In a separate incident, they told their team that cartel violence was not newsworthy when friends in London complained about it.
When you say this is fantasy, what do you mean precisely?
ryandrake
What I mean is: I'm not sure what they base their statement that it's "a common practice among other companies" on. Unlikely they are talking about their peer companies. I suppose if you read the sentence literally, there surely exist one or more "other companies" in the broad universe of "other companies" that routinely do this kind of stuff. But I wouldn't think anywhere serious.
abustamam
Meta could at least pretend that they don't intend to capture people in their most intimate and vulnerable moments instead of slobbering on the sideline like "mm... Data..."
2ndorderthought
Well, you gotta give out blackmail material to the scam centers somehow. Otherwise they don't actually have leverage! Oh right... We don't want that happening.
DuncanCoffee
I once read the manual of one of those small floor cleaning robots (Ecovacs Deebot U2 pro), and it basically said that by using it you were giving them a right to take pictures and send them to a remote server (to analyze issues or something like that)
dotancohen
> Am I reading this correctly?!
What you should have read correctly was the Facebook terms of service. I still get strange responses when I tell people that I don't use WhatsApp. All Meta's properties are tainted such that I won't use them.
falcor84
> What you should have read correctly was the Facebook terms of service.
I'm reminded of Bo Burnham's wonderful "That Funny Feeling" from 2021's "Inside", where one of the absurd examples he offers in the lyrics is:
There it is again, that funny feeling
That funny feeling
Reading Pornhub's terms of service ...
chneu
How is this weird? People have been trading away their privacy for the smallest possible gains in convenience for a long time.
moritzwarhier
Are you conflating telemetry with literally live-streaming your life to Meta? Because that's what makes the statement weird.
edit 2: OK, I see what you mean. But I'm wondering if it should be possible to consent to this via T&C. Basically the same issue as with many online services, turned up to 11, sure. And it involves OTHER people, who have not consented.
Stuff like this used to be outrage fuel even when it was more of a social experiment, e.g. the documentary "We live in public" or the "Big brother" TV show. By now, I'm sure there have been millions of influencers doing similar things, but it's very much not considered normal?
Streaming to an unknown number of employees might be considered different from streaming to the public, sure.
But the core question here is whether there's informed consent, and, IMO also, if it should be possible to consent to this when the other party is a company like Meta and the pretext is not deliberately seeking attention (like influencers and streamers do).
edit, clarified social media comparison
abustamam
Tangential but I always thought reality shows like Big Brother were mostly staged. Like not scripted, but definitely not natural.
pfortuny
Tagging, tagging, tagging. That is what "improving the customer experience" means: teaching its LLMs and diffusion models.
2ndorderthought
Meta is a defense contractor. They see the world a little differently from everyone else.
HarHarVeryFunny
Not sure which is worse here - that Meta are recording video from customers' smart glasses, or that they are firing people who talk about it.
embedding-shape
The latter, as they can't even claim to have done so by accident, or "it was just a bug".
OutOfHere
Everything having to do with Meta, starting with its very name, has been evil from the start.
frm88
Did you know that after being an angel investor in Facebook, Peter Thiel personally mentored Zuckerberg for a couple of years? https://www.youtube.com/watch?v=TP7Z_Eqxhxk
orthecreedence
Or maybe that people are wearing surveillance glasses while they fuck their partner? I know we need to push these companies to not be shitheads, but ultimately you can only be a shithead with the data people give to you (with the exception of things like Flock, who are shitheads with "public" surveillance data).
I know our culture is so supremely fucked at this point that wearing corporate surveillance goggles during intimate moments could somehow be normalized, but holy shit. How did people get so trusting?
abustamam
Well, this seems kinda like victim blaming, like when a bunch of celebrity phones got hacked and their nudes leaked to the public. Those were supposed to be for personal consumption/partners' eyes only, but they ended up on the web because some assholes decided the whole world was entitled to those photos. Like, should those celebrities have uploaded their intimate photos to the cloud? Probably not, but it doesn't mean that Google or Apple should be able to do whatever they want with those photos.
But I do agree that people have become too trusting of our tech overlords, and it's that trust that lets them continue to do shit like this over and over.
senordevnyc
People upload videos of themselves having sex for anyone to see. Some people just don’t care about privacy the way you do.
bookedkit
wow, right call, this all feels wrong, and hardly anyone knows
SV_BubbleTime
Can I squeeze in just a teeny tiny bit of… why the hell are you wearing an internet camera on you while naked and/or having sex?
… although I really extend that to why are you wearing an internet connected camera that is obviously going to be monitored by Meta.
embedding-shape
So the people wearing these glasses have already agreed that Meta can monitor them. They also probably trust Meta when it says "When the glasses are off, nothing is recording", for better or worse. With that perspective in mind, it's not far-fetched to assume these same people will willingly be naked in front of recording devices they believe to be off.
Of course, anyone who has opened a newspaper in the last 10 years or so would know better, but I can definitely see some people not giving a fuck about it.
Tangurena2
There are "content creators" who intentionally record people without any sort of consent. At least when they point cameras, one can notice the cameras and take action. With these sorts of glasses, no one in view has consented, nor have they agreed to any sort of terms & conditions.
I never understood the appeal of upskirt pictures. But I think that taking videos of non-consenting participants/victims is the current version of the upskirt photo craze.
abustamam
> I never understood the appeal of upskirt pictures
I think it's a mixture of fetish (panty fetish is a whole craze in some parts of the world...) and voyeurism, like the appeal _is_ the lack of consent. I recently saw on Reddit that there was a whole deluge of non-consensual porn being uploaded to a certain site, and once that news broke, visits to that site spiked. I think that says a lot about society as a whole.
sunaookami
The Ray-Ban stays ON during sex!
abustamam
Even when I had very expensive designer prescription eyewear, I never wore them during sex. That's just weird.
slumberlust
Voyeurism
jmull
I believe the tricky privacy and security issues around smart glasses (and other "personal" tech) can be navigated successfully enough by a thoughtful, diligent, responsive company.
Which is why I'd never touch a personal tech device from Meta.
Their entire DNA is written to exploit their users for profit. In my judgement, they literally cannot and will never consider those issues as anything other than something to obscure to keep people unaware of the depth of the exploitation.
reliablereason
I wonder under what circumstances footage from the glasses is uploaded for classification.
Probably this is people asking the glasses something about what they see and the glasses uploading video for classification to generate an answer.
People think it is "just AI" so are not very concerned about privacy.
pfortuny
Always by default I assume.
reliablereason
Unlikely. That would be extremely expensive in bandwidth, storage, and compute. Deciding to build the product like that would be an engineering decision I would fire someone for.
pfortuny
Well, say a frame per second. Also: how many of these are there today?
You can discard them after tagging+using them for learning.
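A rough sketch of that one-frame-per-second idea, using OpenCV: sample frames sparsely, tag each sampled frame, and keep only the tags. The label_frame function here is a hypothetical stand-in for whatever annotation step (human or model) a vendor would actually run.

    import cv2

    def label_frame(frame) -> str:
        # Hypothetical annotation step (human or model); returns a tag.
        return "unlabelled"

    def sample_and_tag(video_path: str) -> list[str]:
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
        step = max(int(round(fps)), 1)           # roughly one frame per second
        tags, index = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:
                # Tag the frame, then let it go out of scope: only the
                # derived label is retained, as the comment above proposes.
                tags.append(label_frame(frame))
            index += 1
        cap.release()
        return tags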
KaiserPro
Ex Meta employee here (yes you are right to boo):
The thing that really gets me is that internally there are 4 levels of data: 1 being public-domain shit (the sky is blue), up to 4, which is private user data or anything that is sensitive if leaked or shared.
I was told that by default all user data is level 4, as in if you do anything without decent approval, you're insta-fired. There are many stories about at least one person a month during boot camp accessing user data and getting escorted out of the building within hours.
In the part where I worked, visual research, we had to jump through a year's worth of legal hoops to get permission to record videos in public. We had to build an anonymisation pipeline and a bulletproof audit trail, and delete as much data as possible, with auto-delete if something went wrong.
We had rigid rules about where that data could be stored and _who_ could access it. We were not allowed to share "wild" footage (i.e. data that might have a hint of anyone who hadn't signed a contract) for annotation, because it would be given to a third party. The public datasets we released all had traceable people and locations, all with legal waivers signed.
Then I hear they just started fucking hosing private data to annotators to _train_ on? Without any fucking basic controls at all? Just shows that whenever Zuck or monetization wants something, the rules don't apply.
I look forward to that entire industry collapsing in on itself.
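As a minimal sketch, the four-level scheme with a default-to-most-restrictive rule might look something like this; the level names and the approval check are illustrative assumptions, not Meta's actual system:

    from enum import IntEnum

    class Sensitivity(IntEnum):
        PUBLIC = 1        # "the sky is blue"
        INTERNAL = 2
        CONFIDENTIAL = 3
        USER_PRIVATE = 4  # private user data; sensitive if leaked or shared

    def classify(record: dict) -> Sensitivity:
        # Anything not explicitly labelled defaults to level 4, mirroring
        # the "all user data is level 4 unless approved" rule above.
        return Sensitivity(record.get("sensitivity", Sensitivity.USER_PRIVATE))

    def may_access(record: dict, has_approval: bool) -> bool:
        if classify(record) < Sensitivity.USER_PRIVATE:
            return True
        return has_approval  # level 4 requires explicit, auditable approval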
dntrkv
> I was told that by default all user data is level 4, as in if you do anything without decent approval, you're insta-fired. There are many stories about at least one person a month during boot camp accessing user data and getting escorted out of the building within hours.
Given the size and nature of Meta's business, I would assume they would have better systems in place. SWEs should only have access to PII with explicit consent from users/customers e.g. support tickets.
Especially someone going through boot camp. Do they have access to de-anonymized user data during training?
Shit, at my last company I had to jump through so many hoops to access user data even with consent from the customer.
KaiserPro
> I would assume they would have better systems in place.
They did when I was there. Every time you got close to user data, an "interstitial" would pop up asking you for a ticket number and justification. There were a bunch of tools that ran searches for people accessing user data.
For example, in boot camp you'd create a page that pulled your own profile details. This was to introduce the idea of "ents" (the API that manages the social graph) and Mercurial. You could, if you wanted, then traverse your friend graph. As soon as you did that, it'd trigger one of the automated rules, your account would be suspended, and you'd be yeeted within hours.
The point was, if you were doing something legitimate it was fine, but if you stepped out of line, the automated systems would find out and fire you on the spot.
Also, as everything is done through remote dev boxes, _everything_ is recorded (along with all the files on your laptop, the regular screenshots, plus all the browser history and keystrokes). Data exfiltration is super hard, which is why there are hardly any "angry nerd extorts girl" type stories. It's not because Meta isn't full of angry nerds; it's because it's really, really difficult to get at user data without getting caught.
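A minimal sketch of the kind of automated access-audit rule being described: every read of user data is logged, and reads that touch someone else's record without a justifying ticket get flagged. All names and fields here are illustrative assumptions, not Meta's internals.

    from dataclasses import dataclass

    @dataclass
    class AccessEvent:
        employee_id: str
        subject_user_id: str
        ticket: str | None  # justification captured by the "interstitial"

    def violates_policy(event: AccessEvent) -> bool:
        # Pulling your own profile (the boot-camp exercise) is fine;
        # traversing to anyone else's data requires an approved ticket.
        if event.subject_user_id == event.employee_id:
            return False
        return event.ticket is None

    def flag_for_review(events: list[AccessEvent]) -> set[str]:
        # Employees whose access triggered a rule; in the story above,
        # this is what gets you suspended and "yeeted within hours".
        return {e.employee_id for e in events if violates_policy(e)}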
donkey-hotei
This is bogus. Meta doesn't have bootcampers escorted out of the building for accessing PII all the time. PII is locked down behind ACLs which are not auto-granted for just anyone asking.
theplatman
I have always wondered about this, especially post-Cambridge Analytica, when Meta imposed really stringent requirements on API use even for personal things, while it was blatantly obvious that internally it was a different story.
KaiserPro
But that's the thing: internally it wasn't. The user data controls were really quite good.
It's not a mistake that this data got into contractor hands; it was a decision that took lots of time, numerous legal reviews, and signoff from Zuck himself.
dhosek
This headline reminds me that “row” is one of those words I’ve been mispronouncing almost my whole life (I just learned the correct pronunciation this year). In this context, row rhymes with cow,¹ not dough.
⸻
1. The first rhyme that came to mind was bow, but I realized there was a problem with that example.
i2shar
You will appreciate: https://youtu.be/uZV40f0cXF4
:)
hodgesrm
It's a source of jokes in the UK at least. Most Americans don't know the difference. As the saying goes, "two countries separated by a common language."
booleandilemma
That's amazing. I've also been mispronouncing it my entire life. Thanks for this info! I'm a native English speaker and my language continues to surprise me.
https://www.merriam-webster.com/dictionary/row#dictionary-en...
jjk166
Row as in rowdy
swiftcoder
One of the bigger commercial niches for smart glasses is filming POV porn, so it is hardly surprising that sort of content ended up in the moderation queue. The project should have planned to account for that use case.
swiftcoder
And I do appreciate how awkward it is for Meta to admit that use case exists. Even in the Oculus Go days there were a bunch of polite euphemisms internally to avoid mentioning "our device has to ship with a browser so people can watch porn on it"
hosteur
Why is there even a “moderation queue”? Aren’t these people’s private recordings?
dylan604
This is my question too. I get moderating things that people are posting. Not being familiar with the device and how it works, I'd assume that all footage is uploaded to the user's cloud account even if not publicly posted. This being cloud storage, Meta is "moderating" the footage to ensure CSAM or other restricted footage isn't being stored on their (Meta's) platform. That's my very generous take on it, not that I believe it.
inerte
Yes but also we don't want people live streaming murder and suicide, so there's detection and moderation in place.
jdiff
Private recordings aren't public live streams.
intended
I’m betting this is going to some ML / Data labelling pipeline.
swiftcoder
Yeah, moderation may instead be labelling in this case. It's likely the same type of firm handles both sorts of work on behalf of FAANG.
ozozozd
How do you moderate what people do? You send someone to stop them from having sex because it was streamed to your servers?
swiftcoder
The key phrase from the article is "review content filmed on its smart glasses when people shared it with Meta AI". I take that to mean the user took some action to actively share the footage with Meta (although knowing Meta, that could also mean they just didn't opt out)
sheepcow
If you want to read more about how unsavory aspects of AI-training are off-loaded onto poor workers in third-world countries, would recommend Karen Hao's "Empire of AI". These workers are paid pennies an hour for unstable jobs that expose them to some horrific material.
intended
Which examples did they cover in the book?
touwer
Big Tech and the race to the bottom of the ethical pit. We can still go lowerrrr!
malshe
A question for the HN folks who work for Meta - Is the pay so good that it makes it worth working for such a morally bankrupt organization?
allthetime
There are countless large, high paying, morally bankrupt companies out there. It’s no mystery that people continue to work for them.
bradlys
I’d like to know the well paying and non-morally bankrupt companies. What company out there has a flawless reputation and is paying $400k/yr for senior eng in SV?
cosmicgadget
There certainly is no middle ground between morally bankrupt and flawless!
jmye
I think it's the excitement precisely because it's a morally bankrupt organization. Some people really get off on knowing that the sum total of everything they do professionally is bent around making kids depressed to the point of suicide, and angry to the point of shooting up their school.
I assume that every single person who still works at Meta has done that personal calculus and decided that they fall on the "this is fucking amazing, important work" side.
Meta cancels its contract with the outsourcing company it hired to classify smart-glasses content, after employees at that company blew the whistle about serious privacy issues with the content they were paid to classify.