Hacker News

neilv

> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.

And some teen may be traumatized. Again, unsafe.

Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.

omnipresent12

https://www2.ljworld.com/news/schools/2025/aug/07/lawrence-s...

Another false positive from one of these leading content filters schools use: the kid said something stupid in a group chat, an AI reported it to the school, and the school contacted the police. The kid was arrested, strip-searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time. They are suing Gaggle, who claims they never intended their system to be used that way.

These kinds of false positives are incredibly common. I interviewed at one of their competitors (Lightspeed), and they actually provide a paid service where humans review all the alerts before they are forwarded to the school or authorities. This is a paid addon, though.

avidiax

https://archive.is/DYPBL

> Gaggle’s CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. “I wish that was treated as a teachable moment, not a law enforcement moment,” said Patterson.

It's entirely predictable that schools will call law enforcement for many of these detections. You can't sell to schools that have "zero tolerance" policies and pretend that your product won't trigger those policies.

zugi

Exactly. In a saner world, we could use fallible AI to call attention to possible concerns that a human could then analyze and make an appropriate judgment call on.

But alas, we don't live in that world. We live in a world where there will be firings, and civil and even criminal liability, for those who make wrong judgments. If the AI says "possible gun," the human running things faces all upside and no downside from alerting a SWAT team.

Hmm, maybe this generation's version of "nobody ever got fired for buying IBM" will become "nobody ever got fired for doing what the AI told them to do." Maybe humanity is doomed after all.

Terr_

"It wasn't used as directed", says man selling Big Boom Fireworks to children.

danaris

I do not, in any way, disagree with holding Gaggle accountable for this.

But can we at least talk about also holding the school accountable for the absolutely insane response?

You talk about not selling to schools that have "zero tolerance" policies as if those are an immutable fact of nature that can never be changed, but they are a human thing that has very obvious negative effects. There is no reason we actually have to have "zero tolerance" policies that traumatize children who genuinely did nothing wrong.

"Zero tolerance" for bringing deadly weapons to school, I can understand. So long as what's being checked for is actual deadly weapons, and not just "anything vaguely gun-shaped", or "anything that one could in theory use as a deadly weapon" (I mean, that would include things like "pens" and "textbooks", so...).

"Zero tolerance" for particular kinds of language is much less acceptable. And I say this as someone who is fully in favor of eliminating things like hate speech or threats of violence—you don't do it by coming down like the wrath of God on children for a single instance of such speech, whether it was actually hate speech or not. They are in school; that's the perfect place to be teaching them a) why such speech is not OK, b) who it hurts, and c) how to express themselves without it, rather than just treating them like terrorists.

michaelt

> They are suing Gaggle, who claims they never intended their system to be used that way.

Yeah, there's a shop near me that sells bongs "intended" for use with tobacco only.

reaperducer

The kid was arrested, stripped searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time in probation, a full mental health evaluation, and go to an alternative school for a period of time.

All he wanted was a Pepsi. Just one Pepsi. And they wouldn't give it to him.

HWR_14

> They are suing Gaggle, who claims they never intended their system to be used that way.

Is there some legal way to sue a pair of actors (Gaggle and school) then let them sue each other over who has to pay what percentage?

ashdksnndck

You separately sue everyone that might be liable. Some of the parties you sue might end up suing each other.

RajT88

> This is a paid addon, though

Holy shitballs. In my experience such paid addons have very cheap labor attached to them, certainly not what you would expect based on the sales pitch.

dotancohen

This very much reminds me of Boeing's paid addon for a second AoA sensor for the MCAS system.

b00ty4breakfast

>...its purpose is to “prioritize safety and awareness through rapid human verification.

Oh look, a corporation refusing to take responsibility for literally anything. How passe.

chillingeffect

The corporation was virtually invented to eliminate responsibility/culpability from any individual.

Human car crash? Human punishment. Corporate-owned car crash? A fine which reduces salaries some negligible percent.

perplex3d

Yes, corporations have all of the rights of a person, abilities beyond a person, yet few of the responsibilities of a person.

JumpCrisscross

> a corporation refusing to take responsibility for literally anything. How passe

Versus all the natural people at the highest echelons of our political economic system valiantly taking responsibility for fuckall?

selcuka

> Versus all the natural people

We can at least hold them responsible.

b00ty4breakfast

I certainly didn't imply that to be the case and I'm not sure how you could draw that conclusion from 2 whole sentences.

DrewADesign

Engineer: hey I made this cool thing that can help people in public safety roles process information and make decisions more efficiently! It gives false positives, but you save more time than it takes to weed through them.

Someone nearby: well what if they use it to replace human thinking instead of augment it?

Engineer: well they would be ridiculous. Nobody would ever think that’s a good idea.

Marketing Team: it seems like this lands best when positioned as a decision-making tool. Let's get some metrics on how much faster it is at making decisions than people are.

Sales Rep: ok, Captain, let’s dive into our flagship product, DecisionMaker Pro, the totally automated security monitoring agent…

::6 months later—some kid is being held at gunpoint over snacks.::

casey2

Nice fantasy, but the reality is that the "people in public safety roles" love using flimsy pretenses to harass and abuse vulnerable populations. I wish it were just overeager sales and marketing, but your view of humanity is way too naive, especially as masked thugs are disappearing people in the street as we type.

DrewADesign

What? A) The naïveté of the engineer’s perspective was literally the whole point of the story. B) Saying I’m somehow absolving law enforcement by acknowledging other factors is absurd. My childhood best friend was shot and killed by police during a mental health crisis. C) If you think that police malevolence somehow absolves the tech world’s role in making tools for them, that’s as naive as it gets.

JimTheMan

Refer to the post office scandal in Britain and the robodebt debacle in Australia.

The authorities are just itching to have their brains replaced by dumb computer logic, without regard for community safety and wellbeing.

jononor

Lack of Accountability as-a-Service! A very attractive proposition for negligent and self-serving organizations. The people in charge don't even have to pay for it themselves; they can just funnel the organization's money to the vendor. Encouraging widespread adoption helps normalize the practice. If anyone objects, shut them down as not thinking-of-the-children and something-must-be-done (and every other option is surely too complicated/expensive).

DrewADesign

And the black box sentencing recommendation systems some US states bought into like a decade ago.

random3

It’s actually “AI swarmed,” since no human reasoning, only execution, was exerted: basically an AI directing resources.

trhway

Delegating the decision to AI, excluding the human from the "human in the loop," is kind of unexpected as a first step; in general, it was expected that the exclusion would start from the other end. As an aside, I wonder how that is going to happen on the battlefield.

For this civilian use case, the next step is AR goggles worn by police, with the AI projecting onto the goggles where that teenager has his gun (kind of Black Mirror style), and the next step after that is obviously excluding the humans even from the execution step.

actionfromafar

Reverse Centaur. MANNA.

NackerHughes

3-in-1. Lack.

goopypoop

when attacked by bees am I hive swarmed?

janalsncm

In any system, there are false positives and false negatives. In some situations (like a high recall disease detection) false negatives are much worse than false positives, because the cost of a false positive is a more rigorous screening.

But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.

Aside from improving the quality of the detection model, we should try to reduce the “cost” of both failure modes as much as possible. Putting a human in the loop or having secondary checks are ways to do that.
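The tradeoff this comment describes can be made concrete with a quick base-rate sketch (all numbers are hypothetical, not from Omnilert or any real deployment): even a detector with a seemingly tiny false-positive rate produces mostly false alarms when real weapons are rare in the footage.

```python
# Hypothetical numbers illustrating the base-rate problem for a weapon detector.
frames = 1_000_000          # frames reviewed by the system
actual_positives = 10       # frames that really contain a weapon (assumed rare)
tpr = 0.90                  # true-positive rate (sensitivity), assumed
fpr = 0.001                 # false-alarm rate per harmless frame, assumed

true_alerts = actual_positives * tpr              # real weapons flagged
false_alerts = (frames - actual_positives) * fpr  # harmless frames flagged
precision = true_alerts / (true_alerts + false_alerts)

print(f"real detections: {true_alerts:.0f}")
print(f"false alarms:    {false_alerts:.0f}")
print(f"share of alerts that are real: {precision:.1%}")
```

Under these assumed numbers, roughly 1,000 false alarms accompany 9 real detections, so fewer than 1% of alerts are real; that is exactly why the "cost" of a false alarm (here, an armed response) dominates the system design.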

nkrisc

In this case false positives are far, far worse than false negatives. A false negative in this system does not mean a tragedy will occur, because there are many other preventative measures in place. And never mind that this country refuses to even address the primary cause of gun violence in the first place: the ubiquity of guns in our society. Systems like this are what we end up with when we ignore the problem of guns and choose to deal with the downstream effects instead.

Lammy

> the primary cause of gun violence in the first place: the ubiquity of guns in our society

I would have gone with “a normalized sense of hopelessness and indignity which causes people to feel like violence is the only way they can have any agency in life” considering “gun” is the adjective and “violence” is the actual thing you're talking about.

Aeolun

I tend to categorize these under a Dutch idiom which I can't describe, but which is abundantly clear in pictorial form:

https://klimapedia.nl/wp-content/uploads/2020/01/Dweilen_met...

janalsncm

To be clear, the false negative here would be a student who has brought a gun to a school and the computer ignores it. That is a situation where potentially multiple people can be killed in a short amount of time. It is not far, far worse to send cops.

tacticus

> But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.

Given the probability of police officers in the USA treating any action as hostile and then ending up shooting him, a false positive here is the same as swatting someone.

The system here sent the police off to kill someone.

A4ET8a8uTh0_v2

Yep. Think of it as the new exciting version of swatting. Naturally, one will still need to figure out common ways to force a specific misattribution, but, sadly, I think there will be people working on it ( if there aren't already ).

janalsncm

Sure. But school shootings are also common in the US. A student who has brought a gun to a school is very likely not harmless. So false negatives aren’t free either.

lelandfe

I was swatted once. Girlfriend's house. Someone called 911 and said they'd seen me kill a neighbor, drag their body into the house, and was now holding my gf's family hostage.

We answered the screams at the door to guns pointed at our faces, and countless cops.

It was explained to us that this was the restrained version. We got a knock.

Unfortunately, I understand why these responses can't be neutered too much. You just never know.

collingreen

In this case, though, you COULD know, COULD verify with a human before pointing guns at people, or COULD not deploy a half finished product in a way that prioritizes your profit over public safety.

MisterTea

This happened to a friend of mine: an ex-GF said he was on psych meds (true, though he is nonviolent with no history) and that he was threatening to kill his parents. NYPD SWAT no-knock kicked the door down to his apartment, terrorizing his elderly parents as the police pointed guns at their son (in his words, "machine guns"). And because he has psych issues and is on meds, he was forced into a cop car in front of the whole neighborhood to get a psych evaluation. He only received an apology from the cops, who said they have no choice but to follow procedure.

edit: should add, sorry to hear that.

SoftTalker

Do the cops not ever get tired of being fooled like this? Or do they just enjoy the chance to go out in their army-surplus armored cars and pretend to be special forces?

dfxm12

It doesn't make sense. If you were holding people hostage, you'd have demands for their release. Windows could be peeked into. If you dragged a dead body into a house, there'd be evidence of that.

trehalose

False positives can effectively lead to false negatives too. If too many alarms end in teens getting swatted (or worse) for eating chips, people might ignore the alarm if an actual school shooter triggers it. Might assume the AI is just screaming about a bag of chips again.

Spooky23

I think a “true positive” is an issue as well if the protocol to manage it isn’t appropriate. If the kid was armed with something other than nacho cheese, the provocative reaction could have easily set off a tragic chain of events.

Reality is there are guns in schools every day. “Solutions” like this aren’t making anyone safer. School shooters don’t fit this profile - they are planners, not impulsive people hanging out at the social event.

More disturbing is the meh attitude of both the company and the school administration. They almost engineered a tragedy through incompetence, and learned nothing.

bilbo0s

>And some teen may be traumatized.

Um. That's not really the danger here.

The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.

This tech is not supposed to be used in this fashion. It's not ready.

neilv

Did you want to emphasize or clarify the first danger I mentioned?

My read of the "Um" and the quoting, was that you thought I missed that first danger, and so were disagreeing in a dismissive way.

When actually we're largely in agreement about the first danger. But people trying to follow a flood of online dialogue might miss that.

I mentioned the second danger because it's also significant. Many people don't understand how safety works, and will think "nobody got shot, so the system must have worked, nothing to be concerned about". But it's harder for even those people to dismiss the situation entirely, when the second danger is pointed out.

Aeolun

I’d argue the second danger is worse, because shooting might be incidental (and up to human judgement) but being traumatized is guaranteed and likely to be much more frequent.

wat10000

I fully agree, but we also really need to get to a place where drawing the attention of police isn't an axiomatically life-threatening situation.

Zigurd

If the US wasn't psychotic, not all police would have to be armed, and not every police response would be an armed response.

array_key_first

Even despite the massive protests in the past few years, we're moving further in that direction.


krapp

Americans are killed by police all the time, and by other Americans. We've already decided as a society that we don't care enough to take the problem seriously. Gun violence, both public and from the state, is accepted as unavoidable and defended as a necessary price to pay to live in a free society[0]. Having a computer call the shots wouldn't actually make much of a difference.

Hell, it wouldn't even move the needle on racial bias much because LLMs have already shown themselves to be prejudiced against minorities due to the stereotypes in their training data.

[0]Even though no other free society has to pay that price but whatever.

coryrc

Far more deaths by automobile than homicides by guns.

akoboldfrying

> The danger is that it's as clear as day that in the future someone is gonna be killed.

This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.

I'm not downplaying the risks. I'm saying that we should remember that almost everything has risks and benefits, and as a society we decide for or against using/doing them based mostly on the ratio between those two things.

So we need some data on the rates of false vs. true detections here. (A value judgment is also required.)

GuinansEyebrows

> This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.

huh, i can think of at least one recent example of a popular figure using this kind of argument to extreme self-detriment.

tartoran

> “We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”

Make them pay money for false positives instead of providing direct support and counselling. This technology is not ready for production; it should be in a lab, not in public buildings such as schools.

xbar

Charge the superintendent with swatting.

Decision-maker accountability is the only thing that halts bad decision-making.

nomel

> Charge the superintendent with swatting.

This assumes no human verification of the flagged video. Maybe the bag DID look like a gun. We'll never know, because modern journalism has no interest in such things. They obtained the required emotional quotes and moved on.

CGamesPlay

Human verified the video -> human was the decision-maker. No human verified the video -> Human who gave a blank check to the AI system was the decision-maker. It's not really about the quality of journalism, here.

xbar

Not at all.

The superintendent approved a system that they 100% knew would hallucinate guns on students. You assert that if the superintendent required a human in the loop before calling the police, then the superintendent is absolved of deploying that system on students.

You are wrong. The superintendent is the person who decided to deploy a system that would lead to swatting kids and they knew it before they spent taxpayer dollars on that system.

The superintendent also knew that there is no way a school administrator is going to reliably NOT dial SWAT when the AI hallucinates a gun. No administrator is going to err on the side of "I did not see an actual firearm so everything is great even though the AI warned me that it exists." Human-in-the-loop is completely useless in this scenario. And the superintendent knows that too.

In your lifetime, there will be more dead kids from bad AI/police combinations than <some cause of death we all think is too high>. We are not close to safely betting lives on it, but people will do it immediately anyway.

joe_the_user

So, are you implying that if humans surveil kids at random and call the SWAT team if a frame in a video seems to imply one kid has a gun, that then it's all OK?

Those journalists, just trying to get (unjustified, dude, unjustified!!) emotes from kids being mistakenly held at gun point, boy they are terrible.... They're just covering up how necessary those mistakes are in our pursuit of teh crime...

rufius

Wouldn’t matter if they did. There’s no penalty for getting it wrong so the human is always incentivized to say yes and then say oops if it was wrong.

If there’s no feedback mechanism, verification doesn’t matter.

lawiejtrlj

Yes, clearly the journalist was the cause of the problem here. You're an idiot.

dekken_

> Make them pay money

It already cost money paying for the time and resources to be misappropriated.

There needs to be resignations, or jail time.

SAI_Peregrinus

The taxpayers collectively pay the money, the officers involved don't (except for that small fraction of their income they pay in taxes that increase as a result).


russdill

I wonder how much more likely it is to get a false positive from a black student.

vee-kay

The question is whether that Doritos-carrying kid is still alive only because he is white, instead of having been shot by violent cops over a false positive about a gun (and the cops must have figured it was likely a false positive, because the info came from AI surveillance). These are the same cops who typically do nothing when an actual shooter is roaming a school on a killing spree; recall the Uvalde school shooting, when hundreds of cops in full body armor milled around the school, refusing to engage the shooter inside, and even prevented parents from going in to rescue their kids.

kelnos

Before clicking on the article, I kinda assumed the student was black. I wouldn't be surprised if the AI model they're using has race-related biases. In fact, I would be surprised if it didn't.

joe_the_user

I assume they were provided gift cards good for psychotherapy sessions.

akoboldfrying

> Make them pay money for false positives instead of direct support and counselling.

Agreed.

> This technology is not ready for production

No one wants stuff like this to happen, but nearly all technologies have risks. I don't consider that a single false positive outweighs all of its benefits; it would depend on the rates of false and true positives, and the (subjective) value of each (both high in this case, though I'd say preventing 1 shooting is unequivocally of more value than preventing 1 innocent person being unnecessarily held at gunpoint).

neuralRiot

I think I’ve said this too many times already, but the core problem here, and with the “AI craze” generally, is that nobody really wants to solve problems; what they want is a marketable product. AI seems to be the magic wrench that fits all the nuts, and since most people don't really know how it works or what its limitations are, they happily buy the “magic dust.”

akoboldfrying

> nobody realy wants to solve problems, what they want is a marketable product

I agree, but this isn't specific to AI -- it's what makes capitalism work to the extent that it does. Nobody wants to clean a toilet or harvest wheat or do most things that society needs someone to do, they want to get paid.

shemtay

Teenagers carrying and using guns in Baltimore actually is a real problem.

Zigurd

In the US, cops kill more people than terrorists do. As long as your quantification of values takes that into account.

akoboldfrying

I get that people are uncomfortable with explicit quantification of stuff like this, but removing the explicitness doesn't remove the quantification, it just makes it implicit. If, say, we allow people to drive cars even though car accidents kill n people each year, then we are implicitly quantifying that the value of the extra productivity society gets by being able to get places quickly in a car is worth the deaths of those people.

In your example, if terrorists were the only people killing people in the US, and police (a) were the only means of stopping them, and (b) did not benefit society in any other way, the equation would be simple: get rid of the police. There wouldn't need to be any value judgements, because everything cancels out. But in practice it's not that easy, since the vast majority of killings in the US occur at the hands of people who are neither police nor terrorists, and police play a role in reducing those killings too.

froobius

Stuff like this feels like some company has managed to monetize an open source object detection model like YOLO [1], creating something that could be cobbled together relatively easily, and then sold it as advanced AI capabilities. (You'd hope they'd have at least fine-tuned it on a good training dataset.)

We've got a model out there now that we've just seen has put someone's life at risk... Does anyone apart from that company actually know how accurate it is? What it's been trained on? Its false positive rate? If we are going to start rolling out stuff like this, should it not be mandatory for stats / figures to be published? For us to know more about the model, and what it was trained on?

[1] https://arxiv.org/abs/1506.02640
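The unanswered question about false-positive rates comes down to where the vendor sets its alert threshold. A YOLO-style detector emits a confidence score per detection, and the threshold trades misses against false alarms. The sketch below uses made-up scores and labels, not anything from Omnilert's actual model:

```python
# Sketch of the confidence-threshold tradeoff in a YOLO-style detector.
# Each tuple is (model confidence that a frame shows a gun, ground truth).
# All values are invented for illustration.
detections = [
    (0.95, True), (0.90, False), (0.80, True), (0.65, False),
    (0.60, False), (0.40, True), (0.30, False), (0.10, False),
]

def outcomes(threshold):
    """Count hits, false alarms, and misses when alerting above `threshold`."""
    caught = sum(1 for s, y in detections if s >= threshold and y)
    false_alarms = sum(1 for s, y in detections if s >= threshold and not y)
    missed = sum(1 for s, y in detections if s < threshold and y)
    return caught, false_alarms, missed

for t in (0.5, 0.7, 0.9):
    caught, false_alarms, missed = outcomes(t)
    print(f"threshold {t}: {caught} caught, {false_alarms} false alarms, {missed} missed")
```

Lowering the threshold catches more real weapons but flags more snack bags; raising it does the reverse. Without published evaluation numbers, nobody outside the company knows where on that curve these deployed systems actually sit.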

EdwardDiego

And it feels like they missed the "human in the loop" bit. One day this company is likely to find itself on the end of a wrongful death lawsuit.

an0malous

They’ll likely still be profitable after accounting for those. This is why sociopaths are so successful at business

dfxm12

[flagged]

jawns

I expect that John Bryan -- who produces content as The Civil Rights Lawyer https://thecivilrightslawyer.com -- will have something to say about it.

He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.

My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.

But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.

cyanydeez

Someday there'll be a lawyer in court telling us how strong the AI evidence was because companies are spending billions of dollars on it.

mothballed

Or they'll tell us police have started shooting because an acorn falls, so they shouldn't be expected to be held to higher standards and are possibly an improvement.

bluGill

And there needs to be an opposing lawyer ready to tear that argument to pieces.

Terr_

You mean in the same fallacious sense of "you can tell cigarettes are good because so many people buy them"?

simulator5g

That sort of rhetoric works very well unfortunately.

kelnos

The article says the police later showed the student the photo that triggered the alert. He had a crumpled-up Doritos bag in his pocket. So there was no gun in the photo, just a pocket bulge that the AI thought was a gun... which sounds like a hallucination, not any actual reasonable pattern-matching going on.

But the fact that the police showed the photo does suggest that maybe they did manually review it before going out. If that's the case, I do wonder how much the AI influenced their own judgment, though. That is, if there were no AI involved and police were just looking at real-time surveillance footage, would they have made the same call on their own? Possibly not: it feels reasonable to assume they let the fact that the AI flagged it override their own judgment to some degree.


hinkley

Is use of force without justification automatically excessive force or is there a gray area?

mentalgear

Ah, the coming age of Palantir's all-seeing platform, and Peter Thiel becoming the shadow emperor. Too bad non-deterministic ML systems are prone to errors that risk lives when applied wrongly to crucial parts of society. But in an authoritarian state, those will be hidden away anyway, so there's nothing to see here: move along, folks. Yes, surveillance and authoritarianism go hand in hand; ask China. It's important to protest these methods and push lawmakers to act against them now, before it's too late.

MiiMe19

I might be missing something but I don't think this article isn't about Palantir or any of their products

wartywhoa23

Palantir is but one head of the hydra which has hundreds of them, and all concerns about a single one apply to the whole beast hundredfold.

anigbrowl

It's still not helpful to wander into threads to talk about your favorite topic without making an effort to provide some context on why your comments are relevant. When random crazy people come up to you spouting their theories in public places, the problem is not that their concerns are necessarily incoherent or invalid; the problem is that they are broadcasting their thoughts randomly with no context, and their audience has no way of telling whether they just need to verbalize what's bothering them or have mistaken a passer-by for one of the villains in their psychodrama.

tl;dr if you want to make a broad point, make the effort to put it in context so people can appreciate it properly.

yifanl

You're absolutely right, Palantir just needs a different name and then they'd have no issues.

joomla199

This comment has a double negative, which makes it a false positive.

seanhunter

The article is about Omnilert, not Palantir, but don’t let the facts get in the way of your soapbox rant.

mzajc

Same fallible systems, same end goal of mass surveillance.

seanhunter

That may be the case, but only one of them is actually responsible for armed police swarming this student and it wasn't Palantir. It seems very strange that you're so eager to give a free pass to the firm who actually was at fault here.

courseofaction

American, please, wake up. The masked border police are on the streets arresting citizens, the military is being paid as a client of the president, corruption is legal, and a mass surveillance machine unfathomable to prior dictatorships is being/has been established. You're fucked. Listen to the soapbox. It is very, very relevant. Wake up.


wartywhoa23

I'm pretty sure that some people will continue to apply the term "soapbox ranting" to all opposition against the technofascism even when victims of its false positives will be in need of coroners, not psychologists.

protocolture

I don't think a guy who knows so much about the Antichrist could be wrong.

jason-phillips

[flagged]

mentalgear

Pre-emptive compliance out of fear - then my boy, the war is already lost.

polotics

"competition is for losers"... right? ;-)


rolph

> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

Prioritize your own safety by not attending any location fitted with such a system, or deemed to be such a dangerous environment that such a system is desired.

The AI "swatted" someone.

bilbo0s

Calling it today. This company is going to get innocent kids killed.

How many packs of Twizzlers, or Doritos, or Snickers bars are out there in our schools?

First time it happens, there will be an explosion of protests. Especially now that the public knows that the system isn't working but the authorities kept using it anyway.

This is a really bad idea right now. The technology is just not there yet.

mothballed

And then there's plenty of bullies who might put a sticker of a picture of a gun on someone's back, knowing it will set off the image recognition. It's only a matter of time until they figure that out.

xp84

That's a great and terrifying idea. When it inevitably happens, you'll have a couple of 13-year-olds: one dead, and one shell-shocked kid in disbelief that a stupid prank he cooked up in 60 seconds is now claimed as the reason someone was killed. That one may be charged with a crime or sued, though the district that installed this idiotic thing is really to blame.

withinboredom

When I was a kid, we made rubber-band guns all the time. I’m sure that would set it off too.

mrguyorama

>First time it happens, there will be an explosion of protests.

Why do you believe this? In the US, cops will cower outside a school while an armed gunman is actively murdering children, forcibly detain parents who want to go in if the cops won't, and then the voters re-elect everyone involved.

In the US, an entire segment of the population will send you death threats claiming you are part of some grand (democrat of course) conspiracy for the crime of being a victim of a school shooting. After every school shooting, republican lawmakers wear an AR-15 pin to work the next day to ensure you know who they care about.

Over 50% of the country blamed the protesting students at Kent State for daring to be murdered by the National Guard.

Cops can shoot people in broad daylight, in the back, with no justification or reasonable cause, or even barge into entirely the wrong house and open fire on homeowners exercising their basic right to protect themselves from strangers invading their homes. As long as the people who die are mostly black, half the country will spout crap like "they died from drugs" or "they once sold a cigarette" or "he stole Skittles" or "they looked at my wife wrong," while the cops take selfies reenacting the murder for laughs and talk about how terrified they are, thanks to BS "training" that makes them treat every stranger as a wanted murderer armed to the teeth. The leading cause of death for cops is still heart disease, of course.

Trump sent unmarked forces to Portland and abducted people off the street into unmarked vans and I think we still don't really know what happened there? He was re-elected.

callalex

> The technology is just not there yet.

The technology literally can NEVER be there. It is completely impossible to positively identify a bulge in clothing as a handgun. But that doesn’t stop irresponsible salesmen from making the claim anyway.

etothet

The corporate version of "It's a feature, not a bug."

nyeah

Clearly it did not prioritize human safety.

tencentshill

"rapid human verification." at gunpoint. The Torment Nexus has nothing on these AI startups.

palmotea

Why did they waste time verifying? The police should have eliminated the threat before any harm could be done. Seconds count when you're keeping people safe.

anigbrowl

I get that you're being sarcastic and find the police response appalling, but the sad reality of Poe's Law is that there are a lot of people who would unironically say this, and who would have cheered if the cops had shot this kid, either because they hate black people or because they get off on violence, and police shootings are a socially sanctioned way to indulge that taste.

vee-kay

We all know the cops will go for the easy prey:

* Even hundreds of cops in full body armor, armed with automatic weapons, will not dare to engage a single "lone wolf" shooter on a killing spree in a school; the heartless cowards may even prevent parents from going inside to rescue their kids: the Uvalde school shooting.

* A cop on an ego trip will shoot a clearly harmless kid calmly eating a burger in his own (not stolen) car: the Erik Cantu incident.

* Cops are not there to serve society, and they are not there to ensure safety and peace for the neighborhood; they are merely an armed militia protecting the rich and powerful elites: https://www.alternet.org/2022/06/supreme-court-cops-protect-...

akimbostrawman

All of your examples are well known not because they are normal and accepted but because they are exceptions. For every bad example there are a thousand good ones; that's humans for you.

That doesn't mean cops are perfect or shouldn't be criticised, but claiming that's all they are doing isn't reasonable either.

If you look at actual per capita statistics you will easily see this.

drak0n1c

The dispatch relayer and responding officers should at least have ready access to a screen where they can see the raw video/image that triggered the AI alert. If it's a false alarm, they will be better able to see that and react accordingly; if it's a real threat, they will better understand the initial context and who may have been involved.

ggreer

According to a news article[1], a human did review the video/image and flagged it as a false positive. It was the principal who told the school cop, who then called other cops:

> The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon. I contacted our school resource officer (SRO) and reported the matter to him, and he contacted the local precinct for additional support. Police officers responded to the school, searched the individual and quickly confirmed that they were not in possession of any weapons.

What's unclear to me is the information flow. If the Department of School Safety and Security recognized it as a false positive, why did the principal alert the school resource officer? And what sort of telephone game happened to cause local police to believe the student was likely armed?

1. https://www.wbaltv.com/article/student-handcuffed-ai-system-...

xp84

Good lord, what an idiot principal. If the principal saw how un-gun-like it looked, he could have been brave enough to walk his lazy ass down to where the student was and said "Hey (Name), check this out. (show AI detection picture) The AI camera thought this was a gun in your pocket. I think it's wrong, but they like to have a staff member sign off on these since keeping everyone safe from violence is a huge deal. Can I take a picture of what it actually is in your pocket?"

undefined

[deleted]

wat10000

Sounds like a "better safe than sorry" approach. If you ignore the alert on the basis that it's a false positive, and then it turns out there really was a gun and the person shoots somebody, you're going to get sued into the ground, fired, your name plastered all over the media, etc. On the other hand, if you call in the cops and there wasn't a gun, you're fine.

Etheryte

Next up, a captcha that verifies you're not a robot by swatting you and checking at gunpoint.

proee

Walking through TSA scanners, I always get that unnerving feeling I will get pulled aside. 50% of the time they flag my cargo pants because of the zipper pockets; there is nothing in them, but the scanner doesn't like them.

Now we get the privilege of walking by AI security cameras placed in random locations, hoping they don't flag us.

There's a ton of money to be made with this kind of global frisking, so lots of pressure to roll out more and more systems.

How does this not spiral out of control?

mpeg

To be fair, at least you can choose not to wear the cargo pants.

A friend of mine once got pulled aside for extra checks and questioning after he had already gone through the scanners, because he was waiting for me on the other side to walk to the gates together and the agent didn't like that he was "loitering" – guess his ethnicity...

stavros

How is it fair to say that? That's some "why did you make me hurt you"-level justification.

mpeg

No, it's not.

I have shoes that I know always beep on the airport scanners, so if I choose to wear them I know it might take longer or I might have to embarrassingly take them off and put them on the tray. Or I can not wear them.

Yes, in an ideal world we should all travel without all the security theatre, but that's not the world we live in. I can't change the way airports work, but I can wear clothes that make it faster, I can put my liquids in a clear bag, I can have bags that make it easy to take out electronics, etc. Those are things I can control.

But people can't change their skin color, name, or passport (well, not easily), and those are also all things you can get held up in airports for.

franktankbank

> guess his ethnicity...

Not sure, but I bet they were looking at their feet kinda dusting off the bottoms, making awkward eye contact with the security guard on the other side of the one way door.

malux85

Speak up citizens!

Email your state congressman and tell them what you think.

Since (pretty much) nobody does this, if a few hundred people do it, they will sit up and take notice. It takes fewer people than you might think.

Since coordinating this with a bunch of strangers (i.e. the public) is difficult, the most effective way is to normalise speaking up in our culture. Of course, normalising it will increase the incoming comm rate, which will slowly decrease the effectiveness, but even past that point it's better than where we are, which is silent public apathy.

anigbrowl

If that's the case, why do people in Congress keep voting for things their constituents don't like? When they get booed at town halls they just dismiss it as being the work of paid activists.

actionfromafar

Yeah, Republicans hide from townhalls. Most of them have one constituent, Trump.

xp84

Get Precheck or Global Entry. I only go through a scanner every 5 years or so, when I get pulled at random for it. Otherwise it's metal detector only. Unless your zippers have such chunky metal that they set that off, you'll be fine. My belt and watch don't.

Note: Precheck is incredibly quick and easy to get; GE is time-consuming and annoying, but has its benefits if you travel internationally. Both give the same benefits at TSA.

Second note: let's pretend someone replied "I shouldn't have to do that just to be treated...blah blah" and that I replied, "maybe not, but a few bucks could still solve this problem, if it bothers you enough that's worth it to you."

hollow-moe

"Just pay to not be harassed or have your rights/dignity stepped on" is a typical take to find on the orange site.

rkagerer

> ...maybe not, but a few bucks could still solve this problem

Sure, can't argue with that. But doesn't it bug you just a little that paying a fee to avoid harassment doesn't look all that dissimilar from a protection racket? As to whether it's a few bucks or many, now you're just a mark negotiating the price.

xp84

> protection racket

It actually doesn't! Plenty of people never fly at all, and many fly incredibly rarely. The Precheck and GE programs cost money to administer, since they have to do background checks and conduct interviews. This accomplishes actual security goals, since it allows them to flag risky behavior and examine it.

Who benefits from these programs? Primarily heavy travelers (and optimizers like me who value their time saved more than the $24 a year). These programs also make everything better for everyone else, since I'm no longer taking up a space in the slower-moving, shoes-off line, and TSA/CBP get an actual background check done on me.

The way it is now, heavy travelers who can easily afford it, pay the full costs of the program.

Would you rather:

1. Precheck is free and paid for by all taxpayers even though a lot of people will never bother to enroll (you have to assume -- the cost is so low today that it can't be a barrier for almost anyone who can afford to fly, so it seems a ton of people can't be bothered to follow simple instructions and go get fingerprinted at Staples)

2. Precheck is eliminated and everyone has to go back to the dumb liquids-out, shoes-off thing

3. Precheck is eliminated and we just treat everyone like the Precheck people today, without doing any background checks. Basically like pre-9/11.

fgbarben

The brownshirts will never get my money.

xp84

lol, enjoy torturing yourself to stick it to the man to save the literal $24-a-year cost of Global Entry.

voidUpdate

I don't often fly, but back when I went to Germany on a school trip, on the return flight I got pulled aside into a small room by whatever the German equivalent of the TSA is, and they swabbed the skin of my belly and the inside of my bag. I'm guessing it was a drug check and I must have just looked shifty, because I get nervous in situations like that, but I do find it funny that they pulled me aside instead of the guys with me who almost certainly had something on them.

Also, my partner has told me that apparently my armpits sometimes smell of weed or beer, despite me not having come in contact with either for a very long time, and now I definitely don't want to get taken into a small room by a TSA person. (After some googling, apparently those smells can be associated with high stress.)

walkabout

I already adjust my clothing choices when flying to account for TSA's security theater make-work bullshit. Wonder how long before I'm doing that when preparing to go to other public places.

(I suppose if I attended pro sports games or large concerts, I'd be doing it for those, too)

hinkley

I was getting pulled out of line in the '90s for having long hair. I didn't dress in shitty clothes or fancy ones, and I didn't look funny; it was just the hair, which got regular compliments from women.

I started looking at people, trying to decide who looked juicy to the security folks, and getting in line behind them. They can't harass two people in rapid succession. Or at least not back then.

The one I felt most guilty about, much later, was a Filipino woman with a Philippine passport. Traveling alone. Flying to Asia (super suspect!). I don't know why I thought they would tag her, but they did. I don't fly well, and more stress just escalates things, so anything that makes my day a tiny bit less shitty and isn't rude, I'm going to do. But her day would probably have been better for not getting searched than mine was.

JustExAWS

Getting pulled aside by TSA for secondary screening is nowhere in the ballpark of being rushed at gunpoint as a teenager and told to lie down on the ground, where one false move will get you shot by a trigger-happy cop who probably won't face any consequences, especially if the innocent victim is a Black male.

In fact, they will probably demonize the victim to find an excuse for why he deserved to get shot.

proee

I wasn't implying TSA cargo-pant groping is comparable. My point is to show the escalation in public-facing systems. We have been dealing with the TSA. Now we get AI scanners. What's next?

Also, no need to escalate this into a race issue.

JustExAWS

Yes because I’m sure if a White female had been detected by AI of carrying a gun, it would have been treated the same way.

hinkley

But it was a black man they harassed.

more_corn

Why don’t you pay the bribe and skip the security theater scanner? It’s cheap. Most travel cards reimburse for it too.

proee

I'm sure CLEAR is already having giddy discussions about how they can charge you for pre-verified access to walk around in public. We can all wear CLEAR-certified dog tags so the cops can hassle the non-dog-tagged people.

jason-phillips

I got pulled aside because I absentmindedly showed them my concealed carry permit, not my driver's license. I told them I was a consultant working for their local government and was going back to Austin. No harm no foul.

oceanplexian

If the system used any kind of logic whatsoever, a CCW permit would not only allow you to bypass airport security but also to carry in the airport (speaking as both a pilot and a permit holder).

That would probably eliminate the need for the TSA security theater, so it will probably never happen.

mothballed

You can carry in the airport in AZ without a permit, in the unsecured areas. I think there was only one brouhaha, because some particularly bold guy did it openly with a rifle (can't remember if there's more to the story).

some_random

The point of the security theater is to assuage the 95th percentile scared-of-everything crowd, they're the same people who want no guns signs in public parks.

rglover

This may be mean, but we should really be careful about just handing AI over to technically illiterate people. They're far more likely to blindly trust the LLM/AI output than someone more experienced who might take a beat. AI in an agentic-state society (what we have in America, at least) is an absolute ticking time bomb. Honestly, this is what AI safety teams should be concentrating on: making sure people who think the computer is infallible understand that, no, it isn't, and you shouldn't just assume what it tells you is correct.

hollow-moe

We already handed the Internet over to technically illiterate people a long time ago.

hanspeter

It's basically a failure to set up the proper response playbook.

Instead of:

1. AI detects gun on surveillance

2. Dispatch armed police to location

It should be:

1. AI detects gun on surveillance

2. Human reviews the pictures and verifies the threat

3. Dispatch armed police to location

I think the latter version is likely what already took place in this incident, and it was actually a human who also mistook a bag of Doritos for a gun. But that version of the story is not as interesting, I guess.
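The difference between the two playbooks comes down to a guard on the dispatch step. Here's a minimal sketch of that idea; all names (`Alert`, `human_review`, `handle_alert`, the confidence threshold) are hypothetical illustrations, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    camera_id: str
    ai_confidence: float
    snapshot: str  # path/URL of the frame that triggered detection

def human_review(alert: Alert) -> bool:
    """Stand-in for a trained reviewer looking at the snapshot.
    In a real system this blocks on a person; here we simulate a
    reviewer who only confirms high-confidence detections."""
    return alert.ai_confidence >= 0.95

def handle_alert(alert: Alert, dispatch) -> str:
    # Step 2: a human verifies before anything escalates.
    if not human_review(alert):
        return "logged: false positive, no dispatch"
    # Step 3: only a confirmed threat reaches armed responders.
    dispatch(alert)
    return "dispatched"

# A Doritos-bag-grade detection never reaches dispatch.
calls = []
doritos = Alert("cam-12", ai_confidence=0.61, snapshot="frame_0412.jpg")
print(handle_alert(doritos, dispatch=calls.append))
# → logged: false positive, no dispatch
```

The design point is that dispatch is unreachable except through the verification gate, so a skipped or rubber-stamped review (as apparently happened here) is a process failure, not a code path.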

shaky-carrousel

He could easily have been murdered. It's far from the first time a bunch of overzealous cops have murdered a kid. I would never in my life set foot in a place that sends armed cops after me so easily. That school is extremely dangerous.

ggreer

I think the reason the school bought this silly software is because it's a dangerous school, and they're grasping at straws to try and fix the problem. The day after this false positive, a student was robbed.[1] Last month, a softball coach was charged with rape and possession of child pornography.[2] Last summer, one student was stabbed while getting off the bus.[3] Last year, there were two incidents where classmates stabbed each other.[4][5]

1. https://www.nottinghammd.com/2025/10/22/student-robbed-outsi...

2. https://www.si.com/high-school/maryland/baltimore-county-hig...

3. https://www.wbaltv.com/article/knife-assault-rossville-juven...

4. https://www.wbal.com/stabbing-incident-near-kenwood-high-sch...

5. https://www.cbsnews.com/baltimore/news/teen-injured-after-re...

evanelias

That certainly sounds bad, but it's all relative; keep in mind this school is in Baltimore County, which is distinct from the City of Baltimore and has a much different crime profile. This school is in the exact same town as Eastern Tech, literally the top high school in Maryland.

boneitis

Hi, I'm not following the point being made.

I skimmed through all the articles linked in the GP and found them pretty relevant to whatever decision might have been made to utilize the AI system (not at all to comment on how badly the bad tip was acted on).

Hailing from and still living in N. California, you could tell me that this school is located in Beverly Hills or Melrose Place, and it would still strike me as a piece of trivia. If anything, it'd just be ironic?

aidenn0

That sounds to me like it's pretty close to the middle of the curve for a large high school in the US.

ggreer

I doubt that. I moved around a lot as a kid, so I went to at least eight different public schools from Alabama to Washington. One school was structurally condemned while I attended it. Some places had bullying, and sometimes a couple of people fought, but never with weapons, and there was never an injury severe enough to require medical attention.

I also know several high school teachers and the worst things they've complained about are disruptive/stupid students, not violence. And my friends who are parents would never send their kids to a school that had incidents like the ones I linked to. I think this sort of violence is limited to a small fraction of schools/districts.

simoncion

Based on your reporting, that's one violent crime per year, and one alleged child rapist. [0]

The crime stats seem fine to me. In a city like Baltimore, the numbers you've presented are shockingly low. When I was going through school, it was quite common for bullies to rob kids... even on campus. Teachers pretty much never did anything about it.

[0] Maybe the guy is a rapist, and maybe he isn't. If he is, that's godawful and I hope he goes to jail and gets his shit straight.

Havoc

>the system “functioned as intended,”

Behold: a real-life example of a "Not Hotdog" system, except this one is gun / not-a-gun.

Except the fictional one from the series was more accurate...

macintux

I think the most amazing part is that the school doubled down on the mistake by parroting the corporate line.

I expect a school to be smart enough to say “Yes, this is a terrible situation, and we’re taking a closer look at the risks involved here.”

JKCalhoun

Lawyer's advice?

macintux

I would think "no comment" would be safer/smarter than "yeah, your kids are at risk of being shot by police by attending our school, deal with it".

JKCalhoun

Good point.

lisbbb

Except they never say that.

macintux

> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

> Baltimore County Public Schools echoed the company’s statement in a letter to parents, offering counseling services to students impacted by the incident.

(Emphasis mine)
