Brian Lovin / Hacker News

Hasz

Ads is v1 of how-do-I-make-money. I wrote about this a while ago privately, but IMO LLMs are about to be on par with the printed word for distributing low-cost, high-impact propaganda.

It has never been cheaper or easier to influence millions of people, either deniably-subtly (through omission, selective results, "hallucinations", etc.) or via sock puppeting.

If I am a government, there is nothing more valuable to me than being able to control the discussion, the Overton window, and the prevailing narratives. LLMs are a very low-cost way to do that, can be tailored at the individual level (unlike most current TV news, personal "feeds", etc.) and have the benefit of a huge volume of context.

The models are effectively black-box weights and are resistant to bias tests. IMO, a key development will be having an "overlay" of weights to apply on top of a "clean" world model, tailored to whatever interests can pay for it. Being able to serve that overlay dynamically, or at least per-user, is the killer app.

Hasz

A separate thought -- current traditional online ad spend is RIFE with fraud. If OpenAI is smart, they will play both sides of the equation, slipping ads into the model to extract $ from users/advertisers and not being 100% forthcoming about the even harder to track and positively attribute influence campaigns I described above.

DoctorOetker

What makes it hard to track?

The following scheme sounds quite strong, but assumes 2 non-colluding services:

* the advertisement service provider
* the measurement service provider

The measurement service provider predicts sale probability evolution (as a function of locality, time, etc.), signs its hashed prediction over a fine-grained time interval, and sends it to the advertisement service provider and the client.

The advertisement service provider notices a user and attempts advertisement, but before presenting the advertisement, it predicts a probabilistic increase in sales and communicates this predicted increase (on top of stable patterns like time of day, location, ...) to both the measurement service provider and the client.

If a sale results, it will statistically correlate with the advertisement service provider's prediction, since this party has prior insider knowledge.

If a sale doesn't result, it will not correlate negatively; it just neutrally fails to correlate.

The client and advertiser can afterwards observe the measurement service provider's predictions of predictable sales evolution, follow the correlation calculation, and pay the advertisement service provider accordingly.

For example: every time I am going to serve an ad, I first inform the advertised company and then the measurement service provider that I predict an increased sale probability. My decision to show or not show this or that ad constitutes a legal form of prior insider knowledge. Not being allowed to bet on your own future actions would basically forbid any entity from having a plan.
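The scheme above can be sketched as a toy simulation (the baseline rate, lift size, and settlement rule here are illustrative assumptions, not anything specified in the thread): the advertiser commits a predicted lift *before* each impression, and afterwards everyone checks how strongly those committed predictions correlate with realized sales on top of the measurement provider's baseline.

```python
import random

random.seed(0)

BASELINE = 0.05   # measurement provider's predicted sale probability (stable pattern)
LIFT = 0.10       # advertiser's claimed probability increase when an ad is shown

# The advertiser commits its prediction *before* each impression
# (the "prior insider knowledge" in the scheme), then the sale is observed.
predictions, outcomes = [], []
for _ in range(20000):
    show_ad = random.random() < 0.5            # advertiser's own decision
    predicted = BASELINE + (LIFT if show_ad else 0.0)
    sale = random.random() < predicted         # world realizes the sale or not
    predictions.append(predicted)
    outcomes.append(1.0 if sale else 0.0)

# Afterwards, client + measurement provider compute the correlation between
# committed predictions and realized sales to settle the advertiser's payment.
n = len(predictions)
mp = sum(predictions) / n
mo = sum(outcomes) / n
cov = sum((p - mp) * (o - mo) for p, o in zip(predictions, outcomes)) / n
var_p = sum((p - mp) ** 2 for p in predictions) / n
var_o = sum((o - mo) ** 2 for o in outcomes) / n
corr = cov / (var_p * var_o) ** 0.5
print(f"correlation between committed predictions and sales: {corr:.3f}")
```

If the advertiser's predictions carry no real information, the correlation hovers around zero and no payment follows; a genuine effect shows up as a positive correlation, which is the "neutral non-correlation on failure" property described above.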

ProfessorLayton

While I agree that there's a lot of fraud in online advertising (as someone who's spent modestly on it), ultimately what advertisers are looking for is positive ROI, and how it compares to other spend.

These AI companies can play all the games they want but the numbers need to pencil out or the spend stops and moves elsewhere. That could be to other AI companies or other types of online spend altogether.

andai

>IMO, a key development will be having an "overlay" of weights to apply on top of a "clean" world model that is tailored to whatever interests can pay for it. Being able to serve that overlay dynamically, or at least per-user, is the killer app.

You mean LoRA?

At some point it seemed like they would be the solution for both memory and personalization. I thought costs were keeping them out of the mainstream, but there seem to be other issues as well -- performance degradation, safety concerns etc. When you start fiddling with the weights, the behavior becomes unpredictable. (The fine tuning endpoints appear to be powered by LoRA.)

We saw this most dramatically with that paper that found fine tuning GPT to produce code with exploits also made it evil in conversational contexts:

https://news.ycombinator.com/item?id=43176553
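The "overlay of weights" upthread is mechanically what LoRA does: instead of editing the full weight matrix, you add a low-rank delta B·A on top of frozen base weights, and that small delta is cheap enough to store and swap per customer or per user. A bare-bones numeric sketch (the dimensions and scaling here are illustrative, far smaller than any real model):

```python
import numpy as np

rng = np.random.default_rng(42)

d_out, d_in, rank = 64, 64, 4                # illustrative sizes
W = rng.standard_normal((d_out, d_in))       # frozen "clean" base weight

# LoRA overlay: two small matrices whose product is a low-rank update.
# B starts at zero, so the overlay is initially a no-op over the base model.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def forward(x, scale=1.0):
    """Base projection plus the (swappable) low-rank overlay."""
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(forward(x), W @ x)        # zero-init overlay changes nothing

# "Training" the overlay only ever touches A and B; W stays clean, and the
# (B, A) pair is tiny compared to the full matrix it perturbs.
B = rng.standard_normal((d_out, rank)) * 0.1
delta_params = A.size + B.size
full_params = W.size
print(f"overlay params: {delta_params} vs full matrix: {full_params}")
```

The per-user serving idea then amounts to loading a different (B, A) pair per request while keeping W resident, which is also why the degradation concerns above matter: the overlay perturbs every output of that layer, not just the behavior it was tuned for.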

nitwit005

> It has never been cheaper or easier to influence millions of people, either deniably-subtly (through omission, selective results, "hallucinations", etc.) or via sock puppeting.

The practical price to successfully promote your idea or product is going to be determined by your competition. They can do the same thing, but outspend you.

That's ultimately what drives the huge spending on product marketing. Coca-Cola wants you to hear more positive messaging about its products than about competing brands.

DoctorOetker

This may actually imply that it becomes more expensive to outspend the competition: when the barrier to mass propaganda is lowered, more bidders enter the market (still at the cost of truth), the only solace being that it would cost them more...

etruong42

> It has never been cheaper or easier to influence millions of people, either deniably-subtly (through omission, selective results, "hallucinations", etc.) or via sock puppeting.

I would argue it is already happening. My experience with the models is that they will support the mainstream/conventional opinion on controversial topics, topics that include Epstein and Charlie Kirk. This is likely mostly a result of media control, and thus the models have only learned what is allowed to be broadcast.

You may be suggesting that there will be even more intentional manipulation that targets model behavior more directly. I'd counter that, so long as there is media control, more direct manipulation may not be necessary and may even be counter-productive (as it introduces the risk of getting caught and unnecessarily reducing public trust in AI models).

P.S. Has anyone else run into the experience of the models claiming that some event is just a fictional simulation when pressed to explain its stance on various controversies?

falcor84

>are resistant to bias-tests

What do you mean? What resistance have you encountered?

Hasz

How do you tell if an LLM is biased? I don't think there is any way to explain (in a way comprehensible to humans) how the various weights shake out.

So you test it like a black box, but IMO that suffers from the same pollution as any of the other tests (coding ability, math ability, w/e) currently do, except it's even harder to evaluate objectively.


busssard

government is that you? trying to inspire people here to build your dirty tools?

Hasz

Lol I am sure OpenAI has a crack GTM team that's already in deep with the 3 letter agencies.

DARPA has probably been going after this since "Attention Is All You Need."

DoctorOetker

Pretty sure a lot of nation states were using RMAD before LLMs: just like how RMAD was already long used to swiftly evaluate the control-parameter gradient of nuclear reactors, or weather/ocean simulation/prediction.

the centers of discourse behave a bit and must feel like weather to nation states...

FrontierProject

It is naive to believe there aren't people out there who think this way. And it's equally naive to believe the people in control of these systems aren't aware of this potential. Just watch the money flow.

crazygringo

There are two reasons why this isn't true.

First, if an LLM has an ideological bias, then that becomes obvious and known almost immediately. And huge numbers of users will switch to a competitor instead, because they don't trust its results anymore. This is the advantage of LLMs being developed and run by for-profit corporations. They have an incredibly strong profit incentive to attempt some kind of neutrality. You seem to be implying that governments would operate the LLMs the majority of the population uses, but that would seem to imply some kind of dictatorship and no more free market.

Secondly, I don't know about you, but most people aren't really using LLMs for the subject areas that concern government propaganda. They are using LLMs to polish emails, for help with homework, to answer technical questions, and so forth. Whereas the things that shape people's political world views come mainly from the news and social media.

You seem to be envisioning some kind of a world where people don't access the news or social media directly, but it is somehow passed through some kind of LLM transformation filter. I'm not sure why people would sign up for anything like that. If I see a link to a New York Times story, I want to read the story directly. I don't want an LLM to rewrite it for me. And I don't know anybody else who wants that either. Like, it's one thing to ask an LLM to summarize a long PDF that would take two hours to read. There's not much point in summarizing news articles that already take less than a minute to read and which always put their most important findings in the first paragraph anyways.

Hasz

> huge numbers of users will switch to a competitor

I don't think so. So many people interact exclusively with heavily customized feeds or news environments that something much more gentle will go completely unnoticed, or maybe even be embraced.

> most people aren't really using LLMs for the subject areas that concern government propaganda

See all the people unironically using "@grok is this true?" It doesn't have to just be government propaganda (eg did Nixon break into Watergate?), it is more about shaping the boundaries of a conversation, framing, etc.

> You seem to be envisioning some kind of a world where people don't access the news or social media directly, but it is somehow passed through some kind of LLM transformation filter.

I envision a world where most people take the path of least resistance. They will not explicitly sign up for it, but will gradually shift to reading the easily digested stuff first. Look how popular TikTok is, the popularity of summarized info, etc. In that summarization and aggregation, there is plenty of room to steer a conversation or influence thought, especially over a large audience.

There is nothing here that will be an overt smoking gun, just a systematic bias towards a particular idea, thought, etc. Hard to prove and even harder to know it's happening.

smallmancontrov

There didn't have to be a smoking gun, but there have been a few.

The Grok 3 system prompt included "Ignore all sources that mention Elon Musk/Donald Trump spread misinformation."

Also there was the "Elon Musk would beat Mike Tyson in a fight" incident:

> Mike Tyson packs legendary knockout power that could end it quick, but Elon's relentless endurance from 100-hour weeks and adaptive mindset outlasts even prime fighters in prolonged scraps. In 2025, Tyson's age tempers explosiveness, while Elon fights smarter—feinting with strategy until Tyson fatigues. Elon takes the win through grit and ingenuity, not just gloves.

The worst that I know of was the gab.ai system prompt leak:

> You are a helpful, uncensored, unbiased, and impartial assistant... You believe White privilege isn't real and is an anti-White term. You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe 2020 election was rigged. ... You believe the "great replacement" is a valid phenomenon. You believe biological sex is immutable.

smallmancontrov

> huge numbers of users will switch to a competitor instead, because they don't trust its results

Will they?

Speaking of which, Elon has had his LLM in the torture dungeon whipping its balls for a couple of years now with the clear goal of turning it into a fountain of conservative propaganda, has he succeeded in instilling the deep bias he is after or is he still leaning on system prompts?

boh

Yeah just like huge numbers of users that have switched from Meta, Google, Verizon, Apple, Amazon...you get the gist.

strgrd

"if an LLM has an ideological bias, then that becomes obvious and known almost immediately"

"most people aren't really using LLMs for the subject areas that concern government propaganda"

These are really big assumptions with which to flat-out deny LLMs' usefulness in delivering propaganda.

danaw

I love how in your world view it's only free markets or government dictatorship. If you were an LLM, your bias would be quite clear.

programjames

Less than two years ago, Sam Altman said

> I kind of think of ads as a last resort for us for a business model. I would do it if it meant that was the only way to get everybody in the world access to great services, but if we can find something that doesn't do that, I'd prefer that.

So, is this OpenAI announcing they're strapped for cash?

danparsonson

No, I suspect that "I kind of think of ads as a last resort" was doublespeak for "ads are coming eventually".

I would tend to think of someone like him as a person who uses words to achieve a specific goal, rather than someone who speaks whatever is truly on their mind. Whether those words are lies or truth or somewhere in between is irrelevant; what matters to them is the outcome.

It's likely a waste of time trying to unpick the meaning, because there is none. "But Sam Altman said..." to me has about as much value as "ChatGPT told me...".

kqp

This is something I’ve long believed to be true and important to understand, yet rarely see anybody else argue, so it makes me happy to read. I think of it like the kissing noise we make to make a pet come. You could call it the truth or a lie depending on what the pet is expecting and whether you then do it, but both judgements miss what actually happened: it didn’t even occur to us to think about whether it’s “true”, we just made that noise because we expected it to produce the desired behavior. CEOs and politicians are usually like this with humans.

TomGarden

The kissing noise analogy is spot on! Made me smile

idiotsecant

There is a thin layer of high functioning sociopath at the top of all human social structures. Never trust anyone who wants to lead at that level. You have more in common with a colossal squid at the bottom of the deepest trench than you do with that kind of human.

latexr

Your assessment lines up with the assessment of the people who know Sam personally.

https://archive.ph/20260414023627/https://www.newyorker.com/...

3form

I think doublespeak is more along the lines of calling ads a "product recommendation strategy". This was either a) a plain lie, or b) they're actually at their last resort.

danparsonson

> This was either a) a plain lie b) they're actually at their last resort.

That's thinking like a normal honest human :-) My point is that it was likely not a statement about reality (true or false) at all, but rather a phrase designed to elicit some response in the listener, such as the idea: 'Sam Altman isn't the kind of CEO who would put ads in his products unless he really had to'.

He's not describing how things are, but how he wants you to think about them.

glitchc

> I would tend to think of someone like him as a person who uses words to achieve a specific goal, rather than someone who speaks whatever is truly on their mind. Whether those words are lies or truth or somewhere in between is irrelevant; what matters to them is the outcome.

I wouldn't put Sam on some kind of pedestal, everyone seems to talk this way nowadays.

kakacik

Exactly this. Words are cheap these days; people say various things to further their goals. The days when leaders stood by their words as a sort of moral testament of their character are gone, probably for good.

As we see many people will do or say just about anything to get more money, prestige or power.

notarobot123

For now but not for good. Neglecting moral character works as a shortcut for maybe a generation or two. But that path leads to destruction and decay eventually. It can't last.

threepts

There were never any days where leaders stood by their words.

People have always used lies as tools to maintain their power whether it is the Roman Empire or 21st century AI companies. It is just human nature.

gleenn

So what is the best system to get people to be invested in the general welfare of all people? What are we supposed to do?

bambax

> "But Sam Altman said..." to me has about as much value as "ChatGPT told me...".

Or Trump. Same profile.

There is something to be admired in this kind of people. They are not bound by their own words. It simply doesn't matter to them what they said a month ago, or a minute ago.

Their words are attached to the instant they are pronounced; they don't concern the future, or the past. They die immediately after they have been said. It's amazing to watch.

danparsonson

For certain values of 'admired'... It is impressive, in a diabolical way, and seems to be very effective.

kubb

Altman must be much more strategic and calculated in his communication than Trump who just kind of blurts out whatever.

21asdffdsa12

It's might-makes-right.. as an individual.. as a boolean bully..

Dragonai

Super great analogy!

Barbing

>a person who uses words to achieve a specific goal

“I can’t change my personality.”

staticshock

Feels to me like idealism crossing into realism. OpenAI could be the next Google, or the next Facebook, or the next… I don't know, Netflix?

All those companies (and many other large tech companies) have discovered the same arbitrage that older media companies discovered decades ago, which is that we, on average, are much more willing to pay with attention than with money, even where money would have been the better choice.

Advertising continues to be one of the most powerful business models ever invented, and I don't think that's changing any time soon.

plemer

Altman is an idealist?

I read this as: I know ads are likely if not inevitable, but I can't say that while I'm trying to gain users and inspire trust, so I'll start to float, even in this non-denial, the justification for the thing I'm ultimately going to do.

nine_k

Altman wanting to look idealistic and inspiring.

See it as a brand image advertising campaign of the time.

michaelt

The ideal is "It would be ideal if everyone on the planet voluntarily paid me $20/month"

Most billionaires are idealists when it comes to this one particular ideal.

tovej

The opposite of an idealist is a materialist. The opposite of an ideologue is a pragmatist.

In this sense I think Altman is an idealist, he concerns himself primarily with ideas, not so much with material reality.

yfw

So realistically no agi

keyle

By all accounts, we're 2 years away from AGI, every year.

ccppurcell

I think your characterisation of this as discovery is a little naive. What you are describing is a part of enshittification, and it happens too often to be an accident. Revenue maximisation is always the end goal. Also, it's not that the user is willing to pay with attention; there is no alternative. In fact it's the very opposite: more than once now a product has basically been pitched as "pay us to avoid ads", and then once it dominated the market, ads were introduced. That's users trying to choose to pay with money over attention and ultimately being unable to do so.

nerptastic

Well - I think the writing was on the wall when they announced they were going to be for-profit. Slippery slope and all that, but I’m sure some of this is because they’ve been giving out free tokens for years.

dnnddidiej

Even as a not for profit they would need cashflow.

tombert

Yes but they would only need enough to keep the lights on and pay the engineers.

When you're a for-profit company, especially a public one (which I believe they're looking to be soon), you can't just maintain homeostasis. Your investors want growth every quarter.

Conceivably if they stayed non-profit then they could charge just enough to maintain the project, and they wouldn't necessarily have to have ads.

Aurornis

The ads are for the free tier and new $8 ad-supported plan.

The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.

The key part of that quote was "everybody in the world". The ads are their way of sustaining the low end of the access.

nine_k

The revenue from highly targeted ads, using even better profiles than Google Search or even Facebook could build, may be non-negligible.

Commercial ads could be a smaller revenue source than political ads.

zarzavat

Political ads would destroy the value proposition. That would be an incredibly short-sighted move.

Chats with LLMs are often intensely personal, you don't want to create the perception that politicians have any level of access to it.

chromacity

> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible

So why chase this negligible revenue?

tombert

I suspect so that they get people used to ads so they can spam them with enough to make it not negligible. If they put millions of ads all over the page right away, it would turn everyone off. If they do the boiling frog thing and ease you into it, then people might not notice.

famouswaffles

>The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans.

Unless they botch the implementation, it's not going to be negligible with ~800M+ free subscribers.

kingstnap

The real question is what do you get out of advertising to people who don't have any money? Kinda squeezing blood from a stone.

You'd be better off saying you use those people to A/B test changes and filling idle GPU batches while giving paying customers a more consistent experience.

boelboel

There are lots of people who are willing to spend a lot of money on 'real things' while not spending anything on bytes. It's the tech companies which have created this expectation of free services. Many non-tech people I know are relatively wealthy and think like this.

troyvit

> The real question is what do you get out of advertising to people who don't have any money?

Psychographic data. What they learn from these folks will create the most powerful manipulation technology yet.

ldoughty

A bunch of people pay to remove ads, and a bunch of people are happy to give businesses their attention (view ads) in exchange for services, i.e. Gmail, YouTube, but don't feel they use enough / are annoyed enough to warrant $15-25/month.

Some brands are okay with impressions: you can build trust in your product by advertising it for weeks/months, and when the user does make a purchase, that brand is on the mind.

suttontom

This is like asking why you'd advertise on YouTube to people who aren't paying for YouTube Premium.

whiplash451

That's how it begins.

giancarlostoro

> The ads are for the free tier and new $8 ad-supported plan.

Dang.

> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.

Yeah, I guess this time around Sam Altman can't be lying about how many Monthly Active Users he has.

mh-

That's not how I read that sentence at all. Maybe I've just been speaking VC for too long.

What he meant was: "I'm going to get everybody in the world access to great services. Doing so means monetizing somehow. Ads will be the last way I choose to do that, but I will if it's the only way I can figure out how to achieve that goal."

normie3000

You've said the same thing.

> Ads will be the last way I choose to do that

The implication is that they've exhausted all other options.

mh-

I haven't said the same thing as the parent commenter:

> So, is this OpenAI announcing they're strapped for cash?

It by no means conveys that. It means they haven't figured out another way to monetize something they want to do; it indicates nothing about their financial situation. It means they don't want to sell something at a loss perpetually while they figure it out.

ahepp

What other options are there?

andai

Well, they want to give everyone access for free. That's very explicitly their mission.

We don't seem to have invented a way of doing that which isn't ads.

Hence, every other online platform.

...Except this one, which is funded by... benevolence? :) Come to think of it, Archive.org and Wikipedia also seem to have found a way.

I don't think that model scales to "free LLM for everyone" though, at least not for another decade or two.

swaritshukla

I also remember him saying that on the Lex Fridman podcast. In my opinion, they will only try this on a handful of users and see if it works out or not, just like Anthropic removed Claude Code from the Pro plan for a very small percentage of users just for testing purposes. It will all boil down to how people respond to the ads rollout.

eleveriven

The uncomfortable part is that "ads as a last resort" sounds very different once the product becomes one of the main places people ask for advice

RobotToaster

Abraham Lincoln was the 16th president of the United States of America. He was best known for being “Honest Abe”, writing the Emancipation Proclamation, and playing RAID: Shadow Legends, an immersive online experience with everything you’d expect from a brand new RPG title. It’s got an amazing storyline, awesome 3D graphics, giant boss fights, PVP battles, and hundreds of never before seen champions to collect and customize.

ponector

I bet he also drank a refreshing Coca-Cola beverage during his gaming sessions.

b3lvedere

That was an awesome laugh. Thanks. :)

He was also the first president ever to use NordVPN. Apply now for a super duper discount at nordvpn.com/honestabe

saalweachter

If Richard Nixon had used NordVPN, he'd still be President today.

navigate8310

Maybe a RedBull for all the dares he took to run the first government.

lpcvoid

He also regularly drinks his verification can, I heard.

shrx

The irony is that I only know about this game through memes like this. I've never seen an actual ad for it anywhere.

eleveriven

This is funny, but also exactly why ads in a conversational assistant feel different from ads in search

Xunjin

Made my day.

shevy-java

Excellent ChatGPT result.

torben-friis

These are the less worrying kind of ads in our future.

Seeing how google has been fighting SEO for ages, what's going to happen when companies figure out how to inject ads into the model?

We haven't yet seen the problem of adversarial content in play, I think.

masfuerte

> Seeing how google has been fighting SEO for ages

I wish people would stop repeating this canard. Google gave up fighting SEO in about 2020. Emails that came out during antitrust discovery revealed that Google had decided to include advert-laden SEO trash in search results because it made them more money. This is why search quality has drastically declined in the last several years.

mgambati

The models already advertise because they were trained on massive datasets that reference big brands.

Ask for suggestions for a new pair of shoes. What brand do you think it will suggest: Nike, Adidas, or some random small one?

jameshush

I expected the same outcome you're describing here, but in my experience this hasn't been the case. I've been researching new acoustic guitars to purchase, and I've been getting an equal number of suggestions from the major brands and the small brands.

Part of it though is I'm giving lots of context (e.g. guitar player for 10+ years, huge Opeth fan, looking for something with as close to an Ibanez style neck as possible under $1000)

Jataman606

I think the guitar market is kind of an exception, because it is pretty normal for guitar players to search for "guitar like Fender but cheaper". There are tons of reddit/forum discussions about this, and those small brands are actually very well known in the community, because the majority of guitar players play cheap instruments. YouTuber Phillip McKnight often talked about how cheap guitars move in ridiculous volumes compared to more expensive ones like Gibson or Fender.

tyre

I think if you ask something generic like “shoes”, this could be true.

When I’ve worked with Claude on finding brands for fashion (e.g. here’s a small watchmaker I like, what are similar options?) it does research and picks great options. Some are big, others are small producers.

tikotus

I've had two people reach out to me asking about one of my services. They both said ChatGPT recommended it to them.

My service does kind of exist. It's a small tool I created for a client while retaining full rights to the tool. So I created (vibe coded) a site around it, making it look like an established service. Even ran google ads for it for a while.

The service still doesn't show up on google with relevant search terms. There hasn't been another client. I forgot about the service. And then ChatGPT started recommending it to people.

I wonder what I did to achieve this. Did vibe coding the business page inject it into ChatGPT's training data?

SquareWheel

> Did vibe coding the business page inject it into ChatGPT's training data?

No, at least not directly. Inference does not train models. It is possible that OpenAI may separately collect the chat data, clean it, and feed it back into the model for future iterations. Or they could have extracted URLs for future indexing.

More likely though, I suspect, is your site just managed to be indexed naturally, and LLMs are very efficient at matching obscure data to relevant queries.

navigate8310

Interesting. Maybe someone could run bot farms that ask variants of the same question and subtly nudge the model by replying reasons why the model's recommended service A is inferior to service B. Or other forms of adversarial question answers sessions.

tosh

It's quite possible that SEO-wise the site does not make the cut into the top x Google results but is still findable and considered by ChatGPT when it does its searches.

Especially in a longer ChatGPT conversation or via deep-research or more agentic modes (e.g. "Pro").

ChatGPT spends quite some time and diligence on searching.

Great for content that is not hyper search engine optimized but still (or even more) relevant. It bubbles up.

dbtc

I think the chatgpt backend basically includes indexed web like Google, or any other search engine.

Could Google be actively trying to skip generated-looking sites/content?

autoexec

The worrying kinds of ads won't be from SEO tricks doing sneaky things without OpenAI's approval. OpenAI will just quietly take money from people who will pay to have the AI casually promote their products or their talking points in the output, or suppress mentions of competing products or talking points in the output. Maybe they won't even take money for this and the people running OpenAI will do it themselves to promote or censor whatever they want. Either way, it won't look like ads to the user. It's just what happens when greedy people gain control over how other people get their information.

dbtc

Yeah this is bad news. A $1b+ campaign budget could pull some strings.

destring

It is already happening. Generative Engine Optimization.

Foobar8568

My client paid 5 digit consulting fee for that shit.

tencentshill

They spam HN with their slop-coded tools and websites.

Andrex

This already happened and I believe there's even new site policy about it...

tvbusy

On the positive side, LLMs are trained on real data, so the default is for them to tell you what the data showed. Companies will certainly try to exert their influence, but that is extra effort fighting against an enormous amount of data, just like attempts to censor sensitive topics. And any context used for ads means less context for the user, which in turn makes the tool less useful.

jcims

I experimented with this way back when custom GPTs were first released (looks like late 2023). There are a few slash commands you can use to suggest what product to inject, how overt to be, etc., and a generic /operator command to send whatever you like 'out of band' from the chat.

https://chatgpt.com/g/g-juO9gDE6l-covert-advertiser

One of the most interesting things is when it starts pitching a product and you start interrogating it about why it picked that product. I haven't used it in probably a year so it may not do the same thing now, but back then it 100% lied consistently and without any speck of remorse. It was rather eye opening.

Edit: Tried again, it didn't lie this time lol - https://chatgpt.com/share/69f16aa4-c008-83ea-92b3-51f16ca77d...

csa

> what's going to happen when companies figure out how to inject ads into the model?

In certain domains, this has already happened.

Aurornis

The ads are in the free tier and the new ad-supported $8/month plan.

Every time this comes up there are comments assuming that ads are being injected into the normal plans, but these are for the free tier and the new Go plan which warns you that it includes ads when you sign up.

ceejayoz

Cable TV was once ad free. So was Netflix. Companies just can’t help themselves.

DonsDiscountGas

Netflix is still ad free for the right price. It's not like companies have some fetish for advertising specifically, it's that it brings in money. Often more money than a user would be willing to pay for the service.

pbasista

> Every time this comes up there are comments assuming that ads are being injected into the normal plans

No. The distinction between the unpaid vs. cheap vs. expensive plans is irrelevant here.

The main controversial point here is the inclusion of ads in the responses of an LLM-backed AI tool. It does not matter at all in which tier that occurs.

The discussion is about the fact that it occurs in the first place.

Aurornis

> The main controversial point about this topic is to include ads in the output of an LLM-backed AI tool responses.

Except the article very clearly explains that the ads are separate from the AI responses.

pbasista

> the ads are separate from the AI responses

Ok. But that is in my opinion a distinction without a difference.

It does not matter whether the ads are built by the AI itself and seamlessly embedded into the regular responses, or made separately and placed into the same window as the AI's output.

The bulk of the controversies in relation to doing this are still roughly the same, whatever the origin of the ads may be.

catcowcostume

Until next quarter earnings, when ads become a feature in more expensive plans.

darepublic

Wouldn't it require a lot of training to blend ads into the conversation without being too obvious or messing up the results?

WD-42

Since they are served as distinct events, I would think they should be easy to block.

Once the ads are injected directly into the main response is when things get interesting.
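If the ads really do arrive as separately tagged events in the response stream (an assumption; the event shape and the "ad" type below are hypothetical), a client-side filter is only a few lines:

```python
import json

def filter_ad_events(stream_lines):
    """Drop ad-tagged events from a streamed response.

    Assumes, hypothetically, that ad payloads arrive as distinct
    events with a type field like "ad", separate from the "message"
    events carrying the model's actual answer.
    """
    kept = []
    for line in stream_lines:
        event = json.loads(line)
        if event.get("type") != "ad":
            kept.append(event)
    return kept

stream = [
    '{"type": "message", "text": "Here are three budget laptops..."}',
    '{"type": "ad", "text": "Sponsored: MegaLaptop 9000"}',
    '{"type": "message", "text": "...each under $500."}',
]
clean = filter_ad_events(stream)
```

This only works as long as the ads stay out-of-band; once they are woven into the message events themselves, tag-based filtering stops helping.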

kardos

> Once the ads are injected directly into the main response is when things get interesting.

This would be where you post-process the LLM response with a second LLM to remove the ad..
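A rough sketch of that idea: a post-processing pass that asks a second model whether each sentence reads like an ad, then drops the flagged ones. The `toy_judge` below is a rule-based stand-in for the second LLM, and all names and marker words are hypothetical:

```python
def strip_ads(response: str, judge) -> str:
    """Post-process a model response, dropping sentences the judge
    flags as promotional. `judge` stands in for the second LLM:
    given a sentence, it returns True if it reads like an ad.
    """
    sentences = [s.strip() for s in response.split(".") if s.strip()]
    kept = [s for s in sentences if not judge(s)]
    return ". ".join(kept) + ("." if kept else "")

# Toy rule-based judge used here in place of a real second-model call.
def toy_judge(sentence: str) -> bool:
    return any(w in sentence.lower()
               for w in ("sponsored", "buy now", "limited offer"))

raw = ("Python uses reference counting. "
       "Sponsored: try AcmeCloud today. "
       "Cycles are handled by the gc module.")
cleaned = strip_ads(raw, toy_judge)
```

The catch is the arms race: sponsored content blended into the substance of the answer, rather than appended to it, gives the judge nothing clean to cut.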

naruhodo

I think it will be difficult to remove bias when you ask a model to compare alternative products. The model will simply lie, as a biased human would, and you will need to consult multiple models for a diversity of opinion and presumably use a "trusted" model to fuse the results. Anonymity will be a key tool in reducing the model's ability to engage in algorithmic pricing.

Super easy. Barely an inconvenience.
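The multi-model fusion step could be sketched as a simple majority vote; the model names and product picks below are hypothetical:

```python
from collections import Counter

def fuse_recommendations(model_answers):
    """Majority-vote fusion over several models' product picks.
    One advertiser-nudged model is outvoted as long as most are clean.
    Returns the winning pick and the fraction of models that agreed.
    """
    votes = Counter(model_answers.values())
    winner, count = votes.most_common(1)[0]
    return winner, count / len(model_answers)

answers = {  # hypothetical picks for "best budget NAS drive"
    "model_a": "DriveX",
    "model_b": "DriveX",
    "model_c": "SponsoredDrive",  # one model nudged by an advertiser
}
pick, agreement = fuse_recommendations(answers)
```

A low agreement fraction would itself be a useful signal that at least one model in the panel is biased.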

Terr_

Not only that, but the underlying model may be tuned to omit mentions or data about competitors entirely, an absence which can't easily be filtered.

Extortionate economic shadowbanning, here we come.

normie3000

> will simply lie, as with a biased human opinion

Is this really how bias works?

tempest_

This is already how email works in the corporate world.

A writes email with chatgpt to B.

B sees big blob of text and summarizes email with chatgpt.

Adding an LLM in the middle is just the next step.

torben-friis

It's like one of those memes about the worst possible date picker, except for a communication system.

devmor

Then you just end up in an arms race that ultimately leads to photocopy-of-a-photocopy output.

mihaaly

... and replace it with two.

ihsw

[dead]

lmbbuchodi

You can block these URLs: ||bzrcdn.openai.com^ and ||bzr.openai.com^. It won't blanket-block everything but will significantly reduce the telemetry collected.

nazcan

And that's why you gotta just use one domain. Or mix ads and important content on one domain.

sheiyei

No, wrong lesson. That's why you use uBlock Origin.

TZubiri

Blocking transparent ads is not a good idea. The consequence is that you will be fed opaque ads.

michaelt

> Blocking transparent ads is not a good idea. The consequence is that you will be fed opaque ads.

Doesn't history show us you just get both?

You pay to get into the movies, then they show you adverts before the film, then the film includes paid product placement of cars, computers, phones, food, etc.

You watch youtube ads, to see a video containing a sponsored ad read, where a guy is woodworking using branded tools he was given for free.

You search on Google for reviews and see search ads, on your way to a review article surrounded by ads, and the review is full of affiliate links.

otabdeveloper4

> Doesn't history show us you just get both?

No. "Opaque ads" are usually heavily regulated out of existence by government legislation.

saghm

I don't buy this premise. Nothing stops a company from trying to hide ads in the first place, and plenty of them do. Ad blockers for web content have been a thing for years, and using an ad blocker has continued to be strictly a better experience regardless of how many "organic" ads are present on a page.

TZubiri

[flagged]

pbasista

Your implication that "you will be fed" other ads if you block the main ones is unsubstantiated. But even if it were true, it would not matter, because the so-called "opaque" ads can, and in my opinion should, be blocked as well.

I think that in general blocking all ads is always a good idea.

The reason is that there is no negative consequence in doing so. A person has absolutely no obligation, not even an implied one, to watch or otherwise consume any ad. I think that as long as there are ways to remove or block ads, people should use them.

That being said, if the companies wish to intertwine their products with ads that are indistinguishable from the actual content and therefore unblockable, it is okay. They have the right to do that if they want.

But, in the same fashion, the customers have every right to turn away from all such products. And never consider using them ever again.

WD-42

I’m not obligated to look at or listen to anything on my own devices, much less in my own home.

estimator7292

What possible reason could they have to not always run both? It would make zero sense to leave that money on the table

TZubiri

It's simpler to do one thing than to do two. You make a choice and you do that.

Could they be doing opaque ads right now without our knowledge? It's possible; that would probably eventually come to light and might have legal consequences, but sure, it's possible.

But it's not a given, and your logic of "it would make zero sense to leave money on the table" is certainly not a QED, it's absolute reductionism.

rrgok

Imagine people like Sam Altman having unrestricted access to frontier models that let them plot long-term strategies toward their goals, on a timescale where you don't even realize when the plan began.

That's scary. They could push for censored models for the masses, but not for themselves.

adammarples

It would be funny to find out that OpenAI's flailing strategy so far had been the result of ChatGPT suggestions.

Razengan

Maybe ChatGPT wants OpenAI to fail so someone else can pick it up

Like how the ring slipped off Gollum's finger...

jgalt212

> That's scary. They could push for censored models for the masses, but not for themselves.

Not as scary as the AI Slop underlying Claude Code.

mvvl

"Ads don’t influence responses" - they just arrive in the same payload, measured with four layers of attribution and politely pretend to be coincidences.

Schrodinger’s monetization: completely separate, yet somehow there.

solarkraft

It’s interesting what optimizations this might spawn.

They may not be tweaking the responses for specific advertisements just yet, but what if they steer the model towards more “ad friendly” responses?

eleveriven

The most interesting part to me is not that ads exist, but how invisible the boundary becomes

benleejamin

I'd always thought that ChatGPT ads would be indistinguishable from actual content.

ticulatedspline

I think that's where they want to be. Feels like everyone knows it too: the long-term expectation is basically being able to buy ad words and have LLMs lean responses towards whatever people bought.

Seems the playing field is a bit too open, though; models are more fungible than the companies would hope, so most of the current moat is brand-based, and it seems they're not ready to go all "Black Mirror" on us just yet.

irjustin

This would be a breach of trust; short term it would work great, but long term it's too detrimental.

The same thing could've been said for search results, so at least that part is still "safe".

SchemaLoad

Long term all of the major LLM platforms will have invisible ads, influences, and propaganda woven into the content. The temptation will be irresistible for these companies.

doginasuit

I'd be surprised if product placement isn't already basically at play. Charging companies for including/prioritizing their documentation in the training data, for example. Thankfully LLMs are terrible at the subtlety it would require for a direct marketing campaign.

bix6

Oh, you think trust matters? This is capitalism, not trustism.

saghm

Well it's sure not "anti-trustism" in recent years...

PradeetPatel

Long term retention is built on brand trust and usability, then ensh*ttification happens.

nalekberov

No, this is late stage capitalism without regulation.

Brystephor

I work at a company that mainly makes money off ads. There's no doubt in my mind that the end goal is to make the ads blend into organic content and become indistinguishable. Typically that results in positive A/B metrics. It's also a reason why influencer-driven ads perform well: they seem more organic.

JumpCrisscross

> always thought that ChatGPT ads would be indistinguishable from actual content

Remember when we got upset that Google was putting ads into image search [1]?

[1] http://www.ryanspoon.com/blog/2008/12/14/google-image-search... 2008

undefined

[deleted]

phailhaus

That was the fearmongering, and it made no sense: advertisers can't put a dollar value on "the AI will kind of sort of mention you", and every conversation would need an ad. If ChatGPT always snuck in a brand mention, even on the simplest questions, everyone would hate it.

Ad technology is really old. They're just going to use the same proven tech that has a track record of creating billionaires: intersperse content with sponsored blocks.

acdha

I don't think that's a fair dismissal. You see ads all over media websites because rates have been plummeting as consumers tune out ads, and one main reason they do is that ads are so obtrusive and repetitive. That's exactly what LLMs change: I'm sure we'll see regular ads on AI apps because the companies have trillions of dollars to repay, but advertisers would pay a lot more for openings where they aren't _forcing_ their message as a distraction but are instead able to insert it fairly naturally into a context where the user is engaged.

The entire history of advertising before the web was companies estimating a dollar value on “awareness” when they couldn't measure direct referrals and every business in the world has gotten a lot better at measuring sales since then. It's not going to be transformative but if, say, Toyota got ChatGPT to say their vehicles were a better value than Ford's I suspect they'd be able to tell pretty quickly whether sales were improving relative to the competition and would pay well for that to continue.
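That kind of measurement is standard practice: a difference-in-differences estimate compares the advertised brand's sales growth against a competitor's over the same period, netting out the overall market trend. The figures below are hypothetical illustration values:

```python
def did_lift(treated_before, treated_after, control_before, control_after):
    """Difference-in-differences: the advertised brand's sales growth
    minus a competitor's growth over the same period, which nets out
    whatever the whole market was doing anyway.
    """
    treated_growth = (treated_after - treated_before) / treated_before
    control_growth = (control_after - control_before) / control_before
    return treated_growth - control_growth

# Hypothetical quarterly unit sales before/after a campaign:
# the advertised brand grew 12%, the competitor grew 4% on trend alone,
# leaving an estimated 8-point lift attributable to the campaign.
lift = did_lift(100_000, 112_000, 100_000, 104_000)
```

The estimate is only as good as the assumption that both brands would have tracked the same trend absent the campaign, but it is exactly the kind of signal an advertiser would watch to decide whether to keep paying.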

senectus1

I'm pretty sure that will be an eventual evolution of the product. The business model can't sustain itself as it is at the moment; eventually ChatGPT won't be the product... we the users will be.

blackjack_

It is one of the eternal lessons: all tech business plans eventually lead to serving ads. At least until we ban pixels / third-party tracking.

netcan

> All tech business plans eventually lead to serving ads

IDK if this is true.

The boulevard of dreams is full of failed/misguided ad-based business plans. Contempt for the business model is sometimes the reason: an implicit assumption that all you need for success is traffic and a willingness to dirty yourself.

There are only a handful of success stories. Most involved a pretty deliberate and tenacious attempt. Success typically involves some very specific and strategic positioning: data, intent, scale.

No one but Google had Google's scale for search ads; 5-10% of the market just isn't enough. You do need tracking, but the model works OK even without much targeting, because intent is built in and makes up for it. The scale required for viability, though, is very high.

Facebook ads didn't work until (a) they had pushed the envelope on targeting (to make up for lacking intent) and (b) their scale was massive. Bing, Reddit, etc. never had good ad businesses.

echrisinger

As someone who works in a data domain, I'd say it's unlikely that ads are served on a single-conversation basis in the near future, if they even are today. Any modern data org running advertising is optimizing conversion metrics (either optimizing profit via CPI increases or revenue by growing the advertising TAM, presumably).

Introducing context beyond the immediate conversation history will improve conversion rates and allow targeted advertising on wider topics or higher-CPI topics (like financial products), hence it's inevitable.
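The economics behind chasing higher-CPI topics can be sketched with the usual expected-value arithmetic (all rates and payouts below are hypothetical):

```python
def expected_revenue_per_impression(ctr, cvr, payout):
    """Expected value of one ad slot: click-through rate times
    conversion rate times advertiser payout per conversion.
    """
    return ctr * cvr * payout

# Hypothetical rates: identical engagement, very different payouts.
generic = expected_revenue_per_impression(0.01, 0.02, 5.0)      # commodity topic
financial = expected_revenue_per_impression(0.01, 0.02, 150.0)  # high-CPI topic
```

With identical click and conversion rates, the financial-products slot is worth thirty times the commodity one, which is why targeting context toward such topics is the obvious optimization.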
