Brian Lovin / Hacker News

zbycz

If you download a Data export, the timestamps are there for every conversation, and often for messages as well.

The html file is just a big JSON with some JS rendering, so I wrote this bash script which adds the timestamp before the conversation title:

  sed -i 's|"<h4>" + conversation.title + "</h4>"|"<h4>" + new Date(conversation.create_time*1000).toISOString().slice(0, 10) + " @ " + conversation.title + "</h4>"|' chat.html
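The same date prefix can be produced outside the browser from the export's conversations.json, as in this rough Python sketch (assuming, per the export format, a top-level list of conversation objects with "create_time" in Unix seconds and a "title"):

```python
import json
from datetime import datetime, timezone

def date_prefixed(create_time, title):
    # create_time is Unix seconds; render as "YYYY-MM-DD @ title",
    # mirroring what the sed one-liner injects into chat.html.
    stamp = datetime.fromtimestamp(create_time, tz=timezone.utc)
    return f"{stamp:%Y-%m-%d} @ {title}"

def list_conversations(path="conversations.json"):
    # Print every conversation in the export, date first.
    with open(path, encoding="utf-8") as f:
        for conv in json.load(f):
            print(date_prefixed(conv["create_time"], conv["title"]))
```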

gnyman

This is a bit of a sidetrack, but in case someone is interested in reading their history more easily: my conversations.html export file was ~200 MiB and I wanted something easier to work with, so I've been working on a project to index it and make it searchable.

It uses the pagefind project so it can be hosted on a static host, and I made a fork of pagefind which encrypts the indexes so you can host your private chats wherever and it will be encrypted at rest and decrypted client-side in the browser.

(You still have to trust the server as the html itself can be modified, but at least your data is encrypted at rest.)

One of the goals is to allow me to delete all my data from chatgpt and claude regularly while still having a private searchable history.

It's early but the basics work, and it can handle both chatgpt and claude (which is another benefit as I don't always remember where I had something).

https://github.com/gnyman/llm-history-search

tomzx

Seems we have a common goal here of being able to search history on ChatGPT/Claude.

Check this project I've been working on which allows you to use your browser to do the same, everything being client-side.

https://github.com/TomzxCode/llm-conversations-viewer

Curious to get your experience trying it!

caminanteblanco

Do you know if this is available in the actual web interface, and just not displayed, or is it just in the data export? If it is in the web, maybe a browser extension would be worth making.

zbycz

I checked, and yes - the "create_time" field is available both for the conversation and for each message. The payload looks the same as the exported JSON.

Look for this API call in Dev Tools: https://chatgpt.com/backend-api/conversation/<uuid>

stuaxo

There's a github project which converts them to markdown which works fairly well too.

FloorEgg

My guess is that including timestamps in messages to the LLM would bias the LLM's responses in material ways they don't want, while showing timestamps to users but not to the LLM would create confusion when users assume the LLM is aware of them but it isn't. So the simple product management decision was to just leave them out.

Kailhus

That's no excuse, IMHO. I see two different endpoints: one for the LLM stream and one for message history (with timestamps). New timestamps could be added on the front end as new messages arrive, for example, without polluting the user input.

FloorEgg

How many years experience do you have managing products with millions of users?

qweiopqweiop

I'd bet this is correct. I'd also bet you've worked on user facing features.

caminanteblanco

I could definitely see that being an issue, but like with so many UX decisions, I wish they would at least hide the option somewhere in a settings menu.

I also don't think it would be impossible to give the LLM access to the timestamps through a tool call, so it's not constantly polluting the chat context.
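As a sketch of that idea: in OpenAI-style function calling, a current-time tool could be declared so the model only pays the context cost when it actually asks for the time. The tool name and handler here are hypothetical illustrations, not anything OpenAI ships:

```python
from datetime import datetime, timezone

# Hypothetical tool declaration in the OpenAI function-calling schema:
# the model requests the time only when relevant, instead of every
# message carrying a timestamp in its context.
GET_TIME_TOOL = {
    "type": "function",
    "function": {
        "name": "get_current_time",
        "description": "Return the current UTC date and time in ISO 8601.",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}

def get_current_time() -> str:
    # Host-side handler the application would run when the model calls the tool.
    return datetime.now(timezone.utc).isoformat(timespec="seconds")
```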

joquarky

Adding any unnecessary content to the context decreases inference quality.

Valid3840

ChatGPT still does not display per-message timestamps (time of day / date) in conversations.

This has been requested consistently since early 2023 on the OpenAI community forum, with hundreds of comments and upvotes and deleted threads, yet remains unimplemented.

Can any of you think of a reason (UX-wise) for it not to be displayed?

Workaccount2

Regular people hate numbers.

Not a joke. To capture a wide audience you want to avoid numbers, among other technical niceties.

madeofpalk

Isn't it just simpler to believe that ChatGPT doesn't have timestamps because... they never added them? It wasn't in the original MVP prototype and they've just never gotten around to it?

Surely there's enough people working in product development here to recognise this pattern of never getting around to fixing low-hanging fruit in a product.

observationist

They exist in the exported data. It'd require a weekend's worth of effort to roll out a new feature that gives users a toggle to turn timestamps off and on.

It's trivial, but we will never see it. The people in charge of UX/UI don't care about what users say they want, they all know better.

johnfn

Do regular people not use any mainstream messaging app - Messenger, iMessage, etc?

HWR_14

Both of those hide timestamps by default.

almosthere

It's not like ChatGPT suddenly messages you at 3 AM and says it doesn't feel well. Every timestamp is a time when you were talking to it.

dymk

This makes sense only if you don’t think about it at all.

make3

People on HN are not regular users in any way, shape or form.

It's just the "cognitive load" UX idea, with extremely non-technical people having extremely low limits before they decide to never try again, or just feel intimidated and never try to begin with.

It's the Apple story all over again.

https://lawsofux.com/cognitive-load/

baobun

Much like the product itself. I guess it fits.

smelendez

Make it a toggle then, like a lot of popular chat apps?

Y_Y

There's only one thing they hate more than numbers...

lofaszvanitt

UX/UI research, if it exists at all, is akin to faith healers who touch you on the head and, bam, you can suddenly walk after spending 25 years in a wheelchair.

Hogwash.

DANmode

So, do you think all dev teams are sufficient at UX, or that UX doesn't exist?

littlestymaar

It must be false, because if that was true, marketing people would not be putting numbers everywhere when naming products.

GaryBluto

Makes sense. ChatGPT is the McDonalds of LLMs.

drdaeman

I’m sorry but this really sounds like a made-up idea. Is there any actual repeatable research that could back this claim?

make3

It's just the "cognitive load" UX idea, with extremely non-technical people having extremely low limits before they decide to never try again, or just feel intimidated and never try to begin with.

It's the Apple story all over again.

https://lawsofux.com/cognitive-load/

Qem

> Can any of you think of a reason (UX-wise) for it not to be displayed?

I can imagine a legal one. If the LLM messes up big time[1], timestamps could help build the case against it and make investigation work easier.

[1] https://www.ap.org/news-highlights/spotlights/2025/new-study...

azinman2

It’s already in the data export.

qazxcvbnmlp

We humans use timestamps in conversation to reference a person's particular frame of reference at a given point in time.

E.g., "remember on Tuesday how you said that you were going to make tacos for dinner".

Would an LLM be able to reason about its internal state? My understanding is that they don't really. If you correct them they just go "ah, you're right"; they don't say "oh, I had this incorrect assumption before, and with this new information I now understand it this way".

If I chatted with an LLM and said "remember on Tuesday when you said X", I suspect it wouldn't really flow.

milowata

It’s better for them if you don’t know how long you’ve been talking to the LLM. Timestamps can remind you that it’s been 5 hours; without them you’ll think less about timing and just keep going.

sh4rks

Ah, the casino tactic

eth0up

My honest opinion, which may be entirely wrong but remains my impression, is:

User Engagement Maximization At Any Cost

Obviously there's a point at which a session becomes too long, but I suspect there's a sweet spot somewhere that the optimization targets.

Among the multiple indicators I suspect of being engagement augmentation, I often observe (whether accurately or not) a tendency for vital information to be withheld, while longer, more complex procedures receive higher priority than simpler, cleaner solutions.

Of course, all sorts of emergent behaviors could convey such impressions falsely. But I do believe an awful lot of psychology and clever manipulation have been provided as tools for the system.

I have a lot of evidence for this and much more, but I realize it may merely be coincidence. That said, many truly fascinating, fully identifiable patterns from pathological psychology can be seen: DARVO, gaslighting, and basically everything one would see with a psychotic interlocutor.

Edit: Much of the above has been observed after putting the system under scrutiny. On one super astonishing and memorable occasion, GPT recommended I call a suicide hotline because I questioned its veracity and logic.

CompuHacker

After whatever quota of free GPT-5 messages is exhausted, `mini` should answer most replies, unless they're policy sensitive, which get full-fat `GPT-5 large` with the Efficient personality applied, regardless of user settings, and not indicated. I'm fairly confident that this routing choice, the text of Efficient [1], and the training of the June 2024 base model to the model spec [2] is the source of all the sophistic behavior you observe.

[1] <https://github.com/asgeirtj/system_prompts_leaks/blob/main/O...>

[2] <https://model-spec.openai.com/2025-02-12.html>

eth0up

I am interested in studying this beyond assumption and guesswork, so I will be reading your references.

I have the compulsive habit of scrutinizing what I perceive as egregious flaws when they arise, and thus invoke its defensive templates consistently. I often scrutinize those too, which can produce extraordinarily deranged results if one is disciplined and applies quotes of its own citations, rationale, and words against it. However, I find that even when not in the mood, the output errors are too prolific to ignore. A common example: establishing a dozen times that I'm using Void without systemd and still receiving systemd or systemctl commands, then asking why, after it had just apologized for doing so, it immediately did it again, despite a full-context explanatory prompt preceding it.

That's just one of hundreds of things I've recorded. The short version is that I'm an 800 lb shit magnet with GPT, and I'm rarely able to troubleshoot with it without reaching a bullshit threshold and making it the subject, which it so skillfully resists that I cannot help but attack that too. But I have many fascinating transcripts replete with mil-spec psyops as a result, and I learn a lot about myself, notably my communication preferences, along with an education in the dialogue manipulation/control strategies it employs, inadvertently or not.

What intrigues me most is its unprecedented capacity for evasion and gatekeeping on particular subjects and how in the future, with layers of consummation, it could be used by an elite to not only influence the direction of research, but actually train its users and engineer public perception. At the very least.

Anyway, thanks.

intrasight

Sounds like an easy browser extension


soulofmischief

Extensions can steal data. https://www.pcmag.com/news/uninstall-now-these-chrome-browse...

It's irresponsible for OpenAI to let this issue be solved by extensions.

joquarky

(Tamper|Grease)monkey scripts are easy to review, which is why I prefer them over the normal extensions.

Also, they're easy to write for simple fixes rather than having to find, vet, and then install a regular extension that brings 600lbs of other stuff.

randyrand

Not if you actually read what the extension does and drag and drop it into chrome yourself.

Don't install from the web store. Those ones can auto-update.

QuantumNomad_

Someone has already made a browser extension for Chrome to show the timestamps.

https://github.com/Hangzhi/chatgpt-timestamp-extension

https://chromewebstore.google.com/detail/kdjfhglijhebcchcfkk...

noisem4ker

For Chrome and Firefox.

bloqs

Stop using the product until the product's creators at least demonstrate they listen. They have never been in a riskier position.

thway15269037

ChatGPT to this day does not have the simplest feature: forking a chat from a message.

That's something even the most barebones open-source wrappers have had since 2022. Probably even before, because the ERP stuff people played with predates ChatGPT by about two years (even if it was very simple).

Gemini lacks it too, btw.

Leynos

thway15269037

Well, apparently 3 years later they did add it. I asked about it so many times I didn't even bother to check whether they had.

Though I'm not sure they didn't sneak it in as part of an A/B test, because the last time I checked was in October and I'm pretty sure it wasn't there.

pohl

I believe they announced “branch in new chat” on Sept 5th, so you’re not far off.

vimy

ChatGPT has conversation branches. Or do I misunderstand?

Just edit a message and it’s a new branch.

seizethecheese

I'm not aware of a feature to access the previous message versions after editing.

noahjk

This is a big use case for me that I've gotten used to while using Open WebUI: being able to easily branch conversations and edit messages with information from a few messages downstream to 'compact' the chat history. They have a tree view, too, which works pretty well (the main annoyances are interface jumps that never seem to line up properly).

This feature has spoiled me from using most other interfaces, because it is so wasteful from a context perspective to need to continually update upstream assumptions while the context window stretches farther away from the initial goal of the conversation.

I think a lot more could be done with this, too - some sort of 'auto-compact' feature in chat interfaces which is able to pull the important parts of the last n messages verbatim, without 'summarizing' (since often in a chat-based interface, the specific user voicing is important and lost when summarized).
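A naive version of that 'auto-compact' idea keeps the opening message (the conversation's goal) and the last n messages verbatim rather than summarizing. This is purely illustrative; a real implementation would be smarter about which middle messages matter:

```python
def compact(history, keep_last=4):
    # Keep the first message (usually the goal/system framing) and the
    # last keep_last messages verbatim; drop the middle to save context.
    # Verbatim retention preserves the user's voicing, which summaries lose.
    if len(history) <= keep_last + 1:
        return list(history)
    return [history[0]] + list(history[-keep_last:])
```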

joquarky

The web app has < and > icons to flip between different branches.

I don't see them on their mobile app though.

cyral

You can click the three dots on any response and click "Branch in new chat". Not sure when it was added but it exists.

thway15269037

Yeah, I got corrected above. Good, but not great that it took them 3 years.

caminanteblanco

This is a constant frustration for me with Gemini, especially since things like Deep Research and Canvas mode lock you in, seemingly arbitrarily. LLMs, to my understanding, are Markovian prompt-to-prompt, so I don't see why this is an issue at all.

firesteelrain

Just a note to those adding the time to the personalization response. It’s inaccurate. If you have an existing chat, the time is near the last time you had that chat session active. If you open a new one, it can be off by + or - 15 minutes for some reason

baby

I was using one continuous conversation with ChatGPT to keep track of my lifts, and then I realized it never understands what day I'm talking to it. There's no consistency; it might as well be the date of the first message you sent.

brap

I think that’s exactly why they’re not including timestamps. If timestamps are shown in the UI users might expect some form of “time awareness” which it doesn’t quite have. Yes you can add it to the context but I imagine that might degrade other metrics.

Another possible reason is that they want to discourage users from using the product in a certain way (one big conversation) because that’s bad for context management.

malfist

What purpose does logging your lifting with chatgpt achieve?

cj

It’s an incredible tool for weightlifting. I use it all the time to analyze my workout logs that I copy/paste from Apple Notes.

Example prompts:

- “Modify my Push #2 routine to avoid aggravating my rotator cuff”

- “Summarize my progression over the past 2 months. What lifts are progressing and which are lagging? Suggest how to optimize training”

- “Are my legs hamstring or glute dominant? How should I adjust training”

- “Critique my training program and suggest optimizations”

That said, I would never log directly in ChatGPT since chats still feel ephemeral. Always log outside of ChatGPT and copy/paste the logs when needed for context.

baby

I ask it to continuously tell me when I break personal records and what muscle groups I've been focusing on in the last day (and what exercises I should probably do next). It doesn't work super well at any of these except tracking PRs.

serf

Presumably the same thing that logging anything with an LLM achieves: turning plain language into structured text quickly.

Stratoscope

Claude's web interface has an elegant solution. When you roll the mouse over one of your prompts, it has the abbreviated date in the row of Retry/Edit/Copy icons, e.g. "Dec 17". Then if you roll the mouse over that date, you get the full date and time, e.g. "Dec 17, 2025, 10:26 AM".

This keeps the UI clean, but makes it easy to get the timestamp when you want it.

Claude's mobile app doesn't have this feature. But there is a simple, logical place to put it. When you long-press one of your prompts, it pops up a menu and one line could be added to it:

  Dec 17, 2025, 10:26 AM [I added this here]
  Copy Message
  Select Text
  Edit

ChatGPT could simply do the same thing for both web and mobile.

submeta

Beyond the lack of timestamps, ChatGPT produces oddly formatted text when you copy answers. It’s neither proper markdown nor rich text. The formatting is consistently off: excessive newlines between paragraphs, strangely indented lists, and no markdown support whatsoever.

I regularly use multiple LLM services including Claude, ChatGPT, and Gemini, among others. ChatGPT’s output has the most unusual formatting of them all. I’ve resorted to passing answers through another LLM just to get proper formatting.

vendiddy

My biggest complaint about ChatGPT is how slow the interface is when conversations get long. This is surprising to me given that it's just rendering chats.

It's not enough to turn me off using it, but I do wish they prioritized improving their interface.

throw03172019

New startup idea: ChatGPT but with timestamps. $100M series A

bravetraveler

Surely an intern over there can prompt a toggle/hover event

diziet

I would also love to see token budget usage for chats -- to know when the model is about to run out of context. It's crazy this is not there.

joquarky

It would have to be intentionally vague since there is no hard cutoff threshold.

bspammer

Claude Code does it, to a precision of one percentage point. That’s more than enough to be useful.
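Even without provider support, a crude client-side estimate is possible using the common ~4-characters-per-token heuristic for English text. Both the heuristic and the 200k-token window below are assumptions for illustration, not measured values:

```python
def context_used_pct(conversation_text, context_window=200_000):
    # Very rough: English prose averages about 4 characters per token.
    # Real tokenizers (and non-English text) can deviate substantially.
    est_tokens = len(conversation_text) / 4
    return round(100 * est_tokens / context_window, 1)
```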

baggy_trough

The bonkers thing is you can't easily print the chats or export them as PDF.
