Cool URIs Don't Change (1998)

76 comments · December 4, 2021

userbinator

Microsoft is probably one of the worst offenders, especially in the past few years. It seems like they're actively destroying documentation and making it hard to find important information, so much so that I often use archive.org instead.

lelandfe

Apple may be worse. They move documentation to the Documentation Archive and don't replace it. The Archive is a giant mass of "outdated", no longer updated documents, each assigned to 1 category. The Archive only has a title search now; full text search broke years ago.

All documentation on Help Books was archived, for instance. It's been 7 years since those docs were last updated, and they now contain inaccuracies – but there are no other official guides. Check out that UI: https://developer.apple.com/library/archive/documentation/Ca...

This is a technology that is still used. Nearly all of Apple's own apps have Help Books, including new ones like Shortcuts. Yet they have absolutely no official documentation on using that technology.

seba_dos1

> Check out that UI

Off-topic, but damn - tone down that candiness a bit and it looks much better and cleaner than what's there today.

cunthorpe

That page looks great, I don't know what you're talking about. Are you bothered by 2 very plain gradients? I don't see candy at all

hereforphone

I don't know what it is that makes Microsoft so inelegant. Not only what you've said, but their APIs and programming environment in general are ugly and (I presume) unwieldy. Their apps (I just switched to Excel / OneNote / etc. from Google) have bugs that don't exist in competitors' products. The other day I couldn't use OneNote because my Internet went down (?!). Same for Excel: it doesn't reload immediately upon reconnect the way Google Sheets did.

I don't get Microsoft. They're huge. They hire a lot of people. Their products are kludges.

leetcrew

backwards compatibility is a major reason why their APIs are so ugly. I always assumed that was a core company value. ironic that they turn around and break links to their own docs.

SavantIdiot

There are over a thousand redirects in an Apache config file for a company I contracted with. The website was 20 years old when I worked there; it's now 26, and AFAIK they still stick to this principle. And it's still a creaky old LAMP stack. It can be done, but only if this inequality holds:

  URL indexing discipline > number of site URLs

(There was no CMS, every page was hand-written PHP. And to be frank, maintenance was FAR simpler than the SPA frameworks I work with today.)
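For a sense of what that kind of Apache config can look like, here is a minimal sketch; the paths are invented, not the actual site's, and a real file would simply have many more of these lines:

    # Each legacy path gets a permanent redirect to wherever the content lives now.
    Redirect permanent /products/widget.php /catalog/widgets/
    Redirect permanent /about/team.html     /company/people/

    # A pattern rule can cover a whole family of old URLs at once, e.g. article
    # pages keyed by a numeric id (requires mod_rewrite).
    RewriteEngine On
    RewriteRule ^/?articles/([0-9]+)\.php$ /posts/$1/ [R=301,L]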

grumbel

So what happened to that URN discussion? It has been 20 years. Have there been any results I can actually use on the Web today? I am aware that BitTorrent, Freenet and IPFS use hash-based URIs, though none of them are really part of the actual Web. There is also RFC 6920, but I don't think I have ever seen that one in the wild.

Hashes aside, linking to a book by its ISBN doesn't seem to exist either as far as I am aware, at least not without using Wikipedia's or books.google.com's services.

spc476

Twenty years on, and I can still link to any item at Amazon as long as I have its ASIN, using the template:

    https://www.amazon.com/exec/obidos/ASIN/<asin id>

Say what you will about Amazon (and Jeff Bezos), but I don't think they've broken a URL to any product of theirs ever.

Causality1

Not broken perhaps, but I regularly click a link to a product and get a page about a totally different product.

nikisweeting

My understanding is that product pages aren’t immutable, so sellers sometimes change their content instead of making a new page to keep the SEO the original page accumulated.

Having public edit history + permalinks to specific immutable revisions of product pages would be nice, but I could see how they're not incentivized to add it because they don't want people demanding an older price / feature that got edited out later on.

paleogizmo

IEEE Xplore at least uses DOIs for research papers. Don't know if anyone else does, though.

pmyteh

Everyone uses DOIs for research papers, and https://doi.org/<DOI> will take you there. In fact, I think the URI form is now the preferred way of printing DOIs.

dredmorbius

Cool rules of thumb don't run contrary to human behaviour and/or rules of nature.

If what you want is a library and a persistent namespace, you'll need to create institutions which enforce those. Collective behaviour on its own won't deliver, and chastisement won't help.

(I'd fought this fight for a few decades. I was wrong. I admit it.)

derefr

People can know what good behaviour is, and not do good; that doesn't mean it isn't helpful to disseminate (widely-agreed-upon!) ideas about what is good. The point is to give the people who want to do good, the information they need in order to do good.

It's all just the Golden Rule in the end; but the Golden Rule needs an accompaniment of knowledge about what struggles people tend to encounter in the world—what invisible problems you might be introducing for others, that you won't notice because they haven't happened to you yet.

"Clicking on links to stuff you needed only to find them broken" is one such struggle; and so "not breaking your own URLs, such that, under the veil of ignorance, you might encounter fewer broken links in the world" is one such corollary to the Golden Rule.

dredmorbius

In this case ... it's all but certainly a losing battle.

Keep in mind that when this was written, the Web had been in general release for about 7 years. The rant itself was a response to the emergent phenomenon that URIs were not static and unchanging. The Web as a whole was a small fraction of its present size --- the online population was (roughly) 100x smaller, and it looks as if the number of Internet domains has grown by about the same (1.3 million ~1997 vs. > 140 million in 2019Q3, growing by about 1.5 million per year). The total number of websites in 2021 depends on what and how you count, but is around 200 million active and 1.7 billion total.

https://www.nic.funet.fi/index/FUNET/history/internet/en/kas...

https://makeawebsitehub.com/how-many-domains-are-there/

https://websitesetup.org/news/how-many-websites-are-there/

And we've got thirty years of experience telling us that the mean life of a URL is on the order of months, not decades.

If your goal is stable and preserved URLs and references, you're gonna need another plan, 'coz this one? It ain't workin' sunshine.

What's good, in this case, is to provide a mechanism for archival, preferably multiple, and a means of searching that archive to find specific content of interest.

Andrex

Are losing battles still worth fighting?

Personally I believe yes, because there are still those that benefit in the interim. Compare that to not bothering to fight at all in the first place.

tjoff

For the vast majority of cases this is quite a simple problem. Most don't care, though.

A rewrite of a site does not rewrite all the content. The database might have been migrated, but all the information is still there, and the conversion process has everything it needs to do the final step: preserving one ID, or one string, that can trivially map an old URL to a new one that contains the same actual content.

Sure, some intermediate pages might get lost, but those aren't particularly valuable anyway and aren't something one usually links to directly. Don't let perfect get in the way of good enough.
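A minimal sketch of that final mapping step, assuming the old URLs carried a numeric ID that survived the migration (the URL patterns here are invented):

    import re
    from typing import Optional

    # Hypothetical: old URLs looked like /article.php?id=123,
    # new ones look like /posts/123/. One rule covers every old link
    # as long as the ID survived the migration.
    OLD_URL = re.compile(r"^/article\.php\?id=(?P<id>\d+)$")

    def redirect_target(old_url: str) -> Optional[str]:
        """Return the new URL for an old one, or None if it doesn't match."""
        match = OLD_URL.match(old_url)
        if match is None:
            return None
        return "/posts/{}/".format(match.group("id"))

    assert redirect_target("/article.php?id=123") == "/posts/123/"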

serverholic

Collective behavior can work if it’s incentivized.

dredmorbius

Not where alternative incentives are stronger.

Preservation for infinity is competing with current imperatives. The future virtually always loses that fight.

greyface-

June 17, 2021, 309 points, 140 comments https://news.ycombinator.com/item?id=27537840

July 17, 2020, 387 points, 156 comments https://news.ycombinator.com/item?id=23865484

May 17, 2016, 297 points, 122 comments https://news.ycombinator.com/item?id=11712449

June 25, 2012, 187 points, 84 comments https://news.ycombinator.com/item?id=4154927

April 28, 2011, 115 points, 26 comments https://news.ycombinator.com/item?id=2492566

April 28, 2008, 33 points, 9 comments https://news.ycombinator.com/item?id=175199

(and a few more that didn't take off)

emmanueloga_

I know it seems to be part of HN culture to make these lists, but I'm not sure why. There's a "past" link with every story that provides a comprehensive search for anyone interested in past discussions :-/

dredmorbius

Immediacy and curation have value.

Note that dang will post these as well. He's got an automated tool to generate the lists, which ... would be nice to share if it's shareable.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

scoot

I love that people think that @dangbot is a person. (Yes, of course there's a person behind the bot...)

amenghra

    After the creation date, putting any information in the name is asking for trouble one way or another.

Clearly these suggestions predate SEO.

mro_name

and postdate :-)

tingletech

that URL changed; it used to start with `http:` -- now it starts with `https:` -- not cool!

detaro

The HTTP URL still works fine; it sends you to the right place.

laristine

Not exactly, though: it only redirects you to the HTTPS version if it was set up that way. Otherwise, it will show a broken page.
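For context, that setup is a small, one-time piece of server configuration; in Apache, for instance, it can be roughly this (hostname hypothetical):

    <VirtualHost *:80>
        ServerName example.org
        # Send every plain-HTTP request to the same path on HTTPS,
        # so old http:// links keep resolving.
        Redirect permanent / https://example.org/
    </VirtualHost>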

detaro

but the entire point of the rule is that you should set up your sites so that old URLs continue to work.

mro_name

I can't follow – does it or doesn't it?

nicbou

This is a big problem for me. I cite sources on my website and people frequently use them, but the German government seems hell-bent on rotating their URL scheme at least once a year for no reason. URLs to pages that still exist keep changing. I struggle to refer to anything on their websites.

Groxx

Any favorite strategies for achieving this in practice, e.g. across site infrastructure migrations? (changing CMS, static site generators, etc)

Personally, about the only things that have worked for me are UUID/SHA/random ID links (awful for humans, but relatively easy to migrate in a database) or hand-maintaining a list of all pages hosted and hand-checking them on changes. Neither is a Good Solution™ imo: one's human-unfriendly, and the other's impossible to scale, has a high failure rate, and rarely survives migrating between humans.

nikisweeting

On the personal level: For every markdown article/README I write, I pipe it through ArchiveBox / Archive.org and put (mirror) links next to the originals in case they go down.

At the organizational level: my company has a dedicated “old urls” section in our urls.py routes files where we redirect old URLs to their new locations. Then on critical projects we also have a unit test that appends all URLs ever used to a tracked file in CI and checks that they still resolve or redirect on new deployments. Any 404 for a legacy URL is considered a release-blocking bug.
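A rough sketch of what such a legacy-URL section might look like in a Django urls.py (the paths and targets below are invented for illustration):

    # urls.py -- hypothetical excerpt; the legacy paths below are invented.
    from django.urls import path
    from django.views.generic import RedirectView

    urlpatterns = [
        # ... current routes ...

        # Old URLs: every route ever shipped keeps working via a
        # permanent redirect to its new home instead of being deleted.
        path(
            "blog/2019/how-we-deploy.html",
            RedirectView.as_view(url="/posts/how-we-deploy/", permanent=True),
        ),
        path(
            "docs/v1/api.html",
            RedirectView.as_view(url="/docs/api/", permanent=True),
        ),
    ]

The CI check described above then only has to request every URL in the tracked file and treat anything other than a 200 or a redirect as a release blocker.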

bkeating

Yer darn right they don't change and this is one cool URL because I remember reading this years ago in exactly the same place.

Here is a relevant Long Bet that I think about often (only has one year left to go!) https://longbets.org/601/ "The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years."

Jiro

The reason this advice hasn't been taken in the last 23 years is that, among all the questions and answers on this page, one question is missing: "Doing the things you advised us to do in the other questions costs $X. (Or costs time, and our time is worth $X.) Will you be paying us $X?"

RotaryTelephone

Them: There seems to have been a misunderstanding. We thought it was clear that we'll be equal partners in this startup. You get 3% equity which is MORE than generous given that this is my life's dream I'm sharing with you (here's a 5yr NDA btw restricting you from working anywhere else in the same industry for that duration of time). So go ahead and start asap. We have all the ideas and you do all the coding, for free, this is called skin in the game. Then after the product works we'll all make lots of money! That's why none of us take any salary. We all work. Us by providing ideas and you by coding hard and giving 110%. Also please sign here stating that you are not an employee but a contractor, even though we'll call you an employee and treat you as such.

mrloba

URIs are hierarchical, and hierarchies are notoriously difficult to get right. Even if you get them right, the definition of right often changes over time. In my experience, things have to be flat to be static. IDs seem to work pretty well.