
susam

A little shell function I have in my ~/.zshrc:

  pages() { for _ in {1..5}; do curl -sSw '%header{location}\n' https://indieblog.page/random | sed 's/.utm.*//'; done }
Here is an example output:

  $ pages
  https://alanpearce.eu/post/scriptura/
  https://jmablog.com/post/numberones/
  https://www.closingtags.com/blog/home-networking
  https://www.unsungnovelty.org/gallery/layers/
  https://thoughts.uncountable.uk/now/
On macOS, we can also automatically open the random pages in the default web browser with:

  $ open $(pages)
Another nice place to discover independently maintained personal websites is: https://kagi.com/smallweb

unsungNovelty

Hey!!!!!

That is my website! To be fair, the hard part is keeping a personal website regularly updated without making people think it's abandoned. I don't have a regular posting cadence, so it looks like I don't touch the website at all for months. But I regularly update my posts and other sections even if there aren't any new posts.

I also wrote something similar to OP - https://www.unsungnovelty.org/posts/10/2024/life-of-a-blog-b...

And I'd also like to mention https://marginalia-search.com/ which is a small OSS search engine I have been using more and more these days. I find it great for finding IndieWeb / Small Web content.

thesuitonym

For my part, if I come across a personal site that hasn't been updated in a few months, I don't assume it's abandoned, just that the person hasn't had anything to say for a while. I'd rather see a site with updates every few months, or even once or twice a year, than one with an update every other week saying "Sorry I haven't updated."

SyneRyder

Not sure if this will be considered helpful, but if you include:

<link rel="alternate" type="application/atom+xml" href="https://www.unsungnovelty.org/index.xml" />

in the HEAD of the pages on your website, it makes autodiscovery of the RSS feed a bit easier - not just for crawlers, but also for people with RSS plugins in their browser. It will make the RSS icon appear in their browser's URL field for easy subscription. Took me a while to find the RSS link at the bottom of your pages!

unsungNovelty

Thanks. Lemme look into that and will take the necessary actions.

mikestorrent

Now this is what makes the Small Web feel alive to me... creators randomly showing up like this on HN. I feel like I used to see this kind of thing more.

sylware

Sadly this search engine is now JavaScript-only. So much for the "small" web...

SyneRyder

If that's an issue, and if you don't mind building something out yourself, Marginalia have an excellent API that you can connect to from your own personal non-Javascript meta-search engine. I did that, and I find Marginalia awesome to deal with. They're one of my favorite internet projects.

(Also, thanks for reminding me that it was time I donated something to the Marginalia project: https://buymeacoffee.com/marginalia.nu )
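A toy sketch of what such a non-JavaScript meta-search page could look like: fetch JSON from a search API and render a static, script-free HTML fragment. The endpoint shape and the result fields (`results`, `title`, `url`, `description`) are assumptions for illustration, not Marginalia's documented API; check their actual API docs for the real interface.

```python
import html
import json
import urllib.parse
import urllib.request


def fetch_results(query: str, api_base: str) -> list:
    """Query a JSON search API. The URL shape and the top-level
    'results' key are hypothetical placeholders, not Marginalia's
    actual API."""
    url = f"{api_base}/search/{urllib.parse.quote(query)}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("results", [])


def render_results(results: list) -> str:
    """Render results as a static, script-free HTML list."""
    items = "\n".join(
        '<li><a href="{}">{}</a> - {}</li>'.format(
            html.escape(r["url"], quote=True),
            html.escape(r["title"]),
            html.escape(r.get("description", "")),
        )
        for r in results
    )
    return f"<ul>\n{items}\n</ul>"
```

Rendering server-side like this is what keeps the resulting page usable without any client-side JavaScript.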

marginalia_nu

It shouldn't be. Where are you having issues?

unsungNovelty

Couple of things.

1. No, it's not JavaScript-only. https://old-search.marginalia.nu/ is still available. It is also mentioned in https://about.marginalia-search.com/article/redesign/ as going to be there for a very long time.

2. I don't think just using JavaScript makes a site bad. It's a very nice site now; I prefer it to the old version. My website doesn't use JS for any functionality yet, but I've never said never either. The need to use JS hasn't arisen; the day it does, I will use it.

But I understand the sentiment though. I used to be a no-JS guy before, but I've been softened by the need to use it professionally, enough to think: hmmm, not bad.

Noumenon72

They barely mentioned your website (fourth of five URLs, mainly talking about indieblog.page and kagi.com/smallweb), so "That is my website!" is confusing and makes it seem like you're auto-responding to a keyword.

unsungNovelty

Why should I auto-respond to a keyword? Just curious seeing it here buddy. Breathe easy.

mikestorrent

Get over yourself

ddtaylor

For anyone curious, the same works on Linux, except you use xdg-open like this:

  $ xdg-open $(pages)

sdoering

This is so lovely. Just adopted it on Arch, and set it up so that I can just type `indy n` (with "n" being any number) and it opens n pages in my browser.

Thanks for sharing.
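A hypothetical reconstruction of that `indy n` helper, based on the `pages()` function upthread (the name and default count are assumptions; `%header{location}` needs curl >= 7.83):

```shell
# Open n random small-web pages in the default browser; defaults to 1.
indy() {
  n="${1:-1}"
  # Pick the platform's "open in browser" command.
  if command -v xdg-open >/dev/null 2>&1; then
    opener=xdg-open   # Linux
  else
    opener=open       # macOS
  fi
  i=0
  while [ "$i" -lt "$n" ]; do
    # indieblog.page/random answers with a redirect; read its Location
    # header and strip any utm_* tracking suffix, as in the original.
    url="$(curl -sS -o /dev/null -w '%header{location}' \
      https://indieblog.page/random | sed 's/[?&]utm.*//')"
    "$opener" "$url"
    i=$((i + 1))
  done
}
```

The loop is POSIX-compatible, so it should work unchanged in zsh, bash, or plain sh.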

robgibbons

`indy 500`

drob518

And I thought I keep a lot of tabs open...

matheusmoreira

These curated discovery services require RSS and Atom feeds. My site doesn't even have those. Looks like I'm too small for the small web.

gzread

Same here, but I'm considering adding one. I already have HTML, so it can't be that hard to emit another format. More onerous is needing to write a blog post at least once a week.

And woe betide thee whose website isn't a blog.
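For what it's worth, a minimal Atom feed really is only a handful of elements on top of the HTML you already have; a sketch with placeholder URLs, titles, and dates:

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Site</title>
  <link href="https://example.com/"/>
  <link rel="self" href="https://example.com/atom.xml"/>
  <id>https://example.com/</id>
  <updated>2025-03-01T00:00:00Z</updated>
  <entry>
    <title>A post</title>
    <link href="https://example.com/a-post/"/>
    <id>https://example.com/a-post/</id>
    <updated>2025-03-01T00:00:00Z</updated>
    <summary>One paragraph summarizing the post.</summary>
  </entry>
</feed>
```

Serving that as a static file next to the HTML is all most feed readers need.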

thesuitonym

That's big web thinking. A small website doesn't need to be discoverable. It's supposed to be for you, and if someone else stumbles upon it, and finds it useful or entertaining, that's a bonus.

encom

I get the sentiment, but it has to be discoverable to some extent, otherwise there's no real point in publishing it on a webserver.

oooyay

One caveat: Kagi gates that repo so that it doesn't allow self-submissions, so you're only going to see the chunk of websites submitted by other people who also know about the Kagi repo.

mxuribe

But per the instructions, it seems that if you want to add your own website, you need to add 2 other small websites (that are not on the list already)... so technically it does open things up to those who are not aware of the repo, assuming their site is pulled in when someone wants to add their own website. Obviously this scales slowly... but I think that's kinda the point, eh? Nevertheless, for every 1 person wanting to add their stuff, 2 others would technically get added, I guess.

See: https://github.com/kagisearch/smallweb?tab=readme-ov-file#%E...

skrtskrt

Yeah I’ve added my own site along with 3 others and the PR was merged in an hour.

Honestly the hard part was that a lot of the sites I wanted to submit were already there!

ramblin_ray

Thanks for the info!

If anyone wants to join up and add our sites together, here's mine:

https://yesteryearforever.xyz/

undefined

[deleted]

viscousviolin

That's a lovely bit of automation.

deadbabe

What do you mean automation, he’s not even using any AI agent!

dwedge

That top one only updates once a year. Not saying that as a criticism, just that he was lucky to have updated recently enough to end up in this top comment.

postalcoder

Multiple layers of curation work really well; specifically, using HN as a curation layer for Kagi's small web list. I implemented this on https://hcker.news. People who have small web blogs should post them on HN, a lot of people follow that list!

varun_ch

A fun trend on the "small web" is the use of 88x31 badges that link to friends' websites or form webrings. I have a few on my website, and you can browse a ton of small web sites that way.

https://varun.ch (at the bottom of the page)

There are also a couple of directories/network graphs: https://matdoes.dev/buttons https://eightyeightthirty.one/

101008

A beautiful trend that has been going for 30 years ;-)

One of the happiest moments of my childhood (I'm exaggerating) was when my button was placed on a website that I loved to visit every day. It was one of the best validations I ever received :)

skciva

What inspired me to pursue computer-related fields was making little badges and forum signatures in Photoshop as a teen. Heartwarming to see this tradition has persisted.

Terr_

I can't be the only one with an ancient collection of artistically-mismatched "under construction" graphics.

technothrasher

My similar happy childhood moment was when my home page made the Netscape "Rants and Raves" page for my extensive tribute to Lindsay Wagner (the actress who played the Bionic Woman), which led to my local newspaper interviewing me for an article on what the heck this "World Wide Web" was. I went on and on about how the web was revolutionary as an equalizer, allowing anybody to publish and actually be heard without the old barriers to entry. Sounded good, but the web hasn't exactly fully lived up to my vision.

Terretta

Pamphleteering has a storied tradition. Self-publishing remains accessible today.

What confuses me are the reflexive "why would I publish if I'm not getting the ad revenue" and "why would anyone take their time w/o getting paid" type remarks.

Same comments about music: nobody will record songs without getting paid. And games: what's even the point in playing a shooter without dropping loot?

The last one encapsulates the whole problem well.

Over on /r/division2 a majority of players are baffled by a one month only "Realism" mode (all March, worth trying!) that turns off loot boxes and loot drops from tangos. You can solo or co-op the Division 2 Warlords of New York expansion, set in Manhattan, receiving a couple additional base weapons and weapon mods each mission completed. It's refreshing to enjoy beating scenarios while liberated from opening every scrap pile on the street then sorting through inventory for hours.

Gamers on reddit seem universally convinced the gameplay loop for a tactical PvE shooter should be about getting the next loot, rather than executing a mission cleanly or enjoying a strategically cooperative evening with friends defeating a zip code and its boss.

"I won't play a game that's not rewarding." "I won't write a song that doesn't make me a millionaire." "I won't capture my thoughts on a subject unless I get $0.003 an eyeball."

Somewhere we lost just enjoying the play.

NooneAtAll3

my main problem with such links is... how often do you update them? How often do you check those websites to see that they're still active?

I remember going through all the blogs linked on Terry Tao's blog - out of like 50, there were only 8-ish still alive :(

susam

I don't use 88x31 buttons but I do maintain an old-fashioned blogroll on my personal website: https://susam.net/roll.html

I follow the same set of websites with my feed reader too. There is an OPML file at the end of that page that I use with my feed reader. I keep the list intentionally small so that I can realistically read every post that appears on these websites on a regular basis.

Although I usually read new posts in my feed reader, I still visit each website on the list at least, roughly, once a month, just to see these personal sites in their full glory. These are blogs I have been following for years, in fact some of them for a couple of decades now! So when a new post appears on one of these websites, I make time to read it. It is one of the joys of browsing the Web that I have cherished ever since I first got online on the information superhighway.

Keeping the list small also makes it easy for me to notice when a website goes defunct. Over all these years a few websites did indeed sadly disappear, which I then removed from my list.

undefined

[deleted]

bung

Gotta be using pixel fonts for those; they've been around for 25 years and are actually readable at 8px lol

dudefeliciano

but why link to apple.com and vercel.com?

8organicbits

One objection I have to the Kagi smallweb approach is the avoidance of infrequently updated sites. Some of my favorite blogs post very rarely, but when they do post it's a great read. When I discover a great new blog that hasn't been updated in years, I'm excited to add it to my feed reader, because it's a really good signal that when they publish again it will be worth reading.

pixodaros

One of the many things I disagree with Scott Alexander on is that, to me, frequent blog updates signal poor quality, not excellent writing. It's hard to come up with an independent, evidence-based opinion on something worth sharing every week, but easy to post about whatever you've read lots of angry or scary posts about. People who post a lot also tend to have trouble finding useful things to do in their offline life. It is very unusual that he managed to be both a psychiatrist and a prolific blogger, and he quit the psychiatry job before he had children or other care responsibilities.

chrneu

I have a "frequent post" section of my blog and a "deeper" section. Unless you're interested in the frequent posts they aren't in your face on my blog. It's kind of a best of both worlds type thing.

The frequent posts also let me quickly try out new methods of telling stories or presenting information or new techniques. I think this tends to speed up how often I post larger effort things cuz I can practice skills with frequent posts.

A good comparison would be a YouTuber with a Patreon. The YouTube channel gets the produced media, whereas the Patreon gets "cell phone in the moment" updates.

But I totally agree that when folks are straining to find things to post about, it can be problematic and annoying.

baud147258

> frequent blog updates signal poor quality not excellent writing

It might be true, but there are exceptions, like acoup (history-focused), which is written by an ancient history professor.

oopsiremembered

I'm with you. Also, sometimes I'm specifically looking for some dusty old site that has long been forgotten about. Maybe I'm trying to find something I remember from ages ago. Or maybe I'm trying to deeply research something.

There's a lot more to fixing search than prioritizing recency. In fact, I think recency bias sometimes makes search worse.

freediver

To clarify, the criterion is less than 2 years since the last blog post.

senko

You may want to clarify that on https://github.com/kagisearch/smallweb because the README there says:

> Blog has recent posts (<7 days old)

This may be different from the inclusion criteria for websites in general, but on first read it looks like the blog has to be very active.

I might have missed something while skimming it, but would assume others would miss it as well.

SyneRyder

There are two criteria; I agree it's hard to skim:

* The blog must have a recent post, no older than 12 months, to meet the recency criteria for inclusion.

* Criteria for posts to show on the website: Blog has recent posts (<7 days old), The website can appear in an iframe

The latter criterion is for the website / post to appear in Kagi's random Small Web feature, where they display the blog post in an iframe. (So I think only posts from the last week are displayed there.) Being on the list should ensure that any new posts can be displayed in Small Web though, and presumably that the website is indexed in Kagi's Teclis index as well. At least, I really hope that the Teclis index includes all of those old blog posts too, and doesn't discard them.

EDIT: I just realized freediver actually is Vladimir - I'd love to know if Teclis does index all those older blog posts too. I assume it does index everything that is still present in the RSS feeds?

est

Also, Kagi excludes non-English sites. Sad for mixed-language blogs like mine.

8organicbits

I know Kagi doesn't do it, but it is possible to specify the language in the feed (xml:lang) so that a feed reader can filter out languages the user doesn't understand from multi-language feeds. One challenge is that lots of bloggers forget to add that tag.
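As a sketch, xml:lang can sit on the feed element as a default and be overridden per entry; all URLs, titles, and dates here are placeholders:

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
  <title>Example mixed-language blog</title>
  <id>https://example.com/</id>
  <updated>2025-03-01T00:00:00Z</updated>
  <!-- Per-entry override for a post in another language -->
  <entry xml:lang="de">
    <title>Ein Beitrag auf Deutsch</title>
    <id>https://example.com/de-post/</id>
    <updated>2025-03-01T00:00:00Z</updated>
  </entry>
</feed>
```

A reader that honors the attribute could then hide the German entries from a user who only reads English.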

gzread

Start a small web directory for your language!

freediver

Kagi Small Web has about 32K sites and I'd like to think that we have captured most of the (English-speaking) personal blogs out there (we are adding about 10 per day, and a significant effort went into discovering/finding them).

It is kind of sad that the entire size of this small web is only 30k sites these days.

flir

Suspect there's a long tail/iceberg you still haven't captured (source: you haven't found me yet and I'm not hiding, I'm just not chasing SEO).

eichin

Same - but mine are also primarily so I can hand out links to specific articles - they're not hidden but they're not advertised either (and they're static sites with almost zero logging, so I wouldn't really notice either except that this site has a published list :-)

freediver

I am happy to hear this.

flir

Hi, I took a quick look around the niche I'm interested in, and there's a lot of local history blogs you're missing. One of the bigger examples: https://threadinburgh.scot/

On reflection, maybe you've captured the bulk of the "Small Web Movement" (the technology-leaning bit of the blogosphere that is self-consciously part of a reactionary movement against the corporate web) but you haven't captured the bulk of the still-active blogosphere?

So I've got a question: What's the mission statement for kagisearch/smallweb - a curated list of Small Web sites, or a curated list of active blogosphere sites?

Because the current strategy for adding sites seems heavily biased towards the small web movement to me.

jopsen

> I'd like to think that we have captured most of (english speaking) personal blogs

I think that's naive.

But maybe that's just because my blog wasn't on the list :)

boxedemp

Neither was either of mine, but I don't advertise them and specifically don't post them on social media

krapp

Neither is mine, but that's fine with me.

freediver

That is about to change :)

aquova

What methods are you using to find them? I notice my own doesn't appear, although it does show up well under some (very niche) Google search terms. I suspect there's the potential for an order of magnitude more sites than have been found.

freediver

Checking HN every day to see if something interesting surfaces :)

famahar

I noticed that Kagi Small Web tends to lean towards more tech focused blogs. So it feels more like you've captured that subset of the small web, especially if your main source is hackernews.

Not sure if you've used this as a source too, but there are a lot of tiny personal sites in this directory as well: https://melonland.net/surf-club

savolai

Does this use frames or an iframe? https://kagi.com/smallweb

I would expect a raw link in the top bar to the page shown, to be able to bookmark it etc.

susam

There is a '↗'-shaped icon in the navigation bar at the top. If you click on that it takes you to the original post in a new tab. On Firefox and Safari, you can also right click that icon and add the original post to the bookmarks.

savolai

Not visible on iphone xs/13 mini.

gzread

FYI, frames (framesets) are obsolete; they were dropped from the HTML5 spec, though browsers still render legacy pages that use them.

zahlman

Does this concept of "personal blog" include people periodically sharing, say, random knowledge on technical topics? Or is it specifically people writing about their day-to-day lives?

How would I check if my site is included?

susam

You can check: <https://github.com/kagisearch/smallweb/blob/main/smallweb.tx...>. I can see that your RSS URL is listed there.

But it currently does not appear in the search results here: <https://kagi.com/smallweb/?search=zahlman>. The reason appears to be this:

"If the blog is included in small web feed list (which means it has content in English, it is informational/educational by nature and it is not trying to sell anything) we check for these two things to show it on the site: • Blog has recent posts (<7 days old) [...]"

(Source: https://github.com/kagisearch/smallweb#criteria-for-posts-to...)

mattlondon

Why would you only include blogs in your small web index? That must be a minute fraction of what is out there?

I can't think of a single blog that I read these days (small or not), yet there are loads of small "old school" sites out there that are still going strong.

gzread

I think it includes anything that's in the form of a chronological list of posts and is noncommercial.

If you made a website instead of a blog, well... you're excluded. It's the small blogosphere, not the small web.

squidhunter

It’s not 30k, it’s well over a million: https://screenshots.nry.me/

undefined

[deleted]

Cyan488

I'm noticing sites that break the rules. I report (flag) them; is that useful, or should I just open a PR to remove them?

freediver

PR is better!

afisxisto

Cool to see Gemini mentioned here. A few years back I created Station, Gemini's first "social network" of sorts, still running today: https://martinrue.com/station

danhite

Isn't this a simple compute opportunity? ...

> March 15 there were 1,251 updates [from feed of small websites ...] too active, to publish all the updates on a single page, even for just one day. Well, I could publish them, but nobody has time to read them all.

If the reader accumulates a small set of whitelisted keywords, perhaps selected via an optional tag-cloud UI, then that estimated 1,251 likely drops to roughly a single page (most days).

If you wish to serve that as noscript, it would suffice to partition content into visible/invisible, e.g. by <section class="keywords ..." and let the user apply CSS (or scripts via extension or bookmarklets) to reveal just their locally known interests.
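The partitioning described above could be sketched in CSS alone; the `section.keywords` class and the interest names are assumptions carried over from the comment:

```css
/* Site default: keyword-tagged update sections start hidden. */
section.keywords {
  display: none;
}

/* User stylesheet (or bookmarklet-injected <style>): reveal only the
   locally known interests. The interest class names are hypothetical. */
section.keywords.indieweb,
section.keywords.css-topic {
  display: block;
}
```

Because the filtering is pure CSS, the page itself stays fully static and script-free.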

8organicbits

The tag cloud part may be a challenge. Web feeds don't always tag their content.

I have a blog filter that does something similar (https://alexsci.com/rss-blogroll-network/discover/), but the UI I ended up with isn't great and too many things are uncategorized.

danhite

Kudos on your site effort and I immediately see your point.

In fact, I took your topmost entry with no helpful site/update tags and dove in a little to try to understand why an RSS-friendly blogger might not be passing along tags for better reader discovery.

Turns out my scarce-info test-case blogger has a Mastodon profile that immediately lists all these tags about himself [I've stripped it down]...

#FrontEnd Developer #CSS #Halifax #London #Singapore Technical writer and rabbit-hole deep-diver Former Organiser for https://londonwebstandards.org & https://stateofthebrowser.com Interests: #Bushcraft #Outdoors #DnD #Fantasy #SciFi #HipHop #CSS #Eleventy #IndieWeb #OpenSource #OpenWeb

I conclude that if he knew such site and post tags getting into RSS would be of use, he'd probably make the tiny effort to wire up the descriptions.

Nonetheless, I merely crawled links for a minute to find this info, so I imagine something like the free tier of the Cloudflare crawling API might suffice over time as a simplistic automated fix to hint-decorate blog sites.

I mean, given that we're not trying to recreate PageRank, just trying to tip the balance in favor of desirable initial discovery.

8organicbits

Very cool.

Crawling related sites for tags could work (open graph tags on the website are another good source). I'm wary of mixing data across contexts though. A blog and a Mastodon profile may intend to present a different face to the world or could discuss different topics.

shermantanktop

This is a specific definition of "small web" which is even narrower than the one I normally think of. But reading about Gemini, it does make me wonder if the original sin is client-side dynamism.

We could say: that's Javascript. But some Javascript operates only on the DOM. It's really XHR/fetch and friends that are the problem.

We could say: CSS is ok. But CSS can fetch remote resources and if JS isn't there, I wonder how long it would take for ad vendors to have CSS-only solutions...or maybe they do already?

AdamN

I would put it all on cookies. No third party cookies (at all) - good. JS and CSS and even autoplay video is fine as long as there are no third party cookies.

That would make the Small Web bigger but it would get to the main point. I'd be fine with a site like the New Yorker that has more bells and whistles be included as long as I could experience it without a tracked ad from DoubleClick.

Right now any serious outfit simply cannot be included in the Small Web but we really need companies there.

fbilhaut

Totally agree. I run a few professional websites/apps that deliberately avoid tracking technologies. They only use first-party session cookies and minimal server logs for operational purposes.

Interestingly, I've noticed that some users find this suspicious because there's no cookie banner! People may have become so used to seeing them that a site without one can look dubious or unprofessional. And I'm pretty sure some maintainers include them just to conform with common practice or due to legal uncertainty.

Maybe a simple, community-driven, public declaration might help. Something like a "No-Tracking Web Declaration". It could be a short document describing fair practices that websites could reference, such as "only first-party session cookies", "server logs used only for operational purposes", etc.

A website could then display a small statement such as "This site follows the No-Tracking Web Declaration v1.0". This might help legitimize the approach, and give visitors and operators confidence that avoiding the usual bells and whistles can actually be compliant with applicable regulations.

I (and AI) drafted something here, contributions would be highly welcomed: https://github.com/fbilhaut/no-tracking

akkartik

Yeah, CSS is Turing Complete: https://lyra.horse/x86css

zahlman

I wonder: what's the least that could be removed from CSS to avoid Turing-completeness?

gzread

Most of the problem with ads these days isn't even the ads, but the bloat. Static image ads would be a huge improvement.

mattlondon

You need to go more tin-foil-hat

It's not just JavaScript; it's cookies, it's "auto-loading" resources (e.g. 1x1 pixels with per-request unique URLs), it's third-party HTTP requests to other domains (which might set cookies too).

I think the XKCD comic about encryption-vs-wrench has never been more apt for Gemini the protocol...

upboundspiral

I think the article briefly touches on an important part: people still write blogs, but they are buried by Google, which now optimizes its algorithm for monetization and not usefulness.

Anyone interested in seeing what the web looks like when the search engine selects for real people and not SEO-optimized slop should check out https://marginalia-search.com .

It's a search engine with the goal of finding exactly that - blogs, writings, all by real people. I am always fascinated by what it unearths when using it, and it really is a breath of fresh air.

It's currently funded by NLNet (temporarily) and the project's scope is really promising. It's one of those projects that I really hope succeeds long term.

The old web is not dead, just buried, and it can be unearthed. In my opinion an independent non monetized search engine is a public good as valuable as the internet archive.

So far as I know, Marginalia is the only project that, instead of just taking Google's index and massaging it a bit (like all the other search engines), is truly seeking to be independent and practical in its scope and goals.

marginalia_nu

Thanks for shilling.

Regarding the financials, even though the second nlnet grant runs out in a few weeks, I've got enough of a war chest to work full time probably a good bit into 2029 (modulo additional inflation shocks). The operational bit is self-funding now, and it's relatively low maintenance, so if worse comes to worst I'll have to get a job (if jobs still exist in 2029, otherwise I guess I'll live in the shameful cardboard box of those who were NGMI ;-).

boxedemp

I think that's a cool project, though I found the results to be less relevant than Google.

janalsncm

Whether the results are less relevant or not depends massively on what you searched and whether the best results even exist in the Marginalia search index or not.

If Google is ranking small web results better than Marginalia, that’s actionable.

If the best result isn’t in the index and it should be, that’s actionable.

marginalia_nu

Well, to be fair, Marginalia is also developed by 1 guy (me), and Google has like 10K people and infinite compute they can throw at the problem. There have been definite improvements, and there will be more still, but Google's still got hands.

gzread

I've used Marginalia to search for technical documentation before, unironically. Whatever it does find is pretty much guaranteed to be non-slop.

lich_king

> Google that now optimizes their algorithm for monetization and not usefulness.

I don't think they do that. Instead, "usefulness" is mostly synonymous with commercial intent: searching for <x> often means "I want to buy <x>".

Even for non-commercial queries, I think the sad reality is that most people subconsciously prefer LLM-generated or content-farmed stuff too. It looks more professional, has nice images (never mind that they're stock photos or AI-generated), etc. Your average student looking for an explanation of why the sky is blue is more interested in a TikTok-style short than some white-on-black or black-on-gray webpage that gives them 1990s vibes.

TL;DR: I think that Google gives the average person exactly the results they want. It might be not what a small minority on HN wants.

marginalia_nu

Google and most search engines optimize for what is most likely to be clicked on. This works poorly and creates a huge popularity bias at scale because it starts feeding on its own tail: What major search engines show you is after all a large contributor to what's most likely to be clicked on.

The reason Marginalia (for some queries) feels like it shows such refreshing results is that it simply does not take popularity into account.

BrenBarn

> I think that Google gives the average person exactly the results they want.

There is some truth in this, but to me it's similar to saying that a drug dealer gives their customers exactly what they want. People "want" those things because Google and its ilk have conditioned them to want those things.

sdenton4

On the one hand, a search engine is not heroin... It's a pretty broken analogy.

On the other hand, we could probably convince Cory Doctorow to write a piece about how fentanyl is really about the enshitification of opiates.

627467

I read a lot against monetization in the comments. I think that's because we are used to monetization being so exploitative, filled with dark patterns and bad incentives, on the Big Web.

But it doesn't need to be this way: the small web can also be about sustainable monetization. In fact there's a whole page on that at https://indieweb.org/business-models

There's nothing wrong with "publishers" aspiring to get paid.

ardeaver

I also think equating good with "no monetization" is exactly how we've ended up in a situation where everything is controlled by a few giant megacorps, hordes of MBAs, and unethical ad networks.

We should want indie developers, writers, etc to make money so that the only game in town doesn't end up being those who didn't care about being ethical. </rant>

UqWBcuFx6NV4r

Yep. People have very short memories. I remember that ethical ad network in the late 2000s that all the cool tech bloggers would use.

wink

I don't want to be part of the "small web" - I want to be part of the web. If my stuff can't be found in a sea of a million ad-ridden whatever sites so be it, but I am not going out of my way to submit stuff to special search engines or web rings, I've been there in the 90s.

DeathArrow

My point also: I'd rather see the existing web transformed than be part of some obscure circle that not many people care about.

plewd

I doubt the web will allow itself to be transformed into our idealized version of it, so the question seems to just be: do you want to be part of the obscure circle or not?

Neither choice is right or wrong, but I like the idea of a cool community amidst the enshittification of the rest of the web.

jmclnx

I moved my site to Gemini on sdf.org; I find it far easier to use and maintain. I also mirror it on Gopher. Maintaining both is still easier than dealing with *panels or hosting my own. There is a lot of good content out there, for example:

gemini://gemi.dev/

FWIW, dillo now has plugins for both Gemini and Gopher, and the plugins work fine on the various BSDs.

Peteragain

I'm very keen on public libraries. I'm fortunate in that our village has a community run one, there is the county one, and I can get to The British Library. Why do these entities exist? A real question - not rhetorical. Whatever the answer, I am sure the same mechanism could "pay for" public hosting.

zeusdclxvi

Are you asking why public libraries exist?

Peteragain

More how they exist.

pipeline_peak

You want the government to fund the small web?

Peteragain

The government funds some libraries, and for some publicly acceptable reason. That reason should apply to web infrastructure, and indeed the small web. Other libraries are community run. Again whatever motivates that probably applies to small web stuff.


The “small web” is bigger than you might think - Hacker News