I moved my blog from IPFS to a server - Hacker News

nonrandomstring

Well done to the author for writing this up.

Having tried fringe technologies over the years (spun up a server, run them for a few months, struggled, and seen all the rough edges and loose threads), I often come to the point of feeling: this technology is good, but it's not ready yet. Time to let more determined people carry the torch.

The upside is:

- you tried, and so contributed to the ecosystem

- you told people what needs improving

Just quitting and not writing about your experience seems a waste for everyone, so good to know why web hosting on IPFS is still rough.

b_fiive

Totally biased founder here, but I work on https://github.com/n0-computer/iroh, a thing that started off as an IPFS implementation in Rust, but we broke out & ended up doing our own thing. We're not at the point where iroh implements "the full IPFS experience" (some parts border on impossible to do while keeping a decentralized promise), but we're getting closer to the "p2p website hosting" use case each week.

gabesullice

Super intriguing. Thanks for sharing!

It reminds me a bit of an early Go project called Upspin [1]. And also a bit of Solid [2]. Did you take any inspiration from them?

What excites me about your project is that you're addressing the elephant in the room when it comes to data sovereignty (~nobody wants to self-host a personal database but their personal devices aren't publicly accessible) in an elegant way.

By storing the data on my personal device and (presumably?) paying for a managed relay (and maybe an encrypted backup), I can keep my data in my physical possession, but I won't have to host anything on my own. Is that the idea?

[1] https://upspin.io/ [2] https://solidproject.org/

b_fiive

Ah <3 Upspin! It's been a minute. I've personally read through & love Upspin. I always found Solid a little too tied to RDF & the semantic web push. The Solid project is/was a super valiant effort, but these days I feel like the semantic web push peaked with HTML & schema.org.

> By storing the data on my personal device and (presumably?) paying for a managed relay (and maybe an encrypted backup), I can keep my data in my physical possession, but I won't have to host anything on my own. Is that the idea?

We're hoping to give that exact setup to app developers (maybe that's you :). We still have work to do on encryption at rest to keep the hosted server "dumb", and more plumbing into existing app development ecosystems like Flutter, Expo, Tauri, etc., but yes, that's the hope. Give developers tools to ship apps that renegotiate the "user social contract".
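A minimal sketch of what "dumb server" storage can mean, purely illustrative (this is not iroh's API; it assumes the `chacha20poly1305` crate): the device encrypts before upload, so a relay or backup host only ever holds ciphertext.

```rust
// Sketch: encrypt on-device before upload so the hosted relay stays "dumb".
// Illustrative only; assumes the `chacha20poly1305` crate, not iroh's API.
use chacha20poly1305::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    ChaCha20Poly1305,
};

fn main() {
    // The key never leaves the user's device; the relay never sees it.
    let key = ChaCha20Poly1305::generate_key(&mut OsRng);
    let cipher = ChaCha20Poly1305::new(&key);
    let nonce = ChaCha20Poly1305::generate_nonce(&mut OsRng); // unique per blob

    let plaintext = b"private app data";
    let ciphertext = cipher.encrypt(&nonce, plaintext.as_ref()).unwrap();

    // Upload (nonce, ciphertext) to the relay; only the device can decrypt.
    let roundtrip = cipher.decrypt(&nonce, ciphertext.as_ref()).unwrap();
    assert_eq!(roundtrip, plaintext);
}
```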

ChadNauseam

I feel like this type of project is a natural fit as the transport layer for CRDT-based applications. Something like: each user/device has an append-only log of CRDT events, then applications merge events from multiple logs to create a collaborative experience. (I have no idea if iroh supports append-only logs, but it seems like a common thing for projects in this space to support.) What do you think?

b_fiive

yep! Iroh documents [1] give you a very nice primitive that is technically a CRDT, but in practice most people use it as a key-value store. We really wanted a mutable solution that would support real deletions (instead of tombstones), and starting with append-only logs locks you out of that choice.

With Iroh + CRDTs you have three choices:

1. Use iroh's connection & gossip layers in conjunction with a mature CRDT library like Automerge or Y.js.

2. Build a more sophisticated CRDT on top of iroh documents.

3. Worry a little less about whether your data structures form a semilattice & build on a last-writer-wins key-value store (basically: just use documents).

We've seen uses for all three. Hope that helps!

[1] https://iroh.computer/docs/layers/documents
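To see why option 3 still merges cleanly, here is a minimal last-writer-wins register sketch in plain Rust (not iroh's actual types): its merge is associative, commutative, and idempotent, which is exactly the semilattice property at stake.

```rust
// Minimal last-writer-wins (LWW) register: merge keeps the entry with the
// highest (timestamp, author) pair. The author id breaks ties so concurrent
// writes merge deterministically on every peer.
#[derive(Clone, Debug, PartialEq)]
struct LwwEntry {
    timestamp: u64, // e.g. a Lamport clock or hybrid logical clock
    author: u64,    // tie-breaker for concurrent writes
    value: String,
}

fn merge(a: LwwEntry, b: LwwEntry) -> LwwEntry {
    if (a.timestamp, a.author) >= (b.timestamp, b.author) { a } else { b }
}

fn main() {
    let x = LwwEntry { timestamp: 5, author: 1, value: "from laptop".into() };
    let y = LwwEntry { timestamp: 7, author: 2, value: "from phone".into() };
    // Peers converge to the same value regardless of merge order.
    assert_eq!(merge(x.clone(), y.clone()), merge(y, x));
}
```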

hot_gril

Is it named after the Avatar character?

b_fiive

I can neither confirm nor deny, but oh boy does uncle iroh seem cool

joshspankit

Yes, and that makes me happy every time I see it.

koito17

> it’s quite an inconvenience to run your own IPFS node. But even if you do run your own node, the fact you access a website doesn’t mean you pin it. Not at all.

This has always been my major UX gripe with IPFS: `ipfs add` on the command line does little but generate a hash, and you need to actually pin things in order to "seed" them, so to speak. So "adding a file to IPFS", in the sense of "adding a file to the network", requires the user to know that (1) the "add" in `ipfs add` does not add a file to the network, and (2) you must manually pin everything you want to replicate. I remember as recently as 2021 having to manually pin each file in a directory since pinning the directory does not recursively pin files. Doing this by hand for small folders is okay, but large folders? Not so much.

More importantly, the BitTorrent duplication problems that IPFS has solved are also solved in BitTorrent v2, and BitTorrent v2 IMO solves them in a much better way (you can create "hybrid torrents", which allow a great deal of backwards compatibility with existing torrent software).

This isn't a UX issue, but another thing that makes it hard for me to recommend IPFS to friends is the increasing association with "Web3" and cryptocurrency. I don't have any strong opinions on "Web3", but to many people, it's an instant deal-breaker.

flashm

‘ipfs add’ pins the file by default, not sure if that’s recent behaviour though.

diggan

It's not recent, been like that since at least 2015, if not earlier.

hot_gril

IPFS provides nice stable links to media, and there are HTTP->IPFS gateways if needed. That seems useful for embedding content on multiple apps/sites. Yeah it happens to fit NFTs particularly well, then again we all know what BitTorrent is known for. And yes I agree IPFS has some UI problems.

Would BitTorrent also be suitable for hosting embeddable content? I haven't seen that yet. A magnet URL is mainly a file hash and doesn't seem to encode a particular peer server, kinda like IPFS. But every time I've torrented Ubuntu, it's taken half a minute just to find the peers.

mvdtnz

> IPFS provides nice stable links to media

Anyone who has tried to torrent an old movie or lesser-known television show knows this is simply not true.

hot_gril

I mean it's not like HTTP where all URLs are tied to a particular webserver and can even be changed on that server. If someone different starts seeding, you'll get the same data again at the same URL, with built-in checksumming.
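That stability is content addressing in a nutshell: the link is a hash of the bytes, so a copy fetched from any seeder can be verified on arrival. A minimal sketch of the idea, assuming the `sha2` and `hex` crates (real IPFS CIDs additionally wrap the digest in multihash/multicodec framing):

```rust
// Sketch of content addressing: the address *is* the hash of the bytes,
// so data fetched from any peer can be checked against the link itself.
use sha2::{Digest, Sha256};

fn address_of(bytes: &[u8]) -> String {
    hex::encode(Sha256::digest(bytes))
}

fn verify(address: &str, bytes: &[u8]) -> bool {
    // Built-in checksumming: a tampered or corrupted copy fails to match.
    address_of(bytes) == address
}

fn main() {
    let content = b"hello, distributed web";
    let addr = address_of(content);
    assert!(verify(&addr, content));           // any honest peer's copy checks out
    assert!(!verify(&addr, b"tampered copy")); // bad data is rejected, whoever served it
}
```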

rakoo

> Would BitTorrent also be suitable for hosting embeddable content?

Same as IPFS: gateways can exist. It's not specific to Bittorrent, or IPFS.

> A magnet URL is mainly a file hash and doesn't seem to encode a particular peer server, kinda like IPFS.

Magnet links can include an HTTP server that also hosts the content.
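For example, the `ws` (web seed) parameter carries an HTTP URL that clients can fall back on; the hash and host here are placeholders:

```
magnet:?xt=urn:btih:c12fe1c06bba254a9dc9f519b335aa7c1367a88a&dn=example.iso&ws=https://example.com/example.iso
```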

hot_gril

I'm sure a BitTorrent gateway can exist, but I'm wondering why it doesn't seem to "be a thing." I've never seen one used, nor do I see an obvious public one when searching. Whereas IPFS gateways are so mainstream that even Cloudflare runs a public one.

Hendrikto

> I remember as recently as 2021 having to manually pin each file in a directory since pinning the directory does not recursively pin files. Doing this by hand for small folders is okay, but large folders? Not so much.

Can't you just use a glob?

lindig

Filecoin, which is based on IPFS, creates a market for unused storage. I think that idea is great, but for adoption it needs to be as simple as Dropbox to store files. But visit https://filecoin.io/ and the Dropbox-like app that you might be willing to try is nowhere to be found. So maybe it is an enterprise solution? That isn't spelled out either. So I am not surprised that this has little traction, and the article further confirms the impression.

diggan

> to be as simple as Dropbox to store files. But visit https://filecoin.io/ and the Dropbox-like app that you might be willing to try is nowhere to be found

I agree with this fully. But as said elsewhere, it's kind of far away from that, and also slightly misdirected.

Imagine asking someone to get started with web development by sending them to https://www.ietf.org/rfc/rfc793.txt (the TCP specification). Filecoin is just the protocol, and won't ever solve that particular problem; it's not focused on solving it. That's up to client implementations.

But the ecosystem is for sure missing an easy to use / end-user application like Dropbox for storing files in a decentralized and secure way.

poorman

That flagship app you are looking for seems to be https://nft.storage/ (by Protocol Labs).

ahmedfromtunis

This, in my opinion, is the first and only "solution" to a real problem built using the blockchain.

Distributed file storage, if done correctly, can be a transformative technology. And it can be even more revolutionary if implemented at the OS level.

chrisco255

Fileverse is an app built on ipfs and it is very user friendly: https://fileverse.io/

pierat

Here's your $.10/day for that 1GB with bandwidth... but running the filecoin stack will cost you a $50/mo server.

That fucker's a PIG on cpu and ram.

kkielhofner

IPFS is as well.

Clearly much more going on but take a machine that can serve 10k req/s with [insert 100 things here] without flinching and watch it maybe, just maybe, do 10 with IPFS.

I'm not kidding.

nickstinemates

this is what storj.io does.

p4bl0

I'm surprised by the beginning of the post talking about pioneering in 2019. Maybe it is the case for ENS (I never cared for it), but regarding IPFS, my website was available over IPFS three years before that, in 2016 [1]. Granted, I was never IPFS-only. I also started publishing a series of articles about censorship-resistant internet (Tor, I2P, IPFS, and ZeroNet) in 2600 Magazine – The Hacker Quarterly – back in 2017 [2].

Anyway, I came to the same conclusion as the author, but several years ago: in the end, nothing is actually decentralized, and maintaining this illusion of decentralization is actually costly, for no real purpose (other than the initial enjoyment of playing with a new tech, that is).

So I stopped maintaining it a few years ago. That decision was also because of the growing involvement of some of these projects with blockchain tech that I never wanted to be a part of. This is also why I cancelled my article series in 2600 before publishing those on IPFS and ZeroNet.

[1] See for example this archive of my HN profile page from 2016 with the link to it: https://web.archive.org/web/20161122210110/https://news.ycom...

[2] https://pablo.rauzy.name/outreach.html#2600

chaxor

I never fully understood the use of ipfs/iroh for websites, but I really like the idea for data caching in data science packages.

It makes more sense to me that someone would be much more willing to serve large databases and neural network weights that they actually use every day, rather than "that one guy's website they went to that one time".

I'm very surprised it's not as popular, if not more popular, to just have @iroh-memoize decorators everywhere in people's database ETL code.

That's a better use case (since the user has a vested interest in keeping the data up) than helping people host websites.

wharvle

IMO the case for something like IPFS gets worse and worse the larger the proportion of clients on battery. This makes it a really poor choice for the modern, public Web, where a whole lot of traffic comes from mobile devices.

Serving things that are mostly or nigh-exclusively used by machines connected to the power grid (and, ideally, great and consistent Internet connections) is a much better use case.

kmeisthax

This is half the reason why P2P died in the late 2000s. Mobile clients need to leech off a server to function.

The other reason why it died is privacy. Participating in a P2P network reveals your IP address, which can be used to get a subscriber address via DMCA subpoenas, which is how the RIAA, MPAA, and later Prenda Law attacked the shit out of Gnutella and BitTorrent. Centralized systems don't usually expose their users to legal risk like P2P does.

I have to wonder: how does IPFS protect people from learning what websites I've been on, or have pinned, without compromising the security of the network?

wuiheerfoj

Desci Labs (1) do something like this - papers and all the associated data (raw data, code, analyses, peer review comments) are published and linked together - along with all the citations to other papers - in a big DAG.

I believe their data is stored in a p2p network - it might interest you!

1. https://desci.com/

r3trohack3r

> Anyway, I came to the same conclusion as the author, but several years ago: in the end, nothing is actually decentralized, and maintaining this illusion of decentralization is actually costly, for no real purpose (other than the initial enjoyment of playing with a new tech, that is).

Do you have any writing (blog posts, HN comments, etc.) where you explore this thought more? I'm in the thick of building p2p software, very interested in what you came to know during that time.

p4bl0

The main thing is that "true" (in the sense of absolute) decentralization does not actually work. It doesn't work technically, and it doesn't work socially/politically. We always need some form of collective control over what's going on. We need moderation tools, we need to be able to make errors and fix them, etc. Centralized systems tend to be authoritarian, but the pursuit of absolute decentralization always ends up being very individualistic (and a kind of misplaced elitism). There are middle grounds: federated systems, for example, like Mastodon or email, actually work.

That is not to say that all p2p software is bad, especially since we call p2p a lot of things that are not entirely p2p. For example, BitTorrent is p2p software, but its actual usage by humans relies on multiple more-or-less centralized points: trackers and torrent search engines.

r3trohack3r

I think you and I would have a very interesting conversation.

I agree with a lot of your points, but not your conclusions.

> We always need some form of collective control over what's going on. We need moderation tools

I agree with this, and also think it is possible in peer-to-peer systems. Ideally the collective is self-governed. Particularly, when it comes to moderation, the closer the moderation controls are to being under the control of the user consuming the content, the more just the system.

> , we need to be able to make errors and fix them, etc.

Yes, 100%. Equity < Cryptography. It's far more important that your equity be rock solid than it is for your cryptography to be rock solid. If someone steals your property in a cryptographically sound way, equity should always trump cryptography. It's far more important that you have the title to your vehicle than it is that you have the keys to your vehicle.

I feel like many p2p systems have gotten this one backwards.

> There are middle grounds: federated systems, for example, like Mastodon or email, actually work.

I consider these centralized systems, just N copies of the same problem. I don't feel like the power imbalance between users and system administrators is addressed in a sufficient way on the fediverse as it is currently implemented.

These systems have a classist, hierarchical system where system administrators belong to a privileged class while users are second-class citizens on the web.

---

I feel one of the issues in the current centralized architectures is equity. When you go about moving through the world, you generate a large volume of valuable property (your data). But, today, you give away nearly all equity in that data. Centralized providers accumulate equity in their user's data, and that equity is how they pay the bills.

I do believe that, in a very meaningful way, equity and privacy are nearly synonymous when it comes to corporations respecting the privacy of their users. Just because Netflix delivers a video to your smartphone doesn't mean you can turn around and sell that video to your friend. You have been granted access to the video, but it is not your property. The inverse needs to be true too. Just because you share your viewing habits with Netflix doesn't mean they can sell that data to Warner Brothers, that's not their property (I mean, today it is, but it shouldn't be). If users had equity in their data, the data broker market as it exists today would be piracy.

P2P systems have failed to create a world where humans, their content, and their devices are meaningfully addressable for the web in a way that expresses equity as a first class citizen.

A decentralized world is possible, it's just not possible on the internet. The internet, as it exists, is insufficient for expressing the concepts of the modern web in a way that is possible without centralized servers.

We don't need web3, we need internet2.

> That is not to say that all p2p software is bad, especially since we call p2p a lot of things that are not entirely p2p. For example, BitTorrent is p2p software, but its actual usage by humans relies on multiple more-or-less centralized points: trackers and torrent search engines.

libp2p and scuttlebutt are pretty cool too. Both have their problems, but those problems seem solvable. Both seem more like internet2 than web3.

p2p needs a new overlay network on top of the internet, just like the internet started as an overlay network on top of the telephony system.

pphysch

True P2P networks don't scale, because every node has to store an (accurate if partial) representation of the whole network. Growing the network is easy, but growing every single node is virtually impossible (unless you control them all...). Any attempt to tackle that tends to increase centralization, e.g. in the form of routing authorities.

And if you try to tackle centralization directly (like banning routers or something), you will often create an anti-centralization regulator, which is, you guessed it, another form of centralization.

So your decentralized P2P network is either small and works well, medium and works not so well, or large and not actually decentralized.

The best P2P networks know their limits and don't try to scale infinitely. For human-oriented networks, Dunbar's Number (N≈150) is a decent rule of thumb; any P2P network larger than that almost certainly has some form of centralization (like trusted bootstrapping/coordination server addresses that are hard-coded in every client install, etc.)

KMag

> True P2P networks don't scale, because every node has to store an (accurate if partial) representation of the whole network

Former LimeWire dev here... which P2P networks use a fully meshed topology? LimeWire and other Gnutella clients just have a random mesh with a fixed number of (ultra)peers. If the network gets too large, then your constrained broadcast queries hit their hop count before reaching the edge of the network, but that seems fine.

Last I checked, Freenet used a variation on a random mesh.

Kademlia's routing tables take O(log(N)) space and traffic per-peer to maintain (so O(N log(N)) for global total network space and traffic). Same for Chord (though, twice as much traffic due to not using a symmetric distance metric like XOR).

There are plenty of "True" (non-centralized) P2P networks that aren't fully meshed.
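For intuition on that O(log(N)) bound, here is a toy sketch of Kademlia's routing-table shape, using 64-bit IDs for brevity (real deployments use 160- or 256-bit IDs): distance is plain XOR, and each peer lands in the k-bucket indexed by the highest differing bit, so per-node state is bounded by bits-in-ID times k, independent of network size.

```rust
// Toy sketch of Kademlia-style routing state with 64-bit node IDs.
const K: usize = 20; // max peers kept per bucket (Kademlia's "k")

fn distance(a: u64, b: u64) -> u64 {
    a ^ b // symmetric metric: d(a,b) == d(b,a), unlike Chord's
}

fn bucket_index(me: u64, peer: u64) -> Option<usize> {
    let d = distance(me, peer);
    // Bucket = position of the highest differing bit; None means "myself".
    if d == 0 { None } else { Some(63 - d.leading_zeros() as usize) }
}

fn main() {
    let me = 0b1010_0000u64;
    println!("{:?}", bucket_index(me, 0b0010_0000)); // Some(7): a "far" peer
    println!("{:?}", bucket_index(me, 0b1010_0001)); // Some(0): a "near" peer
    // Worst-case routing state: 64 buckets * K peers, regardless of N.
    println!("max entries: {}", 64 * K);
}
```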

Almondsetat

Your software cannot be more decentralized than your hardware.

For example, true p2p can only happen if you meet with someone and use a cable, Bluetooth, or local wifi. Anything over the internet needs to pass through routers and *poof* decentralization's gone, and you now need to trust servers to varying degrees

colordrops

"varying" is a pretty wide range here. If you mean "trust" as in trust to maintain connectivity, yes, but beyond that there are established ways to create secure channels over untrusted networks. Could you provide specifics about what you mean if anything beyond basic connectivity?

Barrin92

>Anything over the internet needs to pass through routers and poof decentralization's gone

That's not true. Yes, the strict p2p connection is gone, but decentralization is what the name says: a network of connections without a single center. The internet and its routing systems are decentralized. Of course every decentralized system can also be stressed to a point of failure, and not every node is automatically trustworthy.

undefined

[deleted]

unethical_ban

I'll add to what others have said better.

Decentralized, globally accessible resources still take some kind of coordination to discover those resources, and a shit-ton of redundant nodes and data and routing. There is always some coordination or consensus.

At least, that's my take on it. Doesn't Tor have official records for exit and bridge nodes?

int_19h

I may be missing something, but name resolution has been touted as one of the more legitimate and sensible uses for blockchain for a very long time. Could you clarify what your issues with it in the IPFS context are?

edent

It isn't. Unless you want a long incomprehensible string.

Someone is always going to want a short, unique, and memorable name. And when two people share the same name (McDonald, Nissan, etc) there needs to be a way to disambiguate them.

If people die and are unable to release a desirable name, that just makes the whole system less desirable.

I know one of the canonical hard problems in Computer Science is "naming things" and this is a prime example!

bombcar

And if you want a long incomprehensible string, we already have that: .onion sites work without a blockchain, too.

patmorgan23

Namecoin has existed for a long time. It acts just like a traditional domain registrar. The first person to register a name gets it, and they have to pay a maintenance fee to keep it. Don't pay the maintenance fee and then someone else can register the name.

null0pointer

ENS (which is what the GP refers to) has human-readable names. But it doesn't have support for A/AAAA records today (does anyone know why? A-record support was in the original spec). Aside from that, the only reason you wouldn't be able to type "mycoolsite.eth" into your browser's URL bar and have it resolve to an IP is that no browser has an ENS resolver built in. Yet.

ShamelessC

> but name resolution has been touted as one of the more legitimate and sensible uses for blockchain for a very long time.

Blockchain enthusiasts have a history of talking out of their ass and being susceptible to the lies of others.

ShamelessC

Downvotes, nice. Whatever helps you sleep at night.

p4bl0

Well, I do not actually believe that blockchains can do name resolution correctly. First and foremost, the essential thing to understand about blockchains is that the only thing guaranteed by writing a piece of information on a blockchain is that this information is written on this blockchain. And that's it. If the writing itself is not performative, in that its mere existence performs what it describes, then nothing has been done. It works for crypto-assets because what makes a transaction happen is that it is written on the blockchain where that crypto-asset lives and where people look to see what happened with that crypto-asset.

But for any other usage, it cannot work; blockchains are useless. Someone or something somewhere has to make sure either that what's written on the blockchain corresponds to the real world, or that the real world is made to correspond to what's written on the blockchain. Either way you need to have a kind of central authority, or at least trusted third parties. And that means you don't actually need a blockchain in the first place. We have better, more secure, more efficient, less costly alternatives to using a blockchain.

Back to name resolutions.

Virtually no one is going to actually host the blockchain where all names are stored locally. That would be way too big and could only get bigger and bigger, as a blockchain stores transactions (i.e., diffs) rather than current state. So in practice people and their devices would ask resolvers, just like they currently do with DNS. These resolvers would need to keep an up-to-date database of the state of all names because querying a blockchain is way too inefficient; running such a resolver would be a lot more costly than running a DNS server, so there would be fewer of them. Here we just lost decentralization, which was the point of the system. But that's just a technical problem. There is more: what if someone gets a name and we as a society (i.e., justice, whatever) decide that they should not be in control of it? Either we are able to enforce this decision, which means the system is not actually decentralized (so, we don't need a blockchain), or we can't, and that's a problem. What if a private key is lost? Are the associated names gone forever? What if your private key is leaked by mistake and someone hostile takes control of your name?

Using a blockchain for names resolution doesn't actually work, not for a human society.

mikegreenberg

> Either way you need to have a kind of central authority, or at least trusted third parties.

You lost me here. Couldn't the local user('s process) reference the same blockchain instead of another trusted party?

null0pointer

> Either way you need to have a kind of central authority, or at least trusted third parties.

Not everyone needs to run a node, and not everyone could, but it is totally feasible for an individual to run their own if they decide they can't trust anyone else for whatever reason. Especially if you were running a node specifically for the purpose of name resolution you could discard the vast, vast majority of data on the Ethereum blockchain (for example).

> what if someone gets a name and we as a society (i.e., justice, whatever) decide that they should not be in control of it? [...] and that's a problem.

No, that is a feature of a decentralized system. Individual node operators would be able to decide whether or not to serve particular records, but censorship resistance is one of the core features of blockchains in the first place.

> What if a private key is lost? Are the associated names gone forever?

The association wouldn't be gone, it would just be unchangeable until it eventually expires. This is a known tradeoff if you are choosing ENS over traditional domain name registration.

> What if your private key is leaked by mistake and someone hostile takes control of your name?

As opposed to today, where someone hostile, like for instance the Afghan government (the Taliban), can seize your domain for any reason or no reason at all?

---

I think we just have a fundamental disagreement about what types of features and use cases a name resolution system should have. That's completely fine; you're entitled to your own beliefs. You can use the system that most closely resembles your beliefs, and I'll use the one that most closely resembles mine. Fortunately for us, different name resolution systems can peacefully coexist due to the nature of name mappings. At least for now, none that I know of collide in namespace.

hexage1814

>There is more: what if someone gets a name and we as a society (i.e., justice, whatever) decide that they should not be in control of it?[...]or we can't, and that's a problem

That's a feature.

teeray

> I also started publishing a series of articles about censorship-resistant internet (Tor, I2P, IPFS, and ZeroNet) in 2600 Magazine – The Hacker Quarterly – back in 2017

I very much enjoyed your articles on Tor and I2P :) I2P was entirely new to me, so I found that particularly interesting. I did idly wonder when the next article was coming, so I’m glad I didn’t just miss it in some issue. Totally understand where you’re coming from.

p4bl0

Thanks! It's always great to have feedback on paper-published content :).

neiman

> Maybe it is the case for ENS

Oh yeah, I was referring strictly to IPFS+ENS websites. I have been working with them for several years, so my mind goes to this use-case automatically.

DEDLINE

When evaluating use-cases where blockchain technology is leveraged to disintermediate, I came to the same conclusions as you. Technically novel? Yes, sure. But for what?

hanniabu

For incentive alignment, consensus, trustless, etc

axegon_

I see where the author is coming from, but I find something else strange: considering that the blog is in practice a collection of static files, I don't see the benefit of paying for a server at all. Host it on GitHub; if GitHub gets killed off for whatever reason, switch to something else and move on. Seems like unnecessary overhead to me.

neiman

I get told that a lot! xD

My original aim was to write an IPFS blogging engine for my personal use, so I needed some dynamic loading from IPFS there.

Now I've switched to Jekyll, and it would indeed be easier to host the blog on GitHub, but I'm kind of playing a quixotic game of trying to minimize the presence of Google/Microsoft/Amazon and other big tech in my life.

walterbell

Free tier of indie https://neocities.org supports static sites like Jekyll.

rapnie

https://codeberg.page .. similar idea to Github Pages.

hot_gril

Same. IPFS seems far more useful for hosting static content that might be embedded in multiple websites.

MenhirMike

> the more readers it has, the faster it is to use it since some readers help to spread the content (Scalable).

In other words: Once a few big websites are established, no small website will ever be able to gain traction again because the big websites are simply easier to reach and thus more attractive to use. And just like an unpopular old torrent, eventually you run out of seeders and your site is lost forever.

One can argue about the value of low traffic websites, but I got to wonder: Who in their right mind thinks "Yeah, I want to make a website and then have others decide if it's allowed to live". Then again, maybe that kind of "survival of the fittest" is appealing to some folks.

As far as I am concerned, it sounds like a stupid idea. (Which the author goes into in more detail, so that's a good write-up.)

fodkodrasz

This is a false dilemma. Why would you not "seed" (pin) your own site rather than be at others' mercy? You pin it, and when others also do so, readers get faster and more redundant service.

kimixa

For "unpopular" sites having a single origin somewhat removes the advantages of IPFS, it's not decentralized, not censorship resilient, and still costs the publisher for ongoing infrastructure to host it. Yet still had the disadvantages and complexity of IPFS vs a static http server.

So if you're not going to be publishing something that will always have multiple copies floating around, why use IPFS?

fodkodrasz

1. To give yourself a chance to avoid being slashdotted. 2. To allow anybody who finds it valuable to archive it, or parts of it.

The complexity of IPFS is another thing, which should be solved. However popular or unpopular your site might be, you must host it somewhere somehow if you wish to be sure it sticks around. It is as simple as that.

cle

It helps to use more specific terms than "decentralized" and "censorship resilient"; there are a lot of attack vectors for both. IPFS certainly does address some of the attack vectors, but not all. For example, if the "centralized" thing you're worried about is DNS and certificate authorities, then you can avoid those authorities entirely with IPFS. Replication is one aspect of centralization, and IPFS doesn't completely fail at it, it's just more expensive (you can guarantee two-node replication, though you will have to run two nodes). And there are other aspects not addressed by IPFS at all, like its reliance on IP address routing infrastructure.

p4bl0

If you need to pin your content anyway, it's actually faster and less expensive to just host a normal website. And if you want to get it to readers faster, there are a lot of cheap or free CDNs available, but that's generally not even an issue with the kind of website we're talking about here when it's served normally, over the web.

fodkodrasz

Yes, that is the state of affairs now. I can use CloudFront for my site, but cannot use it to pin my IPFS site (should I have one), as far as I know.

You are fighting a strawman. If you don't take care of your site, but expect others to take care of it (pin it), then it is not your site. You must ensure it has at least one pinned version. Others might or might not pin it; that depends on the popularity, or on the accessibility of the stack, which is lacking right now according to the article.

kevincox

It is also worth noting that most IPFS peers will cache content for some period of time even if it's not explicitly pinned. So if your site hits the top of Hacker News (and everyone were using a native IPFS browser), you would suddenly have thousands of peers with the content available. So in theory your one node can serve infinite users, since once you serve one copy to the first user, that user can replicate it to others. (The real world is of course more complicated, but the idea works decently well in practice.)

lelandbatey

It's not up to others alone; you get a say too, because you can seed your own content, and that can be fast. In the worst case of no interest, it's approximately the same as hosting your own website in the world of today. This doesn't exonerate the shortfall of the "old torrent" pattern though, as you say.

sharperguy

I think the main difference between IPFS and BitTorrent in terms of usage patterns is that IPFS is being used to host content that could easily be served by just a regular HTTP server, whereas BitTorrent is hosting data which is highly desired and would be impossible or very expensive to host over HTTP.

And so naturally relays pop up, and the relays end up being more convenient than actually using the underlying protocol.

diggan

The key difference between a regular HTTP server and IPFS is that you can always try a different gateway/relay to get the very same content, and you can be sure it's the same. You cannot really do that with HTTP today, as it's usually tied to locations, in one way or another.

alucart

I'm exploring a similar project: a "decentralized" website (hosted on GitHub) which saves users' data in the blockchain itself and then provides that same data to other users through public APIs and/or from actual blockchain clients.

I wonder if there is actual use or need for such a thing.

nikisweeting

Is there anything that allows one to mount an IPFS dir as a read/write FUSE drive yet? Once they have that, I'm all in, even if it's slow...

ianopolous

We have a FUSE mount in Peergos[0][1] (built on IPFS). It lets you mount any folder read or write (within your access).

[0] https://github.com/peergos/peergos [1] https://peergos.org

willscott

https://github.com/djdv/go-filesystem-utils/pull/40 lets you interact with IPFS as an NFS mount

wyck

I built a blog on IPFS; it's basically reliant on several centralized services to actually work in browsers (DNS, GitHub, Fleek, etc.). I wrote about how I built it here; the experience was underwhelming. https://labcabin.eth.limo/writing/how-site-built.html

mawise

What about a cross between IPFS/Nostr and RSS? RSS (or Atom) already provides a widely-adopted syndication structure. All that's missing are signatures so that a middleman can re-broadcast the same content. Maybe with signatures that's really reinventing SSB [1]. But if we think of the network in a more social sense, where you're connecting to trusted peers (think: IRL friends), maybe the signatures aren't even that important. All that's left then is to separate the identifier for the feed from the location (today those are both the URL) so you can fetch a feed from a non-authoritative peer.

[1]: https://en.wikipedia.org/wiki/Secure_Scuttlebutt
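A sketch of the signature half, assuming the `ed25519-dalek` (v2) and `rand` crates: the feed's identifier becomes a public key rather than a URL, so bytes re-served by any middleman or peer can still be verified against the feed's identity.

```rust
// Sketch: detach a feed's identity from its location by signing the bytes.
// Assumes the `ed25519-dalek` v2 and `rand` 0.8 crates.
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
use rand::rngs::OsRng;

fn main() {
    // The author's keypair; the public half doubles as the feed identifier.
    let signing_key = SigningKey::generate(&mut OsRng);
    let feed_id: VerifyingKey = signing_key.verifying_key();

    let feed_bytes = br#"<rss version="2.0">...</rss>"#;
    let sig: Signature = signing_key.sign(feed_bytes);

    // A reader who got (feed_bytes, sig) from *any* peer verifies against
    // the feed identifier, not against where the bytes came from.
    assert!(feed_id.verify(feed_bytes, &sig).is_ok());
}
```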
