faangguyindia
One thing people are not taking into account is that many developers now have less time and are working a lot more because AI makes it seem it should be possible to hit those deadlines, etc.
Also, many programmers have spent all their funds on tokens, so they are left with neither extra money nor time.
film42
Acquisitions change priorities and layoffs put the squeeze on people. AI is for sure in the mix there, but open source decay is a result of no room in budgets for anything but maximizing revenue.
2ndorderthought
I really wish projects like this didn't fall through the cracks and continued to be funded. The struggles of OSS are too real.
freakynit
True. I truly wish we had a better open-source license, and that more open-source projects would adopt it.
A tiered-pricing license, with tiers based on annual company revenue: it should start super low for small companies (free for individuals) and jump to thousands of dollars per year for companies with $10+ million in revenue.
I understand this might not fully be in the spirit of open source, but what's happening currently is way worse: giant companies rip off the hard work of open-source maintainers without compensating them adequately.
lelanthran
> A tiered-pricing license, with tiers based on annual company revenue: it should start super low for small companies (free for individuals) and jump to thousands of dollars per year for companies with $10+ million in revenue.
Too complicated. Make it GPL (not MIT) and offer dual licensing.
Those corps that need it but are GPL-phobic can have a different license, and can pay for it.
topham
Sigh. Bane of my existence is any service which does this.
My org theoretically makes hundreds of millions, unfortunately none of that money is ours. So I get forced into a procurement process for anything that costs more than (ridiculously small limit), and get stuck using the worst in class because it's cheaper.
didgetmaster
The project is being abandoned because the maintainer is tired of working for free. They said that they hoped someone would fork it, change the name, and pick up where it was left off.
Why would anyone do that? If the person who was most passionate about it for over a dozen years gave up because it was never worth the trouble, what fool would think things will be different going forward?
This is the curse of OSS.
shevy-java
> what fool would think things will be different going forward?
> This is the curse of OSS.
There are examples of failing forks, and there are examples of forks that became better than the original. It is not possible to generalize this one way or the other solely via a curse-of-OSS conclusion. Funding will always be an issue, but funding is not necessarily the main or only criterion for whether a project fails or succeeds.
cortesoft
An alternative reading is that after 13 years dedicated to a single project, the original author is simply burnt out on it, but a new maintainer can start with fresh passion that will last a number of years.
Just because someone gets tired of working on something eventually doesn't mean everyone else will immediately feel the same way.
jrochkind1
They said they imagined it would (I read as "might") be forked, and if it were, please don't use their name for it.
I don't think they are "hoping" someone else will take it, exactly. They're just done with it. That's how I read it, they liked working on it, but it wasn't financially sustainable, the project is now over, and my reading is they are sad about it.
tclancy
While I tend to agree with the line of thinking in this thread that the ethos of open source (and the web writ large) has been taken advantage of by capitalism, I can't quite see this: things belong to a time and place in one's life. The creator feels like his time with this project is at an end, but why would that be an impediment to someone who needs a package like this stepping up and maintaining it? Better to do that than build a replacement from scratch (most likely). And it is more likely to attract new sponsorship by being a reliable steward of a known name (albeit with a suffix or something).
jumpconc
The struggles of living in an economic system while completely rejecting that system and pretending it isn't there.
AndyNemmity
There is no evidence of any of that.
He was paid to work on it. That stopped, he continued to work on it in the hopes he could find someone who would hire him to work on it.
That didn't happen; no one has funded it.
So due to the economic system he no longer maintains it.
That’s your economic system at work. No one is pretending it isn’t there, this is the outcome of it
imtringued
That's actually not the problem. The problem is that the conventional funding model for open source does not make sense, and nobody has built a financial product that actually works, since single-maintainer projects are too small a market to be worth serving for classic financial institutions like banks.
The business model is as follows: Open source maintenance produces recurring costs (developer salary, infrastructure costs, etc) but these costs are fixed and do not scale with the number of users, only with the development effort. This means the ideal financing structure would be a cost plus system where the maintainer gets paid a salary and the customers (businesses) are spreading the cost among each other so that each business ends up paying less than if they had built or maintained the project in-house.
The problem here is that each participant's share is variable: it depends on the number of participants, their individual willingness to spend money, and how that affects the viability of the project as a whole. Participating businesses need some sort of guarantee that they won't be stuck with all of the costs and that there are other participants who will chip in. At the same time, once there is a sufficient number of participants, the participating businesses don't want to overpay. They may commit to a monthly worst-case bill of $5000, but if the total bill is $10000 and there are 100 participating businesses so that each business could pay only $100, said big spender would want the option to lower their spending to $100 if possible and let others carry more of the financial burden.
With this sort of arrangement, funding open source software would be rational, since the amount you save by freeloading is insignificant compared to the risk of the project being discontinued due to freeloading.
bdcravens
"so sad to see this"
The source is still available. Maintaining your own copy and/or paying someone to do it is an option.
While you're at it, look at all the projects you depend on that you would similarly be sad about losing, and set up those donations today.
thinkingtoilet
This is the right attitude. All the "this is sad" comments make me want to ask, "How sad are you? Sad enough to donate?"
spockz
For me the sad part about the story is that someone who clearly knows what they are doing wasn’t able to find a job that would have permitted him to continue working on the project and that there were insufficient sponsors from companies.
Not the fact that he made the decision he made.
manquer
Database backup tools are used primarily in an enterprise context. The (in)ability to donate is not a function of personal spending preferences.
A fair number of people here work at orgs that could absolutely swing a couple hundred bucks per month in sponsorship, licensing, or donations for a critical tool in their infra toolkit without a lot of effort.
Particularly so, given the rising frequency of "AI deleted my prod" posts.
dijit
Wow! pgbackrest was definitely the premier backup solution for postgres when I last looked at the ecosystem properly.
It was the only solution that seemed to take restoring and validating as seriously as "taking a backup", which led to an unfortunate situation with my employer. (details here: https://blog.dijit.sh/that-time-my-manager-spend-1m-on-a-bac...)
This is really a major loss. :(
Nelkins
Wow, this is pretty surprising, I was under the impression that this is the leading PG backup/recovery tool.
Anybody know how WAL-G and Barman compare?
zie
I dunno how they compare, but we have been using barman for a long time very happily. We test our backups every night, by restoring from barman into a _nightly DB. which we then give out to users as a training/testing spot, so that we know when it breaks. It hasn't broken in many years now. <3
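A nightly restore test like the one described might look roughly like this as a host-side script. Everything here is a placeholder sketch (server name "main", host "db-nightly", versioned paths), not the poster's actual setup:

```shell
#!/bin/sh
# Hypothetical nightly restore test: wipe a scratch instance,
# restore the latest barman backup into it, and start it up.
set -e

# Stop the scratch Postgres and clear its data directory
ssh db-nightly 'systemctl stop postgresql && rm -rf /var/lib/postgresql/17/nightly/*'

# Restore the most recent backup onto the scratch host over SSH
barman recover --remote-ssh-command 'ssh postgres@db-nightly' \
    main latest /var/lib/postgresql/17/nightly

# If this start fails, the backup was bad and we find out tonight,
# not during a real disaster
ssh db-nightly 'systemctl start postgresql'
```

The restored `_nightly` database then doubles as the training/testing copy mentioned above, so breakage gets noticed by users as well as by the script.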
__s
I'm one of many wal-g maintainers; it's comparable. I've been inactive for the past few years, but I'm back in the managed Postgres game. Hoping to get support for pg17 incremental backups alongside wal-g's existing delta backups, where wal-g compares blocks itself. Be sure to use daemon mode.
Sad to see a competitor go. I think there's lots of room for improvement here, and C over Golang is particularly nice when Postgres wants to run on a system without overcommit.
andruby
We've been happy with WAL-E and now WAL-G (its successor). The streaming PITR nature of these won out over pgBackRest when we did the analysis ~9 years ago.
fabian2k
Are you using WAL archiving? As far as I understand, pgbackrest and Barman can also use direct streaming from the DB (same mechanism as replication), I didn't find any mention of this in the WAL-G documentation.
With WAL archiving you need to wait for a WAL segment to finish before it's backed up. With streaming backups the deadtime is minimized. At least that's as far as I understand this, I didn't get to try this out in practice yet.
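For reference, the two mechanisms being contrasted look roughly like this; the stanza name, connection string, slot, and paths are illustrative, not from the thread:

```shell
# Segment-based WAL archiving: Postgres invokes archive_command once per
# finished 16MB segment, so up to one segment of writes can be lost.
#   postgresql.conf:
#     archive_mode = on
#     archive_command = 'pgbackrest --stanza=main archive-push %p'

# Streaming with pg_receivewal: WAL records flow continuously over the
# replication protocol, minimizing the window of unarchived WAL.
pg_receivewal --dbname="host=db1 user=replicator" \
              --directory=/backup/wal \
              --slot=backup_slot --synchronous
```

With `--synchronous`, pg_receivewal flushes and reports each record as it arrives, which is what shrinks the dead time the comment describes.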
andruby
WAL-G's PITR backups are insurance against data loss through erroneous data manipulations (eg: accidental DELETE/DROP/UPDATE). WAL-G's streaming approach (using pg_receivewal or similar) sends WAL records to backup storage continuously as they're generated, rather than waiting for a full segment to complete.
On top of that, for availability (and minimizing deadtime), we have 2 replicas using streaming replication. If the lead PG crashes, one of the replicas is promoted to lead (and starts accepting writes), and we "only" lose the writes that haven't been sent over the streaming replication.
You can fully eliminate that window of data loss with synchronous replication (vs the default asynchronous replication - which we use). The write slowdown (replica network round trip + 2nd write at replica) isn't worth it for us
noosphr
>Wow, this is pretty surprising, I was under the impression that this is the leading PG backup/recovery tool.
j1elo
Open Source has worked fine here. The author doesn't find financial support for the work, so they just want to change winds and that's a perfectly fine path forward.
If this is really much more than a personal project "for fun, on my leisure time", and it became an actually serious product-level project that provides good value in commercial environments for people, there's clearly an opportunity for a for-profit company to step in and cover that niche. But that'd require that users became customers and actually parted with their money to pay for it :)
I guess most will switch instead to asking who's the next project maintainer to work on it, to whom the new bug reports and complaints can continue to be sent for free. But if there's money to be made by using a tool, there should be money paid for using it too. We "just" need to find the new generation of FOSS Financial Sustainability solutions that actually work! Donations don't make the cut.
hosh
Something I learned about being a part of an ecosystem: if you want it, you need to support it and help it stay alive.
That applies to local shops as it does open source projects.
briffle
The project has never even had a donation button on its page, only a link with a few sponsors.
SwellJoe
The effort to set up donations is almost always more trouble than the resulting donations are worth. The time is better spent looking for a job, or working on a commercial project that will make money. People simply don't donate to open source projects at a level that matters.
I've been working on Open Source software for 30+ years. There's no money in it, if your idea for making money is "accept donations". I don't like it, but it's a fact. If you want to make money, you have to make something that isn't free (and even then, if you give away the most valuable parts, as in "open core" licensing, you probably still won't make enough money to make the development worth it).
When I was young and driven by idealism and optimism, I assumed that with enough users I'd be able to ring the cash register somehow. Turns out not so much. We got the users, the money never came. There are a few outliers, but there probably aren't a lot of opportunities to found a Red Hat today.
hosh
It is not necessarily monetary or even transactional form of support. Reciprocity builds relationships.
Not necessarily even code contributions. It could be professional networking. It is a bit different if the person is not a stranger.
spockz
I wonder whether the author has considered taking the product to a paid level and what would be necessary for it.
Obviously, all contributors hold some form of copyright, which may or may not have been assigned depending on whether there was a CLA in place and on the jurisdiction. So he would need to get permission from the copyright holders, maybe in exchange for a percentage of the profit.
j1elo
Changing the license of already existing code? You might not be able to do that without permission from other contributors, I agree.
But it's MIT license. We can open a company tomorrow, take that code, and start selling it. Further development and improvements of the code could be trivially done openly or behind closed doors. FWIW the author themselves could do that if they wanted.
pasc1878
And that tends to be looked on here as the authors being deceitful and not really open source: a bait and switch.
tracker1
I've been working on a software package I'm hoping to release in a few months. I'm really torn between a split FLOSS core with commercial extensions and going fully private. I was planning on a pretty generous free tier, but hoping to make a bit on the side from commercial customers.
It's a bit of a niche as it is, so any pricing model is going to be rough: a large part of that niche is homebrew types, and the other part is commercial industry that will likely require some more integrations and customization.
joshmn
I have a moderately sized 2TB production database I have enjoyed using pgBackRest on, and was—this week—going to set it up on another 8TB database we have.
What's the next-closest thing? wal-g? barman? databasus? I only get to cosplay as a DBA.
drcongo
I can beat you on the timing - I'd never used pgBackRest before, but started setting it up on a project about 2 hours ago, by the time I'd finished the README had been updated.
sgarland
I've used barman on somewhat large-ish DBs (30+ TB), and had no complaints with it. I am a DBRE, if that holds any weight.
briffle
We recently moved from Barman to pgBackRest. Our main complaint with Barman was that incremental backups used hardlinks. That was great locally: we could back up our 7TB database and see only 20GB in changes the next day. But when replicating that data to cloud storage, there is no concept of hardlinks, so we had to push 14TB. Also, at least when we last looked a while back, compression applied only to the WAL files, unless you used the newer barman-cloud-backup tool, which we did not.
Also, pgBackrest lets you do the majority of the backup from a physical standby, which is VERY nice for removing the load off production.
None of these seemed like issues until we looked at pgBackRest and suddenly realized how nice that would be.
sgarland
We just piped the backups through pigz for compression; rapidgzip also exists for parallelized decompression (or any other compression algorithm you’d like to use, of course).
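As a sketch, the pipeline described might look like this (file names and core counts are arbitrary; note that `pg_basebackup` tar output to stdout requires disabling WAL streaming, hence `-X fetch`):

```shell
# Parallel compression: pigz spreads gzip across 8 cores while the
# base backup streams through the pipe.
pg_basebackup -D - -Ft -X fetch | pigz -p 8 > base.tar.gz

# Parallel decompression with rapidgzip (to stdout, like gzip -dc);
# plain gunzip also works, just at single-core speed.
rapidgzip -d -c -P 8 base.tar.gz > base.tar
```

Any other streaming compressor (zstd, lz4) slots into the same pipe, which is the flexibility the comment is pointing at.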
joshmn
barman seems to cover "Natural disaster" in their docs. Seems good.
I'll take a look. Thanks!
ramraj07
Backing up multi-terabyte production Postgres databases is not merely cosplaying ha ha
3manuek
The "closest" would be using Barman with hook scripts (https://docs.pgbarman.org/release/3.18.0/user_guide/hook_scr...) if you rely on cloud storage for storing backups.
https://github.com/aiven-open/pghoard seems like a good option too, but I haven’t tested it yet to have a solid opinion.
infinet
Anyone put the standby on ZFS or other filesystems that can take snapshots for backup?
skibbityboop
Not for PostgreSQL, but for MariaDB we run replicas in FreeBSD jails on a server with lots of ZFS space. The jailed Maria instances just stop every hour (so the DB flushes everything to disk), the host snapshots all of their data volumes, and then starts the jails back up. Within a minute or so they're fully caught up to the primaries again. Gives us months and months of recovery checkpoints.
It's great because it's a completely clean save from a shutdown state, so when we need a scratch copy of a database it only takes as long as cloning whatever snapshot we want (depending on how far back we need to go), then starting a scratch jail that runs from those clone filesystems. When finished, just shut down the scratch jail and delete the clones; it's like it never happened.
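A rough sketch of that snapshot cycle, with made-up pool, dataset, and jail names (MariaDB replica in a jail on a FreeBSD/ZFS host):

```shell
# Hourly, from cron on the host: stop the jail so the DB flushes cleanly,
# snapshot its data volume, start it back up to catch up with the primary.
service jail stop mariadb1
zfs snapshot tank/jails/mariadb1/data@$(date +%Y%m%d%H)
service jail start mariadb1

# Later, to get a scratch copy: clone any checkpoint and run a scratch
# jail on the clone, then destroy it when done.
zfs clone tank/jails/mariadb1/data@2025010112 tank/jails/scratch/data
# ...start scratch jail, use it, stop it...
zfs destroy tank/jails/scratch/data
```

Snapshots and clones are copy-on-write, so both the hourly checkpoint and the scratch copy are nearly instant regardless of database size.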
abrookewood
A previous company I was at did this on the primary. It always seemed to work, but no one was really comfortable with it, largely because there wasn't much ZFS experience at the time and also because the process did not quiesce the database before doing it. I think it's still a valid strategy, but not one I have had time to verify thoroughly.
hosteur
databasus does not do PITR.
zigzag312
Is that info up-to-date? Their readme states:
**Backup types**
- **Logical** — Native dump of the database in its engine-specific binary format. Compressed and streamed directly to storage with no intermediate files
- **Physical** — File-level copy of the entire database cluster. Faster backup and restore for large datasets compared to logical dumps
- **Incremental** — Physical base backup combined with continuous WAL segment archiving. **Enables Point-in-time recovery (PITR)** — restore to any second between backups. Designed for disaster recovery and near-zero data loss requirements
EDIT: It seems PITR was added this March (for PostgreSQL).
zigzag312
pg_probackup seems to be another one.
colesantiago
> Since Crunchy Data was sold, I have been maintaining pgBackRest and looking for a position that would allow me to continue the work, but so far I have not been successful. Likewise, my efforts to secure sponsorship have also fallen far short of what I need to make the project viable.
So this was the problem, I thought Snowflake would pick up the sponsorship of this project but since it is a competing database it doesn't really make much sense.
I really wish many critical OSS projects get the sponsorship they need to continue.
Otherwise the software industry is in real trouble.
Forking it just passes the buck onto another maintainer with the same problem, this time without the original creator maintaining it.
wg0
Very simple. Rename it pgbackrest-AI and add the line:
"AI driven backups with smartest world class models optimizing every byte stored via deep AI analysis."
With that added, a million dollars is just chump change. YC alone would be adding them to all the seasons multiple times over summer, winter and monsoon etc.
voidmain0001
Even with sponsorship, it's not always appreciated such as Vercel backing Svelte, Vue, etc. https://www.reddit.com/r/reactjs/comments/1g4lu5p/am_i_seein...
colesantiago
The responses in there are dumb and childish.
I doubt that they have sponsored an OSS project or made it sustainable.
nijave
Postgres doesn't compete with Snowflake. Snowflake recently announced a Postgres DBaaS offering that integrates with Snowflake (actually has competitive pricing with AWS RDS Postgres)
They're two non-competing verticals. It's a shame Snowflake decided to shrink Crunchy Data's community presence.
saadn92
[flagged]
elAhmo
It didn't go dark, and doesn't seem that critical in general.
General idea still stands, but it is not like this just disappeared and backups will stop working.
nazcan
The favourite model I've seen: the main branch is free, licensed MIT or whatever, but if you want release artifacts that are tested, then you pay for them. You can always compile your own.
rowanG077
Why does sqlite not suffer from the same risk?
cornstalks
SQLite doesn’t depend on donations. They have a consortium, sell licenses (it is open source but some companies like the explicit CYA), sell support contracts, sell an aviation-grade test harness, and sell extensions.
Of course there is always the risk it goes out of business like any other company, but it’s not funded like your typical small open source project and doesn’t even allow open contributions (not necessarily a bad thing IMO but it’s just a totally different type of project).
alexpadula
They don't make much off this; it's known.
registeredcorn
Is there a reason why more OSS projects don't follow this model? It sounds like you are saying that there are clear advantages here that other OSS projects lack.
rowanG077
pgbackrest was also part of an organization, from what I understood from the post. The organization got acquired. I don't see how sqlite is shielded (or any project, really). They could get acquired. They could not have enough customers. They could go the wrong direction and lose customers. They might have a few high-profile bugs so that customers lose faith in them.
doctoboggan
It's an LLM comment; don't search too deeply for logical consistency.
lmm
They have more sponsors/clients so a single company changing direction wouldn't kill them. They also sell directly if you want to buy from them. But ultimately the risk still exists.
feike
pgbackrest is the most versatile piece of backup technology for PostgreSQL and in my experience the other products do not come close.
I am therefore quite sad to see this happen. It won't be easy to get feature parity with this great product.
I sincerely hope this is a reversible decision, or perhaps the postgres project could even absorb it into contrib.
dcchambers
We're going to see a lot of this over the next 1-2 years.
Software Engineers suddenly feel like they're fighting for their lives for employment, and time won't be "wasted" maintaining OSS for free.
We all need to eat.
aetherspawn
It still works, you can just keep using it.
I think that’s what the author would want. People to keep using it until it doesn’t work anymore.
spockz
And hopefully someone wants to stand up then. Not sure whether it needs to be a fork or that they can join as contributor on the repo.
fabian2k
I was about to set up Postgres backups with pgbackrest very soon. It looked like the most mature solution for my use case. What I was aiming for was continuous backups to an object storage provider, without a central DB server but the backup tool directly installed on the Postgres server.
I'll have to look at the alternatives again, I think that was mostly WAL-G and Barman. It looks like Barman doesn't support direct backup to object storage, unfortunately. And I find the WAL-G documentation very confusing. What I'm looking for is WAL streaming and object storage support, to minimize the amount of data that can be lost and so I don't have to run my own backup server.
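For context, a minimal pgBackRest setup of the kind described (the tool directly on the DB host, pushing straight to S3-compatible object storage, no central backup server) looks roughly like this; the bucket, region, stanza name, and paths are placeholders, and S3 credentials are omitted:

```shell
# /etc/pgbackrest/pgbackrest.conf (illustrative values)
#   [global]
#   repo1-type=s3
#   repo1-s3-bucket=my-backups
#   repo1-s3-endpoint=s3.eu-west-1.amazonaws.com
#   repo1-s3-region=eu-west-1
#   repo1-path=/pgbackrest
#   repo1-retention-full=2
#
#   [main]
#   pg1-path=/var/lib/postgresql/17/main

# postgresql.conf hooks WAL archiving into pgBackRest:
#   archive_mode = on
#   archive_command = 'pgbackrest --stanza=main archive-push %p'

# Then initialize the stanza and take the first full backup:
pgbackrest --stanza=main stanza-create
pgbackrest --stanza=main --type=full backup
```

Restores go the other way with `pgbackrest --stanza=main restore`, which is the part worth rehearsing before it matters.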
drcongo
This is exactly what I was setting it up to do this morning. My research came down to this and WAL-G for the same reasons, and I picked pgBackRest over WAL-G because the documentation was clearer.
freakynit
So sad to see this happening.
Just last year I prepared a detailed guide to reliable Postgres backups, to a local volume as well as cloud storage, using pgBackRest, for my own projects. pgBackRest has worked so well for me.
https://github.com/freakynit/postgre-backup-and-restore-guid...
Thanks to the author for all the time and effort he put into this project..