
SQLite: QEMU All over Again?



October 4, 2022


> The few core developers they have do not work with modern tools like git and collaboration tools like Github, and don’t accept contributions, although they may or may not accept your suggestion for a new feature request.

The funny thing about this comment is that SQLite is as close to the gold standard of software quality as we have in the open source world. SQLite is the only program I've ever used that reliably gets better with every release and never regresses. Could it be that this is precisely because they don't use "modern tools" and don't accept outside contributions?


I feel like a lot of fantastic software is made by a small number of people whose explicit culture is a mix of abnormally strong opinionatedness plus the dedication to execute on that by developing the tools and flow that feel just right.

Much like a lot of other "eccentric" artists in other realms, that eccentricity is, at least in part, a bravery of knowing what one wants and making that a reality, usually with compromises that others might not be comfortable making (efficiency, time, social interaction from a larger group, etc).


Totally agree.

It is allowing the human element that creates quality craft.

When you are following the best practices, you remove that human element (hyperbole, I know).

When you force certain rules, Jira tickets, and stand-ups, you increase predictability, but the cost is lower quality, lower happiness, and higher attrition.


SQLite's quality is due to the DO-178B compliance that has been achieved with "test harness 3" (TH3).

Dr. Hipp's efforts to perfect TH3 likely did lower his happiness, but all the Android users stopped reporting bugs.

"The 100% MCD tests, that’s called TH3. That’s proprietary. I had the idea that we would sell those tests to avionics manufacturers and make money that way. We’ve sold exactly zero copies of that so that didn’t really work out... We crashed Oracle, including commercial versions of Oracle. We crashed DB2. Anything we could get our hands on, we tried it and we managed to crash it... I was just getting so tired of this because with this sort of thing, it’s the old joke of, you get 95% of the functionality with the first 95% of your budget, and the last 5% on the second 95% of your budget. It’s kind of the same thing. It’s pretty easy to get up to 90 or 95% test coverage. Getting that last 5% is really, really hard and it took about a year for me to get there, but once we got to that point, we stopped getting bug reports from Android."


It's not that "best practices" or any of those things are what causes trouble; it's failing to recognize that they're just tools, and people will still be the ones doing the work. And people should never be treated as merely tools.

You can use all of those things to enable people to do things better and with less friction, but you also need to keep in mind that if a tool becomes more of a hindrance than a help, you should go looking for a new one.


I don't think you need "abnormally strong opinionatedness" or anything else special: all you need is a certain (long-term) dedication to the project and willingness to just put in the work.

Almost every project is an exercise in trade-offs; including every possible feature is almost always impossible, and never mind that it's the (usually small) group of core devs who need to actually maintain all those features.


I interpreted "opinionatedness" as meaning they have a clear definition of what sqlite is and isn't, including the vision of where it's headed. That would result in a team with very strong opinions about which changes and implementations are a good or bad fit for sqlite.

Can a project consistently make the right trade-offs without having strong opinions like that?


I see this information especially in light of the theory of constraints when working on a platform:

These devs provide a platform, and any change to a platform has a huge impact on its users. They have a plan they follow, and every project has layers. Constraints can be good, when and if applied correctly, as in this case.


Well, and while they don't use git, they do use Fossil. Their explanation for why doesn't make Fossil seem less modern.


Fossil is not less modern than Git, just less popular.

Under the hood it seems a lot like Git. The UI is more Hg-like. I disagree with D. Richard Hipp's dislike of rebasing, but he's entitled to that opinion, and a lot of people agree with him.

Calling Fossil "not modern" is a real turn-off. TFA seems to be arguing for buzzwords over substance.


Perhaps it should pick a better name :P


When the retelling of the history of virtualisation ignored everything before Xen, I questioned the value of the essay.

When it got to asserting that Fossil isn't modern, I discarded it. Fossil's a DVCS, but unlike git it chooses to integrate project tooling for things like bug management with the code repo. You can argue about whether you like the approach. But "not modern" is an absurd statement.


Actually leaves me seriously considering fossil rather than git for my next project.


Agree, after reading up about fossil. Except for one thing: I don't want the "closed team" culture that was intentionally baked into the tool.

When git replaced SVN, it was so empowering that I, as an individual, was able to use the full maintainer workflow without the blessing of a maintainer, privately by default.

Before git, we saved the output of "svn diff" into a .patch file for tracking personal changes. When submitting a patch, the maintainer had to write a commit message for you. With some luck, you even got credited. For sharing a sensible feature-branch, you had to become a maintainer with full access. This higher bar has advantages (it tends to create more full-access maintainers, for one). However, it sends this message: "Yes open source, but you are not one of us."

Yes, fossil has this great feature of showing "what was everyone up to after release X". I miss that in git. (Closest workaround: git fetch --all && gitk --all.) But if "everyone" means just the core devs, then I'm out.


It's really nice for small personal projects where you are not expecting outside contributions.

I use fossil for my personal projects and git for work.


Extremely well written and maintained and high quality as of now and having a plan to make sure that can continue in the future are sometimes entirely different things with needs that oppose each other.

A single person can develop and release extremely high quality software, and as long as it meets the needs of the users (it's not missing a lot of features that are taking a long time to deliver), a single person in absolute control and writing all the code is probably a benefit in keeping it high quality and with fewer bugs.

It may not follow that the same can be said a few years from now, or even a few months from now, since the bus factor of that project is one, and if "bus events" includes "I don't want to work on that anymore but nobody else knows it well at all" then for some users that's a problem (and for others not so much).

One situation isn't necessarily better or worse than the other (and it's probably something in between anyway); it really just depends on the project and the audience it's intended for. That audience might be somewhat self-selected by the style of development, though.


> Extremely well written and maintained and high quality as of now and having a plan to make sure that can continue in the future are sometimes entirely different things with needs that oppose each other.

I think in this area SQLite has most other open source software beat. SQLite is used in the Airbus A350 and has support contracts for the life of the airframe.

Elsewhere I have seen that they have support contracts through 2050.

That is far more future support than any other open source software, and likely most commercial software, can promise.


Fair points, although the bus factor is more like 3 or 4 for SQLite as far as I know. The question though is that if the entire team vanished from the face of the earth, what would the impact be? My guess is that either SQLite would be good enough as is for 99% of use cases and it wouldn't need much development apart from maybe some minor platform specific use cases or, if new functionality truly is needed, then it would be better for a new team to rewrite a similar program from scratch using SQLite as more of a POC than as a blueprint.


SQLite is supported until 2050, and will likely outlast many other platforms if this goal is attained.

I hope the bus factor is high enough to reach the goal.

"Every machine-code branch instruction is tested in both directions. Multiple times. On multiple platforms and with multiple compilers. This helps make the code robust for future migrations. The intense testing also means that new developers can make experimental enhancements to SQLite and, assuming legacy tests all pass, be reasonably sure that the enhancement does not break legacy."


I was speaking less to the SQLite situation specifically and more to the general idea of "Could it be that this is precisely because they don't use "modern tools" and accept outside contributions?" and how I think teams that are very small and not very accepting of outside help/influence might affect that.

To that end I purposefully compared extremes, and tried to allude to the fact that most situations fall between those extremes in some way. SQLite is more towards one end than the other, but it's obviously not a single developer releasing binaries to the world, which is about as far toward that extreme as you can go. The other end would probably be something like Debian.

That's not to say either of those situations have to be horrible at what the other excels at. That singular person could have set things in place such that all their code and the history of it gets released on their death, and Debian obviously has a working process for releasing a high quality distribution.


> It may not follow that the same can be said a few years from now, or even a few months from now, since the bus factor of that project is one, and if "bus events" includes "I don't want to work on that anymore but nobody else knows it well at all" then for some users that's a problem (and for others not so much).

You may have been speaking generally, and you'd be right, but specifically the bus factor of the SQLite team and the SQLite Consortium is larger than 1, and they could hire more team members if need be.

If and when the SQLite team is no longer able to keep the project moving forwards, then I think we'd see one or more forks or rewrites or competitors take over SQLite's market share.


Yes, I was speaking generally. Specific development models have advantages and disadvantages, but those can often be countered by actions outside the development model that limit those disadvantages. For example, an extremely open development model is likely prone to more bugs and quality problems, as well as a harder-to-read and harder-to-work-in code base. There are steps to combat that, such as style guides and automatic style converters, numerous reviewers who can go through code to find bugs and make suggestions for better quality, etc.

It's not so much that one model over the other will have those problems I've mentioned for each, as much as I think those are common things those projects should be cognizant of and take steps to combat.

As you noted elsewhere, it sounds like SQLite has done a lot to mitigate what I see as the inherent disadvantages of their development model, which is laudable. At the same time I doubt the average SQLite developer is as easily and quickly replaced as the average Linux kernel contributor, even if there are specific kernel developers whose loss would be hard to deal with. Sometimes all you can do is mitigate the harm of a problem, not remove it entirely.


> Extremely well written and maintained and high quality as of now and having a plan to make sure that can continue in the future are sometimes entirely different things with needs that oppose each other.

SQLite has both.


And so does the Linux kernel. There are numerous cases of successes and failures at both ends of that spectrum.

My point wasn't to imply that you can only pick one, but that in some cases choices to maximize one aspect can negatively affect the other if care is not taken, and depending on audience high quality released software is not the only thing under consideration in open source projects. Keeping the developer group small and being extremely selective of what outside code or ideas are allowed might bring benefits in quality, but if not carefully considered could yield long term problems that ultimately harm a project.


> Extremely well written and maintained and high quality as of now ...

As I see it, the quality comes from the fixed need, tight scope, and small professional team.

Fixed need -> Somebody always needs a database

Smaller scope -> Fewer features, less code, fewer bugs

Professional team -> No newbies around to break things

Now, to be sure, this kind of development model can only work for some projects.


Also, tooling focused on their dev flow.

Contrary to the article's implication that they use outdated project tooling, they don't(*). Their VCS isn't outdated; in some ways it is more modern than git. It's just focused on dev flows similar to theirs, to the point where typical git dev flows won't work well with it. Similarly, not using GitHub is the right decision for how the project is managed; GitHub is far too focused on open-contribution projects.

(*): You could argue they use "not modern" tools like C while doing modern things like fuzz testing. But the article's author clearly puts the focus on project tooling, highlighting git/GitHub, so reading the article as implying that their VCS is "not modern", i.e. outdated, i.e. bad, seems very reasonable IMHO.


> Extremely well written and maintained and high quality as of now and having a plan to make sure that can continue in the future are sometimes entirely different things with needs that oppose each other.

Software that's truly "extremely well written and maintained and high quality as of now" has the option of a plan like:

> "At the time of my death, it is my intention that the then-current versions of TEX and METAFONT be forever left unchanged, except that the final version numbers to be reported in the “banner” lines of the programs should become [pi and e] respectively. From that moment on, all “bugs” will be permanent “features.”"

If your software needs perpetual maintenance, that's a good sign that it's probably not that high quality.


> If your software needs perpetual maintenance, that's a good sign that it's probably not that high quality

The problem in a lot of cases is not the software per se, but changing environments. Windows upholds backwards compatibility to a ridiculous degree (you can still run a lot of Win95 era games or business software on Win10), macOS tends to do major overhauls of subsystems every five-ish years that require sometimes substantial changes (e.g. they completely killed off 32-bit app support), but the Linux space is hell.

Anything that needs special kernel modules has no guarantees at all unless the driver is mainlined into the official kernel tree (which can be ridiculously hard to achieve). Userspace is bleh (if you're willing to stick to CLI and statically linking everything sans libc) to horrible (for anything involving GUI or heaven forbid games, or when linking to other libraries dynamically).

The worst of all offenders however is the entire NodeJS environment. The words "backwards compatibility" simply do not exist in that world, so if you want even a chance at keeping up with security updates you have an awful lot of churn work simply because stuff breaks left and right at each "npm update".


When NetBSD imported sqlite and sqlite3 into their base system that was a signal to me that SQLite is no-nonsense and reliable. That was many years ago, around 2011 I think. Not sure why SQLite is getting all the attention on HN lately. Usually more attention means more pressure to adopt so-called "modern" practices and other BS.

SQLite is interesting to me because like djb's software its author is not interested in copyrights.^1

1. ^2

Here is how Hipp abandons his copyrights:

The author disclaims copyright to this source code. In place of a legal notice, here is a blessing:

   May you do good and not evil.
   May you find forgiveness for yourself and forgive others.
   May you share freely, never taking more than you give.
Apparently this is not enough to convince some folks they can use the code (maybe they really are doing evil), and so there is also a strange set of "reassurances" on the website.

It seems that is still not enough and so there is actually an option to pay for a "license" to software that is in the public domain. Don't laugh.

2. I seem to recall an open source OS project or two making a fuss about djb's software being public domain but perhaps I am remembering incorrectly.


Public Domain doesn't exist in some countries, so people/companies from those countries want an assurance that their country's laws understands.


This part also rang some alarm bells for me. It makes me think the author is unable to see outside his bubble, and that feeling is only reinforced by the comments about Rust and the CoC in the Readme.


Yeah the guy clearly has some serious NIH syndrome.


I think you are right.

Their rationale for not using git seems reasonable to me.


I'm all for minimizing friction for contributors, but when I read things like "The few core developers they have do not work with modern tools like git and collaboration tools like Github", I wonder if the collaboration of someone who refuses to send a patch to a mailing list (because it is not what they are used to and they don't care to learn how) is really worth considering. I mean: someone who is not willing to move a few millimeters out of their comfort zone to make a contribution is, very likely, someone who has very little commitment or will try to force their opinions and methods onto others.


The irony is that sqlite uses fossil, which is more modern than git.

But really, I agree. The elephant in the room is that any time someone uses the term "old" or "not modern enough" or "legacy", it means they have a system they don't understand that they want to get rid of. Software does not "wear out".


SQLite does not accept any patches from anyone.


It does but you have to go through the maintainers and they have to be in line with the core principles of SQLite and have the necessary code quality etc.

I.e., it's hard to the point where you can just say it's impossible for most people.

But what the author of the article fails to mention is that many of the things libsql wants to add to sqlite are in direct conflict with the core principles of sqlite.

E.g. SQLite: max portability by depending on an _extremely_ small set of standard C functions. libSQL: let's add io_uring, a Linux-specific piece of functionality more complex than all the standard C functions SQLite depends on put together.

E.g. SQLite: strongly focused on simplicity and avoidance of race conditions by having a serialized & snapshot isolation level without fork-the-world semantics (i.e. a globally exclusive write lock). libSQL: let's make it distributed (which is fundamentally in conflict with that transaction model, if you want to make it work well).

E.g. SQLite: small, compact code base. libSQL: let's include a WASM runtime (which is also in conflict with max portability and with simplicity/project focus).


Personally I find Fossil a really compelling VCS. I plan to use it for my next personal project.



If you want to say something, say it. "I don't like that my contributions aren't accepted, so I'm forking the codebase". That's going to generate some discussion, but it's fine. Public domain and all.


"Look what happened to qemu; the same thing will happen to sqlite (and I'm contributing to the problem by forking it)." One can't say "no contributions led to fragmentation" while _at the same time_ contributing to fragmentation by making a hard fork!


> "However, edge computing also means that your code will be running in many geographical locations, as close as possible to the user for the better possible latency. Which means that the data that hits a single SQLite instance now needs to be replicated to all the others."

Then replicate away. Leave that stuff out of SQLite. If that's a really important use-case, go use couchdb or something similar.


It's wonderful that there is a fork of SQLite. It's good to see new ideas.

Is Airbus going to use it in the A350? No.

Why not? It's not compliant with DO-178B, because it has not been confirmed correct with "Test Harness 3" (TH3).


people are not annoyed by anyone forking SQLite

they are annoyed by people misrepresenting facts in a subtle manipulative way to make their fork and the reasons around it look like something it isn't

what the article intentionally or unintentionally conveys in its tone and formulation is: "sqlite is badly outdated beyond saving and needs to be replaced"

what is actually the case: "we want something like SQLite but with some core changes incompatible with SQLite's core principles, which should be API-compatible enough to allow reusing a lot of DB tooling".


There is something distasteful about this announcement but I can’t quite pinpoint it. Maybe it feels like a bait and switch announcing their own fork after the whole qemu commentary, or the wording about the code of conduct. I don’t know.

One of the greatest things about SQLite is how easy it is to embed in random targets/build systems/languages: a .c and .h and you are all set. Moving away from that model will turn off many people away so I hope they retain that model.


Trust your gut. It's a public shaming campaign disguised as a history lesson. Glauber Costa is trying to seem welcoming in one breath and, in another breath, is sniping at the same people he claims to want to join.

> We [Glauber Costa] take our code of conduct seriously, and unlike SQLite, we do not substitute it with an unclear alternative. We strive to foster a community that values diversity, equity, and inclusion. We encourage others to speak up if they feel uncomfortable.

With zero new code to justify this fork, this article is little more than a silly power trip by a flailing startup.


They do have clear goals for what to add, and their goals make some sense for some use cases. But pretty much all of them directly conflict with at least one, and sometimes multiple, SQLite core principles...

What the author wants, as far as I can tell, is an embedded distributed (edge) database with a C API and enough SQL compatibility with SQLite that you can use it with existing tooling (e.g. by linking against it instead of sqlite), but not necessarily as a drop-in replacement (different transaction semantics), and which doesn't need the same degree of portability as sqlite.

Though that is not quite what the author ends up communicating in the article. I wonder to what degree the formulations are intentionally manipulative versus accidentally badly worded.


Regardless of intention, the inability of the author to adequately explain why a fork was necessary, apart from some hand wavy arguments that some other solutions were inadequate, does not inspire confidence. I find it very hard to be generous with the author given the way that he paints himself as a visionary while casually dismissing the work and perspective of the people who actually did the real visionary work of seeding the technologies that he has hitched his wagon to.


The README has "Use Rust for new features" in it, so I doubt it will retain the same simplicity.

As much as I like Rust, and despite mixing Rust and C++ being the clear path forward for Mozilla, I'm not so sure it's the winning approach here. Part of the beauty of SQLite is the single .c/.h thing.

That said, I can see that maybe they're trying to expand the use cases of libsql compared to SQLite. That seems to be the whole idea with adding support for e.g. distributed databases, which is something SQLite just doesn't bother with at all, and would introduce a ton of external interfacing regarding networking. SQLite uses only the standard C library, and even then barely scratches its surface.

Also, SQLite's VFS API can do a lot there already. For example, I remember seeing SQLite compiled to WASM, using a VFS that downloads a remote database on S3 using HTTP Range requests. (I don't think it supported writing to the database, but it was still a really cool way of allowing complex client-side querying of a dataset that is static but too big to transfer).
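The trick behind those HTTP-range VFS demos is simple: SQLite reads the database file in fixed-size pages, so each page read maps onto one byte range. A minimal sketch of that mapping, with the caveat that the helper name is hypothetical and a real VFS would read the actual page size from the database header rather than assuming 4096:

```python
def page_range_header(page_number: int, page_size: int = 4096) -> str:
    """Build the HTTP Range header that fetches a single SQLite page.

    Pages are 1-indexed here, matching SQLite's own page numbering.
    """
    start = (page_number - 1) * page_size
    return f"bytes={start}-{start + page_size - 1}"

# A range-request VFS turns each page read into one such ranged GET
# against the remote file (e.g. an object on S3).
print(page_range_header(1))  # bytes=0-4095
print(page_range_header(3))  # bytes=8192-12287
```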


> Part of the beauty of SQLite is the single .c/.h thing.

Noting that it's not _developed_ that way. Its many files are combined by the build process to produce the amalgamation build. In a mixed-language project (Rust for some parts, C for others) a single-file distribution literally won't be possible, and it will require as many toolchains to build as there are languages involved.


> a single-file distribution literally won't be possible


You could compile the Rust into Wasm, then the Wasm into C.

Firefox did this last year [1], so the tools exist and it's neither totally impossible nor totally stupid.

The penalty is less than JITting or interpreting Wasm.

wasm2c is the tool they apparently used in 2021. [2]



> wasm2c takes a WebAssembly module and produces an equivalent C source and header.


I wonder if mrustc [0] would be sufficient to retain the amalgamated build even if Rust were adopted. The regular Rust tool chain would be needed for development still, but if simply depending on the library the Rust components could be transpiled to C…

[0] -


TH3 requires the components to be separate for testing.

But that is not how it is meant to be used.


> Also, SQLite's VFS API can do a lot there already. For example, I remember seeing SQLite compiled to WASM, using a VFS that downloads a remote database on S3 using HTTP Range requests. (I don't think it supported writing to the database, but it was still a really cool way of allowing complex client-side querying of a dataset that is static but too big to transfer).



It also feels gross to me, but the only thing I can put my finger on is citing webshit as the reason to completely change the direction of a well-loved project.


But they aren't compelling the SQLite devs to do anything.

If a fork catches on, then it's what people wanted. If it doesn't catch on, then let it fail.


yup, but more likely their changes will make it unusable for a bunch of SQLite use cases. So if it catches on it will probably exist in parallel to SQLite, and once it's clear it won't replace SQLite, it should increasingly diverge by focusing on its core target audience.


> There is something distasteful about this announcement but I can’t quite pinpoint it.

I think it's a general lack of gratitude. The author treats sqlite as something to be taken without even saying thank you to drh. All he has to say is: that's a nice project you have there, it would be a shame if something happened to it. Like to qemu.


> The author considers sqlite as something to be taken without even saying thank you to drh.

Well, it is in the public domain. Respecting and thanking people is of course a very nice thing to do but the fact is there are zero restrictions on what can be done with this software.

> that's a nice project you have there, it would be shame if something happened to it

That's always possible in all free and open source software development. Anyone can show up and just start working harder than whoever's currently in charge. Corporations can show up with paid full time developers and completely displace a project's leadership. Eventually the fork will accumulate so many improvements it will become the de facto upstream.

It remains to be seen if this is what will happen to SQLite and this new libSQL. Who knows, right? I don't expect SQLite to go anywhere though.


Just a note that LiteFS isn't a distributed filesystem; despite the name, it's not a filesystem at all, but rather just a filesystem proxy. It does essentially the same thing that the VFS layer in SQLite itself does, but it does it with a FUSE filesystem, so you don't have to change SQLite's configuration in your application. As for the "distributed" part of it: LiteFS has a single-writer multi-reader architecture; it's the same strategy you'd use to scale out Postgres.

It's a little ironic to see LiteFS brought up in relation to a SQLite fork, since the premise of LiteFS is not changing the SQLite code you link into your application. Much of the work in LiteFS is about cooperating with standard SQLite.

At any rate: it seems somewhat unlikely that a hard fork of SQLite is going to succeed, in that part of the reason so many teams are building SQLite tooling is the trust they have in the current SQLite team.


>LiteFS has a single-writer multi-reader architecture; it's the same strategy you'd use to scale out Postgres

I'm curious about how such a strategy deals with applications that update a value and then read the updated value back to the user, since there might be a replication delay between the write (that goes to the master) and the read (that comes from the closest replica). Do you make optimistic updates on the client or do you club (write-then-read) operations into a transaction?


LiteFS author here. It depends on the application. You can check the replication position on the replicas and simply wait until the replica catches up. That requires maintaining that position on the client though.

An easier approach that works for a lot of apps is to simply have the client read from the primary for a period of time (e.g. 5 seconds) after they issue a write. That's easy to implement with a cookie and it's typically "good enough" consistency for many read-heavy applications.
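That cookie-based routing can be sketched in a few lines. Everything here is illustrative rather than LiteFS API: the function name, the cookie being a plain timestamp, and the 5-second window are all assumptions for the sketch.

```python
import time

STICKY_SECONDS = 5.0  # the "period of time" after a write, per the comment above

def choose_backend(last_write_at, now=None):
    """Route a read to the primary shortly after this client's last write,
    otherwise to the nearest replica.

    last_write_at is the cookie value: a timestamp set when the client
    issued a write, or None if it never wrote.
    """
    now = time.time() if now is None else now
    if last_write_at is not None and now - last_write_at < STICKY_SECONDS:
        return "primary"
    return "replica"

print(choose_backend(None, now=100.0))  # replica: this client never wrote
print(choose_backend(98.0, now=100.0))  # primary: wrote 2 seconds ago
print(choose_backend(90.0, now=100.0))  # replica: sticky window expired
```

The consistency guarantee is only "good enough": a write followed by a read more than 5 seconds later could still hit a badly lagging replica, which is why the replication-position check is the stricter option.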


This is an application-dependent (and frequently data-type-dependent) decision. Some data is safe to return on acceptance; other data needs to wait for acknowledged writes.

The most naive solution is to just make all writes slow but acknowledged.


Has sqlite been forked before? This is the first true fork and re-license attempt that's caught my eye. The others I've seen are "ports" and "modifications", but always pointing people back upstream.

It's possible that the appetite for a SQLite fork is there, but nobody has provided it.


> Has sqlite been forked before?

There is a fork called SQLCipher with native encryption. Probably lots of others too, but this one is the one that I remember.


I do remember the author of LMDB [0] porting some parts of sqlite to use lmdb instead, and then talking about the results. A quick googling doesn't seem to give me a result though.

[0] -


TFA lists forks. rqlite comes to mind.


rqlite[1] author here. I wouldn't consider rqlite a fork in any sense, just so we're clear. That rqlite uses plain vanilla SQLite is one of its key advantages IMHO. Users never have to worry that they're not running real SQLite source.

That said, there are some things that would be much easier to do with some changes to the SQLite source. But I think the message that rqlite sits on top of pure SQLite is still the right one to send.



Is rqlite a fork? It strikes me more as an application that uses SQLite as a dependency, rather than a fork of SQLite itself.


> LiteFS has a single-writer multi-reader architecture;

Note that SQLite transactions are also fundamentally single-writer, multiple-reader (isolation level serialized, with snapshot isolation and no world-forking).

So if you want to make SQLite distributed, you end up with not just a single writable replica but a global write log at the transaction level, which is very bad for performance in many use cases. (In PostgreSQL, for example, serializable snapshot isolation means "forking the world" for parallel write transactions: if multiple transactions have no overlap, they can complete in parallel without a problem, in parallel but on the same single write-enabled replica. That is good enough for quite a few applications and can still deliver decent throughput.)

As far as I can tell, many use cases that could profit from a distributed SQLite would not really like a per-transaction global lock, but would be okay with something like what Postgres does. Though there are always some exceptions.
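The single-writer behavior described above is easy to observe with plain SQLite. A minimal sketch using Python's standard sqlite3 module (not LiteFS or any fork):

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file; timeout=0 means
# "fail immediately instead of waiting for the write lock".
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn1 = sqlite3.connect(path, isolation_level=None, timeout=0)
conn2 = sqlite3.connect(path, isolation_level=None, timeout=0)
conn1.execute("CREATE TABLE t (x)")

conn1.execute("BEGIN IMMEDIATE")       # first writer takes the write lock
conn1.execute("INSERT INTO t VALUES (1)")
try:
    conn2.execute("BEGIN IMMEDIATE")   # second writer is refused
except sqlite3.OperationalError as e:
    print("second writer blocked:", e)  # message: "database is locked"
conn1.execute("COMMIT")

conn2.execute("BEGIN IMMEDIATE")       # lock is free again, so this succeeds
conn2.execute("COMMIT")
```

Any number of readers can run concurrently (especially in WAL mode), but write transactions are strictly serialized against each other, which is the property the parent comment is pointing at.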


SQLite only works as a concept because it is not networked. Nobody truly understands the vast and unsolvable problem that is random shit going wrong within the communication of an application over vast distances. SQLite works great because it rejects the dogma that having one piece of software deal with all of that is in any way a good idea.

Back your dinky microservice with SQLite, run multiple copies, have them talk to each other and fumble about trying to get consensus over the data they contain in a very loose way. That will be much, much less difficult than managing a distributed decentralized database (I speak from experience). It's good enough for 90% of cases.

Remember P2P applications? That was basically the same thing. A single process running on thousands of computers with their own independent storage, shuffling around information about other nodes and advertising searches until two nodes "found each other" and shared their data (aw, love at first byte!). It's not great, but it works, and is a lot less trouble than a real distributed database.


Amen. The first rule of distributed objects is "don't distribute your objects". It's much easier to reason about a bunch of different actors, each with their own copies of objects, trying to converge on consensus, than it is to actually have a distributed database that obeys ACID and does all the databasey things.

A subtle distinction, but an important one.


A lot of words: a manifesto decrying SQLite as somehow stifling innovation, and a battle cry for all to join … but backed by very little code.

SQLite is public domain. Anyone is free to claim it as their work and do anything they wish.

No need to disparage. You never were prevented from doing what you wanted to do. Take the code, improve it and if it’s any good it may get recognition.

The fact is sqlite is good enough for 99.9% of cases. Everything else is fighting for the niche, and the poster seems to be simply pissed that sqlite has the mindshare even without having their favorite features.


> Rejoin core SQLite if its policy changes

> We are strong believers in open source that is also open to community contributions. If and when SQLite changes its policy to accept contributions, we will gladly merge our work back into the core product and continue in that space.

If libSQL is going Apache-2.0¹ rather than public domain, that seems extremely unlikely. The public domain nature of SQLite is rather important for its deployments. And in fact licensing is a rather important part of why SQLite is closed to contributions (though with some administrative overhead it could be opened to contributions from people from some countries). The fact that this announcement and project documentation seems to make absolutely no mention of the licensing situation perplexes me.


¹ As they state in the text; but that alone is entirely insufficient, not constituting proper application of the license. I find this a bad sign, in stark contrast to the meticulous care SQLite has taken with copyright matters.


Choosing a license that is not as good as public domain is a way to preclude "rejoining core SQLite".


> do not work with modern tools like git

That is VERY misleading, and intentionally so; it doesn't put the author in a good light.

Sure, SQLite isn't open for contribution, but their tooling (VCS, issue tracking) is in no way "less modern" than git+GitHub; it is just focused on a different approach.

Putting a widely used open-source but not open-contribution project on GitHub is a nightmare. Similarly, if you have no open contribution and only a small trusted team, you can use change flows that aren't viable otherwise, and their VCS is designed for exactly such flows, for which git carries quite a bunch of unnecessary overhead and foot guns.

Lastly, for many projects, choosing not to accept contributions is a very sane approach. Properly maintaining contributions for a widely used project is a lot of work, basically forcing you to delegate work to people you hardly know. If you don't have time for this, or don't feel good trusting people you hardly know, then open contribution can easily become a nightmare. The fact that you (AFAIK) have to be very disciplined to write C doesn't make this any better.

Putting this aside, another thing I found off-putting in the article is that the author fails to mention that basically all of the things he wants to have in SQLite are in direct conflict with core design principles of SQLite. So they need a fork anyway, independent of whether or not SQLite accepts contributions; their changes wouldn't be accepted regardless...

Though without question, for their use cases a new database is needed: one that is similar to SQLite but also, in subtle yet fundamental ways, very different. Starting with a fork doesn't seem like a bad idea. But they really should not say or imply that it's SQLite. It's not SQLite; it's just very similar and was once forked off of SQLite.


By all means, fork the code base for your specific use case, but I'll trust drh and his team over some rando any day.

sqlite isn't the most widely used database by accident - it's installed on practically every mobile phone on earth. It's the result of careful and deliberate design with millions of tests and fuzzing. The sqlite team is very responsive to its users' needs, but some features just don't make the cut. That's a good thing - to keep the library small, fast and reliable.


Glauber is definitely not "some rando". He's a very skilled programmer who has worked on many things (including QEMU).


He will have to be extremely skilled to develop and pass a DO-178B test suite.

SQLite does have a "moat" of sorts - it must always be avionics-reliable code, until 2050 or beyond.

Any fork of popularity will be constantly rebasing to the head.


Has he produced any code for the libsql project yet, or only FUD blog posts?


rwmj! Now here's a name I don't hear in forever! Hope you're doing great, man! And I won't take offense, I'm pretty rando, though =p


The headline is needlessly deceiving. The author is making the case for, and subsequently announcing, a hard fork of SQLite for the purposes of meeting the needs of edge computing.


It seems OP wants to "shame" (*) the sqlite devs into supporting this fork. "You're going to miss the boat! History repeats itself!"

(*) the modern word for guilt-tripping.


That's the way it looks to me. All of the SQLite code is already given away for free. What more is there to give? The trademark (owned by Hwaci according to the SQLite Consortium Agreement.) He wants to use politically charged shame tactics to get his hands on not just the SQLite code but the SQLite brand as well.

Why does he want the trademark? Because he knows that his complaints are fringe and few people will give a damn about his fork, unless he has the trademark and can call his fork the official SQLite.


I don't want the SQLite trademark.


Things like rqlite are great and all, but I'm not sure if that's something that should be included in SQLite no matter which development model it uses.

Looking at the libSQL page, it seems they want SQLite to go in quite a different direction in general. That's all fine, but at that point you're working on other problems than what SQLite is intended to solve. People want software to solve every possible problem under the sun, but SQLite is a project with a fairly specific and narrow scope, which is intentional.

The "trick" is to enable things to be extended and patched easily, if need be, so people can build extensions and derivate projects if they want. I don't know to what degree SQLite allows this – I'm not familiar with the SQLite source code.

At any rate, for this to succeed you need at least one person actually writing code for it. Thus far, all I see in the repo are some non-code changes (add license, CoC, Makefile tweaks, etc.). I don't know what the exact plans are, but you need more than a GitHub repo with "we accept contributions" and waiting for people to submit them, because most people/companies won't. Almost all projects live or die by their core contributors, not the community.


SQLite allows people to write extensions (C based), and load them into the running database.

There are some popular ones around like Libspatialite, which provides GIS functions roughly equivalent to PostGIS.

There are a bunch of others too. And yeah, I agree with you that if things could be done via extensions rather than forking SQLite... that would be better. :)
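For illustration, both flavors of extensibility are reachable from Python's sqlite3 module: loading a compiled C extension, and registering an application-defined SQL function. The `mod_spatialite` name below is illustrative, and the load call is left commented since it requires the shared library to be installed; this is a sketch, not a complete setup guide.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Loading a compiled C extension (e.g. Libspatialite) would look like this;
# commented out because the shared library must be installed separately:
# conn.enable_load_extension(True)
# conn.load_extension("mod_spatialite")

# Applications can also register their own SQL functions without writing C:
conn.create_function("reverse", 1, lambda s: s[::-1])
print(conn.execute("SELECT reverse('sqlite')").fetchone()[0])  # → etilqs
```

Note that many builds (including some Python distributions) compile out extension loading entirely, so `enable_load_extension` may raise; that's part of why distributing C extensions is harder than it sounds.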


SQLite's maintainers have a choice. They can spend their time coding, or they can spend their time explaining, over and over again, why they don't want to do a distributed database.

I can see why they don't take contributions.


They have changed their minds on features before. E.g., FULL OUTER JOIN. They should and almost certainly do feel free to change their minds on features relating to distributed DBs.