jamest
[Firebase founder] The thing I'm excited about w/Instant is the quad-fecta of offline + real-time + relational queries + open source. The amount of requests we had for relational queries was off-the-charts (and is a hard engineering problem), and, while the Firebase clients are OSS, I failed to open source a reference backend (a longer story).
Good luck, Joe, Stopa and team!
ashconnor
I always assumed that an architectural decision had prevented relational queries in Firebase.
It was jarring to find out that indexes are required for every combination of filters your app applies, but then you quickly realize that Firebase solves a particular problem and you were attempting to shoehorn it into a problem space better solved by something like Supabase.
It's not too dissimilar to DynamoDB vs RDB.
randomdata
> I always assumed that an architectural decision had prevented relational queries in Firebase.
Seems the biggest problem is that Firebase doesn't have relations. How can you query that which does not exist?
I'm guessing what they really want is SQL? Once upon a time when I was stuck on a Firebase project I built a SQL (subset) engine for Firebase to gain that myself, so I expect that is it.
unsupp0rted
Building a logistics app, I wish I could query in Firebase for items that don’t have a “shipped” field.
But I can’t.
999900000999
Thanks for creating Firebase!
It's really the definition of a managed database/datastore.
Do you see InstantDB as a drop-in replacement?
To be honest I don't want to have to worry about my backend. I want a place to effectively drop JSON docs and retract them later.
This is more than enough for a hobbyist project, though I imagine things might not work as well at scale.
stopachka
For what it's worth, we designed Instant with this in mind. Schema is optional, and you can save JSON data into a column if you like.
If you wanted to store documents, you could write:
```
useQuery({docs: {}}) // get documents
transact(tx.docs[docId].update({someKey: someValue})); // update keys in a doc
transact(tx.docs[docId].delete()) // delete the doc
```
999900000999
Thanks for the response.
2 questions.
How hard is it to swap out Firebase for Instant? I've had an amazing time with Firebase, but I sort of want to switch to a completely local solution.
I have a small lyric video generator, and while I don't care about my own songs potentially leaking, I would never want to take responsibility for someone else's data. I basically use Firebase for the lyrics after I transcribe them.
Second, do you offer your own auth, or do you just integrate with other solutions?
warden_2003
If you only need simple dropping and collecting back, maybe you should consider AWS S3 or Supabase storage.
999900000999
Ohh, I still need a database, I just need the JSON doc format.
ElFitz
Or a key-value store, if the size is limited and speed is essential.
buggy6257
This is an aside but “trifecta but with four” actually has an awesome name: “Superfecta”!
djeastm
Tetrafecta would be cooler
aitchnyu
Cursory googling says "tetra" is Greek and "perfect" is Latin, so it's a bastard word like "erogenous" or "television".
SOLAR_FIELDS
I would probably avoid naming Firebase alternatives with a prefix like “Super” at this time.
hyperbrainer
I am dumb. Why? Was there some failed/controversial thing?
wccrawford
https://en.wikipedia.org/wiki/Superfecta
Sounds like someone made up the name to just sound better than trifecta. It's marketing speak.
Also, as the link says, it has been used to mean more than four. And other languages use their own equivalent of "quadfecta" instead.
Plus, I knew exactly what "quadfecta" meant, but would have no idea about "superfecta".
yard2010
"Supafecta"
robertlagrant
You probably heard this a million times but I still remember trying that simple firebase demo of draw in one box; see the results in another and being amazed. That was one of my pushes out of boring enterprise software death by configuration and into software creation based on modern OSS products.
650REDHAIR
Was pretty neat to see your investment/involvement!
Made me feel quite old that Firebase is no longer "modern" though...
Ozzie_osman
Awesome to see this launch and to see James Tamplin backing this project.
nezaj
Thank you James!
bobbywilson0
If we only had doSQL() for everything.
sibeliuss
One bit of feedback: it's always appreciated when code examples on websites are complete. Your example isn't complete: where's the `transact` import coming from, or `useQuery`? Little details like that go a long way as your product scales out to a wider user base.
stopachka
Thank you for the feedback, this makes sense!
I updated the example to include the imports:
```
import { init, tx, id } from "@instantdb/react";

const db = init({ appId: process.env.NEXT_PUBLIC_APP_ID });

function Chat() {
  // 1. Read
  const { isLoading, error, data } = db.useQuery({
    messages: {},
  });

  // 2. Write
  const addMessage = (message) => {
    db.transact(tx.messages[id()].update(message));
  };

  // 3. Render!
  return <UI data={data} onAdd={addMessage} />;
}
```
What do you think?
sibeliuss
Much better!
android521
Yes. This gives users the vibe of “this is obvious; if you don’t know it, you are dumb”.
ElFitz
Or that the writers were oblivious, and the documentation shouldn’t be relied upon.
iosguyryan
I usually read it as: you have so few users, or care so little, that I shouldn't trust you, because otherwise you'd have heard this complaint and fixed it. But in this case it was fixed up quickly, which is great.
lelo_tp
lol i was wondering the same thing
krehwell
this is my question too, lol
codersfocus
For those looking at options for the offline-first model, I settled on PowerSync. Runner-up was WatermelonDB (don't let the name fool you). ElectricSQL is still too immature; they announced a rewrite this month. CouchDB / PouchDB aren't really up to date anymore.
Unfortunately this area is still immature, and there aren't really great options but PowerSync was the least bad. I'll probably pair it with Supabase for the backend.
ochiba
Co-founder of PowerSync here. Would love to hear what you would like to see improved in PowerSync :) Thanks!
codersfocus
The docs for React Native. I had to piece together how to do things from the code examples and a YouTube video tutorial, because the docs are pretty sparse and missing a cohesive tutorial that could get me set up with CRUD locally. Plus, the initial setup process from the npm page was itself notable for how much was required.
I haven't attempted to setup the backend yet, so that's my feedback so far.
ochiba
Thanks, appreciate the feedback.
satvikpendem
ElectricSQL before their announced rewrite worked fully offline and could sync when the clients became online again. Now, that functionality with their rewrite is somewhat removed, as they expect you to handle clientside writes by yourself, which is what I believe PowerSync does as well, am I correct in that understanding? If I wanted a fully offline clientside database that could then sync to all the other clients when online, what would I do? I am looking for this in the context of a Flutter app, for reference.
ochiba
> Now, that functionality with their rewrite is somewhat removed, as they expect you to handle clientside writes by yourself, which is what I believe PowerSync does as well, am I correct in that understanding?
Yes, that is correct.
> If I wanted a fully offline clientside database that could then sync to all the other clients when online, what would I do? I am looking for this in the context of a Flutter app, for reference.
This is what PowerSync provides by default. If you haven't done so yet, I would suggest starting with our Flutter client SDK docs and example apps — and feel free to ask on our Discord if you have any questions or run into any issues :)
aCoreyJ
I use TinyBase for the client side store, it can sync with pretty much all the technologies people are talking about here
zarathustreal
> CouchDB / PouchDB aren’t really up to date anymore.
Source? I’ve been using CouchDB as my game world DB for years, works fine for me?
WorldMaker
The biggest "source" of vibes that CouchDB/PouchDB is "dead/maintenance mode" is the corporate ecosystem/contributors around it:
- Couchbase has been increasingly moving away from CouchDB compatibility
- Cloudant was one of the more active contributors until it got eaten by IBM and put into a maintenance spiral (what mother can love what IBM "Blue Mix" has done to Cloudant?)
- In general the still growing number of document DBs that are Mongo-compatible but not CouchDB-compatible (AWS and Azure document DB offerings, for instance)
In open source, the winds of commercial favor aren't always reflective of open source contributor passion, but there too the pace of PouchDB seemed to greatly slow a few years ago, and it lost the interest of some major contributors. CouchDB itself seems to have gotten hugely stuck in a bunch of Apache committees over the design of the next semver-major version, with a ton of huge breaking changes that don't really seem to solve user problems so much as fight an architecture battle under the hood: partly a political war between Erlang and other programming languages for superiority, and partly Apache trying to consolidate core functionality with some of the other database-like engines in their ~~graveyard~~ custodianship.
swalsh
I'm wary of stuff like this. It's probably really useful for rapid iteration... but what a maintenance nightmare after 10 years, when your schema has evolved 100 times but you have existing customers in various states of completeness. I avoided Firebase when it came out for this reason. I had a few bad experiences maintaining applications built on top of Mongo that made it to production. It was a nightmare.
nezaj
We hear you on the pain for evolving NoSQL schemas. [1]
For what it's worth, we're built on top of Aurora and support relations so evolution should be much easier!
[1] https://mdp.github.io/2017/10/29/prototyping-in-the-age-of-n...
EasyMark
This is why I’ve always stayed well behind the bleeding edge but still within earshot in case anything comes along that sounds like it’s of interest to me. I usually code for work and not for pleasure, although I do a little web programming for friends, but I still use jQuery and TypeScript for that. I think the only “new” thing I use is Tailwind, which is a bit of a game changer for what I like to do. I never liked CSS, but it worked well enough for my needs.
mixmastamyk
Did they say schemas aren’t supported, or is that implied by the firebase label?
nezaj
We support schemas! You can build them in the GUI or manage them as code [1]
blixt
I saw the reference to “apps like Figma”, and as one of the people who worked on Framer’s database (Framer is also a canvas-based app, also local + multiplayer), I find it hard to imagine how to effectively synchronize canvas data with a relational database like Postgres. Users will frequently work on thousands of nodes in parallel and perform dragging updates at 60 FPS that should be propagated to other clients frequently.
Does Instant have a way to merge many frequent updates into fewer Postgres transactions while maintaining high frequency for multiplayer?
Regardless this is super cool for so many other things where you’re modifying more regular app data. Apps often have bugs when attempting to synchronize data across multiple endpoints and tend to drift over time when data mutation logic is spread across the code base. Just being able to treat the data as one big object usually helps even if it seems to go against some principles (like microservices but don’t get me started on why that fails more often than not due to the discipline it requires).
shunia_huang
Good point on the update frequency. I believe batching the requests and responses is a must for any lib/service of this type to work in a production environment, and a performance report/comparison is still needed for people to judge whether it can support their business model.
On synchronizing the data, though, I think it's not about the database but about the data types designed to sync the data? I worked on multiplayer canvas games and we didn't really care much about relational vs. document DBs; both worked fine. I would love to know what the differences and challenges are.
nezaj
We do indeed batch frequent updates! Still many opportunities for improvements there, but we have a working demo of a team-oriented tldraw [1]
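To make the coalescing idea concrete, here is a hypothetical sketch (not Instant's actual batching code): updates to the same node inside a flush window merge field-by-field, so one write carries each node's final state.

```javascript
// Hypothetical update coalescer. `flush` receives a Map of nodeId -> merged
// fields; a real client would call flushNow() from a timer or a
// requestAnimationFrame tick rather than manually.
function createBatcher(flush) {
  let pending = new Map();

  return {
    // Record an update; repeated updates to the same node merge,
    // last write winning per field.
    update(nodeId, fields) {
      pending.set(nodeId, { ...(pending.get(nodeId) || {}), ...fields });
    },
    // Send everything accumulated so far as a single batch.
    flushNow() {
      if (pending.size === 0) return;
      const batch = pending;
      pending = new Map();
      flush(batch);
    },
  };
}
```

With this shape, a hundred 60 FPS drag events on one node collapse into a single flushed write containing only that node's final position.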
stopachka
We would love to hear more about the architecture you used at Framer. Would you be up for a coffee? My email is stopa@instantdb.com
Palmik
Would love to hear how you went about doing things at Framer!
lewisl9029
Congrats on the launch! :)
Apparently I signed up for Instant previously but completely forgot about it. Only realized I had an account when I went to the dashboard to find myself still logged in. I dug up the sign up email and apparently I signed up back in 2022, so some kind of default invalidation period on your auth tokens would definitely make me a bit more comfortable.
Regardless, I'm still as excited about the idea of a client-side, offline-first, realtime syncing db as ever, especially now that the space has really been picking up steam with new entrants showing up every few weeks.
One thing I was curious about is how well the system currently supports users with multiple emails? GitHub popularized this pattern, and these days it's pretty much table stakes in the dev tools space to be able to sign in once and use the same account across personal accounts and orgs associated with different emails.
Looking at the docs I'm getting the sense that there might be an assumption of 1 email per user in the user model currently. Is that correct? If so, any plans to evolve the model to become more flexible?
stopachka
Noted about the refresh tokens, thank you!
> One thing I was curious about is how well the system currently supports users with multiple emails? GitHub popularized this pattern, and these days it's pretty much table stakes in the dev tools space to be able to sign in once and use the same account across personal accounts and orgs associated with different emails
Right now there is an assumption of 1 `user` object per email. You could create an entity like `workspace` inside Instant, and tie multiple users together this way for now.
However, making the `user` support multiple identities, and creating recipes for common data models (like workspaces) is on the near-term roadmap.
coffeemug
Congrats on the launch! I think Firebase was started in 2011, and it's incredible that 13 years later the problem is still unsolved in an open way. We took a shot at this at RethinkDB but fell short. If I were doing this again today, Instant is how I would build it. Rooting for you!
stopachka
I really appreciate your message Slava. Your essays were really influential for us.
antidnan
I've been using Instant for about 6 months and have been very happy. Realtime, relational, and offline were the most important things for us, building out a relatively simple schema (users, files, projects, teams) that also is local first. Tried a few others unsuccessfully and after Instant, haven't looked back.
Congrats team!
stopachka
It's been great iterating with you AJ! Can't wait for what's ahead.
breatheoften
What's the short summary of how the authorization system works for this?
One of the things I find quite nice about firebase is the quite powerful separation between the logic of data retrieval / update and the enforcement of access policy -- if you understand it you can build the prototype on a happy path with barely any authorization enforcement and then add it later and have quite complete confidence that you aren't leaking data between users or allowing them to change something they shouldn't be able to. Although you do need to keep the way this system works in mind as you build and I have found that developers often don't really grasp the shape of these mechanisms at first
From what I can tell -- the instant system is different in that the permission logic is evaluated on the results of queries -- vs firebase which enforces whether the query is safe to run prior to it even being executed ...
stopachka
> What's the short summary of how the authorization system works for this?
We built a permission system on top of Google's CEL [1]. Every object returned in a query is filtered by a 'view' rule. Similarly, every modification of an object goes through a 'create/update/delete' rule.
The docs: https://www.instantdb.com/docs/permissions
The experience is similar to Firebase in three ways:
1. Both languages are based on CEL
2. There's a distinct separation between data retrieval and access policy
3. You can start on a happy path when developing, and lock down later
AFAIK, Firebase Realtime can be more efficient, as it can tell statically whether a permission check has passed. I am not sure if Firestore works this way. We wanted to be more dynamic, to support more nuanced rules down the road (stuff like 'check this HTTP endpoint to see if an object has permissions'). We took inspiration from Facebook's 'EntPrivacy' rules in this respect.
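The 'view' rule mechanics can be sketched roughly like this (a hypothetical illustration; real Instant rules are CEL expressions, and all names here are made up):

```javascript
// Each namespace gets a 'view' rule; every object a query would return is
// run through that rule before it reaches the client.
const rules = {
  posts: {
    view: (auth, post) => post.published || post.authorId === auth.userId,
  },
};

function filterQueryResults(namespace, objects, auth) {
  // Deny by default when a namespace has no rule.
  const rule = (rules[namespace] && rules[namespace].view) || (() => false);
  return objects.filter((obj) => rule(auth, obj));
}
```

A missing rule denies by default in this sketch; an equally valid choice while prototyping is allow-by-default, tightened before launch.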
SahAssar
> Every object returned in a query is filtered by a 'view' rule. Similarly, every modification of an object goes through a 'create/update/delete' rule.
Is that efficient for queries that return many rows but each user only has access to a few?
Is there a specific reason to not use something like postgresql RLS that would do the filtering within the database where indexes can help?
Guillaume86
Yes, reading the essay, that seems like the only "red flag" to me; the rest sounds like a dream DB.
Not being able to leverage permission rules to optimize queries (predicate pushdown) seems like too big a compromise to me. It would be too easy to hit pathological cases, and the workaround would probably be something akin to replicating the permission logic in every query. Are there any plans to improve this?
rockwotj
> Firebase Realtime can be more efficient, as it can tell if a permission check has passed statically. I am not sure if Firestore works this way.
Firestore's rules are also able to prove, before the query runs, that the query will only return data the user has access to according to the rules. That "rules aren't filters" property is pretty important because it prevents bad actors from DDoSing your system. My former colleague wrote about this: https://medium.com/firebase-developers/what-does-it-mean-tha...
dudus
While it seems inflexible at first, this system is surprisingly capable and provides great DX. It's one of the best things about working with Firestore.
the_duke
I've found triple stores to have pretty poor performance when most of your queries fetch full objects, or many fields of the same object, which in the real world seems to be very common.
Postgres also isn't terrible, but also not brilliant for that use case.
How has your experience been in that regard?
jitl
It’s not quite the same thing but nearby:
I built an EAV secondary-index system on top of Postgres to accelerate Notion’s user-defined-schema “Databases” feature about a year ago. By secondary index, I mean the EAV table was used for queries that returned IDs, and we hydrated the full objects from another store.
We’d heard that “EAV in Postgres is bad” but wanted to find out for ourselves. Our strategy was to push the whole query down to Postgres and avoid doing query planning in our application code.
When we first turned it on in our dogfood environment, the results looked quite promising: a large improvement over the baseline system at p75. But above that, things looked rough, and at p95 queries would never complete (they timed out after 60s).
It worked great if you wanted to filter and sort on the same single attribute. The problem queries were ones that filtered and sorted on multiple different attributes. We spent a few weeks fixing the most obviously broken classes of query and learned a lot about common table expressions, all the different join types, and strategies for hinting the Postgres query planner. Performance up to p95 was looking good, but beyond p95 we still had a lot of timed-out queries.
It turns out using an EAV table means Postgres's statistics system is totally oblivious to the shape of objects, so the query planner will be very silly sometimes when you JOIN: things like forgetting about the Value index and just using a primary-key scan for some arms of the join because the index doesn't look effective enough.
It was clear we’d need to move a lot of query planning to the application, maintain our own “table” statistics, and do app joins instead of Postgres joins if Postgres was going to mess it up. That last part was the last nail in the coffin - we really couldn’t lean on join in PG at all because we had no way to know when the query planner was going to be silly.
It was worth doing for the learning! I merged a PR deleting the EAV code about a month ago, and we rolled out a totally different design to production last week :)
d0100
I really love Postgres, but I'll never not laugh at the fact that duplicating a CTE caused my query to go faster... (60s to 5s)
Postgres really trips up when you start joining tables
Sometimes you can fix it with "(not) materialized" hints, but a lot of the time you just have to create materialized views or de-normalize your data into manual materialized views managed by the application
sroussey
Does postgres not have the ability to hint or force indexes?
Long long time ago, I found that quite helpful with MySQL.
jitl
It does not, and that fact is the #1 downside of Postgres. It is not predictable or controllable at scale, and comes with inherent risk because you cannot “lock into” a good query plan. I have been paged at 3 am a few times because Postgres decided it didn’t like a perfectly reasonable index anymore and wanted to try a full table scan instead :(
evanelias
Nope, weirdly Postgres still doesn't have that ability even today.
benpacker
It’s not in core, but there are multiple extensions that provide this functionality
nostrademons
I've also found triple stores to have terrible performance, but it looks like the intended use-case for this (like Firebase) is rapid development, prototyping, and startups. You aren't going to generate enough traffic when you're building an MVP for this to be an issue.
And it's a hosted service, so the performance issues are for the InstantDB team to worry about, and they can fold them into the price they charge. It does mean that your application architecture will get locked into something that costs a fortune in server bills when it gets big, but from InstantDB's POV, that's a feature, not a bug. From your POV as a startup it may be a feature as well, since if you get to that point you'll likely have VC money to blow on server bills or use to rewrite your backend.
stopachka
So far we haven't hit intractable problems with query performance. One approach we could evolve to down the road is similar to Tao [1]. In Tao, there are two tables: objects and references. This has scaled well for Facebook.
We're also working on an individual Postgres adapter. This would replace the underlying triple store with a fully relational Postgres database.
[1] https://www.usenix.org/system/files/conference/atc13/atc13-b...
evanelias
> In Tao, there are two tables: objects and references. This has scaled well for Facebook.
That's a rather tremendous oversimplification, unless something major changed in recent years. When I worked on database infra at FB, MySQL-backed TAO objects and associations were mapped to distinct underlying tables for each major type of entity or relationship. In other words, each UDB shard had hundreds of tables. Also each MySQL instance had a bunch (few dozen?) of shards, and each physical host had multiple MySQL instances. So the end result of that is that each individual table was kept to a quite reasonable size.
Nor was it an EAV / KV pattern at all, since each row represented a full object or association, rather than just a single attribute. And the read workload for associations typically consisted of range scans across an index, which isn't really a thing with EAV.
remolacha
I really want an ActiveRecord-like experience.
In ActiveRecord, I can do this:
```rb
post = Post.find_by(author: "John Smith")
post.author.email = "john@example.com"
post.save
```
In React/Vue/Solid, I want to express things like this:
```jsx
function BlogPostDetailComponent(...) {
  // `subscribe` or `useSnapshot` or whatever would be the hook that gives me a reactive post object
  const post = subscribe(Posts.find(props.id));

  function updateAuthorName(newName) {
    // This should handle the join between posts and authors, optimistically update the UI
    post.author.name = newName;
    // This should attempt to persist any pending changes to browser storage, then
    // sync to remote db, rolling back changes if there's a failure, and
    // giving me an easy way to show an error toast if the update failed.
    post.save();
  }

  return (
    <>
      ...
    </>
  )
}
```
I don't want to think about joining up-front, and I want the ORM to give me an object-graph-like API, not a SQL-like API.
In ActiveRecord, I can fall back to SQL or build my ORM query with the join specified to avoid N+1s, but in most cases I can just act as if my whole object graph is in memory, which is the ideal DX.
stopachka
Absolutely. Instant has similar design goals to Rails and ActiveRecord
Here are some parallels your example:
A. ActiveRecord:
```
post = Post.find_by(author: "John Smith")
post.author.email = "john@example.com"
post.save
```
B. Instant:
```
db.transact(
  tx.users[lookup('author', 'John Smith')].update({ email: 'john@example.com' }),
);
```
> In React/Vue/Solid, I want to express things like this:
Here's what the React/Vue code would look like:
```
function BlogPostDetailComponent(props) {
  // `useQuery` is equivalent to the `subscribe` that you mentioned:
  const { isLoading, data, error } = db.useQuery({
    posts: { author: {}, $: { where: { id: props.id } } },
  });
  if (isLoading) return ...
  if (error) return ...

  function updateAuthorName(newName) {
    // `db.transact` does what you mentioned:
    // it attempts to persist any pending changes to browser storage, then
    // syncs to the remote db, rolling back changes if there's a failure, and
    // gives an easy way to show an error toast if the update failed (it's awaitable)
    const author = data.posts[0].author;
    db.transact(
      tx.authors[author.id].update({ name: newName })
    );
  }

  return (
    <>
      ...
    </>
  )
}
```
remolacha
Maybe a dumb question, but why do I have to wrap in `db.transact` and `tx.*`? Why can't I just have a proxy object that handles that stuff under the hood?
Naively, it seems more verbose than necessary.
Also, I like that in Rails, there are ways to mutate just in memory, and then ways to push the change to DB. I can just assign, and then changes are only pushed when I call `save()`. Or if I want to do it all-in-one, I can use something like `.update(..)`.
In the browser context, having this separation feels most useful for input elements. For example, I might have a page where the user can update their username. I want to simply pass in a value for the input element (controlled input)
ex.
```jsx
<input value={user.name} ... />
```
But I only want to push the changes to the db (save) when the user clicks the save button at the bottom of the page.
If any changes go straight to the db, then I have two choices:
1. Use an uncontrolled input element. This is inconvenient if I want to use something like Zod for form validation
2. Create a temporary state for the WIP changes, because in this case I don't want partial, unvalidated/unconfirmed changes written to either my local or remote db.
stopachka
This is a great question. We are working on a more concise transaction API, and are still in the design phase.
Writing a `user.save()` could be a good idea, but it opens up a question about how to do transactions (for example, saving _both_ the user and post together).
I could see a variant where we return proxied objects from `useQuery`.
What would your ideal API look like?
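One hypothetical shape for those proxied objects, tracking dirty fields and persisting them on `save()` (none of these names are real Instant APIs; `persist` stands in for whatever would write the change set to the db):

```javascript
// Wrap a plain record so assignments update it in memory immediately while
// recording which fields changed; save() pushes only the dirty fields.
function editable(record, persist) {
  const dirty = {};
  return new Proxy(record, {
    set(target, key, value) {
      dirty[key] = value; // remember what changed...
      target[key] = value; // ...and update in memory immediately
      return true;
    },
    get(target, key) {
      if (key === "save") {
        return () => persist(target.id, { ...dirty });
      }
      return target[key];
    },
  });
}
```

Saving two records together would still need an explicit grouping step, which is exactly the transaction question this raises.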
gr4vityWall
From what you say, seems like Meteor + React would deliver almost the exact syntax you want, although it's MongoDB instead of SQL.
Reference: https://react-tutorial.meteor.com/simple-todos/02-collection...
TheFragenTaken
Every day, we get closer to what Ember.js did/does.
w10-1
Is the datalog engine exposed? Is there any way to cache parsed queries?
Other datalog engines support recursive queries, which makes my life so much easier. Can I do that now with this? Or is it on the roadmap?
I have fairly large and overlapping rules/queries. Is there any way to store parsed queries and combine them?
Also, why the same name as the (Lutris) Enhydra Java database? Your domain is currently listed as belonging to a "failed company" from 1997-2000 (actual usage of the Java InstantDB lasted much longer):
https://dbdb.io/db/instantdb
Given that it's implemented in Clojure, and some other datalog engines are in Clojure, can you say anything about antecedents? Some other Clojure datalog implementations, most open source:
- Datomic is the long-standing market leader
- XTDB (MPL): https://github.com/xtdb/xtdb
- Datascript (EPL): https://github.com/tonsky/datascript
- Datalevin (forked from datascript, EPL): https://github.com/juji-io/datalevin
- Datahike (forked from datascript, EPL): https://github.com/replikativ/datahike
- Naga (EPL): https://github.com/quoll/naga
stopachka
> Is the datalog engine exposed? Is there any way to cache parsed queries?
We don't currently expose the datalog engine. You _technically_ could use it, but that part of the query system changes much more quickly.
Query results are also cached by default on the client.
> Other datalog engines support recursive queries, which makes my life so much easier. Can I do that now with this?
There's no shorthand for recursive queries yet, but it's on the roadmap. Today, if you had a data model like 'blocks have child blocks' and you wanted to go 3 levels deep, you could write:
```
useQuery({ blocks: { child: { child: {} } } });
```
> Also, why the same name as the (Lutris) Enhydra java database?
When we first thought of the idea for this project, our 'codename' was Instant. We didn't actually think we could get `instantdb.com` as a real domain name. But after some sleuthing, we found that the email server for instantdb.com went to a gentleman in New Zealand. Seems like he nabbed it after Lutris shut down. We were able to buy the domain from him afterward.
> Given that it's implemented in Clojure, and some other datalog engines are in Clojure, can you say anything about antecedents?
Certainly. Datomic has had a huge influence on us. I first used it at a startup in 2014 (wit.ai) and enjoyed it.
Datalog and triples were critical for shipping Instant. The datalog syntax was simple enough that we could write a small query engine for the client. Triples were flexible enough to let us support relations. We wrote a bit about how helpful this was in this essay: https://www.instantdb.com/essays/next_firebase#another-appro...
We studied just about all the codebases you mentioned as we built Instant. Fun fact: datascript actually powers our in-memory cache on the server:
https://github.com/instantdb/instant/blob/main/server/src/in...
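For a feel of why a small client-side engine is tractable, here is a toy datalog-style evaluator over triples (a sketch in the spirit of the datalogjs essay linked above, not Instant's actual engine): patterns are `[entity, attribute, value]` with `?`-prefixed variables, joined on shared bindings.

```javascript
// A tiny triple store and pattern matcher. Data and names are illustrative.
const triples = [
  ["post1", "post/title", "Hello"],
  ["post1", "post/author", "user1"],
  ["user1", "user/name", "Joe"],
];

const isVar = (x) => typeof x === "string" && x.startsWith("?");

// Try to unify one pattern with one triple under existing bindings.
function matchPattern(pattern, triple, bindings) {
  const b = { ...bindings };
  for (let i = 0; i < 3; i++) {
    const p = pattern[i];
    if (isVar(p)) {
      if (p in b && b[p] !== triple[i]) return null; // conflicting binding
      b[p] = triple[i];
    } else if (p !== triple[i]) {
      return null;
    }
  }
  return b;
}

// Evaluate a conjunction of patterns: each pattern extends the bindings
// produced so far, which is how joins across triples happen.
function query(patterns, db) {
  let results = [{}];
  for (const pattern of patterns) {
    const next = [];
    for (const bindings of results) {
      for (const triple of db) {
        const b = matchPattern(pattern, triple, bindings);
        if (b) next.push(b);
      }
    }
    results = next;
  }
  return results;
}
```

A query like `[["?p", "post/author", "?u"], ["?u", "user/name", "?name"]]` joins posts to authors through the shared `?u` binding, which is how relations fall out of triples.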
nikodotio
Definitely waiting for the datalog query to be exposed before I’d use this.
If it was I would never use another database again.
I think the number of people coming from datascript/Datomic who have to work in JS and would prefer datalog to learning a new query language is big.
stopachka
Noting this feedback, thank you.
apavlo
This is from me. I didn't realize the connection to Lutris + Enhydra. It should be listed as an "Acquired Company" + "Abandoned Project". Wikipedia also says that it lasted until 2001. Usage is different from development/maintenance. I will update the entry for the old InstantDB and add an entry for this new InstantDB.
I think, given that the original InstantDB died over two decades ago and is not widely known/remembered, reusing the name is fine.
stopachka
Andy, both my co-founder and I watched your database course on YouTube. We learned a lot, and it's awesome to see your name pop up :)
Hey there HN! We’re Joe and Stopa, and today we’re open sourcing InstantDB, a client-side database that makes it easy to build real-time and collaborative apps like Notion and Figma.
Building modern apps these days involves a lot of schleps. For a basic CRUD app you need to spin up servers, wire up endpoints, integrate auth, add permissions, and then marshal data from the backend to the frontend and back again. If you want to deliver a buttery smooth user experience, you’ll need to add optimistic updates and rollbacks. We do these steps over and over for every feature we build, which can make it difficult to build delightful software. Could it be better?
We were senior and staff engineers at Facebook and Airbnb and had been thinking about this problem for years. In 2021, Stopa wrote an essay about how these schleps are actually database problems in disguise [1]. In 2022, Stopa wrote another essay sketching out a solution: a Firebase-like database with support for relations [2]. In the last two years we got the backing of James Tamplin (CEO of Firebase), became a team of 5 engineers, pushed ~2k commits, and today became open source.
Making a chat app in Instant is as simple as subscribing to a query and writing a transaction.
Instant gives you a database you can subscribe to directly in the browser. You write relational queries in the shape of the data you want, and we handle all the data fetching, permission checking, and offline caching. When you write transactions, optimistic updates and rollbacks are handled for you as well.
Under the hood we save data to Postgres as triples and wrote a datalog engine for fetching data [3]. We don't expect you to write datalog queries, so we wrote a GraphQL-like query language that doesn't require any build step.
Taking inspiration from Asana's WorldStore and Figma's LiveGraph, we tail Postgres' WAL to detect novelty and use last-write-wins semantics to handle conflicts [4][5]. We also handle websocket connections and persist data to IndexedDB on web and AsyncStorage in React Native, giving you multiplayer and offline mode for free.
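The last-write-wins semantics mentioned here boil down to a small merge rule. A generic sketch for a single attribute (an illustration of the technique, not Instant's code):

```javascript
// Each write carries { value, ts, clientId }. The newest timestamp wins,
// and ties break on client id so every replica converges on the same value
// regardless of the order writes arrive in.
function lwwMerge(a, b) {
  if (a.ts !== b.ts) return a.ts > b.ts ? a : b;
  return a.clientId >= b.clientId ? a : b; // deterministic tie-break
}
```

Because the tie-break is deterministic, replicas that see the same set of writes converge on the same value no matter the arrival order.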
This is the kind of infrastructure Linear uses to power their sync and build better features faster [6]. Instant gives you this infrastructure so you can focus on what's important: building a great UX for your users, and doing it quickly. We have auth, permissions, and a dashboard with a suite of tools for you to explore and manage your data. We also support ephemeral capabilities like presence (e.g. shared cursors) and broadcast (e.g. live reactions) [7][8].
We have a free hosted solution where we don't pause projects, we don't limit the number of active applications, and we have no restrictions on commercial use. We can do this because our architecture doesn't require spinning up a separate server for each app. When you're ready to grow, we have paid plans that scale with you. And of course you can self-host both the backend and the dashboard tools on your own.
Give us a spin today at https://instantdb.com/tutorial and see our code at https://github.com/instantdb/instant
We love feedback :)
[1] https://www.instantdb.com/essays/db_browser
[2] https://www.instantdb.com/essays/next_firebase
[3] https://www.instantdb.com/essays/datalogjs
[4] https://asana.com/inside-asana/worldstore-distributed-cachin...
[5] https://www.figma.com/blog/how-figmas-multiplayer-technology...
[6] https://www.youtube.com/live/WxK11RsLqp4?t=2175s
[7] https://www.joewords.com/posts/cursors
[8] https://www.instantdb.com/examples?#5-reactions