Brian Lovin
/
Hacker News
Daily Digest email

Get the top HN stories in your inbox every day.

karmelapple

So glad to see PouchDB included. We use it and have generally had a great experience! We use it with CouchDB on the backend, and Couch seems like a fantastic way to go for use cases involving syncing data between devices with an offline mode and syncing between clients. It was built from the ground up with replication in mind.

Biggest bummer of CouchDB? If you’re not hosting it yourself, there’s only one major player in the market that I know of: IBM Cloudant. They contribute much to Apache CouchDB though, and hosting it yourself doesn’t seem too difficult, especially for small, simple use cases.

Anyone else using CouchDB?

knubie

I use the PouchDB/CouchDB combo for exactly the use case you're describing. The query language for CouchDB leaves a little to be desired, but ultimately it has worked well for me. I'm self-hosting on a DigitalOcean droplet.

tata71

Is DO offering anything close to enterprise grade?

BackBlast

You probably need to define your actual requirements rather than this nebulous term.

The "DO offering" is a basic VPS and you host/manage the couchdb process yourself. DO does not offer a managed couchdb service. I've found couchdb to be reasonably stable and worry free.

The nature of pouchdb/couchdb and the design philosophy behind it makes it relatively easy to scale to additional servers. Couchdb is master-master, which is a good eventually consistent model that should help in any horizontal scaling. The base process is erlang/elixir which should scale vertically well.

I've been using it a couple of years, but not on any sites that have significant traffic.

kbenson

"enterprise grade" covers a much larger spectrum of uses and needs with regard to features and stability than non-enterprise grade, making it kinda hard to answer that generally.

They offer VMs, containers, managed DB instance offerings, block storage, multiple regions and datacenters, load balancing, etc. They have an API to control all those things, and modules available in many popular languages. But that's basically what many people consider table stakes for a service like that, and indeed there are competitors like Vultr and Upcloud that offer the same.

Will any of them have quite the same level of offerings that AWS or GCE or Azure offer? Probably not. But for a great many people what they have is all the enterprise-level stuff they'll need or use, and it is decidedly easier to just start up a cheap Linux VM and take care of it on one of these services compared to AWS or GCE or Azure. So if what you want are somewhat manually managed cloud VMs, I highly recommend one of these services over one of the big names.

I've used all the services named, and I still prefer DO for just throwing up a cheap $5-$10 VM for personal stuff, or to spin up a temporary VM for testing something out. On DO that's a couple second process when you do it manually by clicking around.

fulafel

Not really, at least in the negative sense of the term. Try talking to Oracle or SAP sales for that :)

oblib

There's IBM's Cloudant services if you need that. And you can build your own cluster and connect as many servers as you want.

But DO doesn't have any pre-configured droplets to start off with. It'd be nice if they did.

xrd

I tried to use CouchDB with PouchDB. It was a mess to add a proper authentication layer on top of it, and the fact that even the CouchDB team has changed its opinion on the right way to do it was not impressive.

I love RxDB with Hasura behind it. It's incredible and you get a great postgres front end to boot.

oblib

>>the fact that even the couchdb team has changed their opinions on the right way to do it was not impressive

I don't get that. If they're working on improving something I didn't like that's something I'd appreciate.

CouchDB/PouchDB works great for how I'm using them but over the years I've observed that those coming from using SQL DBs can have a hard time with it.

I get that. But CouchDB is not designed to compete or replace SQL.

To me, it feels like CouchDB was not the right tool for the job you were doing. That's not a reason to dismiss it though.

goohle

Do you use «one database per user» model or «all users data in one db» model?

xrd

One database per user. And even disregarding the obvious inability to natively join information across tables, it was still a mess to subscribe to changes across all of them, and all the other things you take for granted with a relational database. And to me, it looks like CouchDB is on life support as a technology. It's a great idea and was revolutionary in its time, putting everything as documents with views, etc. But too many gaps and unclear direction.

dynamite-ready

That's interesting. What problems did you encounter?

gadders

>>It was built from the ground up with replication in mind.

Because it was inspired by one of the first document-based, nosql, replicated databases - Lotus Notes.

https://www.wired.com/2012/12/couchdb/

Omnipresent

> Couch seems like a fantastic way to go for use cases involving syncing data between devices with an offline mode and syncing between clients. It was built from the ground up with replication in mind.

Have you come across any simple examples that show offline mode and syncing between clients with replication?

janl

Syncing between clients requires a network between clients, and normally clients only have connections to servers. But if you are in a situation where clients can open TCP connections to other clients, CouchDB can sync over that.

gdelfino01

I use CouchDB. I love its multi-master replication capability, HTTP API and its ability to monitor for changes easily.

shane_b

I use couch and pouch frequently. For replication mostly.

I’ve copied airtable data to it in the past.

Recently I implemented an event store CQRS system designed to be usable offline. I considered syncing events to the client via sockets but I needed to implement diffs. So I use couch and pouch as read only side of CQRS with an append only event CouchDB. The actual data is in Postgres.

Authorization is tricky. I do not recommend trying to do document level access control. I simply added an express endpoint that allows only reads and checks the session user for which table they can access. Then pass the request to couch.
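A toy sketch of that gate (all names hypothetical; the poster's version lives in Express middleware in front of CouchDB):

```python
# Hypothetical sketch (names invented) of a read-only, table-level gate:
# reject writes, check which databases the session user may read, then (in
# the real app) proxy the request on to CouchDB.

ALLOWED_DBS = {"alice": {"orders", "invoices"}, "bob": {"orders"}}

def authorize(user, method, db_name):
    """Return True when the request may be forwarded to CouchDB."""
    if method != "GET":
        return False  # the endpoint allows only reads
    return db_name in ALLOWED_DBS.get(user, set())
```

Keeping the check this coarse (per database, not per document) is what makes it simple; document-level rules are where it gets tricky.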

Overall it works really well. I recently turned off live sync for web and React Native, and now I loop over all of my databases on a set timeout interval. I had trouble with many connections at once.

lytefm

> Authorization is tricky. I do not recommend trying to do document level access control.

If you're trying to do that, true. But if you simply let the user sync his data as he pleases, auth is quite easy with the library I maintain (link in my profile).

> I had trouble with many connections at once.

Can you elaborate on how many? And did you increase max_dbs_open and the necessary OS limits? I'm currently planning to go the other way, but I'm also a bit worried that too many open connections can cause trouble.

shane_b

Neat project. If the data belongs to only that user it works pretty well. I run into trouble with sharing across users on a team and individual user sharing. In that case, I see most solutions use express as middleware for the couchdb connection.

> Can you elaborate on how many?

For web, I can have as many as I want syncing. I haven’t stress tested it yet tho. I have a CouchDB in prod that throws an error message about connection limit a few times a day. This one writes and reads. It restarts the docker container to recover since I haven’t had time to investigate. You may have given me the answer :)

On React Native, I’ve had odd behavior around 5 connections. That’s where I need to periodically poll for syncing. It works out best since the user is offline most of the time and downloads infrequently.

WorldMaker

> Biggest bummer of CouchDB? If you’re not hosting it yourself, there’s only one major player in the market that I know of: IBM Cloudant.

That's the biggest problem my projects using Pouch/Couch are facing. The tech choice was made when Cloudant still had Azure datacenter support, and IBM's multiple confusing changes to its Cloud brands have put it in a situation we aren't entirely happy with. I keep getting asked/pressured about whether I can move things back to Azure datacenters.

I don't know what I'm going to replace it with, and I still wish Azure CosmosDB was more friendly to Couch replication. (It's so close, especially its Changes feed, that I feel the proxy I'd need probably wouldn't have to do all that much; I just don't think I have the budget/time to build and test such a proxy.)

karmelapple

I didn't realize Azure and CouchDB/Cloudant got along together for a while! No more, though?

> I just don't think I have the budget/time to build and test such a proxy

Same here, including with the authentication shortcomings we hope get addressed, like per-document security or other improvements.

WorldMaker

Cloudant was a startup that targeted multi-cloud. At one point they supported cluster deployments to AWS and Azure. They were bought by IBM and dropped AWS support but kept Azure support for a bit longer as they built out more of "BlueMix" (early IBM Cloud brand name), and then IBM did its dance of Cloud brand names and datacenters supported and Cloudant dropped Azure support too.

dperalta

I use it every time I can and I freaking love it.

LAC-Tech

Last write wins is not a strategy for conflict resolution, it's a surrender.

So I'm glad to hear something else apart from Pouch actually handles it. Anyone familiar with RxDB who can chime in on how they do it?
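For context on what Couch/Pouch do instead of a silent last-write-wins: conflicting leaf revisions are all kept, and a deterministic (arbitrary, but consistent, not semantic) winner is picked so every replica agrees without coordinating. A rough Python sketch of that pick, assuming "generation-hash" rev strings as in CouchDB/PouchDB:

```python
# Rough sketch of CouchDB-style deterministic "winning revision" selection
# among conflicting leaves: higher generation number wins, ties broken by
# comparing the rev suffix, so all replicas choose the same winner
# independently. Losing revisions stay available for real (semantic) merging.

def winning_rev(leaf_revs):
    def sort_key(rev):
        gen, _, suffix = rev.partition("-")
        return (int(gen), suffix)
    return max(leaf_revs, key=sort_key)

print(winning_rev(["2-aaa", "2-bbb"]))  # -> 2-bbb (same generation, suffix decides)
print(winning_rev(["3-aaa", "2-zzz"]))  # -> 3-aaa (longer edit history wins)
```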

netghost

It sounds like RxDB is built on top of Pouch, so probably the same set of options, with the possibility of some opinionated design or sensible defaults, though I can't find anything obvious.

typingmonkey

Yes, RxDB conflict resolution is equal to PouchDB's. At least for now; there are plans to improve from there so that you have a global resolution function instead of listening for conflicts in the change stream.

nyanpasu64

Not directly related to the post (which is focused on which database to host for your app), but I'm writing desktop apps (think DAWs) in C++ and Rust (not JS), and want to synchronize settings through a Dropbox or Google Drive (so I don't have to host my own cloud sync servers). What's a good library or schema to achieve this?

Personally I usually don't have multiple instances of the same app open on multiple machines, but other people might open the same or different files on their desktop and laptop.

- Not all settings should be synchronized (don't include machine-specific "recent files" paths).

- How should settings be stored locally (if I may have multiple instances of my app open on a single machine)? Registry (Windows-only)? INI with atomic saving (requires care and locking to prevent multiple instances from trampling or racing with each other)? SQLite?

IMO Stylus is a pretty good implementation of offline-first cloud settings sync over Dropbox/etc. It's currently based around one JSON file per CSS file (Dropbox/Apps/Stylus - Userstyles Manager/docs/uuid.json), and what appears to be a transaction log (Dropbox/Apps/Stylus - Userstyles Manager/changes/number.json). Cloud sync has been 100% reliable in my experience, though I do notice temporary file lock errors when switching between different machines in my dual-boot setup (but sync seems to be eventually consistent nonetheless).

uBlock Origin is worse. Instead of merging settings, it expects the user to upload and download the entire settings blob at once (and pulling an old blob can erase changes you've made locally). And in the past it's entirely failed to sync because the blob was too big to upload to Mozilla's servers. (Right now it "works" but takes several minutes for one computer to see a config uploaded from another computer.)

franga2000

I've done something pretty ridiculous to solve this problem and I'm not sure I'd recommend it, but here it is:

- A directory is synced with the server with no conflict resolution
- The application creates its config file in that directory, named by a random UUID, which is stored outside the synced folder
- The config file stores the setting overrides (defaults were compiled in) in any format (I used YAML)
- Each setting override includes a "locked" and a "lastModified" field
- On startup (sync was external), all files in the directory are read and merged, starting with the local one, then skipping any settings that are locked (locally or remotely); last modified wins
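If it helps, here's a minimal Python sketch of that merge as I read it (the field layout and the exact treatment of remote locks are my assumptions, not the poster's code):

```python
# Minimal sketch of the merge described above. Assumptions (mine): each config
# file maps setting -> {"value", "locked", "lastModified"}, a locked setting is
# never overridden, and an incoming locked or newer entry otherwise wins.

def merge_settings(local, remotes):
    merged = dict(local)
    for remote in remotes:
        for key, entry in remote.items():
            current = merged.get(key)
            if current is None:
                merged[key] = entry
            elif current.get("locked"):
                continue  # locked (locally or by an earlier file): keep it
            elif entry.get("locked") or entry["lastModified"] > current["lastModified"]:
                merged[key] = entry
    return merged

local  = {"theme": {"value": "dark",  "locked": True,  "lastModified": 1}}
remote = {"theme": {"value": "light", "locked": False, "lastModified": 9},
          "font":  {"value": "mono",  "locked": False, "lastModified": 5}}
merged = merge_settings(local, [remote])
print(merged["theme"]["value"])  # -> dark (locked locally, so the newer remote loses)
```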

Some deployments used a daily rsync cronjob, some had a mounted network share (with hilarious broken file locking) and of course it worked with direct bind mounts as well.

I also briefly experimented turning the "locked" field into a "group" field to enable multiple "sync groups" with some keys shared globally and some only with other group members (even different groups for different settings), but it ended up not being useful for my use case, although it did work.

nyanpasu64

Sounds interesting. I suppose it would break if a computer's clock was set in the future (its lastModified would always win), but IDK what wouldn't break in that scenario.

What does the locked field do?

Do you have a link to your implementation, or is this proprietary?

lbhdc

I would imagine using a CRDT for the data you want to sync would be appropriate, and would let you sync state between multiple open clients.
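For illustration, a minimal state-based last-writer-wins map, one of the simplest CRDTs, sketched in Python (the tags and replica names are invented for the example):

```python
# Minimal state-based LWW-map CRDT sketch. Each key carries a
# (timestamp, replica_id) tag; on merge the higher tag wins, so merging is
# commutative, associative, and idempotent and replicas converge regardless
# of the order in which they sync.

def lww_merge(a, b):
    merged = dict(a)
    for key, (tag, value) in b.items():
        if key not in merged or tag > merged[key][0]:
            merged[key] = (tag, value)
    return merged

laptop  = {"volume": ((5, "laptop"), 80)}
desktop = {"volume": ((5, "desktop"), 40), "theme": ((3, "desktop"), "dark")}

# Merge order doesn't matter: both replicas end up in the same state.
assert lww_merge(laptop, desktop) == lww_merge(desktop, laptop)
```

The replica_id in the tag breaks timestamp ties deterministically, which is what keeps concurrent writes from diverging.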

killingtime74

My old company used S3 to sync config files in json. S3 is strongly consistent now

dvdhnt

I really enjoy examples like this, thanks, I’ll be exploring it.

As an aside… I would truly love to explore a collection of interesting ways to use SQLite. It’s such an impressive piece of technology that I’d like to use more often. Please share if you have something similar!

heyzk

I had a really great experience building an EAV store with Datalog as the query interface on top of SQLite for embedding in native mobile apps.

Pros: querying complex data hierarchies was easy, and I was able to skip the pain typically associated with managing a SQL schema.

Mertax

What’s the application for this? EAV is often an anti-pattern when a schema could be defined, but I’m actually using it as well. Our application is an end-user-defined database for mobile data collection. The EAV model in SQLite is a bit of a cognitive burden but makes offline sync and conflict resolution pretty straight forward. It’s almost a crude CRDT implementation.

heyzk

It was a scheduling, work tracking and invoicing app for service providers that have spotty-at-best network connections, think long periods out of cell service but still need to do complex data entry and querying.

> EAV is often an anti-pattern when a schema could be defined

Super interesting, I wasn't aware that EAV is an anti-pattern in that case. Is it an efficiency thing?

For clarity, my design wasn't schemaless, values (can) have defined datatypes and relationships are first-class. I meant that I found adding to or modifying the schema was less cumbersome and error prone than traditional SQL schema additions or changes. I feel like SQL schema management is more suited to server-based dbs where you have tight control over the db lifecycle, which you don't when it lives on a bunch of mobile devices.

Totally agree with the ease of sync and conflict resolution, another strong pro.

Love to hear more about your approach! Also feel free to reach out (email in bio) if you'd like to compare notes some time.

tehbeard

Are you able to elaborate on why you chose EAV over using something like the json1 extension of SQLite?

heyzk

Honestly I didn't look at json1.

It was built on a single table that held the entity-attribute-value tuple along with some additional metadata like type information, whether or not the attribute was a pointer to another entity, and the cardinality of that relationship (one or many).

Relationships were walked via self joins and the eav columns were all indexed.
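A toy Python/sqlite3 version of that single-table layout (column names are illustrative, not the poster's actual schema), showing one relationship hop via a self join:

```python
import sqlite3

# Toy single-table EAV store: each row is (entity, attribute, value, is_ref);
# is_ref marks values that point at another entity, and each EAV column gets
# its own index.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE eav (
    entity    TEXT NOT NULL,
    attribute TEXT NOT NULL,
    value,
    is_ref    INTEGER NOT NULL DEFAULT 0)""")
for col in ("entity", "attribute", "value"):
    db.execute(f"CREATE INDEX idx_{col} ON eav({col})")

db.executemany("INSERT INTO eav VALUES (?, ?, ?, ?)", [
    ("job:1",  "title",    "Fix boiler", 0),
    ("job:1",  "customer", "cust:7",     1),  # relationship to another entity
    ("cust:7", "name",     "Acme Ltd",   0),
])

# "What is the name of the customer on job:1?" -- one self join per hop.
row = db.execute("""
    SELECT name.value
    FROM eav AS ref
    JOIN eav AS name ON name.entity = ref.value AND name.attribute = 'name'
    WHERE ref.entity = 'job:1' AND ref.attribute = 'customer' AND ref.is_ref = 1
""").fetchone()
print(row[0])  # -> Acme Ltd
```

Adding an attribute is just inserting rows, which is why schema changes feel lighter than ALTER TABLE migrations on devices you don't control.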

phyrex

I would love to hear more! Did you write the datalog layer or is there one somewhere? Is there any code available I could see?

dunham

You might be interested in the now defunct Mentat project from Mozilla. They made an EAV store with syncing on top of sqlite. It ran datalog queries by translating them into sql.

https://github.com/mozilla/mentat

mamcx

I'm doing something like this with CRDTs + RDBMS (POC with sqlite).

The closest thing is this:

https://munin.uit.no/bitstream/handle/10037/22344/thesis.pdf

There it splits each record into a stream of CRDT values. I found (quickly!) that it could cause serious violations of business logic if done as-is. Now I'm trying to treat the record as a whole. That could still have issues with multi-record/table logical integrity, so I have thought about building "transaction markers", so your stream of changes looks like:

  Start
   ADD: T1.Row1...
   ADD: T2.Row1...  
  End
So you don't partially apply a change.
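A minimal Python sketch of applying such a marked stream (the event encoding is invented for the example): changes are buffered between Start and End and only applied as a group, so a truncated transaction is never half-applied.

```python
# Buffer changes between Start and End markers, apply them as a group, and
# drop any truncated trailing transaction instead of half-applying it.

def apply_stream(events, apply_change):
    buffer, in_txn = [], False
    for event in events:
        if event == "Start":
            buffer, in_txn = [], True
        elif event == "End":
            for change in buffer:
                apply_change(change)
            buffer, in_txn = [], False
        elif in_txn:
            buffer.append(event)

applied = []
apply_stream(["Start", "ADD T1.Row1", "ADD T2.Row1", "End",
              "Start", "ADD T1.Row2"],  # cut off mid-transaction
             applied.append)
print(applied)  # -> ['ADD T1.Row1', 'ADD T2.Row1']
```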

P.S.: If you're interested and know Rust, we can talk!

dvdhnt

Thanks for sharing! Unfortunately, I don't work with Rust :/

Zababa

Maybe not very interesting but I'm using SQLite as a dataframe replacement in languages where the support isn't great, for a project where I want to transition from a CLI tool to a web application (and probably from SQLite to Postgres).

dvdhnt

Huh, I'm unfamiliar with Dataframe. Thanks for mentioning it!

psychometry

SQL in the browser again: https://github.com/jlongster/absurd-sql

dvdhnt

> It basically stores a whole database into another database. Which is absurd.

The project is both interesting and amusing, thanks!

shoo

From the perspective of someone less familiar with this kind of thing, this comparison would be easier to understand with a bit of an introductory explanation about what job we're trying to do or problem we're trying to solve, and the assumed context or constraints.

globular-toast

The key piece of missing info is this is about web development. Some web devs don't seem to know that there are other engineers out there who don't work in web dev. More specifically, we're talking about frontend web dev, which means writing code that runs in a web browser. Browsers traditionally are stateless and get all their data from a server. Any frontend code would also need to communicate with some backend if it wants to store any persistent state. But that requires an internet connection. Nowadays browsers have some ability to store persistent state themselves so this is about supporting code that works offline, ie. without a connection to the backend.

amw-zero

Offline-first is a well-known term. You can search for it elsewhere. When writing, it's important to choose exactly who you're speaking to so that you can avoid re-establishing shared context, since that takes up time and bandwidth. The truth is, communication is much more efficient if you don't re-explain every single concept that you are talking about.

shoo

Leaving aside the definition of "offline first", it'd be clearer if there was a brief problem statement pinning down that yes, we're specifically investigating offline-first databases that are easy to integrate with JavaScript apps -- or perhaps comparisons of other offline-first approaches for native mobile apps or whatnot would make sense and be welcome.

naive question: arguably git and mercurial and subversion could be thought of as offline first databases -- albeit targeted at a domain-specific use case. does it make any sense to compare them too?

Academia has many things to learn from the world of software development, particularly around testing to ensure quality and reproducibility of work. But perhaps software development could benefit from a few ideas from academia: giving a brief introduction to contextualise the work -- not an extensive glossary, but at least a few links to relevant work others have already done, ideally with at least one link to something that introduced the idea or is an extensive survey of the subject.

I'm reading the book "designing data-intensive applications" at the moment and looked up offline-first applications in the index, which references http://blog.hood.ie/2013/11/say-hello-to-offline-first/ , which no longer exists, but is still mirrored by https://web.archive.org/web/20200222150347/http://hood.ie/bl...

wolfram74

A compromise solution would be linking to a glossary or introductory material for someone who is generally versed but not in this particular subject. We get a lot of, say, pure math people on here who find software engineering interesting.

y4mi

You'd still have to draw a line at some point for which terms you'd explain, as otherwise your article would consist only of glossary information.

I'm pretty sure the term offline-first wouldn't have met the cutoff point, as it's _really_ well known in my experience.

throwaway743

Am I wrong to think that pouchdb uses indexeddb?

Over the last 8 months, I've been working on a react/capacitor based android app (potentially ios later on) and was originally using idb-keyval, which uses indexeddb for key val storage, and things were great since it's a local storage solution compatible with react/capacitor, and one of my goals is to not rely on a remote data storage solution.

As said, things were going great, but then a couple weeks ago things went to shit when all the data that was stored in the prototype on my android device was wiped. Apparently, both android and ios tend to wipe browser/web-view local storage at random/when space is needed(?).

Dug around since then looking for an alternative solution (would love a Capacitor-compatible MongoDB solution), came across PouchDB via RxDB, but could've sworn there was mention that it relies on indexeddb. So just to be safe I switched to SQLite and have been rewriting components since.

Lesson of the story, even if it isn't dependent on indexeddb, if you're looking for a local storage option for a mobile app and happen to be using a js framework with capacitor or anything that utilizes a web-view, stay away from anything that uses indexeddb. If a wipe like this were to happen post release, the chance that your app succeeds afterwards would be near 0%

Edit: so yeah, just double checked/was reading through the readme of this project, and pouchdb via rxdb is reliant on indexeddb

typingmonkey

Author here.

I am using RxDB with Capacitor (iOS and Android app). You can use the SQLite based pouchdb adapter with capacitor. It keeps your data and is (sometimes) faster.

Here [1] I have documented a whole section about how to use RxDB+SQLite in Capacitor.

[1] https://rxdb.info/adapters.html

throwaway743

Ah okay cool. Thank you for pointing this out.

Now would I use the adapter for react native or for cordova?

If react native, I've been writing with react js and have been under the impression that react native specific plugins aren't compatible with react js. Is that wrong?

If cordova, it's totally compatible with capacitor?

Sorry just want to make sure before jumping in

deesep

cordova-sqlite adapter can be used with cordova and capacitor. The article says so.

perttir

How fast is PouchDB with SQLite on Android? I don't really need any replication in my application; should I just use the normal Cordova SQLite instead and switch to Dexie.js when running in a web browser?

mikojan

> Apparently, both android and ios tend to wipe browser/web-view local storage at random/when space is needed(?).

"Apparently"? Mozilla docs say so much in the introduction[0], even directing you to a dedicated page[1].

[0]: https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_A... [1]: https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_A...

WA

Can confirm: don’t use IndexedDb in a mobile app made with Capacitor or Cordova. I never had the IndexedDB wiped by the OS, but there are occasional browser bugs that affect IndexedDb.

One time, users could not load their data if their Android device had less than 1 GB of free disk space, because Chrome had a bug in calculating the quota for IndexedDb.

SQLite is fast and it gives me peace at night, knowing that user data is safe.

WorldMaker

> one of my goals is to not rely on a remote data storage solution

Yeah, that's going to be a hard goal to meet in this case. A benefit of PouchDB over raw IndexedDB is the great replication support, and replicating everything back down after one of those worst-case wipes is an alright solution in some cases (but that does require managing a remote data storage solution, unfortunately).

Android and iOS are supposed to manage IndexedDB as Application Storage when installed as an app via something like Capacitor, but it's still not great. Supposedly Android is getting a lot better about it for apps installed as PWAs instead and Capacitor should give you a good PWA path for Android at least. (iOS is unfortunately still lagging far behind on PWA support.)

y4mi

My biggest gripe with the offline-first capability of PWAs is speed. A network round trip is generally faster than fetching the same information from IndexedDB, and you still have to sync first because IndexedDB tends to get wiped at inopportune times.

It's great if you're writing a traditional app, though, but really unfortunate with regard to PWAs.

radex

IndexedDB is a _bad_ API, and making many small read/write operations on it is absurdly slow. This is one of the reasons why WatermelonDB on web only uses IDB as a dumb storage medium but actually does all the database'y things in memory. You won't reasonably scale to gigabytes this way, but for most PWAs this is plenty enough and MUCH faster in practice. Certainly far faster than network.

tehbeard

Why is that? Is it because you're on a fibre link? Or is indexedDB being used under several layers that add a comfy SQL and ORM abstraction for you?

I've used indexedDB on a couple of projects at work, while there are definitely downsides with its indexing design, limited querying options and the menagerie of fuckups by Team Fruit(TM), it works well as a local cache when our clients are out on site with their customers and all they've got is a crappy intermittent 3/4g signal.

y4mi

In my tests I actually went with plain indexed db at the end, because I wanted to make sure it's as fast as it could go.

It's true that indexeddb access is faster than a 3G signal with timeouts, but that wasn't happening often enough to warrant measurably slowing down all other requests just to speed up the rare case when it occurs.

Loading from indexeddb generally took about 100-200ms, loading data over WLAN/4g from a remote server (~500km real life distance) over socket took < 50ms overall for multiple json payloads with about 200 serialized entities altogether.

And doing both at the same time and serving whatever finished first wasn't worth the trade for me either, as indexeddb access isn't cheap from an energy-drain perspective either.

Other people might come to different conclusions depending on their challenges.

mikojan

Curious! Even with machines on the same premises, IDDB has been orders of magnitude faster for us. In the beginning we were storing carbon copies of MariaDB tables, and initial load times went down drastically.

getcrunk

Really? I've never used IndexedDB, but to confirm: you're saying that fetching data from local memory is slower than a network round trip?

dmw_ng

The complaint was about IndexedDB access, not local memory. Folk mentioning this are usually referring to the disastrous implementation in Chrome: see https://dev.to/skhmt/why-are-indexeddb-operations-significan... and https://jlongster.com/future-sql-web

getcrunk

Wow. Orders of magnitude slower than Firefox.

tehbeard

indexedDB is quite a low level in what the interface gives you.

This includes which options you have available for querying (just accessing a key range with an upper/lower bound, forward or backwards; backwards can be much slower), rather than a full query language like SQL.

So you need to design your app / data format and pre-plan any queries so you'll have the indexes you need, or do some glue logic to combine indexes as needed.
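As a language-neutral illustration of that glue logic (sketched in Python; IndexedDB itself would use cursors over each index), a two-field query can be answered by range-scanning each index separately and intersecting the primary keys:

```python
# IndexedDB only scans one index per cursor, so a query over two fields is
# often answered by range-scanning each index and intersecting the results.

def query_index(index, low, high):
    """Emulate an IndexedDB key-range scan over (indexed value, primary key)."""
    return {pk for value, pk in index if low <= value <= high}

by_age  = [(25, "u1"), (31, "u2"), (42, "u3")]
by_city = [("berlin", "u1"), ("berlin", "u3"), ("oslo", "u2")]

# age BETWEEN 30 AND 50 AND city == 'berlin'
result = query_index(by_age, 30, 50) & query_index(by_city, "berlin", "berlin")
print(result)  # -> {'u3'}
```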

oblib

You can use CouchDB installed on a desktop PC and tell PouchDB to use that instead of the web browser's IndexedDB for offline-first apps.

This approach gets you pretty close to native app speed.

wanderingmind

Any information on how scalable these databases are compared to traditional SQL databases? Or specifically, when should you use this (in prototyping or in production)?

rendall

IMO the principal consideration here is that these are offline/local databases for browsers and (probably?) Electron, so they are not intended to be scalable at all. If you have a progressive web app (PWA) and you need a queryable database for some reason, then you would use these. Otherwise, stick your queries behind API endpoints.

janl

CouchDB is a lot more scalable than SQL databases because it has a distributed scaling model built in (just add nodes). No need to mess with read replicas and finicky hot failover; it all just works out of the box (Dynamo style).

karmelapple

It’s more scalable in theory, and I talked its praises in a different comment, but our team has hit scaling issues with our one-user-per-database approach. It was a mess to sort out, but Cloudant support was very helpful.

Our major issue: we write many small documents, and we write them over every user’s database fairly frequently. And Cloudant’s default settings don’t like that with a one-user-per-database approach. In fact, they discourage anyone from the one-db-per-user approach these days: https://www.ibm.com/cloud/blog/cloudant-best-and-worst-pract...

That blog post calls it an anti-pattern, but I would respectfully disagree. It is an absolutely great pattern to keep a native app and a web app in sync across multiple devices with intelligent conflict resolution.

A solution was to reduce the number of shards that a database was split out over, since our database’s data is pretty small overall and we didn’t need each database split out so much across our cluster.

janl

Sure, changing defaults for different use-cases is totally normal for any database. Best thing: CouchDB 3.x comes with shard splitting and a default shard factor of 2 (previously 8), so you get the best of both worlds: a simple start, and once you end up with larger dbs you can split their shards on the fly.

peterthehacker

What kind of consistency models [0] do Offline-first databases like RxDB and PouchDB have?

I was thinking read uncommitted, but they might allow dirty writes. Maybe there’s some CRDTs under the hood… I can’t find any documentation on consistency though, anyone here know?

[0] https://jepsen.io/consistency

janl

peterthehacker

Thanks! Very cool. I haven’t used couchdb in prod. I’ll read into this more.

genewitch

Is the reason for all of these sorts of comments some sort of "never lose data" quality assurance?

There's consistency, write first, latency, replication, and then acronyms.

I just implement hardware and OS stuff, but I like the DB people to be happy. What am I missing?

spankalee

Wait, this is benchmarking the Firestore emulator. How is that relevant?

typingmonkey

It is benchmarking the firestore JavaScript library that runs at the client.

spankalee

Maybe, but I don't see where the browser is brought offline to enforce that. It looks like the emulator is still running and there's still a connection to it.

Can you show where you force the browser offline?

FractalHQ

I imagine that Supabase would be a perfect candidate for this project! I would love to get the author's opinion on it too.

typingmonkey

I considered adding Supabase, RethinkDB and Meteor. But they are not really client-side databases; they stream query results from the server to the client in real time.

knes

Did you look into https://ditto.live/? It's an offline-first db too. I've never used it, but it looks very interesting and I'd be interested in how it compares to the technologies you picked.

irae

I love RethinkDB. I really believe it is a database that deserves more attention. We've been running on it for six years and it is flawless and easy to use. Like any other database, you need to understand how it performs and how to query efficiently.

But it does have everything required for streaming and syncing data in a modern way. You can open streaming queries, even with a backlog of data, so syncing should be pretty easy. Maybe it would be feasible to fork PouchDB to use RethinkDB as a backend?

PKop

Firestore isn't really a client side database then either. It is certainly also querying results from a server.

jonplackett

I second that! Would like to see supabase since that’s what I’ve been using a lot recently and really liking!

mastazi

I have a question, how do you manage the fact that Supabase is based on a relational db[1]? Do you just put all fields (except the primary key) in a JSON column[2]? Does it work well if you use it that way? We were investigating it as a way to escape Firebase's vendor lock-in but the fact that we would have to manage schema migrations was a bit of a deal breaker based on our use case. I'm also interested in hearing about any alternatives to Supabase that use a document-based nosql db.

[1] https://supabase.io/database

[2] https://www.postgresql.org/docs/13/datatype-json.html

jonplackett

Personally, I kinda like the fact that it's a relational DB. The stuff you gain by having users only allowed to do certain things based on certain fields using Policies makes life a lot easier and more relaxing.

carabiner

Some charts for that table would be so awesome.

smcnally

I'm not the OP or author. This sheet[0] has the Metrics and Feature Map data for charting, sorting, etc. directly from their GH page.

[0] https://docs.google.com/spreadsheets/d/12ReO-4_bZ2BaLj9P6oJT...
