
banashark

One thing I don't usually see addressed with the pure-sql approaches is how to handle dynamic query building. The most common example being large configurable forms that display a data grid. Kysely[1] does a good job of starting from this angle, while still allowing something like specifying the concrete deserialization type, similar to the libraries here.

I'm a big fan of sql in general (even if the syntax can be verbose, the declarative nature is usually pleasant and satisfying to use), but whenever a dynamic nature creeps in it gets messy: conditional joins/selects/where clauses, etc.

How do folks that go all in on sql-first approaches handle this? Home-grown dynamic builders are what I've seen at various places I've worked in the past, but they're usually not built out as a full API and are kind of just cobbled together. Eventually they just swap to an ORM to solve the issue.

[1] https://kysely.dev

rzmmm

One approach is to create views for the required data and then just select the columns which are needed. The joins will be pruned by the query planner if they are not needed, so there is no need for conditional joins.
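A minimal sketch of that pattern (all table and column names here are hypothetical): define one wide view up front, then have each grid query select only the columns it needs.

```sql
-- One wide view joining everything the grid might display.
CREATE VIEW order_grid AS
SELECT o.id,
       o.created_at,
       c.name  AS customer_name,
       p.title AS product_title
FROM orders o
LEFT JOIN customers c ON c.id = o.customer_id
LEFT JOIN products  p ON p.id = o.product_id;

-- This query never touches product columns. If products.id is unique
-- (so the join cannot change the row count), a planner that supports
-- join elimination can drop the products join entirely.
SELECT id, created_at, customer_name
FROM order_grid
WHERE created_at > now() - interval '7 days';
```

Whether the join is actually eliminated depends on the database and on constraints it can prove, so it's worth checking the plan with EXPLAIN.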

hu3

> The joins will be pruned by the query planner if they are not needed, so there is no need for conditional joins.

I've always wondered about this. How reliable is that in your experience? Thanks in advance.

derekperkins

Depends on the underlying database, and you'd have to test your query to have any level of certainty.

That being said, query planning is generally where Oracle/MSSQL outshine MySQL/Postgres, especially for pruning unnecessary joins. BigQuery is great at it IME.

banashark

Yeah this definitely makes sense, and is good database API design as well.

dathinab

> dynamic query building

it's not (really) addressed by sqlx (intentionally), in the same way most ORM features are not addressed

but to some degree this is what's so nice about sqlx: it mainly(1) provides the basic SQL functionality and then lets you decide what to use on top of it (or whether to use anything on top).

If you need more, e.g. the sea-* ecosystem (sea-query, sea-orm) might fulfill your needs.

(1): It can compile-time check "static" queries (i.e. only placeholders), which is a bit more than "basic" features, but some projects have 99+% static queries, in which case this feature can move SQLx from "a building block for other sql libs" to "all you need", keeping dependencies thinner.

tracker1

Not rust, but I've been a pretty big fan of Dapper and Dapper.SqlBuilder in the C# space... have used it with MS-SQL and PostgreSQL very effectively, even with really complex query construction against input options.

https://github.com/DapperLib/Dapper/blob/main/Dapper.SqlBuil...

nicoburns

I find that interpolating strings works pretty well for this use case (a previous job of mine actually switched TO string interpolation from ORMs).

But this is conditional on either your database or your minimal abstraction layer having support for binding arrays of data with a single placeholder (which is generally true for Postgres).
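When that array-binding support is missing, the usual fallback is expanding one placeholder per value while building the query text. A runnable sketch in plain Rust (no driver involved; the function name is made up):

```rust
// Build "col IN ($1, $2, ...)" for n values, with Postgres-style
// numbered placeholders. On Postgres you could instead write
// "col = ANY($1)" and bind a single array parameter, avoiding the
// dynamic SQL entirely.
fn in_clause(column: &str, n: usize) -> String {
    let placeholders: Vec<String> = (1..=n).map(|i| format!("${i}")).collect();
    format!("{} IN ({})", column, placeholders.join(", "))
}

fn main() {
    println!("{}", in_clause("id", 3)); // id IN ($1, $2, $3)
}
```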

zackangelo

Is something like SeaQuery[0] what you're talking about?

[0] https://github.com/SeaQL/sea-query/

inbx0

SeaQuery looks like a similar dynamic query builder for Rust as Kysely is for JS/TS, so yeah, that'd probably solve the dynamic query problem. But I think the parent wasn't so much asking for another library as for patterns.

How do people who choose to use a no-dsl SQL library, like SQLx, handle dynamic queries? Especially with compile-time checking. The readme has this example:

  ...
  WHERE organization = ?
But what if you have multiple possible where-conditions, let's say "WHERE organization = ?", "WHERE starts_with(first_name, ?)", "WHERE birth_date > ?", and you need some combination of those (possibly also none of them) based on query parameters to the API? I think that's a pretty common use case.

fstephany

I agree with you that dynamic query building can be tedious with a pure SQL approach. The use case you are describing can be solved with something along the lines of:

  WHERE organization = $1
     AND ($2 IS NULL OR starts_with(first_name, $2))
     AND ($3 IS NULL OR birth_date > $3)
With SQLx you would make all the params Options and fill them according to the parameters that were sent to your API.

Does that make sense?

williamdclt

I generally avoid DSLs as they don't bring much... except for this exact use-case. Dynamic queries are pretty much what a query builder is for: you can avoid a dependency by rolling your own, but it's not trivial, and people out there have built some decent ones.

So, if I have this use-case I'd reach for a query builder library. To answer the question of "how to do dynamic queries without a query builder library", I don't think there's any other answer than "make your own query builder"
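As a concrete illustration of "make your own query builder", here is a minimal runnable sketch in plain Rust (no sqlx or other dependencies; all names are invented). It only assembles SQL text with numbered placeholders; the matching values would still be bound through whatever driver you use:

```rust
/// Minimal dynamic WHERE builder: conditions are appended only when the
/// corresponding API parameter was actually supplied.
struct QueryBuilder {
    base: String,
    conditions: Vec<String>,
    next_param: usize,
}

impl QueryBuilder {
    fn new(base: &str) -> Self {
        Self { base: base.to_string(), conditions: Vec::new(), next_param: 1 }
    }

    /// Rewrite the `?` in `condition` to the next numbered placeholder,
    /// e.g. "birth_date > ?" becomes "birth_date > $2".
    /// (This sketch assumes exactly one `?` per condition.)
    fn filter(mut self, condition: &str) -> Self {
        let numbered = condition.replace('?', &format!("${}", self.next_param));
        self.next_param += 1;
        self.conditions.push(numbered);
        self
    }

    fn build(self) -> String {
        if self.conditions.is_empty() {
            self.base
        } else {
            format!("{} WHERE {}", self.base, self.conditions.join(" AND "))
        }
    }
}

fn main() {
    // Pretend these came in as optional API query parameters.
    let organization: Option<&str> = Some("acme");
    let name_prefix: Option<&str> = None;

    let mut q = QueryBuilder::new("SELECT id, first_name FROM users");
    if organization.is_some() {
        q = q.filter("organization = ?");
    }
    if name_prefix.is_some() {
        q = q.filter("starts_with(first_name, ?)");
    }
    println!("{}", q.build());
    // SELECT id, first_name FROM users WHERE organization = $1
}
```

The hard parts a real library handles start right after this sketch: keeping placeholder numbers aligned with the bound values, dialect differences ($1 vs ?), and composing joins as well as predicates.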

dathinab

> Especially with compile-time checking.

no compile time checking; integration tests instead

in general sqlx only provides the most minimal string-based query building, so you can easily run into annoying edge cases you forgot to test. If your project needs a lot of that, libraries like sea-query or sea-orm are the way to go (though it's still viable without them, just a bit annoying).

in general SQLx "compile time query checking" still needs a concrete query and a running db to check that the query is valid. It is not doing a re-implementation of every dialect's syntax, semantics, and subtle edge cases; that just isn't practical, as sql is too inconsistent in the edge cases, non-standard extensions, and even the theoretically standardized parts (due to it costing money to read the standard, and its updates being highly biased toward MS/Oracle databases).

This means compile-time query checking doesn't scale that well to dynamic queries: you basically would need to build and check every query you might dynamically create (or the subset you want to test), at which point you are in integration-test territory (and you can do it with integration tests just fine).

besides the sqlx-specific stuff, AFAIK some of the "tweaked sql syntax for better composability" experiments are heading for SQL standardization, which might make this way less of a pain in the long run, but I don't remember the details at all, so, uh, maybe not?

---

EDIT: Yes, there is an sqlx "offline" mode which doesn't need a live db; it works by basically caching results from the online mode. It is very useful, but still not "independent/standalone" query analysis.

bitbasher

I've been using sqlx with postgres for several months now on a production server with decent query volume all day long. It has been rock solid.

I find writing sql in rust with sqlx takes far fewer lines of code than the same in Go. This server was ported from Go, and the end result was ~40% fewer lines of code, less memory usage, and stable cpu/memory usage over time.

jchw

Speaking of Go, if you want compile-time type checking like what SQLx offers, the Go ecosystem has an option that is arguably even better at it:

https://sqlc.dev/

It has the advantage that it implements the parsing and type checking logic in pure Go, allowing it to import your migrations and infer the schema for type checking. With SQLx you need to have your database engine running at compile time during the proc macro execution with the schema already available. This makes SQLx kind of a non-starter for me, though I understand why nobody wants to do what sqlc does (it involves a lot of duplication that essentially reimplements database features.) (Somewhat ironically it's less useful for sqlc to do this since it runs as code generation outside the normal compilation and thus even if it did need a live database connection to do the code generation it would be less of an impact... But it's still nice for simplicity.)

bitbasher

It's possible to run sqlx in 'offline' mode that uses your schema to do the checks so you don't need a live database. That's a popular option in CI/CD scenarios.

jjice

It's absolutely core to SQLx. I'm surprised to hear that it isn't widely known, based on the parent. The first time I used SQLx has to be 4 or 5 years ago, and they had it back then.

solidsnack9000

This is not quite the same thing, because it requires `sqlx prepare` to be run first, and that talks to the database to get type information. In SQLC, on the other hand, query parsing and type inference are implemented from first principles, in pure Go.

echelon

Offline query caching is great. The team has made it work fantastically for workspace oriented monorepos too.

I ran sqlx / mysql on a 6M MAU Actix-Web website with 100kqps at peak with relatively complex transactions and queries. It was rock solid.

I'm currently using sqlx on the backend and on the desktop (Tauri with sqlite).

In my humble opinion, sqlx is the best, safest, most performant, and most Rustful way of writing SQL. The ORMs just aren't quite there.

I wish other Rust client libraries were as nice as sqlx. I consider sqlx to be one of Rust's essential crates.

solidsnack9000

Implementing the parsing and type checking logic in pure Go is not an unqualified advantage. As you point out, it means that SQLC "...essentially reimplements database features..." and in my experience, it does not reimplement all of them.

conroy

Maintainer of sqlc here. Thanks for the kind words! I'm considering switching to the sqlx model of talking to a running database simply because trying to re-implement PostgreSQL internals has been a huge challenge. It works for most queries, but for the long tail of features, it's a losing battle.

Can you tell me why it's a non-starter for you?

jchw

I think it's only a non-starter for me in SQLx if not using query caching. Caching makes the situation workable.

For sqlc, it isn't really a big problem because you only need to run the code generation when you're actually modifying database things. Still, with that having been said, I think just passing a database URI and having analysis work based on that is not ideal. Using an actual database isn't a huge problem, but having to manage the database instance out of band is the part that I think isn't great, because it allows the schema in the code to trivially desync from the schema used in analysis. If I used SQLx I'd probably be compelled to try to wire up a solution that spawns the database and migrates it up hermetically for the caching part. Likewise if I used this mode of sqlc.

I guess it might be possible for sqlc to add first class support for that sort of concept, but I can see holes in it. For one thing, you have to figure out where to grab binaries from and what version. An approach using Docker/Podman works, and at least partly solves this problem because you could allow specifying any OCI image, but that has caveats too, like requiring Docker or Podman to be installed. The most heroic effort would be to use some kind of solution using WASM builds of database engines: pulling down and running something like PGlite in process seems like it would be an almost ideal solution, but it sticks you to whatever things can actually be made to work in WASM in terms of features, extensions and versions, at least unless/until database servers and extension vendors miraculously decide that supporting WASM as a target is a good idea. Still, if you want some crazy ideas for how to make the UX better, I think either the Docker approach or the WASM approach could be made to work to some degree.

Barring that, though, I'd be most likely to have some kind of Docker setup for running sqlc with an ephemeral database instance. It's not pretty, but it works...

I don't think it would be a non-starter, though. I only really think that connecting to the database from within rustc invocations is a non-starter.

tizzy

I never gelled with how SQLC needs to know about your schema via the schema file. I'm used to Flyway, where you can update the schema as long as it's versioned correctly, such that running the full set of Flyway migrations will produce the same db schema.

I preferred go-jet since it introspects the database for its code generation instead.

jchw

The way I prefer to use sqlc is in combination with a schema migration framework like goose. It actually is able to read the migration files and infer the schema directly without needing an actual database. This seems to work well in production.

craftkiller

> with SQLx you need to have your database engine running at compile time during the proc macro execution with the schema already available.

FWIW, the compile-time query checking is entirely optional. If you don't use the query syntax checking then you don't need a live database and you don't need `sqlx prepare`.

written-beyond

I spent 2 weeks trying to build a very basic REST CRUD API with SQLc and it was not better. I had to shift to SQLx because of how unintuitive SQLc was.

devjab

We've been running SQLC in production for a while now and I'm curious which part of it you found unintuitive. We run ours as a container service within the development environment that will compile your code from a postgres dump file. We've had no issues with it at all after the initial configuration, though the documentation certainly isn't great. Hell, I'm not sure I've ever worked with a better SQL-to-language tool in my decades, so I'm surprised that it isn't working out for you.

That being said, as I understand it, SQLx does something very different. If you want dynamic queries, you'll basically have to build that module yourself. The power of SQLC is that anyone who can write SQL can work on the CRUD part of your Go backend, even if they don't know Go. Hell, we've even had some success with business domain experts who added CRUD functionality by using LLM's to generate SQL. (We do have a lot of safeguards around that, to make it less crazy than it sounds).

If you want fancy LINQ, GraphQL, OData, or even a lot of REST frameworks, you're not getting any of that with SQLC though, but that's typically not what you'd want from a Go backend in my experience. Might as well build it with C# or Java then.

happens

Interesting - I've had the opposite experience. I usually prefer rust for personal projects, but when I recently tried to use SQLx with sqlite, lots of very basic patterns presented problems, and I wished I had sqlc back.

frollogaston

This is why I like using NodeJS or Python with SQL, it's very simple to have it not care about the return types. SQL is already statically typed per se, I don't need to re-assert everything. Achieving the same kind of automation in Go etc requires parsing the schema at compile-time like what you described, which is complicated.

Thaxll

imo sqlc from Go is superior to sqlx from Rust. The other thing is that sqlx is somewhat slow: when I did some tests, pgx (Go) was faster than sqlx.

selfmodruntime

sqlx pulls in `syn`. Syn is really slow to compile.

frollogaston

How is it more LoC in Go, just cause of the "if err" stuff?

bitbasher

Go's verbose error handling certainly impacted the vertical height of files (lots of early returns), but wasn't a big contributor to overall LoC.

The more serious LoC offenders in Go were:

1. Marshalling/Unmarshalling code (for API responses, to/from external services, etc). In general, working with JSON in Go was painful and error prone. Rust's serde made this a complete non-issue.

2. Repetitive sql query code (query, scan for results, custom driver code for jsonb column marshalling/unmarshalling). Rust's sqlx made this a non-issue.

3. Go's use of context to share data through handlers was a real pain and error prone (type casting, nil checks, etc). Rust's actix-web made this a real beautiful thing to work with. Need a "User" in your handler? Just put it as an argument to the handler and it's only called if it's available. Need a db connection? Just put it as an argument to the handler.

4. Go's HTML/Text templates required more data to be passed in and also required more safety checks. Rust's askama was overall more pleasant to use and provided more confidence when changing templates. In Rust, I'd catch errors at compile time. In Go, I'd catch them at runtime (or, a user would).

I must admit I was surprised. I thought Rust would have been more lines of code because it's a lower level language, but it ended up being ~40% less code. My general sentiment around working with the code is very different as well.

In the Rust codebase I have no hesitation to change things. I am confident the compiler will tell me when I'm breaking something. I never had that confidence in Go.

frollogaston

Hm. I've used Rust a lot more than Go, so this is secondhand to me. I know that generics are iffy and nullness is annoying. If you're paying for static types in Go and still not getting the guarantees, that really bites.

tmpfs

I have used this as well as many of the other lower-level db drivers (which don't check your SQL at compile time) and I can say I much prefer the latter.

My issues with SQLx when I first tried it were that it was really awkward (nigh impossible) to abstract away the underlying DB backend. I expect those issues are fixed now, but for some simple apps it's nice to be able to start with SQLite and then swap in postgres.

Then I wanted to dockerize an SQLx app at one point, and it all became a hassle: you need postgres running at compile time, and trying to integrate that with docker compose was a real chore.

Now I don't use SQLx at all. I recommend other libraries like sqlite[1] or postgres[2] instead.

SQLx is a nice idea but too cumbersome in my experience.

[1]: https://docs.rs/sqlite/latest/sqlite/ [2]: https://docs.rs/postgres/latest/postgres/

belak

I have no experience with abstracting away the backend, but Dockerizing is actually pretty easy now - there's an offline mode[1] where you can have sqlx generate some files which let it work when there's no DB running.

[1]: https://docs.rs/sqlx/latest/sqlx/macro.query.html#offline-mo...

vegizombie

It's definitely not perfect, but I think both of those issues are better now, if not fully solved.

For needing a DB at compile time, there's an option to have it produce artefacts on demand that replace the DB, although you'll need to connect to a DB again each time your queries change. And even that is only needed if you want it to compile-time check your queries at all.

0xCMP

I know it's annoying (and apparently there is a solution for generating the required files before the build), but in these kinds of situations Go and Rust are great for doing a static build on the system and then copying into a scratch image.

Versus Python and Node often needing to properly link with the system they'll actually be running in.

adelmotsjr

Why would you want to abstract away the underlying database? Wouldn't it be better to already use the target DB to catch potential issues earlier? Also to avoid creating another layer of indirection, potentially complicating the codebase and reducing performance?

TrueDuality

Primarily for libraries and deployment environments that aren't fully in your control which is still pretty common once you get to B2B interactions, SaaS is not something you can easily sell to certain environments. Depending on the assurance you need, you might even need to mock out the database entirely to test certain classes of database errors being recoverable or fail in a consistent state.

Even in SaaS systems, once you get large enough with a large enough test suite you'll be wanting to tier those tests starting with a lowest common denominator (sqlite) that doesn't incur network latency before getting into the serious integration tests.

small_scombrus

> Wouldn't it be better to already use the target DB to catch potential issues earlier?

The target DB can change as a project goes from something mildly fun to tinker with to something you think might actually be useful.

Also I personally find that SQLite is just nice to work with. No containers or extra programs, it just does what you ask it to, when you ask it to

stmw

Thanks, interesting experience - so much depends on getting developer ergonomics right. There is something to be said for checking the SQL at compile time, though - esp. if trying to map an ORM onto a typesafe language.

no_circuit

How long ago did you try SQLx? Not necessarily promoting SQLx, but the `query_as` macro, which lets one make queries without the live database, has been around for 5 years [1].

For lower-level libraries there is also the more-downloaded SQLite library, rusqlite [2], whose author is also the maintainer of libsqlite3-sys, which is what the sqlite library wraps.

The most pleasant ORM experience, when you want one, IMO is the SeaQL ecosystem [3] (which also has a nice migrations library), since it uses derive macros. Even with an ORM I don't try to make databases swappable via the ORM, so I can support database-specific enhancements.

The most Rust-like in an idealist sense is Diesel, but its well-defined path is to use a live database to generate Rust code that uses macros to define the schema-defining types, which are then used in the row structs' type/member checking. If the auto-detect does not work, then you have to use its patch_file system, which can't be maintained automatically just through Cargo [4] (I wrote a Makefile scheme for myself). You most likely will have to use the patch_file if you want to use chrono::DateTime<chrono::Utc> for timestamps with time zones, e.g., Timestamp -> Timestamptz for postgres. And if you do anything advanced like multiple schemas, you may be out of luck [5]. And it may not be the best library for you if you want large denormalized tables [6], because of compile times, and because a database that is not normalized [7] is considered an anti-pattern by the project.

If you are just starting out with Rust, I'd recommend checking out SeaQL. Then, if you can benchmark that you need faster performance, swap in one of the lower-level libraries for the affected methods/services.

[1] https://github.com/launchbadge/sqlx/commit/47f3d77e599043bc2...

[2] https://crates.io/crates/rusqlite

[3] https://www.sea-ql.org/SeaORM/

[4] https://github.com/diesel-rs/diesel/issues/2078

[5] https://github.com/diesel-rs/diesel/issues/1728

[6] https://github.com/diesel-rs/diesel/discussions/4160

[7] https://en.wikipedia.org/wiki/Database_normalization

owlstuffing

Quite similar to manifold-sql[1], which is arguably better integrated into Java than SQLx is into Rust. Inline native SQL in Java is *inherently type-safe*, with no mapping: query types, query results, and query parameters are all projected as types at compile time.

    int year = 2019;
    . . .
    for(Film film: "[.sql/] select * from film where release_year > :rel_year".fetch(year)) {
        out.println(film.title);
    }
1. https://github.com/manifold-systems/manifold/blob/master/man...

Ameo

sqlx is my favorite way of working with databases in Rust hands down.

I've tried alternatives like Diesel and sea-orm. To be honest, I feel like full-blown ORMs really aren't a very good experience in Rust. They work great for dynamic languages in a lot of cases, but trying to tie in a DB schema into Rust's type system often creates a ton of issues once you try to do anything more than a basic query.

It's got a nice little migration system too with sqlx-cli which is solid.

theOGognf

I’ve used Diesel for a bit now but haven’t had issues wrangling the type system. Can you give an example of an issue you’ve encountered?

p4ul

This has been exactly my experience! I've found SQLx to be a joy to work with in Rust!

WD-42

Same. Never again diesel. The type system just turns it into madness. Sqlx is a much more natural fit.

thesuperbigfrog

I used SQLx with an SQLite database and ran into connection pool problems that would cause the database to be unexpectedly dropped.

The issues I saw seem to be related to these issues:

https://github.com/launchbadge/sqlx/issues/3080

https://github.com/launchbadge/sqlx/issues/2510

The problems did not manifest until the application was under load with multiple concurrent sessions.

Troubleshooting the issue by changing the connection pool parameters did not seem to help.

I ended up refactoring the application's data layer to use a NoSQL approach to work around the issue.

I really like the idea of SQLx and appreciate the efforts of the SQLx developers, but I would advise caution if you plan to use SQLx with SQLite.

zbentley

As a total outsider to sqlx, those issues don’t surprise me: any application on any platform that uses a SQLite in-memory DB concurrently is likely to violate many assumptions made by client-side connection pooling tools. In-memory SQLite is a great tool, but using it indirectly behind a connection pooler that assumes the database is external to the current process is bound to cause problems.

stmw

Agree with zbentley; I actually wouldn't expect this to work well - perhaps a good thing for the sqlx team to warn against.

cortesi

SQLx is great, but I had a long laundry list of issues with its SQLite support so I forked it into a focused SQLite-specific library. It has now diverged very far from SQLx, and the number of small inaccuracies and issues we fixed in the low-level SQLite bindings is well into the dozens. The library is unannounced, but is already being used in some high-throughput scenarios.

https://github.com/cortesi/musq

thesuperbigfrog

Musq looks very friendly. I will try it in a future project.

Thank you for sharing it!

moggers123

SQLx is great, but I really wish they had a non-async interface. I had to switch a project from sqlx to rusqlite seemingly just due to the overhead of the async machinery. Saw a 20x latency reduction that I narrowed down to "probably async" (it's sort of hard to tell; I find it very difficult to do perf analysis of async code). I try to avoid discussing async so as to not come off as a frothing-at-the-mouth, chest-thumping luddite, but honestly, if sqlx had a non-async interface I'd be very happy to accept the "you don't need to use it" argument. It's the only place where I don't feel like I really have a choice.

pie_flavor

Async does not incur 20x slowdowns when you're I/O bound. It would be ridiculous for copying a few bytes to be slower than a syscall. This sounds like mutex issues, or WAL config, or something like that.

moggers123

I just chucked something together to try and demonstrate. I don't see the massive 20x slowdown, only about a 3x slowdown (5x on release build). Still enough to be painful for the use case in question.

https://github.com/Moggers/rusqlitebenchmark

Do you think you could look through it and point out what you think the reason is? I think they've both got the same WAL and mutex settings. It's a very contrived and synthetic example, but actually somewhat representative of what the original code wanted to do.

nbf_1995

SQLx and F# type-providers are probably the best developer experience for writing database access code. I wish more languages had something equivalent.

stmw

I think this sort of stuff only comes after a LOT of experience with building SQL db backed systems - it resonated with me immediately. (I'm the OP but not affiliated with this Rust project at all).

hyperbrainer

I have never used SQLx, but the best SQL integration I can think of is LINQ. How does this compare to that?

S04dKHzrKT

In the dotnet world, SQLx is more analogous to F# type providers like FSharp.Data.SqlClient, SQLProvider, or Rezoom.SQL.

RonanSoleste

Different products. I would not compare them. LINQ is more like Diesel (https://diesel.rs/)

maxbond

To expand: SQLx isn't an ORM or query builder. What it does is allow you to write raw SQL with compile-time guarantees of type safety. It does this by connecting to a dev database at compile time and using the database's introspection features (specifically, by preparing a statement[1]) to analyze your queries. (It can also cache this information to check without a database available, and it has a basic migration facility.)

[1] https://github.com/launchbadge/sqlx/blob/main/FAQ.md#how-do-...

chuckhend

Love SQLx for my Rust projects. I would like to figure out a great way to use the compile time checks in python or js projects, but haven't explored it yet.

germandiago

I am not much into Rust ATM; I am quite comfortable with C++. So here goes my question:

I use sqlpp11 in C++.

I generate code and I can use it with strong typing by including some headers. This Rust crate seems to provide compile-time checking.

But will it give me code completion? It is very nice that by pressing '.' you know what you potentially have.

LtdJorge

It depends. On RustRover you do, because the query text can be language-injected as SQL, and it uses your configured schema.

lsuresh

I first went to sqlx thinking it would be like jOOQ for Rust, but that wasn't the case. It's a pretty low-level library and doesn't really abstract away the underlying DBs much, not to mention issues with type conversions. We've since just used rust-postgres.


SQLx – Rust SQL Toolkit - Hacker News