probablypower
This is interesting, because I have the exact opposite response to these sorts of articles.
I think any bias or personal interest should be declared upfront in media (articles, videos, podcasts, ...) rather than appear as a 'common consumer' talking about a pain point in a relatable way. It really rubs me the wrong way when an article ends with a bait-and-switch, where you realise the entire article was manufactured to make you relate to their product's business case.
Obviously this method must resonate with people, like yourself, otherwise it wouldn't become so common. I guess I'm just the 'B' in the A/B testing that results in this type of marketing.
xwowsersx
I don't think there's any bait-n-switch unless you're unaware you're on fauna.com. Content marketing is well understood and at least in this case it isn't trash content someone spit out in 2 mins in order to lure you to their site. I don't see anything wrong with a company/product delivering value to you in the form of quality content and in exchange they are afforded an opportunity to run your eyes over their product offering.
naasking
> It really rubs me the wrong way when an article ends with a bait-and-switch, where you realise the entire article was manufactured to make you relate to their product's business case.
I'm not sure you have to take an adversarial interpretation of that tactic. If you don't find yourself agreeing with the setup, then you aren't a prospective customer; but if the article was informative up to that point, at least you now know about a domain you weren't previously aware of. Isn't that the point of reading technical articles? In the future, you might find yourself in that position after all.
zzzeek
agree, I have to look at the domain name, the title/sidebar, etc. to see up front, "OK this is yet another 'we think we have a better SQL' startup", then I skip the whole thing.
A site that's simply "here's our product and why you might like it!" would be fine; dressing it up in "SQL, well you know, it has shortcomings" is just unnecessary.
austhrow743
Why would your original assumption not be that the Fauna website exists to promote Fauna?
snthpy
Disclaimer: I'm a core contributor to PRQL [1] and post about it a lot on HN. Apologies for jumping in on other people's threads, but for people interested in the headline, PRQL might be of interest.
At PRQL[1] we believe that SQL is a combination of two things:
1. Relational Algebra, which is eternal because it's just maths, and 2. A language designed in the 70s that looks like COBOL.
When people say that SQL will never die, they are usually thinking about Relational Algebra because SQL has been used interchangeably with that. With PRQL we agree that Relational Algebra is fundamental to thinking about data and we intend to keep that. However we've learned a lot about programming languages in the last 50 years and so PRQL is a revamp of SQL that brings the composability of functional languages and modern ergonomics to data transformations in order to improve the DX and UX of data scientists, data analysts and analytics engineers.
PRQL is simply a compiler that produces SQL so you can use it with whatever database you are currently using. It's completely open source with zero commercial associations and is deeply committed to staying that way forever.
ttfkam
The syntax comparison section will likely sour a lot of viewers who already know SQL. You try too hard to highlight how easy and terse PRQL is by putting the comma-separated items on a single line in PRQL while giving each item its own line in SQL.
It may be typical of many SQL users and formatters, but it leaves a poor taste in the mouth that you aren't interested in an actual comparison but in marketing.
For those who already know SQL, the real question is: will it make my queries faster? Putting the FROM first isn't sufficiently compelling on its own. Having a processing pipeline, though marginally more elegant to look at, doesn't actually improve upon CTEs.
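The parent's point that a pipeline doesn't improve on CTEs can be sketched concretely. Here's a minimal CTE "pipeline" against SQLite via Python; the table and data are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (id INTEGER, first_name TEXT, age INTEGER)")
con.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                [(1, "Ana", 34), (2, "Bo", 28), (3, "Cy", 45)])

# Each CTE names one stage; later stages read from earlier ones,
# much like steps in a PRQL pipeline.
rows = con.execute("""
    WITH picked AS (
        SELECT id, first_name, age FROM employees
    ),
    adults AS (
        SELECT * FROM picked WHERE age >= 30
    )
    SELECT * FROM adults ORDER BY age LIMIT 2
""").fetchall()
print(rows)  # [(1, 'Ana', 34), (3, 'Cy', 45)]
```

Each stage can be read (and debugged) in isolation by selecting from it directly, which is the same ergonomic benefit a pipeline syntax claims.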
When you say you can use it with any database, how do you handle functions, stored procedures, jsonpath, and the massive differences in functionality between Oracle, MS SQL Server, Postgres, DB2, MySQL, MariaDB, H2, SQLite, etc.? Lowest common denominator?
After 49 years of SQL, more than syntax has to change; you need an engine that supports this natively and can actually improve planner behavior over existing engines.
I will grant that if you are limiting your target audience to primarily analytics, it's probably sufficient. The marketing of PRQL doesn't always appear to do this however.
wackget
Yeah the syntax comparison is deliberately misleading.
They style it as "4 lines vs 10 lines!" when it's actually 4 lines vs 4 lines.
# PRQL
from employees
select {id, first_name, age}
sort age
take 10
# Misleading SQL
SELECT
id,
first_name,
age
FROM
employees
ORDER BY
age
LIMIT
10
# Actual SQL
SELECT id, first_name, age
FROM employees
ORDER BY age
LIMIT 10
The join example is similarly deceptive:
# PRQL
from employees
join b=benefits (==employee_id)
join side:left p=positions (p.id==employees.employee_id)
select {employees.employee_id, p.role, b.vision_coverage}
# Misleading SQL
SELECT
employees.employee_id,
p.role,
b.vision_coverage
FROM
employees
JOIN benefits AS b ON employees.employee_id = b.employee_id
LEFT JOIN positions AS p ON p.id = employees.employee_id
# Actual SQL
SELECT employees.employee_id, p.role, b.vision_coverage
FROM employees
JOIN benefits b USING (employee_id)
LEFT JOIN positions p USING (employee_id)
Nonsense.
sebastianz
It is also (for me) quite a bit noisier and less readable than SQL. The join example has multiple different syntactic symbols, for no clear intuitive reason (= vs ==, {} vs (), side:left vs p=positions).
maximilianroos
[PRQL dev here]
I strongly think we should have the best examples of SQL to compare against. I've ironically made this complaint for other libraries, so I'm alarmed that folks think we might have done the same.
We would take a PR for the first example if folks think that's better spacing. (I think the spacing is the only difference?)
For the second — `USING` isn't fully equivalent to `ON`. There are discussions on GH (https://github.com/PRQL/prql/issues/723) as to which we should compile to. In the meantime, we'd take a PR on anything that is equivalent.
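The non-equivalence is easy to demonstrate: with `USING`, the join column appears once in the result; with `ON`, both copies survive. A quick sketch against SQLite (schema invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INTEGER, x TEXT)")
con.execute("CREATE TABLE b (id INTEGER, y TEXT)")
con.execute("INSERT INTO a VALUES (1, 'x1')")
con.execute("INSERT INTO b VALUES (1, 'y1')")

# ON keeps both copies of the join column...
on_cols = [d[0] for d in con.execute(
    "SELECT * FROM a JOIN b ON a.id = b.id").description]
# ...while USING coalesces them into one.
using_cols = [d[0] for d in con.execute(
    "SELECT * FROM a JOIN b USING (id)").description]
print(on_cols)     # ['id', 'x', 'id', 'y']
print(using_cols)  # ['id', 'x', 'y']
```

So a compiler emitting `SELECT *` can't freely swap one for the other without changing the result shape.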
OJFord
Also there's no need to capitalise, so the 4 lines vs. 4 lines is actually just 'order by & limit' vs. 'sort & take'. NBD really, but it's probably nicer in more complex examples.
But I raised this the last time PRQL was submitted in its own right - https://news.ycombinator.com/item?id=36869376 - so I'll leave it there!
Scarbutt
Looks like they are showcasing the output of the compiler, not the hand written SQL.
j-a-a-p
PRQL is a solution in search of a problem. Lacking such, the problem had to be fabricated.
iamsam123
Does it SELECT * by default if I never define a SELECT below my FROM? ... Continuing to encourage folks by making SELECT * even easier would not be fun for me... I could be wrong?
Agreed, just gaming the formatting so it's "fewer lines" than traditional SQL soured me.
The expressions example is ridiculous; in Redshift I can do this all day:
SELECT 1 + 2 AS num1, num1 * 2 AS num2 -- Literally no difference
Just learn SQL...
Zak
> Just learn SQL...
I know SQL, and I imagine the authors of PRQL know it better than I do.
Doesn't it seem weird that dozens of application languages have become popular since the 1970s, but we're still using dialects of the same old database query language? If it had a really elegant syntax, perhaps it wouldn't, but SQL's syntax is anything but. Some of the semantics can be awkward as well.
I, for one, welcome attempts to move things forward (which is different from saying I'm going to run out and use PRQL in production tomorrow).
Izkata
> Putting the FROM first isn't sufficiently compelling on its own.
Personally I see that as not even neutral; it's a downside. Optimizing for autocomplete is an antipattern: code is read far more often than it's written, and the SELECT clause is the interface to the following code. It should be easy to find when skimming, not buried in the query.
The SELECT clause is also akin to an assignment and it's extremely rare I see anyone advocating flipping the order of those to match what they say they want in SQL.
Edit: Since I'm sure someone is going to jump on it, yes, I'm conflicted about the WITH clause: It's extremely useful and I like what it does, so I do use it, but I don't like where it's positioned. I've been toying with indentation to work around it so SELECT is still just as visible as otherwise.
anon84873628
You might find Malloy interesting as it makes a greater departure from SQL syntax. Queries are first class objects which can be nested within each other in order to do trellising. It still compiles to SQL because that is the only language accepted by DBMSs today; however it will automatically write symmetric aggregate calculations and do those nestings that are hard for a human to write.
ako
How are views not first class query objects?
maximilianroos
[PRQL dev here]
I strongly think we should have the best examples of SQL to compare against. I've ironically made this complaint for other libraries, so I'm alarmed that folks think we might have done the same.
We would take PRs for any improvements to the SQL that make it a better comparison.
chrisjc
Not a criticism, more of an observation.
The point of being able to write "FROM" before "SELECT" has become moot to me since many DBs support that SQL "convention" already.
The newline/indentation is something I would do for readability in the PRQL too.
At the end of the day, I'm becoming less concerned about these various query language syntaxes and more concerned about the logical and physical plans generated by the respective query engines.
In my experience, what ends up most problematic is that each query-engine/optimizer sometimes/usually requires SQL tweaks/nuances to perform as efficiently as possible. That's where reading/writing/maintaining queries can get really confusing/difficult. An abstraction such as PRQL, dataframes, etc. might add value here, but only if the abstraction (or runtime/env built around it) can tune the output SQL to each query engine as needed.
There are a lot of projects using frameworks/libraries such as Substrait and SQLGlot to accomplish this.
remram
It's just syntax, it compiles to SQL and runs on today's DBMS. It has no difference in speed or functionality.
lolinder
It does make a difference if the resulting SQL is unperformant.
Someone who is good at SQL can look at a query and see where the query planner might go wrong, then make subtle tweaks to get better performance. Optimizing queries in a compile-to-SQL language basically has to be left to the compiler, which may not have the context needed to write performant SQL.
Compiling a query is different than compiling a whole program to assembly, because the query makes assumptions about schemas and indexes that cannot be encapsulated in what you're feeding to the compiler at that moment.
ttfkam
(I couldn't help but notice you didn't comment on the difference in formatting in the examples.)
Do you have examples of PRQL working with jsonpath? Generating JSON? Unnesting arrays? Returning ids from an INSERT or UPDATE without making a separate read query?
Not trying to be argumentative. Honest question.
snthpy
For those interested who want to learn more, we have a number of presentations coming up at conferences on three continents:
- [QCon SF, October 2nd, San Francisco, USA](https://qconsf.com/presentation/oct2023/prql-simple-powerful...)
- [PyconZA, October 5th, Durban, South Africa](https://za.pycon.org/)
- [Community over Code (ApacheCon), October 9th, Halifax, Canada](https://communityovercode.org/schedule-list/#FT005)
- [data2day, October 12th, Karlsruhe, Germany](https://www.data2day.de/veranstaltung-21353-0-prql-a-modern-...)
lolinder
SQL will never die for the same reason that JavaScript will never die: because it's built in to all major database engines.
In both cases, any other language will be starting as a second class citizen that has to compile to SQL/JS. During this phase of a new language's lifetime, it is either a surface-level syntactic change (a la Coffeescript) that provides no objective improvement, or it has to compile its simple semantic structures into opaque SQL/JS structures that will be off the beaten path and therefore not highly optimized by the runtime. Neither will reach sufficient adoption to become a first-class citizen in a major existing platform.
TypeScript succeeded where others failed because it provided much-needed static analysis while keeping the changes minimal enough that it's completely obvious what the runtime code will look like, so there are no unexpected performance gotchas. SQL, on the other hand, doesn't really need a TypeScript because SQL is highly statically analyzable by nature.
It's not that I don't believe we could do with an improvement on SQL, but I really don't see a realistic path forward for a replacement.
totalhack
IMO a semantic layer is a nice UX/DX improvement over plain SQL in a business/analytics setting. I use a semantic layer* for >95% of use cases and fall back to SQL when needed. This balance will be different for each business of course.
paulddraper
While I mostly agree, there is a bit of Stockholm syndrome.
A lot of people don't know what they even could be missing.
For example, there is no succinct way of writing an antijoin in SQL.
The MERGE command has only been implemented by some engines due to (IIRC) concurrency concerns/ambiguities.
ANSI SQL JSON operations have improved but are still clunky.
Boolean NULL and IN is a clusterf of footguns.
Etc.
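For reference, the two common (verbose) spellings of an antijoin the parent alludes to are `NOT EXISTS` and `LEFT JOIN ... IS NULL`. A sketch against SQLite with made-up tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE orders (customer_id INTEGER)")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Ana"), (2, "Bo"), (3, "Cy")])
con.execute("INSERT INTO orders VALUES (1)")

# Spelling 1: correlated NOT EXISTS.
not_exists = con.execute("""
    SELECT c.name FROM customers c
    WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id)
    ORDER BY c.name
""").fetchall()

# Spelling 2: LEFT JOIN, then keep only the unmatched rows.
left_join = con.execute("""
    SELECT c.name FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    WHERE o.customer_id IS NULL
    ORDER BY c.name
""").fetchall()

print(not_exists)  # [('Bo',), ('Cy',)]
print(left_join)   # [('Bo',), ('Cy',)]
```

Both return customers with no orders; neither reads as directly as a hypothetical `ANTI JOIN` keyword would.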
lolinder
Oh, I agree! SQL is far from optimal, as is JS. My thoughts are more about the feasibility of a replacement than the need for one.
US electrical outlets are also highly flawed, but we're never going to replace them either. Standards, once entrenched, are nearly impossible to uproot.
maximilianroos
[PRQL dev here]
I agree with the sentiments, even if not the conclusion. SQL is omnipresent and is "fine" in a lot of cases.
TypeScript is indeed a great example of the case; Kotlin too. I'd also add that databases are already adding PRQL support — ClickHouse has native support, there's a DuckDB extension, and folks are working on a Postgres extension.
One thing I'll respectfully disagree with — "SQL is highly statically analyzable by nature":
As a really basic example: `SELECT <expr> FROM tbl` — can we tell what shape the result is? In SQL, shapes / types require a lot of context — the result could be a single row in the case of `SUM(foo)`, or it could be every row in the case of `foo, bar`. More in https://prql-lang.org/faq/...
dragonwriter
> As a really basic example: `SELECT <expr> FROM tbl` — can we tell what shape the result is?
Like most statically analyzable code, you have problems if you try to statically analyze part of the code without the relevant definitions.
But, yes, if you have the relevant code (e.g., the DDL for the table), you can. (Without it, you can in the sense that you can statically determine it as a function of the table definition, which can be sufficient in some cases.)
> in SQL, shapes / types require a lot of context — the result could be a single row in the case of `SUM(foo)`, or it could be every row in the case of `foo, bar`.
Oh, you mean, can we statically determine the shape of an expression’s results without knowing the expression? Well, no, and that’s true in most statically-analyzable languages.
lolinder
Can you tell the shape of the result of `<expr>()` in Java by just reading it? No. Does that mean that Java can't be statically analyzed? Of course not!
A static analysis system is not restricted to weird abstract constructs like `select <expr>`, it sees the complete picture and can come to conclusions based on the concrete code construct it's given. There's absolutely nothing stopping a SQL static analysis from recognizing that `SELECT sum(foo)` will always return one row with one column that is an integer type, while `SELECT foo, bar` returns some number of rows that have foo and bar columns whose types can be inferred from the CREATE TABLE statements.
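The two shapes being discussed can be observed directly; a small SQLite sketch (table invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (foo INTEGER, bar TEXT)")
con.executemany("INSERT INTO tbl VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])

# An aggregate with no GROUP BY always collapses to exactly one row...
agg = con.execute("SELECT sum(foo) FROM tbl").fetchall()
# ...while a plain projection returns one row per input row.
proj = con.execute("SELECT foo, bar FROM tbl").fetchall()
print(len(agg), len(proj))  # 1 3
```

Given the concrete expression and the DDL, both shapes are knowable before execution, which is the static-analysis point being made.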
chrisjc
> One thing I'll respectfully disagree with — "SQL is highly statically analyzable by nature"
Are you suggesting that PRQL is capable of this? Or at least easier to do in PRQL?
edgyquant
If you know <expr> then yes you know the shape of the data. You may not know the number of rows, but I fail to see how that makes it not analyzable.
anon84873628
Well, Malloy is developed within Google by the founder of Looker, so there is a chance it could be natively integrated into BigQuery. At that point you have a next gen SQL replacement available on one of the most widely used analytics and transformation engines.
Scarbutt
It's hard to say. If Microsoft or Google were behind PRQL and promoting it, it might well become a TypeScript. There's a reason many developers use query builders: while not exactly the same thing, they want the features and familiarity of modern programming languages. Not saying PRQL is the correct approach here, since I don't know it.
frogulis
Pretty cool, your description got my click. I particularly enjoy that a filter is a filter before and after grouping.
One thing, the "showcase" section is not usable for me on mobile. The code box does not fit on the screen horizontally and I can't scroll right to see the remainder of it.
snthpy
Thank you for the feedback. I'll let the team know.
We definitely want people on all devices to be able to learn about the project.
ledauphin
I've been excited in the abstract about PRQL for quite a while. But something FQL seems to have a much better handle on is the value of document-orientedness, or what you might alternatively call "gradual schematization".
This problem has been solved (if not beautifully, at least acceptably) by modern SQL databases that support a JSON storage format and associated "secondary query language".
I know PRQL has had an open issue on this subject for a while. I just want to note that I think this is truly one of the critical "missing pieces" to PRQL, without which it may never be able to break out into common usage.
anon84873628
FQL is interesting because it focuses on transactional systems and eliminating the need for an ORM in applications. I feel many of the SQL replacement projects like PRQL and Malloy instead come from the analytics side of the house, which doesn't really help application developers at all. (But does raise the question, how do I do analytics in Fauna? Do I ETL to a traditional warehouse system?)
panda888888
That's a good question. If you need to ETL to a traditional data warehouse in order to do analytics, wouldn't you need an ORM, which is exactly what they're trying to avoid? (Also note that it's usually ELT instead of ETL these days, like if you're using Snowflake)
Or is Fauna a hybrid transactional/analytical database so you can do analytics in it using FQL? (Maybe the long-awaited possibility of true hybrid databases is real?!) But then you'd need to train all of your business intelligence analysts on FQL, which would be a drawback.
roenxi
Great project, wish you all the best. Anything to try and unseat SQL from common use (we can all wish for the day we run Postgres with PRQL in production). At the moment the project is probably going to lose people because it isn't obvious how to get started - many SQL beginners don't know what a compiler is and will get confused by the docs.
For the sake of their sanity, it'd be worth considering putting an example of using the compiler on a local text file somewhere prominent on that site. That way beginners can go in, write some PRQL, compile it and use it against real SQL databases.
Or if not the compiler, make it clear how beginners are supposed to engage with this. There is a big need out there for something dplyr-like that works. There are a dizzying array of options and that isn't going to help some good people who need a bit of handholding.
snthpy
Thank you for your feedback. That's really valuable!
We have the [PRQL Playground](https://prql-lang.org/playground/) exactly for that purpose.
We'll try and make it more prominent on the front page. I've also felt that we should have a "Getting started" page and will push that as a priority.
roenxi
Ah, I see. Maybe you're just losing beginners who NoScript then. I hadn't figured out what the Playground was supposed to do but it turned out there was an extra disabled script.
Nevermind, thanks.
psacawa
Is there any intention of eventually supporting DML or DDL statements? That's when the COBOL-like nature of SQL syntax is most frustrating. For example, in order to run "ALTER COLUMN ..." I have to parse a ridiculous BNF like this[0] almost every time. I'll never remember it.
Usually, the error is a gotcha built into the language syntax (e.g. forgot the keyword "TO").
seanhunter
SQL is based on relational calculus rather than relational algebra, which is why it's declarative. Relational algebra is built on fundamental relational operators (select, project, filter, product etc) which are imperatively applied. You can find out more about it here https://techdifferences.com/difference-between-relational-al...
dagss
Nitpick, but relational does not mean joins, it means tables/rows of tuples. A "relational document database" which is the slogan of Fauna it seems is a contradiction in terms.
contrast
That’s technically correct, and I think the author would say he’s aware of that definition.
The article as I read it is trying to make a broader point, that there are underlying mathematical principles that inspired Codd’s relational model.
I’ve never had cause to explore it, but my understanding is that there’s nothing in those principles that require tables/rows of tuples.
One goal of the article seems to be to inspire a curiosity in knowledgeable readers: what happens if you build a document database that also supports the same mathematical principles that inspired the relational model?
gregjor
> there’s nothing in [Codd’s] principles that require tables/rows of tuples.
Have you read Codd’s Rules #1 and #2? Pretty clear on this point.
https://en.wikipedia.org/wiki/Codd%27s_12_rules
Technically the relational model uses the term relation to refer to an unordered set of tuples, where every tuple has a key (one or more elements) to uniquely identify it, and every tuple has the same number of items, of the same type. Tables are relations. So are the results of a query, which can include joins.
ghusbands
> > there’s nothing in [Codd’s] principles that require tables/rows of tuples.
You're misquoting; is that intentional? The post you're replying to says the principles that inspired/predate Codd don't require tables/tuples. Thus, the details of Codd's relational model are irrelevant.
dragonwriter
> The article as I read it is trying to make a broader point, that there are underlying mathematical principles that inspired Codd’s relational model.
The relational model is a direct product of a set of mathematical principles Codd put together called relational algebra, which deals with sets of tuples called relations.
Nothing in the article addresses any of the mathematical underpinnings of the relational model. It's blowing smoke at an audience that it expects to know next to nothing about the topic.
> One goal of the article seems to be to inspire a curiosity in knowledgeable readers: what happens if you build a document database that also supports the same mathematical principles that inspired the relational model
The features of RDBMSs that they seem to be suggesting FQL supports are ACID transactions. While that's an important feature of RDBMSs, it isn't the same thing as the mathematical principles addressed by the relational model, whether relational algebra or the more general set theory that inspires it. The article isn't directed at knowledgeable readers.
bazoom42
A relation is by definition a set of tuples (informally called a table where the tuples are the rows).
Codd's relational database model adds the further constraint that nested tables are not allowed (first normal form), instead representing relationships through foreign keys.
Codd's motivation for disallowing nested tables is that it makes query languages much simpler. He developed relational algebra, which is the foundation behind SQL, which is why SQL does not allow nested tables.
Document databases do not follow first normal form and allow nested structures, so they cannot be queried with relational algebra, since it doesn't have a way to "drill down" into nested structures.
It is unclear to me what “mathematical principles” remain if you remove the notion of relations from the relational model.
_a_a_a_
Out of my depth here (no experience), but "Codd's relational database model adds the further constraint that nested tables are not allowed" may be wrong. He allowed nested stuff; it's just that SQL didn't support it.
Can anyone elucidate? Please don't shout that I'm wrong, because there was something there in his first paper.
marcosdumay
Whatever you want to point from theory, the one single distinctive feature of the relational model is the "mostly free" interdependency between the relations. AKA, the fks and joins.
crabbone
Worthless article. Zero useful description of what it's trying to sell. A bunch of disjoint historical facts about relational databases that have nothing to do with the product being sold take about 2/3 of the article.
Also the author seems to be very proud of associating themselves with Microsoft's products (w/o even a hint of doubt that that may not show them in favorable light)...
Also, marketing-inspired use of pseudo-programming terminology (eg. "dynamic languages"). Ewww.
ako
Seems like a lot of what Fauna does by storing documents isn't really new; Oracle, Postgres and others have provided this for a long time. I was really surprised by the performance of JSON queries [1]; it opens the door to using Postgres as a client API cache, storing the payload in a table and doing deserialization using (materialized) views.
The difference seems to be the approach of minimizing the number of calls from your application - get all required session data in one call, similar to what GraphQL does for API calls. They're also using HTTP as the protocol for database connectivity.
[1]: https://ako.github.io/blog/2023/08/25/json-transformations.h...
default-kramer
Using FQL instead of SQL seems to be a pretty big difference too.
ako
Postgres has procedural languages and lets you return complex JSON structures combining relational data and JSON documents, both via those procedural languages and via regular SQL. Sure, the syntax is different, but I'm not sure the difference makes a big impact.
robertlagrant
Yes - the difference you mention seems to be the main difference.
ttfkam
CTEs and query pipelining are not sufficient?
gigatexal
Re PRQL … I see it like my text editor. I’ll stick with vi because it has solved text editing. It’s done. Same with SQL. I’ve not seen anything yet ready to replace it. It’s not perfect. But for what I need from it it’s perfectly serviceable.
robertlagrant
On strong schemas and flexibility:
1. You still have a schema in your code. With weak schemas it's now just harder to know if every record in your database conforms to it.
2. An ORM is a great tool for prototyping. E.g. have SQLAlchemy objects in code, run a command to generate a database migration, run the migration, and you have all your data guaranteed to be compatible with your latest code, and you didn't write any SQL.
m_mueller
If you program defensively you can save on certain common schema updates in e.g. a document-based data model (e.g. adding more fields). But strong schemas definitely make sense when you're dealing with relational data, in my experience. Earlier in my career I built a relational model on top of CouchDB (due to its strong replication capabilities, including on mobile devices), but it was definitely painful (and less performant) compared to building it in a relational DB.
roenxi
> If you program defensively you can save on certain common Schema updates in e.g. a document based data model (e.g. adding more fields)
ALTER TABLE whatever ADD COLUMN new_field type DEFAULT NULL;
I've seen a lot of people claim that they don't want to waste time clarifying their schema and I'm sure there are edge cases where that is clever. But, in the majority of cases, they are literally risking data integrity for a saving smaller than the time it takes to write a HN comment.
Making schema implicit doesn't "save" anything. The schema is still there, now just only insiders who are completely familiar with the code know what it is. And they're going to have a few extra bugs because they'll forget too.
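The one-liner above really is all it takes; a runnable sketch against SQLite (table name and column are taken from the comment's example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE whatever (id INTEGER)")
con.execute("INSERT INTO whatever VALUES (1)")

# Adding a nullable column is a one-statement change;
# existing rows simply read back NULL for the new field.
con.execute("ALTER TABLE whatever ADD COLUMN new_field TEXT DEFAULT NULL")
result = con.execute("SELECT id, new_field FROM whatever").fetchall()
print(result)  # [(1, None)]
```

No table rewrite, no downtime for this class of change in most engines.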
ghusbands
> Most importantly, SQL databases made supporting highly consistent ACID transactions easy.
The default transaction isolation level for every major database is not ACID. Enabling the required serializability tends to make performance terrible, and so most don't.
_a_a_a_
> Enabling the required serializability...
is trivial, no?
> ...tends to make performance terrible
I've heard this a lot but never seen any figures - anyone have any numbers/experience?
(edit: and most apps I've worked with didn't need serialisability, either because they were working with a snapshot of data or absolutely precise answers weren't needed)
ghusbands
I've heard multiple accounts of people being taken by surprise by this and by how transaction isolation actually works in databases and not finding it at all easy to correct it. A famous one is https://blog.codinghorror.com/deadlocked/
(On your edit: The problem is not knowing when you're being hit by it. Even just maintaining a limit on total size of uploaded files or such, for example, is nontrivial under default isolation levels.)
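The upload-limit example is a classic write-skew hazard. Here's a toy Python model of the race (no real DBMS involved; the numbers and the `txn_upload` helper are invented to show the shape of the bug):

```python
# Two transactions each validate the limit against their own snapshot,
# both see room, both write, and together they break the invariant
# that neither one observed being violated.

LIMIT = 1000
uploads = [900]  # 900 bytes already stored

def txn_upload(size, snapshot):
    # Validation uses the snapshot taken when the transaction started.
    if sum(snapshot) + size <= LIMIT:
        uploads.append(size)
        return True
    return False

snapshot = list(uploads)   # both transactions read before either writes
txn_upload(100, snapshot)  # sees 900 + 100 <= 1000: allowed
txn_upload(100, snapshot)  # also sees 900 + 100 <= 1000: allowed
print(sum(uploads))        # 1100 -- the limit is silently exceeded
```

Under snapshot-style isolation neither transaction conflicts on a row, so neither aborts; only true serializability (or an explicit lock) catches this.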
_a_a_a_
On your first para I'm not remotely sympathetic. They are using a complex, sophisticated tool without understanding it. They got what they deserved. Particularly telling is this ridiculous quote "I'm a little disappointed that SQL Server treats our silly little web app like it's a banking application". Edit: so, they wanted even more lax integrity constraints by default? FFS
And, BTW, deadlocks can most definitely happen using the default Read Committed isolation level.
Per your 2nd para, I simply don't understand, can you clarify?
iudqnolq
That's presumably why the author said "made supporting... easy", not "is"?
xwowsersx
I have to say this was written extremely well. Quite cogent and I feel I learned a little something. Bookmarking this as a pretty decent intro to this area that I can refer people to.
slotrans
Fixed schemas are good. Document stores are bad. SQL is good.
Stop doing this nonsense. It's a step backwards. As the intro points out, hierarchical and graph DBs came first, and relational was built in part to solve their problems. Document DBs just bring those problems back.
bob1029
> Fixed schemas are good.
I recall getting into an argument recently (perhaps on HN) wherein the central thesis was that SQL is bad because the schema is "difficult" to change relative to a document store or other NoSQL abstraction.
If you don't have a clear idea of what the representative SQL schema might be for your problem or business (say, within ~80%+ certainty), one may argue you should not be writing any software until you've further clarified things with business stakeholders.
I strongly believe that virtually all evil which emerges from practical software engineering comes out of this "flexible schema" bullshit. If the business is certain of the shape of their problem, there is almost certainly a fixed schema that can accommodate. There are very few problem domains which cannot be coaxed into a strict SQL schema.
totalhack
There are also ways to add some flexibility into a "fixed" schema when you need it. Entity-attribute-value tables, views, JSON columns (as a last resort), or a semantic layer like https://github.com/totalhack/zillion
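For concreteness, a quick sketch of the entity-attribute-value pattern in Python/SQLite; the schema and names are made up for illustration:

```python
import sqlite3

# Minimal EAV sketch: arbitrary per-entity attributes live in rows
# instead of columns, so new attributes need no schema change.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE entity (id INTEGER PRIMARY KEY, kind TEXT NOT NULL);
    CREATE TABLE attribute (
        entity_id INTEGER NOT NULL REFERENCES entity(id),
        name      TEXT    NOT NULL,
        value     TEXT,
        PRIMARY KEY (entity_id, name)
    );
""")
db.execute("INSERT INTO entity (id, kind) VALUES (1, 'product')")
db.executemany(
    "INSERT INTO attribute (entity_id, name, value) VALUES (1, ?, ?)",
    [("color", "red"), ("weight_g", "250")],
)

# Pivot attributes back into columns when a fixed shape is needed.
row = db.execute("""
    SELECT e.id,
           MAX(CASE WHEN a.name = 'color'    THEN a.value END) AS color,
           MAX(CASE WHEN a.name = 'weight_g' THEN a.value END) AS weight_g
    FROM entity e JOIN attribute a ON a.entity_id = e.id
    GROUP BY e.id
""").fetchone()
print(row)  # (1, 'red', '250')
```

The pivot query is also the classic EAV downside: every read that wants a row-shaped result has to reassemble it.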
bob1029
> JSON columns (as a last resort)
This is the design-time escape hatch for me. The remaining 20% space of unknowns can usually be dealt with here. As long as the most critical tables, relations & properties have been set in stone, I would find it reasonable to allow a "ExtendedPropertiesJson" column in limited areas of the schema.
Over time, these JSON blobs should be refactored into the schema as features become more stable.
The key is to have a solid relational model as the foundation. You cannot really do it the other way around.
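A rough sketch of that escape-hatch-then-refactor flow in Python/SQLite; the table and column names here are hypothetical:

```python
import json
import sqlite3

# Stable fields are real columns; the unsettled ~20% lives in a JSON blob.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        extended_properties_json TEXT NOT NULL DEFAULT '{}'
    )
""")
db.execute(
    "INSERT INTO customer VALUES (1, 'Acme', ?)",
    (json.dumps({"referral_source": "conference", "beta_flags": ["x"]}),),
)

# Later, once 'referral_source' proves stable, promote it to a real
# column and backfill it out of the JSON blob.
db.execute("ALTER TABLE customer ADD COLUMN referral_source TEXT")
rows = db.execute("SELECT id, extended_properties_json FROM customer").fetchall()
for cid, blob in rows:
    props = json.loads(blob)
    if "referral_source" in props:
        db.execute(
            "UPDATE customer SET referral_source = ?, extended_properties_json = ? "
            "WHERE id = ?",
            (props.pop("referral_source"), json.dumps(props), cid),
        )

print(db.execute("SELECT name, referral_source FROM customer").fetchone())
# ('Acme', 'conference')
```

The migration runs entirely inside the relational model, which is the point: the JSON blob shrinks over time instead of becoming the schema.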
Scarbutt
Business requirements change over time, especially at the beginning; you may have 80% certainty of the schema today but not in four months.
xtracto
I am a tech advisor to a bunch of startups. One of them, doing stock buying and selling, came to me with their MongoDB-based system. The first thing I told them is that using a document-based DB for OLTP in their use case was going to give them problems.
I saw it firsthand 10 years ago and had to do a migration.
Their justification for using Mongo was that their system is very dynamic, so their data changes a lot, and SQL-based DBs don't allow that. I told them about schema migrations and whatnot, but I just haven't been able to convince them.
It's sad seeing them dig themselves into the same hole I had to dig myself out of a decade ago.
lcnPylGDnU4H9OF
At this point, one chooses the solution for their problem. The reason the fads occur is that a person who vaguely understands both the problem and the solution writes a blog post, which happens to go viral, talking about how the solution will solve all problems.
NoSQL databases aren’t unilaterally worse than relational ones. They just solve different problems.
pphysch
> NoSQL databases aren’t unilaterally worse than relational ones. They just solve different problems.
I can't prove this, but I assert that a relational database that has solid JSON+text support (e.g. Postgres) is on much better footing than a NoSQL DB that attempts to implement a true relational model.
One is adding a special new datatype; the other is trying to add an entire paradigm.
Just use Postgres. If you do need to migrate to Mongo for some reason, dumping your tables into JSON isn't the end of the world.
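To illustrate why that direction is the easy one, here is a small sketch of dumping relational rows into JSON documents, in Python with SQLite and a made-up orders schema:

```python
import json
import sqlite3

# Hypothetical normalized schema: orders with child item rows.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE items  (order_id INTEGER REFERENCES orders(id),
                         sku TEXT, qty INTEGER);
""")
db.execute("INSERT INTO orders VALUES (1, 'bob')")
db.executemany("INSERT INTO items VALUES (1, ?, ?)", [("A-1", 2), ("B-9", 1)])

def order_as_document(db, order_id):
    """Join the child rows back into one nested, Mongo-style document."""
    (customer,) = db.execute(
        "SELECT customer FROM orders WHERE id = ?", (order_id,)).fetchone()
    items = [
        {"sku": sku, "qty": qty}
        for sku, qty in db.execute(
            "SELECT sku, qty FROM items WHERE order_id = ?", (order_id,))
    ]
    return {"_id": order_id, "customer": customer, "items": items}

print(json.dumps(order_as_document(db, 1)))
```

Going from tables to documents is a join and a dump; going the other way means inventing keys and untangling nesting, which is why the migration is asymmetric.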
lcnPylGDnU4H9OF
> a relational database that has solid JSON+text support (e.g. Postgres) [vs] a NoSQL DB that attempts to implement a true relational model
I think both cases are similar to using a flat-head screwdriver for a phillips screw. It will technically work but you might run into otherwise avoidable problems. Use the right tool for the job. Though generally a relational database with JSON support will better serve most web applications, a nosql db might better serve niche internal processing.
scott_meyer
What is a document? How is an ORANUM or a bignum not a document?
One motivation for creating documents is that modeling document contents as relations requires the creation of a bunch of primary keys which have no natural definition. A simple document might be an ordered collection of paragraphs: [p23, p57, ...].
Modifying such things is difficult. In fact, the most effective way of structuring modification seems to be operational transforms (OTs) based on document offsets, which is what Google Docs does.
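A toy sketch of offset-based operational transformation in the spirit of that approach (ignoring tie-breaking for equal offsets; the function names are invented):

```python
# Each edit is an insert at a character offset. When two replicas edit
# the same base text concurrently, each shifts the other's offset so
# both converge to the same result.

def apply_insert(text, pos, s):
    """Insert string s at character offset pos."""
    return text[:pos] + s + text[pos:]

def transform_insert(pos_a, pos_b, len_b):
    """Shift edit A's offset to account for concurrent insert B."""
    return pos_a + len_b if pos_b <= pos_a else pos_a

doc = "hello world"
a = (5, ",")        # insert "," at offset 5
b = (6, "brave ")   # insert "brave " at offset 6

# Replica 1 applies a first, then the transformed b.
r1 = apply_insert(doc, *a)
r1 = apply_insert(r1, transform_insert(b[0], a[0], len(a[1])), b[1])

# Replica 2 applies b first, then the transformed a.
r2 = apply_insert(doc, *b)
r2 = apply_insert(r2, transform_insert(a[0], b[0], len(b[1])), a[1])

print(r1, r2, r1 == r2)  # both replicas converge on "hello, brave world"
```

Note that all positions are offsets into the flat text, not foreign keys into a relational model, which is exactly the contrast the comment is drawing.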
This is a very interesting way to promote a product, credit to the author (who is an industry veteran it seems).
I had no idea what Fauna was. I just clicked the link here because the title caught my eye (I work with databases quite a bit).
The opening paragraph immediately grabbed my attention - "My first deep dive into SQL was in 1987, just before I became the first technical person at Microsoft to work on SQL Server." - woah!
So I read this entire article, which is very well written and easy to read but mostly affirms what I already know.
And then I get to the final section where they promote Fauna - and so now I know about Fauna too.
Kudos to these folks, in my humble opinion, this is marketing done right.