lordnacho
andrewl-hn
Another superpower of TimeScale is that it plays nicely with other Postgres extensions. We had a really good experience with it combined with PostGIS. Scenarios like "Show sensors on a map with value graphs for each sensor" can be done in a single query, and it's fast and beautiful.
BWStearns
I am using Timescale at work and I like them a lot. If your data is nicely structured it's a breeze. But my data is kind of pathological (source can just change the structure and I gotta put up with it), so I'd honestly use Influx in a heartbeat if their pricing wasn't totally insane.
Labo333
I actually quit a quant trading job after 2 weeks because they used kdb+. I could use it but the experience was so bad...
People could complain about the abysmal language design or debugging experience, but what I found most frustrating was the coding conventions they had (or hadn't), and I think the language and the community play a big role there. But so does company culture: I asked why the code was so poorly documented (no comments, single-letter parameters, arcane function names). "We understand it after some time, and this way other teams cannot use our ideas."
Overall, their whole stack was outdated, and of course they could not do very interesting things with a tool such as Q. For example, they plotted graphs by copying data from qStudio to Excel...
The only good thing was they did not buy the docker / k8s bs and were deploying directly on servers. It makes sense that quants should be able to fix things in production very quickly but I think it would also make sense for web app developers not to wait 10 minutes (and that's when you have good infra) to see a fix in production.
I have a theory on why quants actually like kdb: it's a good *weapon*. It serves some purpose but I would not call it a *tool* as building with it is tedious. People like that it just works out of the box. But although you can use a sword to drive nails, it is not its purpose.
Continuing on that theory, LISP (especially Racket) would be the best *tool* available: it is not the most powerful language out of the box, but it lets you build a lot of abstractions, with features to modify the language itself. C++ and Python are just great programming languages, as you can build good software on them, Python being also a fairly good weapon.
Q might give the illusion of being the best language to explore quant data, but that's just because quants do not invest enough time into building good software and using good tools. When you actually master a Python IDE, you are definitely more productive than any Q programmer.
And don't get me started on performance (the link covers it anyway even though the prose is bad).
wenc
The article calls out Python and DuckDB as possible successors.
I remember being very impressed by kdb+ (went to their meetups in Chicago). Large queries ran almost instantaneously. The APL-like syntax was like a magic incantation that only math types were privy to. The salesperson mentioned kdb+ was so optimized that it fit in the L1 cache of a processor of the day.
Fast forward 10 years. I’m doing the same thing today with Python and DuckDB and Jupyter on Parquet files. DuckDB not only parallelizes, it vectorizes. I’m not sure how it benchmarks against kdb+ but the responsiveness of DuckDB at least feels as fast as kdb+ on large datasets. (Though I’m sure kdb+ is vastly more optimized). The difference? DuckDB is free.
singhrac
We use DuckDB similarly but productionize by writing pyarrow code. All the modern tools (DuckDB, pyarrow, polars) are fast enough if you store your data well (parquet), though we work with not quite “big data” most of the time.
It’s worth remembering that all the modern progress builds on top of years of work by Wes McKinney & co (many, many contributors).
wenc
Yes, Wes McKinney was involved in Pandas, Parquet, and Arrow.
wenc
I just realized all the data tools I use are animals.
Pandas
Polars (polar bear)
DuckDB
Python
cout
Do you use duckdb for real-time queries or just historical? You mentioned parquet but afaik it's not well suited for appending data.
wenc
Also a tip: for interactive queries, do not store Parquet in S3.
S3 is high-throughput but also high-latency storage. It's good for bulk reads, but not random reads, and querying Parquet involves random reads. Parquet on S3 is ok for batch jobs (like Spark jobs) but it's very slow for interactive queries (Presto, Athena, DuckDB).
The solution is to store Parquet on low-latency storage. S3 has an offering called S3 Express One Zone (low-latency S3 that costs slightly more). Or use EBS, which is block storage that doesn't suffer from S3's high latency.
eismcc
You can do realtime in the sense that you can build NumPy arrays in memory from realtime data and then use these as columns in DuckDB. This is the approach I took when designing KlongPy to interoperate array operations with DuckDB.
wenc
Not real time, just historical. (I don’t see why it can’t be used for real time though... but haven’t thought through the caveats)
Also, not sure what you mean by Parquet is not good at appending? On the contrary, Parquet is designed for an append-only paradigm (like Hadoop back in the day). You can just drop a new parquet file and it’s appended.
If you have 1.parquet, all you have to do is drop 2.parquet in the same folder or Hive hierarchy. Then query:
SELECT * FROM '*.parquet';
DuckDB automatically scans all the Parquet files in that directory structure when it queries. If there's a predicate, it uses Parquet header information to skip files that don't contain the data requested, so it's very fast. In practice we use a directory structure called Hive partitioning, which helps DuckDB do partition elimination to skip over irrelevant partitions, making it even faster.
https://duckdb.org/docs/data/partitioning/hive_partitioning
Parquet is great for appending!
Now, it's not so good at updating, because it's a write-once format (not read-write). Updating a single record in a Parquet file entails regenerating the entire file. So if you have late-arriving updates, you need to do extra work to identify the partition involved and overwrite it. Either that, or use bitemporal modeling (add a data-arrival timestamp [1]) and a latest-date clause in your query (entailing more compute). If you have a scenario where existing data changes a lot, Parquet is not a good format for you. You should look into Timescale (a time-series database based on Postgres).
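A pure-Python sketch of the "latest arrival wins" pattern described above (keys and values are made up): instead of rewriting a file, append corrected rows with a newer arrival timestamp and resolve to the latest version per key at query time.

```python
# Each row carries its business key plus an arrival timestamp.
records = [
    {"key": "AAPL", "price": 100, "arrived": 1},
    {"key": "AAPL", "price": 101, "arrived": 2},  # late-arriving correction
    {"key": "MSFT", "price": 300, "arrived": 1},
]

# Resolve: for each key, keep only the row with the max arrival time.
latest = {}
for r in records:
    if r["key"] not in latest or r["arrived"] > latest[r["key"]]["arrived"]:
        latest[r["key"]] = r

resolved = sorted((r["key"], r["price"]) for r in latest.values())
print(resolved)  # [('AAPL', 101), ('MSFT', 300)]
```

In SQL engines the same resolution is usually a window function (rank rows per key by arrival time, keep rank 1), which is the extra compute the comment mentions.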
belfthrow
Not surviving more than 2 weeks in a QF role because of kdb, and then suggesting they should rewrite everything in LISP, is one of the more HN-level recidivous comments I think I have ever seen.
dumah
You didn’t learn Q in two weeks to the extent that you are qualified to assert that someone who knows how to use a Python IDE is more productive than a quant dev with decades of experience.
I find it much more likely that you couldn’t understand their code and quit out of frustration.
If you were a highly skilled quant dev and this was a good seat, quitting after two weeks would have been a disaster to manage the next transition given the terms these contracts always have.
Jorge1o1
Their pykx integration is going a long way to fix some of the gaps in:
- charting
- machine learning/statsmodels
- html processing/webscrapes
Because, for example, you can just open a Jupyter notebook and do:
import pykx as kx
import matplotlib.pyplot as plt
df = kx.q("select from foo where bar")
plt.plot(df["x"], df["y"])
It’s truly an incredibly seamless and powerful integration. You get the best of both worlds, and it may be the saving feature of the product in the next 10 years.
nivertech
I think this will only work with regular qSQL on a specific database node, i.e. RDB, IDB, HDB[1]. It will be much harder for a mortal Python developer to use Functional qSQL[2] which will join/merge/aggregate data from all these nodes. The join/merge/aggregation is usually application-specific and done on some kind of gateway node(s). Querying each of them is slightly different, with different keys and secondary indices, and requires using a parse tree (AST) of a query.
---
[1] RDB - RAM DB (recent in-memory data), IDB (Intraday DB - recent data which doesn't fit into RAM), HDB - Historical DB (usually partitioned by date or other time-based or integral column).
Jorge1o1
That’s accurate enough. I think the workflow was more built for a q dev occasionally dipping into python rather than the other way around.
I think you touch on something really interesting, which is the kink in the kdb+ learning curve when you go from really simple functions, tables, etc. to actually building a performant kdb architecture.
qkdb1
Will be interesting to see what comes of some of the things being put on their roadmap (https://code.kx.com/pykx/2.5/roadmap.html#upcoming-changes). It seems to be moving in the direction of an API similar to Polars.
hysteria2024
[dead]
keithalewis
[flagged]
RodgerTheGreat
One of the compelling features of kdb+/Q that isn't explicitly called out here is vertical integration: it's a single piece of technology that can handle the use-cases of a whole stack of other off-the-shelf technologies you'd otherwise need to select and glue together. The Q language, data serialization primitives, and IPC capabilities allow a skilled programmer to tailor-build exactly the system you need in one language, often in a codebase that would fit on a few sheets of paper instead of a few hundred or thousand.
If your organization has already committed to serving some of these roles with other pieces of software, protocols, or formats, the benefits of vertical integration- both in development workflow and overall performance- are diminished. When kdb+ itself is both proprietary and expensive it is understandably difficult to justify a total commitment to it for new projects. It's a real shame, because the tech itself is a jewel.
absurdcomputing
I agree that the vertical integration capability of kdb+/Q is amazing, and it is beyond comprehension why Kx themselves don’t effectively leverage it. Kx Platform appears to be mostly written in Java, and the APIs callable from Q are not very well documented. My team and I find the dashboards product difficult to use, and there are some nasty bugs that cause frequent editor crashes for dashboards of moderate complexity. Q is so feature-rich that it would be a blast to write web applications in, but instead we’re forced to use this drag-and-drop editor if we want to make something available to our users.
I think Shakti could become a viable competitor to Kx if they included libraries that handle some common enterprise use cases, such as load balancing, user permissions, and SSO. I have no doubt that an experienced K programmer could whip this up in a week or two, but in my experience a sufficiently large enterprise will specify that all these capabilities need to be implemented before they let the product in the door.
RodgerTheGreat
I'm a little too close to be throwing stones, but without going into specifics I believe that key leaders at Kx do not properly appreciate the unique characteristics and benefits of their own technology, and are trapped in a mindset of trying to make their products more similar to their competition in order to make sales and marketing easier. In the process, they discard their competitive advantage. Tale as old as time.
plorkyeran
I think it is very difficult to judge how much of an advantage your competitive advantage actually is. It’s very easy to look at the things which directly cost you sales and conclude that those are the things you need to fix rather than doubling down on your strengths. The most common way to avoid that is to go too far in the other direction and become convinced that your niche technology is vastly superior to the mainstream choice and anyone who rejects you for your shortcomings is just shortsighted and wrong.
From the outside it’s always seemed that kdb fans tend to land in the second camp, and I think it would be understandable for Kx to have overcorrected into undervaluing their work instead.
mbroecheler
I agree that being able to write one piece of code that solves your use case is a big benefit over having to cobble together a message queue, stream processor, database, query engine, etc.
We've been playing around with the idea of building such an integration layer in SQL on top of open-source technologies like Kafka, Flink, Postgres, and Iceberg, with some syntactic sugar to make timeseries processing nicer in SQL: https://github.com/DataSQRL/sqrl/
The idea is to give you the power of kdb+ with open-source technologies and SQL in an integrated package by transpiling SQL, building the computational DAG, and then running a cost-based optimizer to "cut" the DAG to the underlying data technologies.
gricardo99
Get a free version out there that can be used for many things…
I think this has been the biggest impediment to kdb+ gaining recognition as a great technology/product and growing among the developer community. Having used kdb+ extensively in the finance world for years, I became a convert and a fan. There’s an elegance in its design and simplicity that seems very much rooted in the Unix philosophy. After I left finance and no longer worked at a company that used kdb+, I often felt the urge to reach for it for little projects here and there. It was frustrating that I couldn’t use it anymore, or even just show colleagues this little-known/niche tool and geek out a little on how simple and efficient it was for certain tasks/computations.
jcul
Isn't there a free version or something?
I had to write some C++ code in the past to send data into kdb and also a decoder for their wire protocol. For both I definitely had a kdb binary to test against.
I just needed to test against it. Maybe Kx gave us a development license or something, it was a good few years ago.
7thaccount
They do have a free version for non-commercial work.
shrubble
Were any of the open source versions such as ngn/k or Kerf etc. usable for you?
RodgerTheGreat
Kerf1 has only been open source for a fairly short time, and prior to that it was proprietary. ngn/k is tremendously less feature-rich than Q/k4, has some built-in constraints that make building large programs difficult, and does not come with the "batteries included" necessary for building distributed systems. Neither is currently a credible alternative to kdb+ for production environments.
thaufeki
Well, you would have to know how to code in k, not just q; the syntax is a lot more terse, and a lot of features are missing.
chrisaycock
I agree with everything in this article. If you're building from scratch, just store your data in Parquet and access it via Polars or DuckDB.
I built my own language for time-series analysis because of how much I hated q/kdb+, but Python has been the winner for a bunch of years now.
anonu
I built a (moderately successful) startup using kdb+. It was what I knew, and it helped us build a robust product quickly. But as we scaled, we had to rewrite in FOSS to ensure we could scale the team.
Agree with all the recommendations, except I think kx should open source the platform. This will attract the breed of developer that will want to contribute back to the ecosystem with improvements and tools.
mritchie712
What was the startup? What FOSS did you move to?
7thaccount
Kdb+ seems really cool and I've learned it a little bit for fun along with APL. It would actually be pretty cool for a lot of uses in my industry too, but the price is just crazy. We can't pay like $100k/cpu or whatever it is that the financial banks pay. So they've basically ignored a HUGE amount of potential customers.
coliveira
They found a niche that can pay the price to have an innovative product. I believe they did the right thing, after all it is not a product trying to solve all problems in the world. Other people could learn from their techniques and do the same for other areas and languages.
7thaccount
Not quite where I was going. The product does seem to be good, and I'd think there is demand for it in many industries, but instead of using discriminatory pricing, so that customers with a much lower ability to pay pay less, they just ignore those segments entirely. Maybe they know what they're doing, though. It's a shame I don't get to use it at work.
RodgerTheGreat
Semiconductor manufacturers understand that giving free samples of their chips to hobbyists creates an environment that breeds future sales: if 1 out of the 1,000 people they mailed samples to uses their chip in the design for a commercial product, they come out ahead.
Proprietary programming languages that are inconvenient for hobbyists to obtain- any more friction than cloning a git repo or installing via a package manager- have stunted open-source ecosystems, and in turn limited opportunities for grass-roots adoption.
zX41ZdbW
A few corrections to the article.
1. ClickHouse is not a new technology — it has been open-source since 2016 and in development since 2009.
2. ClickHouse can do all three use cases: historical and real-time data, distributed and local processing (check clickhouse-local and chdb).
3. ClickHouse was the first SQL database with ASOF JOIN in the main product (in 2019) - after kdb+, which is not SQL.
benjaminwootton
I run a data consultancy with a big focus on ClickHouse. There is a lot of interest in replacing KDB with it. I’ve had probably 10 conversations with companies looking at a migration.
Tellingly, nobody has pulled the trigger on a migration yet, as I think it’s a big call given all of the integrations that KDB sprouts, but it definitely feels like the spiritual successor.
fnordpiglet
3 is a point that’s lost on people who use Q and related things for financial calculations. They picked kdb+ for a reason, and it wasn’t the database. I took that as the point of the post.
haolez
Is it still possible to learn from scratch and make big bucks developing for kdb+ (k/q)? I remember seeing an open position a few years ago which paid like 1MM per year. Astounding.
puzpuzpuz-hn
Nice article, thanks for sharing it. It's a pity kdb+ has a DeWitt Clause, so that no one can benchmark it against other databases from the article. I wonder if they have any public benchmarks held by a 3rd-party.
timkpaine
There are certainly enough rubes out there to sell the next KDB+ to: https://shakti.com/
parentheses
I feel kdb is the equivalent of a drag racer: generally useless, but great at one (or a few) things in very limited environments.
hysteria2024
[dead]
I thought I'd throw in TimeScale. It's a postgres extension, so all your SQL stuff is just the same (replication, auth, etc).
It's also a column store with compression. It runs super fast; I've used it in a couple of financial applications with huge amounts of tick data, all coming down to your application nearly as fast as the hardware will allow.
Good support, the guys on Slack are responsive. No, I don't have shares in it, I just like it.
Regarding kdb, I've used it, but there are significant drawbacks. Costs a bunch of money, that's a big one. And the language... I mean it's nice to nerd out sometimes with a bit of code golf, but at some point you are going to snap out of it and decide that single characters are not as expressive as they seem.
If your thing is ad-hoc quant analysis, then maybe you like kdb. You can sit there and type little strings into the REPL all day in order to find money. But a lot of things are more like cron jobs, you know you need this particular query run on a schedule, so just turn it into something legible that the next guy will understand and maintain.