miki123211
This isn't specific to Rust or Typescript. You can do this in basically any language.
Imagine you have to distinguish between unescaped and escaped strings for security purposes. Even with a dynamically typed language, you can keep escaped strings as an Escaped class, with escape(str)->Escaped and dangerouslyAssumeEscaped(str)->Escaped functions (or static methods). There's a performance cost to this, so that's a tradeoff you have to weigh, but it is possible.
Another way of doing this is Application Hungarian[1], though that relies on the programmer more than it does on the compiler.
[1] https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...
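A minimal TypeScript sketch of the wrapper pattern described above (the `Escaped` and `dangerouslyAssumeEscaped` names follow the comment; the escaping rules themselves are illustrative only, not a complete sanitizer):

```typescript
// Runtime-tagged wrapper: a value of type Escaped can only be produced
// through the two blessed constructors below.
class Escaped {
  private constructor(readonly value: string) {}

  // The normal path: actually escape the raw string, then wrap it.
  static escape(raw: string): Escaped {
    const escaped = raw
      .replace(/&/g, "&amp;")
      .replace(/</g, "&lt;")
      .replace(/>/g, "&gt;");
    return new Escaped(escaped);
  }

  // Loudly-named bypass for strings already known to be escaped.
  static dangerouslyAssumeEscaped(s: string): Escaped {
    return new Escaped(s);
  }
}

// Sinks accept only Escaped, so a bare string cannot sneak through.
function render(e: Escaped): string {
  return `<p>${e.value}</p>`;
}
```

In a dynamically typed language the same shape works as a plain class, except the check happens at runtime (say, an `instanceof` guard inside `render`) rather than at compile time, which is the performance cost the comment mentions.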
dasyatidprime
> There's a performance cost to this
That part is (de facto) required for dynamically typed languages, but not for statically typed ones where the newtype constructor/deconstructor can be elided at compile time. Rust and C++ especially both do the latter by having true value types available for wrappers that evaporate into zero extra machine code.
But then just this moment I wondered: do any major runtimes using models with no static type info manage to do full newtype elision in the JIT and only box on the deopt path? What about for models with some static type info but no value types, like Java? (Java's model would imply trickiness around mutability, but it might be possible to detect the easy cases still.) I don't remember any, but it could've shown up when I wasn't looking.
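For the compile-time-erasure half of that point, a TypeScript sketch (the `Meters` name is invented for illustration): the brand exists only in the type system and compiles away to nothing, so the wrapper has zero runtime representation.

```typescript
// Branded number: the intersection with the __brand field is purely a
// type-level fiction, erased entirely by compilation.
type Meters = number & { readonly __brand: "Meters" };

const meters = (n: number): Meters => n as Meters;

function double(d: Meters): Meters {
  return meters(d * 2);
}

// At runtime the value is just a number: no box, no wrapper object.
const m: Meters = meters(21);
```

This is the same zero-cost story as a Rust or C++ newtype, but achieved by type erasure rather than by the optimizer eliding a wrapper struct.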
gf000
Well, java can do escape analysis, so a wrapper with a single field may end up as a local variable of the embedded field.
As for other JVM languages like Kotlin and Scala, they have basically what "newtype" is, but it can only be completely erased in the byte code when they have a single field.
k_bx
What you cannot do is get compile-time safety guarantees, and in languages like Rust the type system isn't strong enough for some advanced compile-time guarantees (via types). So no, you cannot do this in basically any language (unless you turn it into Haskell).
pkolaczk
Rust (and Scala) type systems are somewhat stronger and more expressive in some areas than Haskell. Weaker in some other. But it’s not a clear cut that Haskell type system offers more safety guarantees.
josephg
Can you give some examples? What is Haskell’s type system capable of that you can’t express in rust?
uecker
What the parents describe can be done with almost any language.
light_hue_1
> This isn't specific to Rust or Typescript. You can do this in basically any language.
This just isn't true.
In any dynamic language you would not get these guarantees at compile time. You'd get random failures at runtime. That's not safety of any kind.
Also, part of the goal of languages like Haskell is that they help you think about your code before it runs. All of that is lost.
> Imagine you have to distinguish between unescaped and escaped strings for security purposes
That would be a nightmare in many languages. You'd have to rewrite large parts of the code to be compatible with one or both. And in many languages you'd have to duplicate your code entirely.
In other languages, the result would be so ugly you would never want to touch that code. Imagine doing this with, say, templates in C++.
>There's a performance cost to this
There is no performance cost in Haskell! This is entirely undone by the compiler.
Also, because the compiler understands what's going on at a much higher level, you can do things like deriving code. You can say that your classified strings behave like your regular strings in most contexts, like say, they're the same for the purpose of printing but not for the purpose of equality, in one line.
wyager
> You can do this in basically any language.
You can do it in Assembly. That doesn't mean it's cost effective.
bonesss
And categorically: the issue isn't what "I'd" do (my own habits are largely under my control), it's what other project members will be doing (including future degenerate versions of myself, assumed to be some combination of busy, tired, stressed and drunk).
The Confucian philosophy that people act like water coming down a mountain, seeking the path of least resistance, comes into play.
Haskell, OCaml, F#, and their ilk can yield beautiful natural domain languages where using the types wrong is cost prohibitive. In languages without those guarantees, every developer needs discipline to avoid shortcuts, review burden increases, and time-pressure tradeoffs get rehashed again and again.
myst
Costs are a skill issue ;-)
dirkt
> works really well in Rust and TypeScript too
And of course Rust and TypeScript were heavily influenced by Haskell... they just don't mention it and call things differently, to avoid the "monads are scary, I need to write a tutorial" effect. Though it's less about monads and more about things like type classes.
Imitation is the sincerest form of flattery.
metaltyphoon
> heavily influenced by Haskell... they just don't mention it and call things differently
Rust wikipedia says otherwise
zelphirkalt
Afaik Rust and Haskell both inherited from (S/Oca)ML.
cmrdporcupine
Rust's influence was OCaml, not Haskell. Its first compiler was written in OCaml. Its syntax directly looks like OCaml and C++ had a baby. It's got ML smells all over it. Haskell is not the sum of Hindley Milner-esque languages.
Personally, never enjoyed Haskell's syntax (or lack of it) and tendency to overthinking. But I did enjoy SML/NJ and OCaml to some extent.
singpolyma3
Rust has typeclasses so that can't be it.
mhh__
This is more a question of affordances than of type systems per se; e.g. you can do this quite happily in C# or something, it's just that the amount of visual clutter is more than the actual type definition.
WorldMaker
I think that's getting better in C# with record types (and primary constructors in general). Real Discriminated Unions should help a lot, and that's finally in Language Preview now.
ossopite
I'm not convinced it really works well in typescript. the lack of nominal types requires you to remember some pretty hacky incantations if you want something like a newtype wrapping a primitive type
my experience is that ocaml is more powerful than rust for enforcing this sort of type safety, because you have gadts that give you more expressive power, and polymorphic variants and object types (record row types) that give you more convenience. and the module system and functors of course.
you also avoid some abstraction limitations/difficulties that come from the rust borrow checker for places where garbage collection is just fine
cyberpunk
It really feels like we’re solving the wrong problem sometimes. If a bad type can crash your application, sure, type safety is one answer but I have to admit I like the erlang approach; if something unexpected happens crash the process (not os process, erlang process) which has a very small blast radius on a well architected system (maybe doesn’t even fail the individual request that caused it). I wish more languages had this let it crash philosophy, it really allows for writing code exclusively for the happy path, safe in the knowledge that a -1 where a “string” should be isn’t going to take down production.
Somehow, it feels like a better solution than these complicated type systems. Does any other language do this outside BEAM?
chongli
When working on large, important software, crashing is not the worst thing that can happen; corrupting user data and/or allowing unauthorized access is.
The point of using the type system to do something like distinguish between sanitized and unsanitized strings is specifically to prevent these kinds of security breaches.
Erlang was designed for traditional telecom, where reliability of connections was the biggest factor, not security. I fail to see how Erlang’s approach can deal with the issue of security breaches or corrupted user data.
ossopite
In a way I agree with you, and I'm not sure that popular languages embrace this philosophy or make it easy to follow. My sense is that Erlang is still the leader.
But I did want to add something the article also touches on: types can be not only about ensuring safety or correctness at runtime, but also about representing knowledge by encoding the theory of how the code is supposed to work as far as is practical, in a way that is durable as contributors come and go from a codebase.
Admittedly this can come at the cost of making it slower to experiment on or evolve the code, so you have to think about how strongly you want to enforce something to avoid the rigidity being more painful than valuable. But it's generally a win for helping someone new to a codebase understand it before they change it.
Edit: another thought I had is that type mistakes do not always cause crashes. Silent corruption can be much more insidious, e.g. from confusing types which mean something different but are the same at the primitive level (e.g. a string, number or uuid)
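To make that failure mode concrete, a hedged TypeScript sketch (the `UserId`/`OrderId` names are invented for illustration): two ids that are both strings at the primitive level will happily flow into each other's slots unless they carry distinct brands.

```typescript
// Both ids are plain strings at runtime, so nothing crashes if they're
// swapped; only the type-level brand tells them apart.
type UserId = string & { readonly __brand: "UserId" };
type OrderId = string & { readonly __brand: "OrderId" };

const userId = (s: string): UserId => s as UserId;
const orderId = (s: string): OrderId => s as OrderId;

function cancelOrder(id: OrderId): string {
  return `cancelled ${id}`;
}

const u = userId("42");
const o = orderId("42");
cancelOrder(o);     // fine
// cancelOrder(u);  // compile error with brands; silent corruption without them
```

Without the brands, `cancelOrder(u)` compiles, runs, and quietly cancels the wrong thing, which is exactly the insidious case the comment describes.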
satvikpendem
Or have a static type system and something like BEAM. I'm not sure why this is a one or other approach, both are useful and unfortunately it doesn't seem like any languages include both. Gleam exists but doesn't really integrate with BEAM, it seems to have its own way of doing things that are more akin to Haskell, given its origins.
thfuran
>if something unexpected happens crash the process
There are some expectations where that's a reasonable response to a violation, but there are many expectations where the violation implies a bug elsewhere and crashing the process will do nothing to address that that wouldn’t have been better accomplished with stronger compile time checking.
techpression
For me this has been a life saver as the only back end developer at the company. I don't have the energy nor the time to think about every possible scenario, especially not the mobile client sending random strings to something that should be parsed as a uuid (has happened more than once). By letting it crash I can look at the traces at my own leisure, and a lot of them I never fix, because I don't have to.
The amount of silencing of errors (implementer error, but quite prevalent) I've seen in TypeScript codebases is horrifying. Essentially "try happy path, catch everything else and return a generic error"; the result is mostly the same for the user, but night and day for me who is trying to fix it.
jbreckmckye
> some pretty hacky incantations
    type NewType<T, Name> = T & { readonly __brand: Name };

I don't really see a big problem here?
matt_kantor
EDIT: previously the example in the parent comment was:
    type NewType<T> = T & { __brand__ : Symbol }

---
This seems wrong; the type spelled `Symbol` refers to the boxed interface for symbols[0]. I suspect you meant to write `unique symbol` there, but it can't be used in that position.
I'm not sure if `NewType` in your comment is supposed to stand in for a specific newtype (in which case it probably doesn't need to be generic[1]) or if it's supposed to be a general-purpose type constructor for any newtype (in which case it should take a second type parameter to let me distinguish e.g. `EmailAddress` from `Password`[2]). The use of `unique symbol`s is also only really necessary if you want to keep the brand private to force users to go through a validation function or whatnot, otherwise you can just use string literal types.
I agree these incantations aren't big problems (it all falls out naturally from knowledge of TypeScript's type system, and can be abstracted away as per my comment in [2]), but the fact that you goofed in the very comment where you were trying to make that point is causing me to second-guess myself.
[0]: https://github.com/microsoft/TypeScript/blob/v6.0.3/src/lib/...
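Putting the comment's two designs together, a sketch (assuming the general-purpose variant with a second type parameter, plus a validation function as the intended entry point; the email check is deliberately naive):

```typescript
// General-purpose branded-newtype constructor: the Name parameter is what
// keeps EmailAddress and Password from being interchangeable.
type NewType<T, Name extends string> = T & { readonly __brand: Name };

type EmailAddress = NewType<string, "EmailAddress">;
type Password = NewType<string, "Password">;

// "Smart constructor": the intended way to obtain an EmailAddress.
function parseEmail(s: string): EmailAddress | undefined {
  return s.includes("@") ? (s as EmailAddress) : undefined;
}

function greet(to: EmailAddress): string {
  return `hello, ${to}`;
}

// greet("raw string") or passing a Password are type errors;
// only a value that went through parseEmail is accepted.
```

This uses string literal brands rather than `unique symbol`s, which (as the comment notes) is enough unless you need to keep the brand itself private.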
d0mine
“Parse don’t validate“ seems like the same idea
https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
You do not need Haskell for that eg it works in Python (via pydantic, attrs data classes)
satvikpendem
It's more similar to "Make invalid states unrepresentable": https://news.ycombinator.com/item?id=40150159
christophilus
Agreed. Clojure gets this with Malli and Spec. That said, types are such a productivity boost over time that I think they should only be discarded for very good reasons.
germandiago
I think you can also go a long way with C++ and templates to represent any kind of restricted type in the type system. Variants are somewhat clumsy without pattern matching, but most of the tools you can make use of are already there, I would say.
In my backend system I represent users with different variant states, making a lot of invalid states unrepresentable.
As for underutilization, I think only functional languages, Rust and C++ support variants, and that might be one reason: people just make blobs of state and choose which fields to use, instead of encoding states and making some combinations unrepresentable. JavaScript, Java, C# or Python do not have variant types, to the best of my knowledge. In OCaml and Haskell, with pattern matching, they are very natural. In Rust with enums, same. In C++ they are so-so, but still usable compared to the languages that do not have them.
In my load tests, since I launch thousands of clients, I even went with a Boost.MSM state machine to drive the test behavior. One state machine per user.
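For what it's worth, TypeScript (as opposed to plain JavaScript) does get close to variants via discriminated unions, which compile down to ordinary objects. A sketch of the variant-user-states idea from the comment above, with invented state names:

```typescript
// Each state carries only the fields that make sense for it, so
// "an active user with no email" is simply unrepresentable.
type User =
  | { kind: "anonymous" }
  | { kind: "pendingVerification"; email: string }
  | { kind: "active"; email: string; displayName: string };

function describe(u: User): string {
  // Exhaustive switch on the discriminant; forgetting a case is a type error.
  switch (u.kind) {
    case "anonymous":
      return "anonymous visitor";
    case "pendingVerification":
      return `awaiting verification: ${u.email}`;
    case "active":
      return `${u.displayName} <${u.email}>`;
  }
}
```

The exhaustiveness check is the payoff: adding a fourth state makes every non-exhaustive `switch` fail to compile, much like adding a constructor to an OCaml variant or a Rust enum.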
satvikpendem
"Make invalid states unrepresentable," as I have posted here before: https://news.ycombinator.com/item?id=40150159
ngruhn
You can't enforce purity on the type level in TypeScript and IIRC neither in Rust.
matt_kantor
You can't in Haskell either! For example, any function could secretly call `unsafePerformIO` to cause a side effect (and that's not the only example).
I believe `const` functions in Rust are actually guaranteed to be pure, though I haven't followed that feature closely and there may be nuances.
In most languages purity is a norm rather than enforced by static analysis. I definitely agree that it's much safer to assume that an arbitrary Haskell function is pure than it is to assume that of an arbitrary TypeScript function.
chuckadams
You can compile Haskell code with the -XSafe flag, and this is communicated in the package, so something like backpack can be told to load only safe modules. Still, there's probably plenty of code that's safe but not pure, but that's as good as we're likely to get.
xedrac
I loved working in Haskell for a few years. I wasn't actively looking for it, but the opportunity just sort of landed in my lap. It was exciting and mentally stimulating. But the unfortunate fact is, I am easily twice as productive in Rust as I am in Haskell, even after 3 years of nothing but Haskell. There are more pitfalls in Haskell that you have to just know how to avoid. It can be very difficult to digest, as the language can be borderline write-only at times, depending on the author of the code. The tooling is often married to Nix, which is its own complex beast. And it feels like language extensions are all over the place. Cabal files are not my favorite. And the compiler errors take some time to get used to.
Darmani
Pretty surprising -- I had much the opposite experience.
On our last product, we decided to start switching from Typescript to Rust on the backend because we got tired of crashes. I consider that to be one of the greatest technical mistakes I've made ever, as our productivity slowed massively. I'll just share two time-draining issues that only occur in Rust: (1) Writing higher-order functions (e.g.: a function to open a database connection, do something, and then close it -- yes, I know you can use RAII for this particular example), which is trivial in Haskell and TypeScript and JavaScript and C++ and PHP, turned out to be so impossible in Rust [even after asking Rust-expert friends for help], that I learned to just give up and never try, though it sometimes worked to write a macro instead. (2) It's happened many times that I would attempt a refactoring, spend all day fixing type errors, finally get to the top-level file, get a type error that's actually caused somewhere else by basic parts of the design, and conclude the entire refactoring I had attempted is impossible and need to revert everything.
On top of that, Rust is the only modern language I can name where using a value by its interface instead of its concrete type lies somewhere between advanced and impossible, depending on what exactly you're doing.
I came away concluding that application code (as opposed to systems or library code) should, to a first approximation, never be written in Rust.
atombender
I'm not a Rust expert by any means, but I'm surprised to hear this. In my Rust code, doing anything with a database connection is not at all different from, say, Go or TypeScript.
For example, I use the deadpool-postgres crate for database pooling. Getting a connection looks like this:
    let conn = self.pool.get().await?;

Because of RAII, you don't need a higher-order function helper, but if you really wanted to make one:

    async fn with_conn<T>(
        &self,
        f: impl AsyncFnOnce(Object<Manager>) -> Result<T>,
    ) -> Result<T> {
        let conn = self.pool.get().await?;
        f(conn).await
    }

Now you can do:

    with_conn(|conn| async move {
        conn.query("SELECT 1") // Or whatever
    }).await

If you know TypeScript, this shouldn't be too difficult to read or write. The gnarliest stuff here is knowing the type signature of the function argument; because of async, it must be AsyncFnOnce, for one, and you need to know that the type that the deadpool crate returns is called Object<Manager> (which doesn't sound like a connection, to be fair). Determining the exact concrete type to match type constraints on is sometimes a chore, but TypeScript is surely no different here!
If you don't know Rust too well, the "move" part will be a little mysterious, to be sure.
Darmani
Just investigated -- looks like this works now! Yay!
For this family of examples, I had been completely stymied by AsyncFnOnce not being released yet. IIRC it had been in the works for several years, was still an experimental feature when I was trying to use it, and I gave up after much frustration trying to get a version of Rust with experimental features working under devenv (nix).
A subtraction then to my frustrations with Rust -- though I'd still be very wary of doing this, having seen how fragile higher-order functions have been in the past.
pjmlp
I appreciate Rust for making affine types mainstream, and having at least the C++ community start caring about security, even if half hearted.
However I share your conclusion, outside scenarios where having automated resource management as the main approach is either technically impossible, or a waste of time trying to change pervasive culture, I don't see much need for Rust.
In fact those that write comments about wanting a Rust but without borrow checker, the answer already exists.
Darmani
I think Rust would be fine for application code if it kept the borrow checker, but had greater allowance for dynamically-sized variables, or even garbage collection. The reason calling things through an interface is so tough in Rust is because doing so requires having a pointer to a value of unknown size, which involves either heap allocation or alloca(), neither of which are very happy in Rust. Many of the other things I complained about are also downstream of this decision. Affine types are useful both in high-level state management as well as in low-level memory management. But it's Rust's focus on static memory layout that really cements it as a low-level systems language, not its inclusion of borrowing.
Way back as an undergrad in 2011, I contributed to Plaid, a JVM language whose main feature is based on affine and linear types. I'm one of the very few people in the world who knew what borrowing is before Rust had it. So I know first-hand that borrow-checking is perfectly compatible with garbage collection.
adastra22
(1) Higher order functions are pretty much the same as all the other languages you mentioned, using closure syntax? What was the problem you ran into?
(2) In such situations the compiler (type system or borrow checker) is telling you that what you wanted to do has hidden bugs, and therefore refuses to compile. Usually that's a good thing.
(3) &dyn Trait
Darmani
(1) Oh sure, the syntax is easy. Getting it to borrow-check is somewhere between insane and impossible. As I said, I've had friends who are actual Rust experts give up trying.
(2) No, it stems from a compiler limitation (imposed in large part by the need for static memory layout), not because there's anything intrinsically buggy about doing this.
(3) Look up "dyn-compatibility", for the largest, but not the only, problem with doing this.
jvuygbbkuurx
Maybe it depends on the application, but web servers are effortless with something like axum. Libraries can do a lot of heavy lifting to expose straightforward coding patterns. I never had any problems like you described with database connections and such. In Rust with db pools things just work and get closed on drop etc. I would never even consider making a higher order function for that.
Only other language that I think gets close to rust ergonomics is Kotlin, but it suffers from having too many possibilities for abstractions.
pjmlp
Depends very much on the market.
On my line of work we don't do Web servers from scratch, we use lego pieces like with enterprise integrations.
Think Sitecore, Dynamics, Sharepoint, Optimizely, Contentful, SAP, MongoDB, Stripe, PayPal, Adobe, SQL Server, Oracle, DB2,.....
Axum offers very little over existing .NET, Java, nodejs SDKs provided by those vendors.
dagi3d
That's pretty interesting. I was thinking about starting a new pet project and was considering doing it in Rust to learn as I never tried anything with it and after some small pocs I had the feeling it was too verbose to my taste, but wasn't sure it was just me and/or my lack of experience with Rust. Still, wonder if it's still worth it to give a shot considering other positive elements of the language.
jandrewrogers
Verbosity aside, whether or not Rust is a good fit depends on what you are doing. The language design is broadly optimized for low-level application code, like command-line utilities. If that is the use case then you are likely to have a good experience.
For high-performance and high-reliability systems code, Rust is much more of a mixed bag. In a systems context it lacks the ability to easily and ergonomically express idiomatic constructs important for safety and performance that are trivial to express in e.g. C++. When you run into these cases it can get pretty ugly.
Most people don't write this kind of systems code. What most people call "systems code" is really more like low-level applications code, where Rust excels. It is software like highly-optimized kernel-bypass database engines and similar where the limitations start to show.
mplanchard
It’s worth learning, in my opinion, but I’ve been writing it professionally for the better part of a decade, so my opinion may be a bit skewed.
It’s my favorite language to write, and it gets much easier over time. As a first approximation, if you’re doing something and it feels insanely difficult like the GP is talking about, try to think of a different way to do it rather than fighting it. There’s usually a way to do almost anything, but it’s more pleasant to lean into the grooves the language pushes you towards.
Darmani
Rust is definitely very verbose. I think it's a fine choice -- probably even the best choice -- if you're doing systems code or if performance is your most important feature. If not, I would pass.
nothinkjustai
Some say verbose, some say explicit. I had the complete opposite reaction to Rust than this other person, and I don’t think I’m particularly smart so I don’t think it’s purely a matter of intelligence. Even asynchronous rust is pretty easy once you get the hang of it.
IshKebab
That is a very unusual Rust experience. I find "application code" very pleasant to write in Rust. Of course there are things that aren't as ergonomic in Rust as in other languages (e.g. callbacks) but that's true of pretty much any language.
Darmani
I have heard this reaction from others before. One of the Rust expert friends I consulted with told me "I'm not convinced you're not trying to write Haskell-style code in Rust;" I told him the patterns I was struggling with were both trivial and common in Java.
The things I found quite difficult or impossible in Rust were, to me, pretty basic patterns for modularity and removing duplication; it's really shocking that these complaints are not more common.
I currently have but two hypotheses for why.
First, the second problem I mentioned only comes from using tokio, which causes your top-level program to secretly be using a defunctionalized continuation data type, derived from where exactly in other files you put your await's, that might not be Send. If you're not using tokio, you won't experience that issue.
Second...I was kinda told to just give up on deduplication and have lots of copy+pasted code. This raises the very uncomfortable hypothesis that Rust afficionados are some combination of people who came to Rust early and never learned traditional software design and don't know what they're missing, and people who were raised on traditional good software engineering but then got hit with Rust's metaphorical baseball bat of lack-of-modularity over and over until they got used to being hit with a baseball bat as a normal pain of life.
I don't like either of these explanations (esp. with tokio seeming quite dominant), so I'm awaiting an explanation that makes more sense. https://xkcd.com/3210/
maz1b
I think perhaps contrary to popular belief, Mercury choosing Haskell and their early leadership having such a storied experience in it probably played some non-insignificant role in their success.
As a customer of Mercury, it's truly one of the critical companies my toolkit, and I just can't help but feel that their choosing of Haskell made their progress, development and overall journey that much better. I realize that you can make this argument with most languages, and it's not to say that a FP lang like Haskell is a recipe for success, but this intentional decision particularly pre "vibe coding" and the LLM era seems particularly prescient, of course combined with their engineering culture that was detailed in the post.
1024bits
I'd also wager that hiring generalists with no prior experience in the language actually helped them, because they got to instill their culture and style from the ground up with their new hires. Pre vibe-coding, most of those people wouldn't have wanted to just jump in and hack away with zero instruction.
spopejoy
I would counter that it was probably their startup-oriented fintech focus and execution that led to their success. I love good tech culture as much as the next HNer but I've seen companies with great tech die because of bad biz focus.
I might further argue that the startup-y fintech culture led to good tech culture. The fact that they didn't start as a bank (as opposed to say SVB) means that they didn't have to be as conservative, or integrate with some horrific ancient tech stack.
I'm pleased they've had such success with Haskell, but much like Jane Street and OCaml, I think the language choice is almost accidental*, as much as the companies would like you to believe otherwise.
I would like to know however what they're doing for front-end. I would guess that all of this Haskell is back-end only.
*EDIT by "accidental" I mean to the business side. Jane St had some good trades, Mercury had great focus and execution. They also have some good tech :)
ipnon
I have noticed that everything in their app Just Works. It's very satisfying coming from other services!
jwsteigerwalt
I feel the same way. I only started using Mercury about 6 months ago and I’m continually impressed that it just makes sense.
thot_experiment
My bestie works at this company and, looking from the outside, they have a good engineering culture. I do think Haskell is the right tool for the job, and they are playing to its strengths, but part of me wonders if a lot of their success is attributable to the place just being well run in general.
meken
> but part of me wonders if a lot of their success is attributable to the place just being well run in general
That was my sense reading the article - that the author would be running a successful engineering org using any language really.
ironmagma
That would not run counter to the popular (whether true or not) idea that by using functional programming languages you filter for a higher quality labor pool / applicant pool.
spopejoy
That wouldn't apply here, since as the article says they hire "generalists, and most of them have never written a line of Haskell before joining."
In any case, I think the "Haskell tax" concept (where you can pay well-paid programmers less if you have a Haskell shop) is stale by now. Rust attracted away a lot of FP-ers, plus mainstream langs like C++, Java and even Typescript got smarter. Haskell's biggest problem by far is the tiny labor pool, which Mercury seems to wisely avoid.
sn9
The post explicitly makes the case for the filtering playing a role. Ctrl-F "Python".
runevault
The version I've always heard is just well designed but less popular languages, but the ones I can think of were all functional (Haskell/F#/OCaml/Clojure/Elm/Erlang)
germandiago
I am currently reading Real-World Ocaml and I am really learning more about functional programming, though I was already familiar with a few things.
Looks to me like you can build amazingly robust pieces of software with functional programming.
However, I am divided.
I have a backend that works in NiceGUI for a product. It does the job. The code is reasonable and MVVM. The most important task it does is connecting to a websocket per customer and consume data to present some analytics.
I will not have a great deal of customers, maybe in the tens or maximum hundreds visiting the website.
I also want REPL and/or hot reload, but I am aware that as I grow features (users admin panels, more analytics, etc) maybe functional programming can do a good job transforming data pipelines.
But Haskell or Ocaml are static. I guess if I want something later that grows and scales and is still dynamic Clojure or Elixir should be a good choice. But at the same time I am afraid that if at some point I need to refactor, things will go wrong.
Currently I use Python with Mypy. All is written in the backend: the frontend is generated by NiceGUI from the backend.
spopejoy
Not sure about Ocaml but with Haskell you can use ghci/`cabal repl` and get blazing fast reload of a web app as you develop. Tbh a lot of haskellers don't take advantage of this IMO.
germandiago
Ocaml seems to have a REPL as well, not sure how it works outside of Emacs (in Emacs with utop looks good what I am trying).
Haskell is so, so correct that it tends to get a bit in the way, and you tend to encode everything in the type system. This is a blessing for correctness and a curse for other stuff (tracing, debugging, adding side-effects).
This is the reason why I am looking at Ocaml instead of Haskell: not so pure, more pragmatic and supports imperative programming well.
As I said, it is double-edged.
GuB-42
The problem I have with functional programming is debugging. Or more precisely, I would say it is a strength of imperative programming, especially the procedural kind.
In functional/declarative style, you generally describe how things should be, not how things are made, and you let the language piece everything together to get the expected result in the end. It is all well and good (and even better) if you did everything right, but what if you didn't and you don't get the expected result? How do you find the bug?
In a language like C, it is relatively straightforward: go line by line, look at the execution state (the RAM, essentially) between each step and if it isn't as expected, something wrong must happen at that line, so you step in and progress like that. Harder to do when the language goes out of its way to hide the state from you, as it is the case for functional programming.
It is interesting that the longest section of the article is about this problem: "design for introspection", where the author has to go out of his way to make his code debuggable. A good insight into the often overlooked practical side of Haskell.
Weebs
A lot of code ends up being easy to factor out into small pieces for tests. I can't speak for Haskell, but coming from another ML with eager evaluation, step debugging works as you would expect.
mrkeen
My trick to debugging is to simply make every nontrivial piece of code return the same output for the same input. (The trivial pieces of code too!)
No other (mainstream) language comes close.
But what about situations where the code cannot be written in such a form (like shared memory concurrency)? I use transactions for that.
No other (mainstream) language comes close.
And that's without the low hanging fruit of no nulls, no implicit integer casts, etc.
It is absolutely true that debugging Haskell code is harder than debugging other languages. If you took away the bottom 90% of footguns, how could it not be?
zelphirkalt
Same output for same input is implicitly part of FP. (Not for OOP, due to mutation and side-effects.) I would think that when writing Haskell, one naturally always aims for same input same output.
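The "same output for same input" property can be sketched in a few lines of Python (all names here are illustrative, not from the article): a pure function makes any surprising result directly replayable, while its stateful counterpart's result depends on hidden history.

```python
def apply_fee(balance_cents: int, fee_cents: int) -> int:
    """Pure: the result depends only on the arguments."""
    return balance_cents - fee_cents

class Account:
    """Stateful counterpart: the result depends on prior mutations."""
    def __init__(self, balance_cents: int) -> None:
        self.balance_cents = balance_cents

    def apply_fee(self, fee_cents: int) -> None:
        self.balance_cents -= fee_cents  # mutation: call order now matters

# With the pure version, a bug report ("balance 1000, fee 250 gave X")
# becomes a one-line regression test:
assert apply_fee(1000, 250) == 750
```

Debugging the pure version is replaying one call; debugging the class requires reconstructing every mutation that led up to the bad state.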
pjmlp
There are plenty of debuggers for FP languages, it is no different from a language like C.
WorldMaker
Functional programming debugging is often "REPL-guided" in a way that imperative programming often is not. This is not unique to functional programming, though. Even in the (mostly) imperative languages Python and JavaScript, you may be more likely to use REPLs of one sort or another (Python shells, browser consoles, Node/Deno/Bun shells, notebooks, etc.) as your first layer of debugging.
There are interesting trade-offs in REPL-oriented debugging. One of the big things is that in a language like C you might often start first from whole program debugging and breakpoints to try to hit exactly where you think the problem is. In a REPL-oriented world you often try to build the components of your program in a way that you can test more units of it directly in the REPL.
Your module/API/type boundaries in a REPL world come to mirror your debuggability story. There is sometimes more pressure to get those right and easy to use than in imperative languages like C/C++, because you might want to reach for them directly in a REPL.
But yes, a tradeoff versus whole-program-first debugging is that it sometimes becomes harder to isolate complex integration issues between your units in strange real-world scenarios. However, that REPL-first approach often encourages minimizing your integration "surface" to a bare minimum, so FP languages often don't exhibit some of the same integration effects you see in imperative languages.
> Harder to do when the language goes out of its way to hide the state from you, as it is the case for functional programming.
Functional programming languages aren't really hiding any state from you. They also are running on imperative hardware and still dealing with real hardware states. At some point there is a translation between the "worlds" (which also likely aren't as different as you seem to think that they are). You still have those imperative breakpoints and imperative debuggers to fallback on.
That's why the term is "REPL-guided" debugging. You can use a REPL to pinpoint the problematic unit (the exact module/API/function) and the problematic input giving you the surprise output. If you can't see the bug in the source as written you can still send it to an imperative debugger and watch nearly the same "line-by-line" experience and hope it provides additional missing context. Even better by that point you probably don't need to choose good "breakpoints" because you've already isolated the problem enough in the REPL to have "natural breakpoints" because the unit you are debugging may be small and narrow enough that stepping just that unit is all you need.
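A hypothetical Python sketch of what that REPL-guided bisection looks like (the function and inputs are made up for illustration): you expose small units you can poke at directly in a shell, then step only the unit you've pinned down.

```python
def parse_amount(text: str) -> int:
    """Parse a decimal amount like '12.34' into cents."""
    dollars, _, cents = text.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0")[:2])

# In a REPL session you bisect the surprise by hand:
#   >>> parse_amount("12.34")
#   1234
#   >>> parse_amount("12.5")   # the suspicious input from a bug report
#   1250
# Once the offending unit and input are isolated like this, stepping just
# this one function in a debugger is the "natural breakpoint" described above.
assert parse_amount("12.34") == 1234
assert parse_amount("12.5") == 1250
```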
> It is interesting that the longest section of the article is about this problem: "design for introspection", where the author has to go out of his way to make his code debuggable.
I think you took the wrong message from that section. That section wasn't about debuggability; it was about observability. It was about connecting logging/telemetry systems correctly, mocking fakes during testing, and adding retries/circuit-breakers at a systematic, app-wide level rather than relying on individual libraries to get it right. In the imperative world these aren't debugging issues either: these are Dependency Injection issues. These are middleware-installation issues. These are factoring concerns like using abstract interfaces over concrete classes at your public API boundary.
The design suggestions are factorings. They don't impact debuggability, they impact how easy it is to install observability middleware to someone else's public API.
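As a minimal sketch of that factoring in Python (all names and URLs here are invented for illustration, not from the article): business logic depends on an abstract client function, so tracing can be layered on app-wide, and a test fake injected the same way.

```python
import time
from typing import Callable

HttpGet = Callable[[str], str]  # the abstract boundary: url -> response body

def with_tracing(get: HttpGet, log: list) -> HttpGet:
    """Middleware: wraps any client with timing, without the client's cooperation."""
    def traced(url: str) -> str:
        start = time.perf_counter()
        body = get(url)
        log.append((url, time.perf_counter() - start))
        return body
    return traced

def fetch_partner_status(get: HttpGet) -> str:
    """Business logic sees only the abstract function, never a concrete library."""
    return get("https://partner.example/status")

log: list = []
fake_get: HttpGet = lambda url: "ok"   # a partner-outage fake plugs in identically
client = with_tracing(fake_get, log)
assert fetch_partner_status(client) == "ok"
assert log[0][0] == "https://partner.example/status"
```

The same injection point serves tracing, timeouts, retries, and fault simulation, which is why concrete third-party calls buried in a binding are so costly.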
tromp
A similar Haskell success story (from Bellroy) is the subject of an upcoming Melbourne Compose meeting: https://luma.com/uhdgct1v
neilv
I worked on a somewhat similar system in a fringe language (Scheme, and later, Racket) that got huge, but that remained manageable and high-velocity over a long period by a small team.
We didn't create many bugs, and usually functionality could be added very rapidly (e.g., we were the first to achieve a certain certification for hosting sensitive data on AWS).
Though occasionally functionality had to be added more slowly, because we had to write from scratch what would be an off-the-shelf component in a more popular platform. But once we did it, it worked, and we were back to our old velocity, and not slowed by the bloat and complexity of dozens of off-the-shelf frameworks. We could also adapt rapidly because we controlled a manageable platform, which is how we were able to move fast to AWS when there was a need.
The system also had some technical bits of architectural secret sauce from the start (for complex data, and Web interaction), which enabled a lot of rapid development of functionality, and also set the tone for later empowering smartness.
One difference with our system, from the Haskell fintech, was that our team size was very small (only 2-3 software engineers at a time, and someone who managed all the ops). So we didn't have the challenges of hundreds of people trying to coordinate and have a coherent system while getting their things done. Instead, there was usually one person doing more technical and architectural changes to the code, and a prolific other person doing huge amounts business logic functionality for complex processes.
With careful use of current/near-term LLM-ish AI tools, software development might find some related efficiencies of very small and incredibly effective teams. But the model that comes to mind is having a small number of very sharp thinkers keeping things on an empowering and manageable path -- not churning massive bloat to knock off story points and letting sustainability be someone else's problem.
le-mark
It’s hard to imagine what two million lines of Haskell could possibly be doing. I mean, that’s a lot of code, and I have the impression that Haskell is “tight”, meaning a little code can do a lot. Maybe they have a lot of libraries to do things like JSON serialization/deserialization, REST API frameworks, logging, etc.?
imoverclocked
From TFA:
> The problem is that we cannot trust code we cannot instrument. If a third-party binding makes HTTP calls through concrete functions, we have no way to add tracing, no way to inject timeouts tuned to our SLOs, no way to simulate partner outages in testing, and no way to explain the 400ms gap in a trace except by squinting at it and developing theories. So we write our own. More work upfront, but the clients we write are observable by construction, because we built them that way from the start.
troupo
> If a third-party binding makes HTTP calls through concrete functions, we have no way to add tracing, no way to inject timeouts tuned to our SLOs, no way to simulate partner outages in testing, and no way to explain the 400ms gap in a trace
Given that tracing etc. is IO, are they just threading IO through the entirety of all their Haskell code?
verandaguy
Nit: the quality of a language that you call "tight" is usually called "expressive." You can use few characters to express a relatively very abstract idea.
Some people call this "high-level," too.
I will say, though, that 2 million lines of code is much less code than it sounds like at first glance, especially for a company in a highly-regulated space like finance, plus a few years of progress.
appplication
Agree on the 2M lines of code point. Looking at GitHub stats, I’ve personally written about 500k LOC over the last 6 years, and that’s not including my teammates’ contributions etc. There are a lot of things on our roadmap, and I would consider the codebase to still be immature and feature-incomplete. And this is all for a particular niche in a gigacorp.
If anything, having your entire company’s codebase be 2M loc and it be a functional product seems reasonably efficient to me.
reikonomusha
Haskell is typically terse in addition to expressive. So "tight" seems more apt.
Lisp is traditionally not so terse, but still expressive.
gf000
> Haskell is “tight”
Absolutely not an objective metric, but I have found that Haskell just has a different "aspect ratio". Line count may be somewhat lower, but the word count is largely the same as in more imperative OOP languages.
LeCompteSftware
I obviously don't know what the codebase looks like, but
a) Haskell's reputation for terseness partially comes from its overrepresentation in academic / category-theoretic circles, where it's typically fine to say things like `St M -> C T`. But for real software it's a lot more useful to say things like `TransactionState Debit -> Verified Transaction` etc etc.
b) The other part of Haskell's terse reputation is cultural, something extending back to LISP: people being way too clever about saving lines with inscrutable tricks or macros. I imagine that stuff is discouraged at a finance company like Mercury in favor of clarity and readability: e.g. perhaps the linter makes you split monadic stuff into pedantic multiline do expressions even if you can do it in a one-liner with >> and >>=.
undefined
jappgar
It's a double-edged sword. Two million lines is a major feat. It also represents a significant maintenance burden.
The advantages to Haskell are theoretically obvious. The downsides are harder to intuit.
The temptation is to model _everything_ as types. The codebase itself becomes a _business specification_, not an application. Every policy change is a major refactor (some of which are shockingly high-touch thanks to Haskell's safety).
The lesson is you cannot have your cake and eat it too. Eventually you become trapped by your types.
Haskell is really impressive and powerful, perhaps especially at this scale. However it brings its own unique problems. The temptation to model business logic as types leads to rigid structures. And the safety these structures bring can blind you to other classes of risk.
tikhonj
You can do a great job of navigating that as long as you have some experienced engineers with taste building the core pieces. You can't have it all, but you can have a lot.
I interned at Jane Street years ago and they seemed to do a great job of walking that line (in OCaml rather than Haskell, but same difference). They moved remarkably quickly despite working in an area with a lot of inherent complexity and where reliability and correctness are an existential concern to the business. (Which, perhaps surprisingly, is massively more the case for a trading firm than for a Mercury-like neobank...) In hindsight, a key thing Jane Street did was hire some experienced OCaml programmers with great taste (like Stephen Weeks, the author of MLton) and let them build the core libraries and guide the whole codebase from the beginning.
Unfortunately, this is one of the things that Mercury didn't do anywhere near as well.
tylerchilds
Typescript too: https://www.richard-towers.com/2023/03/11/typescripting-the-...
Tbvh the biggest downside of a Turing complete type system is that you can theoretically implement an application that compiles to dust.
undefined
tauroid
[dead]
sriku
I wish people'd report metrics like "40 type classes, 5000 functions, 300 instances, 20 monads, 14 functors, ..." and such, instead of "lines of Haskell".
wbsun
I wish people'd report metrics like "transactions per second, 99.x% availability with N secs/mins p99 latency, ..." when describing how practically useful and how effective a real-world banking/transaction system is, which further proves the practicality of the programming language.
isatty
I don't believe I'm the target market (I'm plenty happy with my small CU), and seeing their billboards makes me want to never use them BUT: seeing this post and their culture and that they use Haskell is kinda changing my mind.
markdennis
What’s wrong with their billboards?
> Haskell gives you tools to encode these incantations in types so they cannot be forgotten. This is, for my money, the single most valuable thing the language offers a production engineering organization.
Haskell is admittedly probably the most powerful widely (or even somewhat widely) used language for doing this, but this general pattern works really well in Rust and TypeScript too, and is one of my very favorite tools for writing better code.
I also really like doing things like User -> LoggedInUser -> AccessControlledLoggedInUser to prevent the kind of really obvious AuthZ bugs people make in web applications time and time again.
I've found this pattern to be massively underutilized in industry.
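A minimal sketch of the User -> LoggedInUser step in Python (the names and the toy password check are purely illustrative): sensitive operations accept only the narrower type, and the only way to obtain that type is through the authentication function, so "forgot to authenticate" becomes a type error under a checker like mypy.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class User:
    name: str

@dataclass(frozen=True)
class LoggedInUser:
    name: str

def log_in(user: User, password: str) -> Optional[LoggedInUser]:
    """The sole constructor of LoggedInUser (toy check, for illustration only)."""
    if password == "hunter2":
        return LoggedInUser(user.name)
    return None

def view_dashboard(user: LoggedInUser) -> str:
    # Accepting only LoggedInUser means unauthenticated callers can't get here.
    return f"dashboard for {user.name}"

session = log_in(User("ada"), "hunter2")
assert session is not None
assert view_dashboard(session) == "dashboard for ada"
# view_dashboard(User("ada"))  # mypy: incompatible type "User"
```

The same chaining extends to an AccessControlledLoggedInUser produced only by an authorization check, pushing the AuthZ decision to a single, unskippable place.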