
Notes on structured concurrency, or: Go statement considered harmful (2018)


Can the discussions here try to stay away from Go bashing. This post is not about Go. It's about Structured concurrency VS background tasks.

There's many interesting discussions one can have about the latter but the former turns into toxicity.

With that said, the Rust teams are very interested in structured concurrency at the moment. Rust 1.63 is going to get scoped threads, which is structured concurrency for threading. Myself and others have also been looking into structured async, though it's not as easy to implement.

I personally love the concept and I hope it takes off. You rarely need long-running background tasks. If you do, you probably want to have a daemon running alongside your main application using structured primitives, and then dispatch work to it instead. This really simplifies the mental model once you get used to it.


Structured async sounds like a very interesting idea, this is definitely a space where it feels intuitively as though there should be a better way to express what we mean and get roughly the same machine code but better human understanding for both author and future maintenance programmers - if somebody nails this it could be as big a deal as the loop constructs in modern programming.


> I personally love the concept and I hope it takes off

I have used structured concurrency with Kotlin and you are right, it is absolutely easier to reason about concurrent code that way.


Structured programming can guarantee that the control flow graph has constant treewidth[0], which enables the use of parameterized algorithms for efficient static analysis. I'm wondering whether structured concurrency can impose some additional constraints on the ordering of tasks that make it easier to analyze, e.g. for linearizability or other properties.

[0]: Mikkel Thorup. 1998. All structured programs have small tree width and good register allocation


The OpenJDK team is also pursuing structured concurrency, so we should shortly have multiple interpretations to compare. Exciting stuff for folks that write highly concurrent software.


But should they be writing highly concurrent software, without thinking bloody hard first I mean?

I can see this being the next Big Data or NoSQL 'must have' bandwagon.


> But should they be writing highly concurrent software, without thinking bloody hard first I mean?

Yes, by necessity. We're 15 years past the end of the race for frequency, and the end of Moore's law is getting ever closer. Concurrency and parallelism are becoming table stakes for both reactivity and performance. This means making them reliably usable and composable is getting more and more important.


Well I don't know about most people but to speak for myself I mostly write high performance network servers, databases and queues and in my world concurrency (if not parallelism) is strictly necessary. For me it's also not a recent development or a fad, it's been this way my entire ~15ish year career.

I imagine folks writing Web software, UIs and heavy desktop applications will also benefit from these developments but those areas are out of my core expertise so I can't speak to exactly how useful structured concurrency will be for them however I can see very clear applications for my domain.


If you do any kind of UI work, you are already considering carefully what to run on the UI thread (unless you're using JS, but then you brought this upon yourself). Additionally, many reactive patterns require you to collect a flow, which blocks the whole thread. Launching a coroutine on a background dispatcher is safe and simple.

There is a middle ground before "highly concurrent", and it's "I don't really care what you do, but please use the cores I have and don't block the main thread, thank you".


Swift has task-based structured concurrency since last year, and it looks really nice.


Very interesting. Thanks for sharing!


Interesting article, which gives some good ideas on how to structure concurrent programs with less shooting at your feet :). It's a bit long-winded until it comes to the actual core concept being presented: spawn concurrent routines within a block which doesn't exit until the last routine has exited. This is a good concept and can make a lot of code clearer. It prevents some data races, as you can reason that after the block no concurrent operations are still ongoing, but of course the block could lock up if one of the routines doesn't finish. This is certainly an interesting concept I might try out myself more specifically. It is also important to know, especially as the "go" statement of the language Go is put into the headline, that this very language already supports nursery-like structures; it just doesn't have the syntactic sugar the author's Python extension has.

It is called a wait group. See here for an example:

So, except for the syntactic sugar around it, nurseries compare to creating a wait group in a block, spawning goroutines which each call Done() on the wait group on exit, and at the end of the block calling Wait() to erect the same boundary nurseries do. Of course you can also pass the wait group object to any goroutine created inside other goroutines. This is a very common pattern in Go, but indeed, it probably should be presented more clearly and up front in tutorials about goroutines.

So for that, I will keep the article around; it shows the concept nicely - perhaps I might do a pure "go" version of it, which then shows the go implementation of nurseries. It might be nice to add to the original article that not only the presented Python library is a solution, but that there is also a native go way of achieving it, as the article uses "go" as the negative example :p


I think you missed the point of the article. It's suggesting that goroutine-style concurrency should be completely replaced.

Much like if/for etc. have replaced goto, structured concurrency can replace goroutines. Note: goto can be used like an if or a for loop. You've just made the argument that goroutines can be used like 'nurseries'. From the article's perspective, you've essentially argued in favour of goto.

I want to note again that this article is not actually about go. It is the same in most languages with concurrency primitives.

The key takeaway is that having to manually manage waitgroups leaves room for mistakes, or for introducing spaghetti concurrency. You might be used to it at this point, but that doesn't make it the best system for the future of concurrency.


No, I didn't miss the point of the article. I just feel it is important to note that the proposed style of concurrency is available in go, and to point out how you would implement it if you cannot use this Python library.

I would agree, it would be nice to have a similar syntactic construct in go to enforce this pattern, though it is actually possible to basically implement that with higher order functions.

And yes, I think goto has a place in any language, if you are aware of how you should and shouldn't use it. In most cases the typical constructs of structured programming are preferable, but not in all.


It's true that with Turing completeness it's in some sense only ever a matter of style. But that's not a very useful observation in practice.

If you really thought the arbitrary jump "has a place in any language", you would not use basically any language from the 1980s onwards, since they all neuter this feature for a good reason, as the article explains, and some of them do away with it altogether.

Even C++ - a language that has never seen a gun it won't supply loaded and pointed at your feet - does not allow you to goto labels in other functions, and will run the necessary constructors/destructors when you enter/leave scope with a goto statement.


> No, I didn't miss the point of the article. I just feel it is important to note that the proposed style of concurrency is available in go

Apart from the TFA covering a particular python lib, I'd like to point out how the Go philosophy worked before. Given:

Situation 1: "the proposed style of concurrency is available in go", so let's have a free ride, and we can have linters and human reviewers trying to catch all the concurrency bugs faster than contributors insert them. /s

Situation 2: the proposed style of concurrency is the only one available in the language.

Approach 2 would make sense.


It's not the same. Anything in the nursery can cause cancellation of all async work. Cancellation is safe to unwind in all workers by scope-exit rules.

Completion is just a type of cancellation.

Spawn 10 workers to look up "Dog" in 10 different dictionaries; the first one to get the answer wins. This is hard to do without language/runtime cancellation support.

Note: below is a practical lib to get close to this.


You can easily cancel worker goroutines in Go using contexts and their associated cancel functions (listen on ctx.Done() in a select statement).


> This is hard to do without language/runtime cancellation support.

It is trivial if the workers can communicate using a shared flag that represents "Dog is found."


This was also my immediate thought: wait groups and a channel for any error would work just like the nursery. However, I still see the advantage of this new approach as providing sane defaults. Right now you need to structure the concurrent routines yourself and are free to mess up; nurseries turn that the other way around. I'm not sold on it, but I may give it a chance.


At best, waitgroups are to structured concurrency as gotos are to loops/ifs (structured programming): you can implement the latter in terms of the former, but the structured part is better (even if/because it is less powerful), as history shows.


Waitgroups are not the same thing as they cannot be dynamically sized in the same way nurseries can.


Not sure what you mean by this. You can dynamically add to a wait group using the Add() method.


So I slightly misremembered the subtlety: it is more forgiving, in that Add can be called while Wait is in progress, as long as a goroutine is still executing in the group.


I have been using go professionally for a while and I've found it to be quite remarkable, so I wanted to share a few hints so people can find those remarkable parts faster than I did.

To get a better sense of go there are five essential concurrency features:

1. Go statements are a nice syntax to run background micro threads (as mentioned in this article)

2. Go channels pass messages across those threads as fixed-size memory queues, and, powerfully, adding an item to a full queue blocks until the item can be added. (That second part, blocking on add, is very powerful, as it prevents the flow of concurrency from infinitely filling queues that are never consumed! There is more you can do with channels via fixed-size buffering to soften the blocking, but that blocking behavior is the key thing to consider.)

3. Go select statements (not switch statements!) let you watch several channels at once in a thread-safe way. They are essential for using channels properly, and they help manage almost all channel flow (consumer thread progression, error handling, done detection, etc.)

4. Go context objects let you clean up background threads based on multiple criteria, server errors and timeouts being the most common. The best hint that you are in a concurrency-friendly function is usually that it takes a context as an argument for cleanup purposes.

5. Go wait groups let you wait for all the go statements spawned to finish before proceeding (simple, but essential at times)

I know it's hard to learn the entirety of a language without using it daily, but I encourage people to try out go to experience these five parts together. Go is truly excellent at expressing some hard concepts well. That doesn't make it easy — concurrency isn't easy — but it is easier with go's constructs than without.


> Go statements are a nice syntax to run background micro threads (as mentioned in this article)

That's literally the opposite of what the article says. Which btw isn't about Go specifically, but about concurrency in general.


Well, go statements are about as nice as you can get with old-school concurrency primitives. The author of the article believes we need completely new concurrency primitives.


Yes, you may agree or disagree with that, and please do share your thoughts, but the discussion should stay on topic. The parent comment goes on a lengthy tangent about why they like Golang that is completely unrelated to the article.



Go Statement Considered Harmful (2018) - - March 2021 (82 comments)

Notes on structured concurrency, or: Go statement considered harmful - - April 2018 (230 comments)


In related news:

"Considered Harmful" Essays Considered Harmful


I can't take a hypocritical article seriously.


Brilliant solution, terrible name.

Took me a while to get it — child processes belong in nurseries. Bad abstraction, because the key here is processes. Lots of things have child nodes.

And what happens in nurseries? Growing? Maybe they were thinking of watching over them, i.e. babysitting, and it's a cultural terminology difference.

But it would be just as silly to call a thread monitor/manager a babysitter as a nursery.

Like I said, it’s a great concept and a valuable abstraction, but I fear it will need a better name to take off.


> Took me a while to get it — child processes belong in nurseries. Bad abstraction, because the key here is processes.

Abstractions have nothing to do with their names. They are not good/bad based on what they are called. You might be conflating metaphors/analogies with that.


I hear the term "threadset" in other discussions about structured concurrency; I think "threadset" would be a better name.


A nursery is just an errgroup. I almost never have to use the `go` keyword directly, only through errgroups. Now I can see that it's because `go` is usually too low-level to be used on its own. Not sure I agree with removing `go` entirely, though.


The article does not make clear whether the cases where Go struggles with concurrency are also cases where structured concurrency improves the situation.

Pretty sure "sync.WaitGroup is too hard to reason about" is not a real issue people are having.

AFAIK most of the challenges occur within that structured block, so to speak. Robust communication between concurrent processes is the hard part, not managing their basic lifecycle.


After a long time experimenting with a lot of patterns, I found the "operation queue & operation" building blocks from Objective-C the most versatile and powerful construct. They let you do all those things where the other alternatives I've tried often fall short:

- you can cancel them

- you can pass a pointer to the operation from places to places

- you can set dependencies between operations so that one doesn't start until another is finished

- you can set its execution priority (on the queue)

Syntax may not be the best, and there may be a few problems with encapsulation (an operation can do any kind of memory manipulation), yet I keep coming back to them whenever I have to do serious and robust work.


So 4 years later where is Trio? Looks like OP does not contribute to the lib anymore?


AFAIK it's a very popular and active library still.


Well, they may be a thought leader. It's up to others to implement their greatest ideas.


Read the article but honestly don't fully understand it. How does using this not end up with one big nursery started somewhere in your main, passed down everywhere, and basically scoped to the entire app lifetime? That gets you back in the same spot as before.

My (rust) code using Tokio starts a bunch of tasks in main that live for the entire lifetime of the app. They’re independent and communicate over channels with each other – and possibly the outside world.

Hard to see what problems this causes that Trio can fix. But maybe I’m zooming in on the wrong use case?


I think the idea is that you can do that for tasks that are meant to run forever. It's global scope. But you use smaller scopes for tasks that are supposed to shut down. In Go terms, it's like having an automatic waitgroup so you can't leak goroutines.

It doesn't sound like you need it for what you're doing.


Kind of a contrived example, but let's say you kick off a job that downloads a "Post" and associated data to display it, like a "Profile" and "Comments" (with sub-comments, profiles, etc.), but you decide that hitting an unhandled error in the Profile loading stage should cancel loading all the rest of the sub-jobs you've kicked off. Nurseries do this for you: you've scoped the whole chunk of concurrent operations, and if an unhandled exception bubbles up to the nursery supervisor, it can cancel all the remaining parts. Then you can handle that issue if you choose to, or just skip loading that post. If you scope that to the whole program, you'd have to handle that failure in the whole program. If you scope these things more finely, you can just cancel and retry or ignore that chunk, without bleeding that logic down the stack and increasing complexity.


I can see how this works, but my personal experience is that finely scoped concurrent code is already easier to reason about than larger-scoped concurrency anyway. Hardly a controversial statement, I guess. So if nurseries help me reason about finely scoped stuff even better, that's great of course, but it only solves part of 'the problem'. And maybe not the most interesting part.

Just quickly running a task to perform a handful of stuff concurrently for a single purpose (like doing a few network requests and packing the results) is hardly where I encounter big issues. The compositional behavior of Futures really helps here I think. A bunch of `Future<Result<A, Error>>` go a long way.


I've liked using Kotlin's coroutines, which put in place the behavior described in the article. I like them in comparison to the unscoped concurrency that exists in Go and other languages by default, or to the addition of a context that needs to be passed to every function, because I don't like it polluting the type signatures of the functions.

I actually have found Rust's futures/results to be more kludgy than Scala's, where I could flatMap in fairly nice syntax, provided I was fine with using transforms.

Do you pass an early-cancellation variable to your future-returning rust functions?


I don't see the link between Golang's go statement and goto, except that they both cause a fork in control paths. Go's go statement is not bad.

I wrote a userspace M:N scheduler which multiplexes N lightweight threads onto M kernel threads. It currently preempts lightweight threads and tight loops round robin fashion but I could implement channels between lightweight threads and implement CSP.

I created a construct for writing scalable systems concurrently called Crossmerge. It allows blocking synchronous code to be intermingled with while (true) loops and still make progress and join as a nursery does. There is a logical link and orthogonality between blocking and non-blocking and sometimes you need both in the same program.

I added a multiconsumer multiproducer RingBuffer ala LMAX disruptor to the M:N thread multiplexer and it handles IO in a separate thread. If I add epoll and io_uring I could also handle asynchronous IO.

My goal is to add an LLVM JIT and then I have an application server.

I write a lot of concurrency and parallelism in ideas4



> I don't see the link between golangs go statement and goto except they cause a fork in control paths.

This is a lot of why he goes into the history of "old-school goto" and its problems in the post. One of the issues with "old-school goto" is that you could never be sure if you called into a function whether it would actually return out of that function, or end up "goto"-ing somewhere else completely different, without going into it and inspecting it. Similarly, one of the issues with calling a function in Golang is that you can't be sure it didn't spin off a goroutine which is still doing some random work or other.

I mean, yeah, you shouldn't just randomly fork off a goroutine which retains references to state passed into a function without clearly documenting it. But there's nothing to stop you from doing it.


> But there's nothing to stop you from doing it.

And the crucial point is that because you can do it languages have to have weaker guarantees because they have to assume you could do it.