Hacker News

wokwokwok

Really, the problem isn't tokio. The problem is this:

> An inconvenient truth about async Rust is that libraries still need to be written against individual runtimes.

That's really the heart of it. If it were really just a runtime, it wouldn't matter what implementation you plugged in.

...but that's not true for Rust's runtimes. I mean, it's understandable: how can you have one runtime that is multi-threaded and one runtime that is not, and expect to be able to seamlessly interchange them?

I understand it's hard and a lot of work went into this, but let's face it. This article is right:

Practically speaking, tokio has become 'the' rust async runtime; but it's an opinionated runtime, that has a life cycle and direction outside of the core rust team.

That wasn't where we intended to end up, and it's not a good place for things to be. I, at least, agree: avoid async. Avoid teaching rust using async. When you need to use it, partition off the async components as best you can. I <3 rust and I use it a lot, but the async story stinks.

We should have an official runtime, officially managed, and guided by the same thoughts that guide the rest of the language.

What we have now is a circus. After 4 years of async being in stable.

wyager

I use at least 3 separate runtimes: tokio and 2 no_std runtimes (rtic and embassy). The latter would probably not be possible at all if there was an "official" runtime, because the official runtime would inevitably require allocation, and if it existed they wouldn't bother writing async in a flexible enough way that you could use it without an allocator.

The way async is implemented in rust is actually technically quite impressive, and would almost certainly not exist if there were some official green thread solution.

You could solve async/non-async polymorphism via the introduction of HKTs (and monads) - perhaps eventually they will be forced to do that.

In the meantime, if they can make a few changes like stabilizing TAITs and async traits, that would go a long way toward improving the ergos of async.

weinzierl

Not sure if this is an apt comparison, but I like to think that the allocator is a good precedent.

Similar to the async runtime, most software needs one, and most developers don't care much which one they use and are happy with the default allocator. Another similarity is that neither is just some ordinary old library; both are required by language features. We also usually don't use multiple ones in a single application.

Still, we allow the developer to choose an allocator or bring their own.

karavelov

For interop between runtimes, they need to add `std::async` IO traits that could be implemented by each runtime.

thinkharderdev

And APIs for timers!

zozbot234

> You could solve async/non-async polymorphism via the introduction of HKTs

Rust has stabilized GATs, which are comparable in power to HKTs while having better interop with the language's broader feature set.

wyager

I haven't thought about it super hard, but I suspect the ergos of that would be quite poor, as you would need to pass around the type of the trait object, even though all you really care about is the associated type constructor.

bluejekyll

>> An inconvenient truth about async Rust is that libraries still need to be written against individual runtimes.

> That's really the heart of it. If it was really just a runtime, it wouldn't matter what implementation you plugged in.

It is absolutely possible to make a runtime-agnostic library that can work over multiple runtimes. With the trust-dns libraries, we’ve managed to provide a resolver which is capable of working on async-std, Tokio (default), and even Fuchsia. It’s harder and takes planning; also, to be fair and fully transparent, we haven’t achieved this for all features, like DNS-over-QUIC.

> We should have an official runtime, officially managed, and guided by the same thoughts that guide the rest of the language.

I disagree. Rust is a systems-level language capable of being used to build operating systems or other embedded tools; having a single official runtime would make async Rust something you could not use in that context.

geodel

Rust situation reminds me of US military aphorism:

“amateurs talk strategy and professionals talk logistics”

The Rust community is endlessly talking about and obsessing over strategy, whereas the average Rust user suffers from a lack of attention to logistics: libraries, runtime usage, etc.

bluejekyll

Maybe you could express your concern differently? There are definitely a lot of ins-and-outs about many aspects of Rust. It operates differently from many other languages, sometimes in surprising ways.

I agree that in some areas there could be better guidance. Is Tokio the runtime most people choose? Yes. Would most people be fine choosing that for their daily work? Yes. Might you want to choose a different one? It depends on what you’re doing, others have different goals and tradeoffs. Are there interface choices regarding things like Send + Sync or IO interfaces/traits you pick that will have impacts on how you structure your code to make it portable across runtimes? Absolutely.

And finally, can Rust be better in regards to async development? Yes, everyone agrees that it should be. My big thing is that we really need async traits in the language. We have an excellent workaround with the async-trait macro until we get support for it in the language, but you need to discover that, and then recognize some of its idiosyncrasies in certain situations.

saurik

> how can you have one runtime that is multi-threaded and one runtime that is not, and expect to be able to seamlessly interchange them?

I feel like I do this in C++ right now without issue? I routinely mix/match coroutines from multiple runtimes, including one I built myself (which theoretically might could be multithreaded but very much right now is not and I know I rely on that still), one from cppcoro (which is a bit broken--I filed a bug with a detailed analysis, but it was never fixed--so I can only use a few parts that happen to add a lot of value), and one from boost asio (which is very much multithreaded and was a somewhat-impressive retrofit onto a more abstract design purely involving callbacks); I also effectively have a fourth, as another library I am using--libwebrtc--maintains its own I/O thread paradigm, and I have chosen to reinterpret a number of its delegation callbacks into coroutines. It involved some trivial adapters in a few places, but I developed those years ago and have long since forgotten as it all works so easily to willy-nilly co_await anything I want from wherever I am... is this really so difficult?

thinkharderdev

In Rust it is, yes, because Rust statically guarantees that you don't have data races (in safe Rust at least). So you have the marker traits `Send` and `Sync`, which indicate that a type can be sent between threads (`Send`) or shared between threads (`Sync`) safely. A multi-threaded executor, which can schedule tasks on different threads when they resume, has to make sure futures are `Send`, whereas a single-threaded executor does not have that constraint.

fiddlerwoaroof

Aren’t the same data races possible in async without threads? As soon as you suspend one task and start another, you have the problem that the currently running task can break the invariants of the suspended one, regardless of whether you’re doing a single-threaded event loop or threads running in parallel.

saurik

This honestly doesn't sound like a problem as such types fall into one of two categories: ones which need to execute on one thread--in which case resuming them should always resume on their native runtime as they are thread-locked: I already have to deal with this as I adapt between the runtimes in C++ and it simply isn't a concern--and ones whose storage in virtual memory are somehow fundamentally locked to a specific CPU core and I honestly have never myself coded one of these despite having done some extremely low-level development.

Like, here: if I am in my single-threaded runtime and I await something on a different runtime with a billion threads, MY continuation does NOT need to be able to resume on any of those threads as it CAN'T. To achieve "seamless interoperability" I just need to be able to await the other routine and resume when it completes, not somehow make the two runtimes merge into one unified one and violate their constraints. The ONLY data from my coroutines which should end up on a different thread is what I explicitly pass to the routine, not my continuation.

insanitybit

A lot of these complaints like "you need to have Send + Sync + 'static" and "oh no you need an arc or mutex" are identical problems in C++, except in C++ it's totally unsafe if you forego those.

saurik

I am under the maybe-totally-wrong impression that people are saying that the runtimes are incompatible, not that you merely need to think harder about the scoping rules and type traits; do I misunderstand what is going on?

surajrmal

I have managed to use async Rust for over 4 years and never once used Tokio. Primarily this is possible because I just avoid 3rd-party async libraries if they are tied to a particular runtime. It is limiting, but I think it's important to be lean on 3rd-party deps, so it's almost a good thing.

convolvatron

I'm very interested in this approach. Sorry to be a pest, but could you point to the base traits/interfaces for using async without, for example, tokio? This might help me a lot personally to get over some of my issues with Rust.

edit: is it just Future/await and nothing else?

pests

This is the Future trait (simplified here; the real signature takes `self: Pin<&mut Self>` and a `&mut Context<'_>` carrying the waker):

    trait Future {
        type Output;
        fn poll(&mut self, wake: fn()) -> Poll<Self::Output>;
    }

    enum Poll<T> {
        Ready(T),
        Pending,
    }
async functions get converted into -> impl Future<Output=original_return_type> automatically.

You poll() until it returns Ready.

wake() will notify you when it's ready to be polled again.

That's it.

Runtimes like Tokio and smol only exist to keep track of these futures, implement their own I/O APIs, launch some threads, and run this event loop.

cmrdporcupine

My understanding is you always need a runtime to play the async game -- something needs to drive the async flow. But there are others on the market, just without the... market domination... of tokio.

https://github.com/smol-rs/smol looks promising simply for being minimal

https://github.com/bytedance/monoio looks potentially easier to work with than tokio

https://github.com/DataDog/glommio is built around linux io_uring and seems somewhat promising for performance reasons.

I haven't played with any of these yet, because Tokio is unfortunately the path of least resistance. And a bit viral in how it's infected things.

But I'm planning on giving glommio at least a whirl.

tempodox

> What we have now is a circus.

I couldn't agree more. And my conclusion is, as it has been, to stay away from async until we have a sane situation.

chrisshroba

I'm new to Rust so please interpret this as curiosity and not criticism, but why not just use tokio? I understand that it's nice to build applications against a generic interface so that you can swap out libraries if one stops working well, but at this point tokio seems fairly well-vetted, and there are plenty of other parts of a typical stack that require some degree of lock-in: which database you choose, which web framework you build on, which cloud provider you interface with, etc., so I don't see choosing a specific async runtime as a deal-breaker. Could you elaborate on why you do?

reacharavindh

Just a bystander with a curious question..

Is it possible to avoid async with Rust when you use most common 3rd party libraries? such as ones to make API requests, database connectors, date/time, logging, deal with special kind of files etc.? or are we talking "the burden is on the user to set feature flags and carefully choose which crates they import into their projects"?

Is it possible to set up the Rust toolchain to not allow async in a project at all?

Animats

> Is it possible to set up the Rust toolchain to not allow async in a project at all?

It's getting hard.

> Tokio's roots run deep within the ecosystem and it feels like for better or worse we're stuck with it.

Tokio has become a tentacle monster that is suffocating Rust.

Async is fine for webcrap, not good for embedded, and all wrong for multi-threaded game dev. The trouble is, the web back end industry is bigger than the other applications, and is driving Rust towards async. Partly because that's what the Javascript crowd knows.

Personally, I wish the web crowd would use Go. The libraries are better; the goroutine model, which is sort of like async but can block, is better for that; and garbage collection simplifies things. Rust is for hard problems that need to be engineered, where you need more safety than C++ but still need that level of control.

Grimburger

The one web framework that took it slow on async adoption got absolutely pilloried for it.

There's very much a shiny new thing problem in the rust ecosystem.

devit

You can use `block_on` from the futures-lite crate (or from other crates) to synchronously call async functions.

Not using async or async crates is not recommended since most new or updated high quality crates now use async.

MuffinFlavored

> such as ones to make API requests,

instead of reqwest you use ureq crate

> database connectors

can't answer at this time

> date/time

chrono crate has nothing to do with async

> logging

log crate with env_logger crate has nothing to do with async. pushing to something like elasticsearch instead of letting filebeat scrape your stdout is a different story

> deal with special kind of files etc.?

std::fs came first; the async stuff on top that recreates it in an async fashion came later. I'm pretty sure if you are dealing with a big file you can use std::fs with a buffered "stream reader", basically.

insanitybit

I write a shitload of Rust and I think the situation is pretty sane. There are a few warts but the way people talk about it is insane - it's frankly not that bad at all and, mostly, quite good and easy to get started with.

I had already replied to this article over on lobste.rs

https://lobste.rs/s/iovz9o/state_async_rust

The tl;dr is that I think this entire async concern stuff is ridiculously overblown. I suspect the vast majority of Rust devs, like myself, not only accept the current state as "fine" (couple things to work on) but are very happy with the decisions tokio made.

drpossum

I use Rust as a case study of what happens when you don't meet a need for users because of indecision and inflexibility. I've generally been disappointed by the Rust community because of that inflexibility and the unfortunate infighting that spills out. It's just harmful to success.

j1elo

Not managing needs for lots of users because of indecision and/or inflexibility has always been par for the course in Golang, as they wouldn't and won't introduce greatly requested features without years of careful study and design. And none of that has resulted in an adoption problem (but it indeed did result in a lot of whining). Actually, despite this extremely slow pace of introducing popular features, Go seems to be in very good shape.

geodel

One big difference is Go is primarily driven by Google devs, and all the heavy-duty work, once agreed upon, is implemented to the last detail by the Google team. Rust is driven by volunteers for the most part, so any carefully deliberated and designed thing won't amount to much if implementers are busy, uninterested, or just want to work on other fun stuff and leave things halfway done.

ardfard

Isn't the async situation in Rust because the designers wanted Rust to be unopinionated and flexible? i.e., you can choose not to have a runtime in your app, or to use a runtime that fits your particular needs.

Ar-Curunir

What need does Rust not serve? You don't have to use async if you don't want to, and for most use cases Tokio suffices. The number of people who hit edge cases with using tokio with other libraries is small.

thinkharderdev

Is it harmful to success, though? It certainly seems like Rust has been wildly successful. Maybe the async fragmentation will change that, but I don't see any evidence of that so far.

assbuttbuttass

> We should have an official runtime, officially managed, and guided by the same thoughts that guide the rest of the language.

Agreed, it feels like we're in a worst-of-both-worlds situation. On one hand, tokio is relied upon by thousands of crates, and is very opinionated, meaning it's hard to innovate in the async space. On the other hand, tokio isn't a real standard, so we still get ecosystem fragmentation.

MrBuddyCasino

> An inconvenient truth about async Rust is that libraries still need to be written against individual runtimes.

If there were a common standard, how do you resolve the additional need for Send/Sync + 'static for multithreaded executors?

thinkharderdev

You don't. Anything that can be sent between threads needs to be `Send` and anything shared between threads needs to be `Sync`. This is a really important invariant that the Rust compiler provides.

yencabulator

Spec one API for singlethreaded and another for multithreaded executors.

dgb23

A very informative article, that brings up important pain points and problems.

I didn't agree with this sentiment though:

> In a recent benchmark, async Rust was 2x faster than threads, but the absolute difference was only 10ms per request. To put this into perspective, this about as long as PHP takes to start. In other words, the difference is negligible for most applications.

This statement is an ugly thorn that sticks out of the otherwise well written and reasoned article. It hurts me deep on the inside when I read stuff like this.

kieroda

I agree, saying that 10ms PER REQUEST is negligible is insane. If he actually read the benchmark that was referenced, he probably didn't mean what he wrote there: the benchmark measured a 10ms difference in processing an unspecified fixed number of requests from ~100 connected clients (the benchmark article isn't actually very good, and I don't care enough to dive into the github and find out what was measured).

mre

Author here. The benchmark part could be clearer; I acknowledge that.

Interestingly, when working with a limited number of threads, the thread approach is actually faster in that benchmark. So in practical applications, the differences are marginal and likely lean towards threads.

But even if this weren't the case, context matters. A 10ms discrepancy in a web request might be acceptable. However, in a high-performance networking application - which, let's be honest, isn't a common project for most - it could be significant.

Matthias247

If you measured pure latency for a single request (HTTP, RPC, whatever), the latency difference between any async or non-async implementation should be microseconds at most, never milliseconds. If it's more, then something in the implementation is off. And as you mentioned, threads might even be faster, because there is no need to switch between threads (like in a multithreaded async runtime) and no need for additional syscalls (just read, not epoll_wait plus read).

async runtimes can just perform better at scale or reduce the amount of resources at scale. Where "at scale" means a concurrency level of >= 10k in the last benchmarks I did on this.

zamalek

Concurrent is rarely faster than parallel, across almost any language that supports it. If you know that you don't need obscene scalability (1000 connections is pushing the edge of what's reasonable with parallelism) then stick with parallelism. If you overuse parallelism then expect your entire system (OS and all) to grind to a halt through context switching.

monocasa

You'd be surprised. 10k threads is more than manageable on Linux.

benreesman

Just totally depends. I’ve worked on systems that had to be wire-to-wire in ~5 mikes at the p99.9, and that’s crazy slow compared to the HFT assassins who are rumored to be under 100ns these days.

If you’re in single-digit mikes at the tail you’re not fucking around with someone’s green threads, and at 100ns you’re in an FPGA or even ASIC.

To serve a web page? 10ms, eh, I’d rather not spill on purpose but it’s a very, very rare professional CS:GO player who can tell. If my code is simpler and cheaper to maintain and more fun? Maybe I pay it.

What I don’t want is to ‘static bound shit to burn millis. Burning millis should buy me a beautiful sunset and a drink with an umbrella in it.

imtringued

10ms is one hundred requests per second. Get ten users using your site at the same time and you are in trouble, because a single link does more than load just one request.

Matthias247

That's assuming no concurrency, which isn't applicable. Every public CDN will have around 10ms latency per request (because it takes time to load data from disk, fetch from upstream servers, apply WAF rules, etc). But they still handle 5 digits of requests per second.

Latency is not directly related to throughput.

antonvs

> async Rust was 2x faster than threads

This phrase alone is utterly incoherent.

junon

Not sure why you're downvoted. This is indeed nonsense. I love async Rust, but I don't understand why we have to revisit this topic every few months as if there's some dire hangup.

Pannoniae

Well, one clock cycle is measured in picoseconds. That means, even if we assume 1 cycle takes a full nanosecond, 10ms gives you 10 million CPU cycles to do something with.

That's a lot.

jerf

Your scale is off. At the moment it is still better to think of clock cycles in nanoseconds; a cycle is still a few tenths of a nanosecond. 1 cycle per nanosecond is 1GHz, so a modern processor runs 2-4 cycles per nanosecond, in which it can do a lot, but nowhere near 10 million cycles.

Pannoniae

My scale wasn't off, but I agree, it was quite misleading. When I said measured in picoseconds, I was thinking about 300-500 picoseconds roughly.

However, CPUs can also execute more than one instruction per cycle, so effectively you can have more instructions than cycles.

hu3

> 10ms... this about as long as PHP takes to start.

About that. Modern PHP in an event loop with Swoole has response times of <1ms.

So 10ms is some legacy Apache/nginx mod_php/php-fpm number.

And even traditional php-fpm+Linux response times are around 2-5ms depending on configuration.

samsquire

Thank you for this article.

The Rust and tokio folks are working on difficult and complex problems, I appreciate and thank them for the work they're doing to improve server and desktop app performance everywhere. We all have multicore machines and it would be great if we could use more than 1/8 or 1/12 (or whatever high thread count of your beefy servers) of our hardware.

Rust's multithreaded memory management (and Arc and so on)* makes me uncomfortable because of a key lesson I've learned: you cannot scale a program by adding threads and expect it to accelerate mutating access to the SAME memory location. Single-threaded memory mutation performance is a fixed, known quantity. Adding threads that contend for the same memory location makes throughput and latency to that location worse than single-threaded speeds, because you need mutexes or a lock-free algorithm to communicate safely.

To accelerate data fanout or storage (writing to memory from multiple threads) you need a shared nothing architecture or sharding.

This means that when you reach for threads (and I'm guessing you're reaching for threads for acceleration and performance) you need to design your data structures not to share memory locations. You need to shard your data.

EDIT: I originally said Sync + Send.

nu11ptr

> We all have multicore machines and it would be great if we could use more than 1/8 or 1/12 (or whatever high thread count of your beefy servers) of our hardware.

It is probably important here to realize that async solves concurrency, not parallelism. You can use async with a single threaded runtime for I/O concurrency and mix that with threads for computational parallelism for long running jobs.

That said, there may be some benefit to a multi-threaded runtime for the typical I/O-bound app (after working around lifetime limitations by adding Send/Sync to data structures). This is because I/O-bound programs and those requiring computation are not mutually exclusive; there is always some amount of computation going on, so there may still be some benefit. I doubt a synthetic benchmark would answer this, as those typically don't measure any actual work performed, just "requests/sec".

bkolobara

> It is probably important here to realize that async solves concurrency, not parallelism. You can use async with a single threaded runtime for I/O concurrency and mix that with threads for computational parallelism for long running jobs.

In my experience, it's impossible to mix threads and async tasks. They can't communicate or share state. Threads need locks, while async tasks require a wake mechanism. If you just stick to unbounded channels that don't block on send, you can get far, but in 99% of cases you will need to decide upfront on a specific approach.

thinkharderdev

This has not been my experience at all. Delegating compute-intensive tasks to rayon inside a tokio runtime is not particularly hard (assuming you can pipeline things to separate IO and compute effectively).

A pattern that has worked quite well for me is to use

```
struct ComputeTask {
    some_state: SomeState,
    completion: tokio::sync::oneshot::Sender<Output>,
}

impl ComputeTask {
    fn run(self) {
        // ... do your compute-intensive stuff ...

        let _ = self.completion.send(output);
    }
}

async fn do_stuff() -> Result<Output> {
    let (tx, rx) = tokio::sync::oneshot::channel();
    let task = ComputeTask { some_state: /* ... */, completion: tx };

    rayon::spawn(move || task.run());

    Ok(rx.await?)
}
```

tcfhgj

Shared mutable memory isn't necessarily a problem, because it is still unknown how often access to that memory is required.

E.g. one thread that spends perhaps 1% of its time on state mutation vs. 10 threads each spending 3% of their time on state mutation: you have lower efficiency per thread, but still higher efficiency overall.

samsquire

This reminds me of the whitepaper Scalability! But at what cost?

http://www.frankmcsherry.org/assets/COST.pdf

Which I think is about how single threaded programs are faster than scalable but slow multithreaded systems.

I think you might be right, and that's why there is a read/write lock (RwLock), so a single writer takes turns with concurrent readers.

If you have a counter or a data structure you're mutating in every request then you'll hit lock contention.

eximius

If you're just using Arc<T> without any other parallelism primitives, then it's immutable and the cores can all read without blocking. The only thing it does is reference counting to know when to Drop.

Blindly using Arc<Mutex<T>> without considering access patterns is a software architecture problem, not a problem with Arc or Mutex.

kzrdude

Send + Sync is maybe not enough but wouldn't it be possible to say: as long as the memory location is read-only you can parallelize access to it. Send + Sync helps pass the read-only data through without synchronization to all threads, while the rest of the Send and exclusive mutability system flags the tricky points for you.

I can see that Send/Sync by itself does not tell you if the data is read only or just internally synchronizing mutation.

galangalalgol

I still don't understand why async is faster. Sharding data can be as simple as a buffer per thread in a thread pool to catch incoming data. With async, don't you have to allocate that input buffer each time? That seems hideously expensive.

secondcoming

async isn't typically 'faster' at all. It just lets you do more with fewer threads, and that has its own benefits.

async can actually _increase_ the number of syscalls your application performs.

stonemetal12

Which performance metric are you looking at for "faster"? Async is cooperative multitasking applied at a different level of abstraction. Much like OS level multitasking it adds overhead, and reduces performance in terms of latency. On the other hand it improves throughput by allowing better resource use.

> don't you have to allocate that input buffer each time?

Have to? No, you could pre-allocate and reuse buffers. It is less straightforward than the buffer-per-thread strategy, but possible.

samsquire

I enjoyed this article from my ex-colleague Cal Paterson. It's about Python async not being faster:

https://calpaterson.com/async-python-is-not-faster.html

I think the idea is that while you're blocked waiting for I/O in one task, you can serve a different task, potentially from a different user. Coroutines, green threads, communicating sequential processes as in Go or Occam.

jerf

Pure Python is a very slow language compared to Rust, with significant differences in orders of magnitudes of expenses. I would not expect information about Python performance to be particularly relevant to Rust without further evidence directly from Rust.

the_mitsuhiko

> By doing so, one would set up a multi-threaded runtime which mandates that types are Send and 'static and makes it necessary to use synchronization primitives such as Arc and Mutex for all but the most trivial applications.

That is a very weird argument to make. Tokio has very convenient APIs (LocalSet + spawn_local) for spawning non Send futures that you temporarily await (which really is the only useful thing non Send futures can do).

If anything tokio significantly improved the user experience of async in Rust in general because it promoted Send futures.

thinkharderdev

Where it has tripped me up in the past, it has been because of the way the compiler rewrites async code and you implicitly capture scope at await points. So your code doesn't compile because "future is not Send" and it's not immediately obvious why. And then it turns out that 100 lines earlier you had a parking_lot::Mutex local variable which got captured in scope. So you need to refactor a bit to make sure that local variable is out of scope at your await point.

mre

There's always a trade-off. By promoting Send futures, Tokio prioritizes safety and parallelism. However, this does add complexity for developers, especially newcomers. They need to be aware of the Send and 'static requirements and might have to use synchronization primitives more often. Because of this, I think promoting Send futures as the default is the wrong way to go.

LocalSet + spawn_local are great, and I wish more developers would know about them, but the Tokio tutorial [1] doesn't mention that and focuses on the multi-threaded runtime instead. AFAIK LocalSet is only mentioned in the docs [2]

[1]: https://tokio.rs/tokio/tutorial [2]: https://docs.rs/tokio/latest/tokio/task/struct.LocalSet.html

zozbot234

> LocalSet + spawn_local are great, and I wish more developers would know about them, but the Tokio tutorial doesn't mention that and focuses on the multi-threaded runtime instead.

Patches welcome: https://github.com/tokio-rs/website/

the_mitsuhiko

As someone who has been programming async Rust since the early days (when tokio did not enforce Send bounds), I've seen people build themselves into horrible patterns (myself included). Once you go deep on non-sendable futures, you can quickly end up creating something you shouldn't have. So I think it's more than sensible to tell people to do the right thing and then follow up on the exceptional case via API docs or a follow-up guide.

mre

Indeed, while the Send bound can safeguard against potential concurrency issues, it also dictates a specific architectural direction for applications. Consider a web server: with the Send bound, you might be encouraged to design it such that each incoming request is handled by potentially any thread in a thread pool. Without that bound, you might lean towards a more lightweight, single-threaded model similar to Node.js, which doesn't require Send bounds and still excels at handling I/O-bound tasks.

"Doing the right thing" can vary based on the context. For instance, in embedded systems where threads aren't available, requiring futures to be Send is unnecessary. Thankfully, the standard library does not enforce this and neither does Tokio with spawn_local, but embassy exists because there's a genuine need for async frameworks tailored to the unique constraints and requirements of embedded systems.

pavlov

> "An inconvenient truth about async Rust is that libraries still need to be written against individual runtimes."

In general Rust has tried hard to improve on the developer experience of C++ by providing more safety in the language and better defaults in the standard library. So it's interesting that both languages have now ended up in a similar place for async.

(C++20 coroutines finally enable sensible async libraries, but code written against a higher-level library isn't easily portable to another one even though both are using the low-level language coro primitives.)

> "Freely after Stroustrup: Inside Rust, there is a smaller, simpler language that is waiting to get out. It is this language that most Rust code should be written in."

Maybe there's a Meta-Stroustrup's Law in effect:

"Every successful language eventually becomes one which contains a smaller, simpler language struggling to get out."

It happened to C++ and Java and JavaScript, now Rust seems to be reaching that point.

Ygg2

In practice it's not hard to make your app support async and sync simultaneously. quick-xml does it via macros, which look very similar to keyword generics.

Edit:

> Maybe there's a Meta-Stroustrup's Law in effect:

> "Every successful language eventually becomes one which contains a smaller, simpler language struggling to get out."

Corollary: every simple subset of a language is missing features dearly needed by someone else.

soerxpso

> Corollary: every simple subset of language contains missing features dearly needed by someone else.

This could even be said about larger languages, though. If Rust committed to pleasing everyone, it would have an optional garbage collector, lifetime annotations would be optional, and there would be an interpreted runtime available as an alternative to the compiler. Those are features that some people do dearly need for some tasks. There's a point where you have to stop and put limits on what the language is actually for and what it's not. It seems to me that much of the push to add async/await in the first place was from people who really should have been using Go, Java, or Node, but wanted Rust to be their "everything language." It's okay to say, "This language is for writing performant systems applications. It's not a fullstack language for writing a webapp."

0xDEADFED5

quick-xml is great

est31

Regarding the one runtime point, I want to counter that it is also advantageous to not hardcode one runtime in std. This allows one to use different runtimes on webassembly. This has bitten the official async go implementation for example: https://news.ycombinator.com/item?id=37501552

jeffparsons

From my occasional skimming of WebAssembly meeting minutes, I'd say that Wasm will likely grow the features required for Go to perform well. There's plenty of interest in stack switching, coroutines, etc.

I work a lot more in Rust than I do in Go, but I think each language made the trade-off that made the most sense for it.

est31

> I'd say that Wasm will likely grow the features required for Go

Wasm has been really great at shipping the MVP, but they are pretty slow about shipping the many features that build on it. In general, this makes sense, as the system can't be changed once it's stable. But it also means that a lot of things are still in limbo and will probably remain there for the foreseeable future.

> I think each language made the trade-off that made most sense for that language.

Definitely! Go is meant for backend application logic, where you can provision tons of RAM and ignore the cost of GC. Rust targets a larger domain: less application logic in particular, but the whole range of systems programming, including applications but also low-level libraries, places without an OS, etc. I think if Rust really wants to be truly low-level, then not shipping an async runtime is a must, even if the std crate is present. Providing features for libraries to support multiple runtimes? Sure. But don't apply solutions that (mostly) work for Go to Rust's problem domain.

jeffparsons

> Wasm has been really great at shipping the MVP, but they are pretty slow about shipping the many features that build on it. In general, this makes sense, as the system can't be changed once it's stable. But it also means that a lot of things are still in limbo and will probably remain there for the foreseeable future.

I concede that post-MVP development has been slow, but I am also optimistic for the near future based on recent progress. Are there particular things in limbo that you're especially interested in or concerned about?

yencabulator

The linked message is just saying that implementing (green) threads on top of a non-threaded WASM spec has overhead. It doesn't really have anything to do with async, or the multiplicity of async runtimes, as such.

K0nserv

I agree with much of this post. I managed to avoid async Rust for three years of writing it. I do think it's the least beautiful part of Rust. My journey has been one of reaching for Arc and Mutexes and then running into problems with that approach. Relying more on channels and spawned tasks that own state i.e. Actors[0] has been a good improvement.

I do think the post is a bit unfair in this sense: it rightly identifies the problems of Send and 'static, but it also presents Arc and Mutex as *the* solutions for shared state in async, while suggesting channels for the threads example.

Function colouring and the Send/'static bounds are the significant hurdles with async Rust; shared state is something that needs to be resolved whether you use threads or async.

0: https://ryhl.io/blog/actors-with-tokio/

jedisct1

The few projects I wrote using async Rust eventually became unmaintainable. And when things go wrong, stack traces involving Futures are impossible to understand.

This is where Go really shines. Goroutines may not be "right" or "good", but they are very intuitive, and maintainable. Performance isn't bad either.

In Rust, there's the May project, which is very similar and really deserves more attention.

Patrickmi

Here's the problem with languages like Rust: from the very beginning, Rust's goal has been to give you full control while offering safety and performance. Want libraries to manage some of that authority? No problem. But when everyone has to agree on one thing or feature while the language gives the programmer full control, you get factions in the ecosystem: third party vs. the Rust core team, and so on. A single unmaintained library can deal a massive blow to the ecosystem, unlike Go, where "the language makes the decision for you". It gets worse because Rust isn't a domain-specific language (it covers far more ground than Python or Java): even though it's a systems language, it pulls in different domain ideologies, which in turn spawn massive third-party libraries to serve them, and the ecosystem takes another blow each time one of those libraries goes unmaintained.

This cycle will repeat itself over and over again.

anyfactor

Fantastic article. My experience with Rust as an enthusiast was that tutorials tend to introduce Tokio very early on, and it kinda makes Rust feel more difficult than it is. Rust's async shouldn't be taught; it should rather be discovered.

The author mentions

> If async is truly indispensable, consider isolating your async code from the rest of your application

I think ALL async code should be generally isolated.

Are there languages that give foundational priority to asynchronous code yet support good intermingling of sync and async in the same codebase? I may be missing the point about isolation, but the mix of sync and async code gets bad really quickly.

mre

Go doesn't have native async support per se, but its approach to concurrency with goroutines and channels simplifies things considerably. Concurrent code reads like ordinary synchronous code, eliminating the need to isolate goroutines.

Rust, on the other hand, took a different route. Green threads don't integrate smoothly with code interfacing through FFI. Moreover, Rust's async model doesn't require a garbage collector.

yencabulator

Rust's async model pretty much requires Arc, and reference counting is a simple form of garbage collection...

0xDEF

Which Rust tutorial introduces Tokio early on?

anyfactor

With HTTP requests, you will come across reqwest and Tokio. This comment [0] introduced me to ureq, and the commenter helped me explore Rust better.

I understand that making async HTTP requests is a fundamental concept. However, I question why we should recommend a more complex solution when there are simpler alternatives that still leverage Rust's capabilities.

[0] https://lobste.rs/s/2kvgav/learning_learn_rust#c_udauvn

rwaksmunski

All I want is a basic tiny single-threaded async runtime in std. No need for Send & 'static on everything. A modern single core is more than plenty for my workloads. Need more horsepower? Sure, grab Tokio. I'd be fine with a single async thread for IO and Rayon threadpools for heavy compute. No need to overcomplicate stuff.

eximius

One of the common comments in this thread is "can't we just make standard interfaces in std?" "well, no, sync+send is hard"

I can't help but wonder if two sets of interfaces are needed: a set of standard single-threaded traits and a set of multi-threaded traits. Would that be sufficient?

As an aside, what workloads require true multithreaded reactors as opposed to a runtime which uses multiple singlethreaded reactors?

yencabulator

> As an aside, what workloads require true multithreaded reactors as opposed to a runtime which uses multiple singlethreaded reactors?

For example, DataFusion spreads query processing over multiple cores by having the data flow be a "streaming DAG of record batches" (or something like that), as in futures::Stream.

https://docs.rs/datafusion/latest/datafusion/

https://docs.rs/datafusion/latest/datafusion/execution/trait...

klysm

I don’t write rust in any sort of large capacity, but async in rust gives me this sinking feeling that the project took a big misstep that’s going to either be permanently bad, or very painful to fix.

davidhyde

I'm aware that the issues are tough to work through, but it's a real shame that async traits remain in nightly. On top of this, being able to reference a set of reasonable traits from a popular library not tied to a runtime would make library writing less siloed by runtime. For example, a library author would not have to expose their own async Read and Write traits; consumers of the library could then use any runtime implementing those traits, without doing the plumbing themselves.

est31

async traits are in the process of being stabilized: https://github.com/rust-lang/rust/pull/115822

Also, impl trait projections (ability to use Self::Foo associated types in async functions in traits): https://github.com/rust-lang/rust/pull/115659

The State of Async Rust: Runtimes - Hacker News