Notes on structured concurrency, or: Go statement considered harmful (2018) (vorpus.org)
157 points by stopachka on July 2, 2022 | hide | past | favorite | 122 comments


Can the discussions here try to stay away from Go bashing? This post is not about Go. It's about structured concurrency vs. background tasks.

There are many interesting discussions one can have about the latter, but the former turns into toxicity.

With that said: the Rust teams are very interested in structured concurrency at the moment. Rust 1.63 is going to get scoped threads, which is structured concurrency for threading. Others and I have also been looking into structured async, though it's not as easy to implement.

I personally love the concept and I hope it takes off. You rarely need long-running background tasks. If you do, you probably want a daemon running alongside your main application, built with structured primitives, and then dispatch work to it instead. This really simplifies the mental model once you get used to it.
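The "daemon plus dispatch" idea above can be sketched in Go. This is a minimal illustration, not an API from any particular library; the `doubleAll` helper is invented for the example. A single long-lived worker is owned by the enclosing scope, fed over a channel, and guaranteed to have exited before the scope returns:

```go
package main

import (
	"fmt"
	"sync"
)

// doubleAll dispatches work to one long-lived worker and returns
// only after that worker has shut down.
func doubleAll(inputs []int) []int {
	jobs := make(chan int)
	results := make(chan int, len(inputs))

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // the "daemon", scoped to this function
		defer wg.Done()
		for j := range jobs {
			results <- j * 2
		}
	}()

	for _, j := range inputs {
		jobs <- j // dispatch work instead of spawning ad-hoc tasks
	}
	close(jobs) // tell the worker to shut down
	wg.Wait()   // structured boundary: the worker has exited here
	close(results)

	var out []int
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Println(doubleAll([]int{1, 2, 3})) // [2 4 6]
}
```

Because the worker is joined before `doubleAll` returns, no concurrency leaks out of the function's scope.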


Structured async sounds like a very interesting idea. This is definitely a space where it feels, intuitively, as though there should be a better way to express what we mean and get roughly the same machine code but better human understanding, for both the author and future maintenance programmers. If somebody nails this, it could be as big a deal as the loop constructs in modern programming.


> I personally love the concept and I hope it takes off

I have used structured concurrency with Kotlin and you are right, it is absolutely easier to reason about concurrent code that way.

https://elizarov.medium.com/structured-concurrency-722d765aa...


Structured programming can guarantee that the control flow graph has constant treewidth[0], which enables the use of parameterized algorithms for efficient static analysis. I wonder whether structured concurrency can impose some additional constraints on the ordering of tasks that make it easier to analyze, e.g. for linearizability or other properties.

[0]: Mikkel Thorup. 1998. All structured programs have small tree width and good register allocation


The OpenJDK team is also pursuing structured concurrency, so we should shortly have multiple interpretations to compare. Exciting stuff for folks who write highly concurrent software.


But should they be writing highly concurrent software, without thinking bloody hard first I mean?

I can see this being the next Big Data or NoSQL 'must have' bandwagon.


> But should they be writing highly concurrent software, without thinking bloody hard first I mean?

Yes, by necessity. We're 15 years past the end of the race for frequency, and the end of Moore's law is getting ever closer. Concurrency and parallelism are becoming table stakes for both responsiveness and performance. This means making them reliably usable and composable is getting more and more important.


@ohgodplsno (edit, and now @jpgvm's) have good answers grounded in a specific scenario which really makes their point well, but yours worries me.

>> But should they be writing highly concurrent software, without thinking bloody hard first I mean?

> Yes, by necessity.

So we should not think about it first, just do it?

And not consider whether there are better solutions such as checking we're using the right algos or cache-aligning our accesses or using a probabilistic filter before hitting the DB or..

Edit: no offense intended; the original question was meant as a rhetorical device to warn against dumbness and bandwagoning. I am sure you too would not plough in unthinkingly, but I'm afraid our industry's reaction is too often to do just that, and it's such an expensive mistake, repeated over and over. It's a thoughtless jumping ahead that will lead to slower software, not faster. And buggier.


I think you might be confusing parallelism with concurrency somewhat. Writing highly parallel software is only important in certain domains. However, concurrency is pervasive due to I/O; you will struggle to find a domain that does no I/O (especially one that doesn't benefit from parallelism).

Structured concurrency is about managing concurrent operations with less cognitive overhead, the most common case being handling things like cancellation in a reasonable way. These problems are relevant everywhere I/O is relevant, which I would wager is most programs people write and interact (see, I/O!) with on a daily basis.

Should you be using concurrency without thinking? Well, of course not, but if you aren't using it at all, there is a very good chance you are "doing it wrong", where wrong is some approximation of providing a poor user experience or writing software that makes inefficient use of resources and/or wall-clock time.

On the topic of highly parallel software, however, I do mostly agree: there are a few things that benefit from highly parallel architectures, but generally it's going to make whatever you are doing way harder. Some exceptions are forms of limited parallelism like making use of SIMD in tight loops, which is great because it doesn't require you to make your program logic parallel; data operations just execute in parallel on multiple inputs on a single thread.

Anyway, my point is that almost all programs need some form of concurrency while most don't need parallelism. Concurrency is hard and has a multitude of competing solutions (async/await, threading with monitors + atomics + friends, CSP), while parallelism is generally even more difficult but has a much more limited solution space, whilst having some overlap in terms of abstraction layers.


> I think you might be confusing parallelism with concurrency

Very good point, I hadn't got myself straight on that, thanks


I do in fact plow unthinkingly into using concurrency quite frequently, and it works out quite well. I just unthinkingly use techniques like "just throw it in a dedicated thread", "put each stage in a thread, move objects between threads via channels", and "Pin one thread each to N cores, distribute incoming events across threads" whenever they seem like they might make a thing good, and they keep working out really great pretty much every time.

If you're writing C, you're going to have a bad time, but we've built some really great tools with modern type systems that make it far easier to treat concurrency as a thing that you can just rely on being able to safely use.

When you're using a language with misuse-resistant core primitives, like structured concurrency, and like Rust's Ownership, Mutex, and Send/Sync traits, it really is a meaningfully different programming experience. You make small use of concurrency all the time, because you just know by default without investing any time at all to check that you haven't made any kind of dumb mistakes.

When you use concurrency all the time, and get instant feedback from the compiler describing the precise data dependency that would make your idea a dumb choice, you get a ton of real direct feedback to learn about how to use concurrency correctly, and what designs it's a good fit for.

I agree that concurrency isn't a replacement for checking for algorithmic improvements, profiling and tuning your memory access patterns, using probabilistic filters, caching, etc. But just like how you can just unthinkingly drop a probabilistic filter before hitting a DB, I think you can and should be able to just unthinkingly spread a bunch of work out across a bunch of cores. This should be a simple, obvious, normal thing that people do by default whenever they care at all about performance, and it can be with good safe tools.
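The "put each stage in a thread, move objects between threads via channels" technique mentioned above maps directly onto Go's pipeline idiom. A minimal sketch (the `generate`/`square` stage names are just for illustration):

```go
package main

import "fmt"

// generate is the first pipeline stage: it emits the inputs and
// closes its output channel when done.
func generate(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square is the second stage: it consumes the previous stage's
// channel and closes its own output, so the consumer knows when to stop.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

func main() {
	for n := range square(generate(1, 2, 3)) {
		fmt.Println(n) // 1, then 4, then 9
	}
}
```

Each stage owns exactly one goroutine, and closing channels propagates shutdown downstream, which is what makes the pattern easy to reach for "unthinkingly".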


I’m excited about concurrent Kubernetes clusters to compile our JavaScript frameworks to run on virtualized mainframes and achieve roughly equivalent performance and functionality of a pascal program running on an 8086 in 1985.


Serious note: I worked on interfaces in Pascal on an 80286 in that period, and the UI performance was crisp.


> And not consider whether there are better solutions such as checking we're using the right algos or cache-aligning our accesses or using a probabilistic filter before hitting the DB or..

It's mostly for I/O-intensive workloads, and for a nicer debugging experience compared to callback hell. Nothing to do with NoSQL, or the right algos, or cache-aligning, when you just need to do 10M http requests/db-inserts/rpcs as fast/efficiently/with as nice a dev experience as possible.


> So we should not think about it first, just do it?

It is much easier to bake concurrency and parallelism in from the start than to retrofit them into an existing sequential project, even more so as it avoids sub-optimal patterns.

> And not consider whether there are better solutions such as checking we're using the right algos or cache-aligning our accesses or using a probabilistic filter before hitting the DB or..

Why are you declaring that those are incompatible with concurrent programs, exactly?


Well I don't know about most people but to speak for myself I mostly write high performance network servers, databases and queues and in my world concurrency (if not parallelism) is strictly necessary. For me it's also not a recent development or a fad, it's been this way my entire ~15ish year career.

I imagine folks writing Web software, UIs and heavy desktop applications will also benefit from these developments but those areas are out of my core expertise so I can't speak to exactly how useful structured concurrency will be for them however I can see very clear applications for my domain.


Yours is definitely not the case I was disputing.


If you do any kind of UI work, you are already considering carefully what to run on the UI thread (unless you're using JS but then you brought this upon yourself). Additionally, many reactive patterns require you to collect a flow and this blocks the whole thread. Launching a coroutine on a background dispatcher is safe and simple.

There is a middle ground before highly concurrent, and it's "I don't really care what you do, but please use the cores I have and don't block the main thread, thank you".


As elsewhere, yours is definitely not the case I was disputing but as UIs are not my area in the way you are clearly familiar with them, a question: What kind of application are you doing your UI for that can use all the cores?


Yep. The other response has it pretty right. I don't really care that I use _all_ the cores. Ideally I have one or two dedicated to I/O and some for general computation, but most importantly I need things to get the fuck out of the main thread as fast as possible. The user clicks somewhere; offload any computation to a coroutine in the background. If you don't do this, your UI will die. Even small, 50ms calculations are dreadful if run on the UI thread. That means 3 dropped frames.

Structured concurrency means that I can both not really care where it goes and be generous with coroutines (I have only screwed up once, and that was by having hundreds of threads deadlocking on database access. Otherwise, you most likely have... 50 running at worst?). Structured concurrency also means that the moment I drop out of a screen, every computation launched in the context of that screen (and not at a higher level) is dropped immediately, meaning it runs only when I need it to.


What kind of _small_ computation are you doing that requires 50 whole milliseconds?


You'd be surprised how easy it is to find junior devs iterating over a whole list to find something by its ID instead of using a hashmap.

Actually not just juniors I also do it sometimes but shhh
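The classic fix for that linear scan is to build an index once. A small Go sketch (the `User` type and both helpers are invented for the example):

```go
package main

import "fmt"

type User struct {
	ID   int
	Name string
}

// findLinear is the O(n) scan the comment describes: fine for tiny
// lists, dreadful inside a hot loop on the UI thread.
func findLinear(users []User, id int) string {
	for _, u := range users {
		if u.ID == id {
			return u.Name
		}
	}
	return ""
}

// indexByID pays the O(n) cost once, turning every later lookup
// into an O(1) map access.
func indexByID(users []User) map[int]string {
	idx := make(map[int]string, len(users))
	for _, u := range users {
		idx[u.ID] = u.Name
	}
	return idx
}

func main() {
	users := []User{{1, "ada"}, {2, "grace"}}
	fmt.Println(findLinear(users, 2)) // grace
	fmt.Println(indexByID(users)[2])  // grace
}
```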


It's not a matter of using all the cores. It's a matter of not oversubscribing the one core that is receiving user input and updating the UI in response. UI frameworks are still (for the most part) extremely single-threaded, and pretty much leave it up to you to offload all your computation - or murder your UI responsiveness.


Swift has task-based structured concurrency since last year, and it looks really nice. https://developer.apple.com/wwdc21/10134


Very interesting. Thanks for sharing!


Interesting article, which gives some good ideas on how to structure concurrent programs with less shooting yourself in the foot :). A bit long-winded until it comes to the actual core concept being presented: spawn concurrent routines within a block which doesn't exit until the last routine has exited. This is a good concept and can make a lot of code clearer. It prevents some data races, as you can reason that after the block no concurrent operations are still ongoing, though of course the block could lock up by itself if one of the routines doesn't finish. This is certainly an interesting concept I might try out myself more specifically. It is also important to know, especially as the "go" statement of the language Go is put into the headline, that this very language already supports nursery-like structures; it just doesn't have the syntactic sugar of the author's Python extension.

It is called a wait group. See for an example here: https://gobyexample.com/waitgroups

So, except for the syntactic sugar around it, nurseries compare to creating a wait group in a block, spawning goroutines which each call Done() on the wait group on exit, and at the end of the block calling Wait() to erect the same boundary nurseries do. Of course you can also pass the waitgroup object to any goroutine created inside other goroutines. This is a very common pattern in Go, but indeed, it probably should be presented more clearly and up front in tutorials about goroutines.

So for that, I will keep the article around; it shows the concept nicely. Perhaps I might do a pure Go version of it, which then shows the Go implementation of nurseries. It might be nice to add to the original article that not only is the presented Python library a solution, but there is also a native Go way of achieving it, as the article uses "go" as the negative example :p
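The waitgroup pattern described above, roughly in the style of the gobyexample page linked earlier, can be sketched like this (the `sumSquares` wrapper is invented for the example):

```go
package main

import (
	"fmt"
	"sync"
)

// sumSquares computes the squares of the inputs concurrently.
// The WaitGroup acts as the block boundary: Wait() does not return
// until every goroutine has called Done().
func sumSquares(inputs []int) []int {
	results := make([]int, len(inputs))
	var wg sync.WaitGroup
	for i, n := range inputs {
		wg.Add(1)
		go func(i, n int) {
			defer wg.Done() // signal completion even on panic
			results[i] = n * n
		}(i, n)
	}
	wg.Wait() // no goroutine outlives this point
	return results
}

func main() {
	fmt.Println(sumSquares([]int{1, 2, 3})) // [1 4 9]
}
```

Each goroutine writes to a distinct index, so no mutex is needed; the Wait() at the end is what lets callers reason that no concurrent writes are still in flight.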


I think you missed the point of the article. It's suggesting that goroutine-style concurrency should be completely replaced.

Much like if/for etc. have replaced goto, structured concurrency can replace goroutines. Note: goto can be used like an if or a for loop. You've just made the argument that goroutines can be used like "nurseries". You've essentially argued in favour of goto, from the article's perspective.

I want to note again that this article is not actually about go. It is the same in most languages with concurrency primitives.

The key takeaway is that having to manually manage waitgroups leaves room for mistakes or for introducing spaghetti concurrency. While you might be used to it at this point, that doesn't make it the best system for the future of concurrency.


No, I didn't miss the point of the article. I just feel it is important to note that the proposed style of concurrency is available in Go, and to point out how you would implement it if you cannot use this Python library.

I would agree it would be nice to have a similar syntactic construct in Go to enforce this pattern, though it is actually possible to basically implement it with higher-order functions.

And yes, I think goto has a place in any language, if you are aware of how you should and shouldn't use it. In most cases the typical constructs of structured programming are preferable, but not in all.


It's true that with Turing completeness it's in some sense only ever a matter of style. But that's not a very useful observation in practice.

If you really thought the arbitrary jump "has a place in any language" you would not use basically any language from the 1980s onwards, since they all neuter this feature for a good reason, as the article explains, and some of them do away with it altogether.

Even C++ - a language that has never seen a gun it won't supply loaded and pointed at your feet - does not allow you to goto labels in other functions, and will run the necessary constructors/destructors when you enter/leave scope with a goto statement.


> No, I didn't miss the point of the article. I just feel, it is important to note that the proposed style of concurency is available in go

Apart from the TFA covering a particular python lib, I'd like to point out how the Go philosophy worked before. Given:

Situation 1: "the proposed style of concurency is available in go" so let's have a free ride and we can have linters and human reviewers trying to catch all the concurrency bugs faster than contributors are inserting them. /s

Situation 2: the proposed style of concurrency is the only one available in the language.

Approach 2 would make sense.


You are missing a very important third point:

Situation 3: we have understood the value of the proposed style of concurrency and think it is the right one for most situations; consequently, a corresponding API has been created and is strongly recommended for all usages of concurrency that fall into the pattern. This doesn't require removing the primitives used to build this API from the language. No linters needed, just a bit of common sense not to use the low-level API unnecessarily.


In support of your point, this largely tracks the way the "goto statement" shakeout has gone down in a number of programming languages over the decades - "available but discouraged", as in "please do not write simple loops with if-and-goto."

Access to inline or linked assembly is also often available. That is perhaps less of a hazard, since people probably have a less easy time "convincing themselves they understand it."


But as the article points out, you can't use the goto statement as it existed before structured control flow. It's not "available but discouraged"; it's just gone, because it was a bad idea.


setjmp & longjmp still exist in C/C++. A great many latter day prog.langs also still allow assembly with its jump-oriented constructs. Lack of possibility is simply not the same as lack of use. Lack of use or even just lack of common use is all that is needed for the benefits (as long as the use is easily identified by either humans, compilers, or both).


std::longjmp is only defined if you're jumping "up" the call stack (the function you're jumping back into was still executing), and it would actually be safe to just abandon all the objects in your scope and containing scopes that will no longer be destroyed properly.

In all other cases it's Undefined Behaviour, all bets are off.


I'm not sure what your point is other than to elaborate constraints upon my examples. I never said it was "safe", "well defined", or even a "good idea". Only "available" and that this is an important distinction from "actually used". Even Rust has an "unsafe" construct available. I don't really think anything of what I am saying should be contentious.

Being "weakly supported" also does not imply "strictly unavailable". You can cobble together "unstructured" jumps on top of the structured exception system as in pygoto. [1] Yes, there may be logical wiggle room in some pedantic definition of "goto" or "unstructured" or "available" exactly matching what you think this article said. It is true enough for the general question of "code clarity guidance purposes" which is, in my view, the general topic both of the article and these comments. One can still meaningfully say, "Please don't use pygoto!" even though "goto" is not, exactly, "in" the language.

Many aspects of what is clear/confusing/easy/hard are not what is "strictly possible", but what a prog.lang or even its social community shepherd users to do, often through stdlibs. [2] Exceptions were originally sold as "structured error handling". Many people seem to dislike exceptions for their remaining non-local properties. In a lot of ways exceptions are often "optional", but if a stdlib uses them heavily then opting out becomes as impractical as replacing the stdlib.

[1] https://pypi.org/project/pygoto/

[2] nibblestew.blogspot.com/2020/03/its-not-what-programming-languages-do.html


In exchange for giving up go-to we gained a lot because it turns out that programme structure enables us to successfully write more complicated programs. There's no practical difference between what you claim is "weakly supported" (if I carefully read the assembly for each build to find the correct values to plug in for a jump instruction I can use in-line assembler from a language like Rust to "go-to" somewhere else) and "strictly unavailable".

In the case of C++ and Rust they'll both tell you what you want to do is "Undefined behaviour" although Rust also ropes it off with unsafe. Doesn't work? Too bad.

Keeping the flexibility you desire so much would have been tremendously expensive. It would mean all the programs real people write are slower and harder to understand just to support some unspecified "alternative" and so, as we see, nobody did it.

And that's how this insight applies to concurrency. Modern languages are much superior for being able to assume go-to isn't a thing, and the argument says they'd be much superior for being able to assume Go statement style concurrency isn't a thing.

The difference is that (aside from you, apparently) it has been generally accepted that go-to was a bad idea, while prohibiting something like the go statement or Rust's thread::spawn is not yet seen in the same light. The contention is that if we did that, we'd be able to understand more sophisticated concurrent programs. Seems reasonable to me.


> flexibility you desire so much .. (aside from you apparently)

Not only did I never advocate for broad usage, I explicitly said I didn't (as did @_ph_) and never expressed some love for goto or even "keeping it" whatever that means given it's often practically "possible". I neither use nor advocate Go, nor Rust, nor similar concurrency primitives in Nim, nor goto. Sheesh. Weird flaming attacks on strawmen there. You should probably just be ignored. Oh well.

>no practical difference

One practical difference is that safer/easier higher-level constructs can be written more easily in terms of lower-level - which @_ph_ was specifically suggesting a few posts up subthread. I hear the Rust ecosystem has a lot of usage of "unsafe".

Making "possible" things "too hard" can force core compiler teams into providing one-size-fits-all solutions. All approaches have trade-offs. Google|MSFT|.. can fund compiler teams to do whatever. You may hire a bunch of kids out of college and not want them to have too much power, because they have yet to learn the responsibility Spider-Man picked up as a Teen Scientist.

>much superior for being able to assume go-to isn't

Compilers can usually analyze what the program is doing and observe "no go-to: do the better thing". They can even encourage users by saying "optimization only works if not doing PDQ" (like only tail calls get turned into iterations). APIs can always have caveats like "Incompatible with XYZ". If usage is rare and easy to identify (as I said) the situation is essentially the same from a clarity/expense of developing complex systems point of view, but I am just repeating myself to seemingly deaf ears.

To be more concrete, people don't use either setjmp or goto much in C++ because other higher level things are easier/they are taught not to. There are usually many unused dark corners of any prog.lang more than a few years old. They have a cost, but it isn't as monumental as you make it out to be and so removing it does not produce the major savings you imagine (relative to discouraging/making rare).

>Rust's thread::spawn not

So don't call it. Write a structured concurrency lib. If there aren't already 5 crates.

>are not yet seen in the same light

Seen by who? You? You speak for all systems programmers & professors now? The article author does? Or did 4 years ago? I would grant there are education problems on this and many other topics. Maybe you calibrate your "not seen" against a pool of underexposed people?

>be able to understand more sophisticated concurrent

Just use a higher-level API? I don't see a strong argument (either by you or in the article) that only prog.language support can "sell" this approach/keep things clean. @_ph_ suggested a lib. All I did was say history supported his side point and you started jumping all over me.

There are many abstractions you need for more sophisticated/complex systems..not just this. "Must..get..compiler..team..onboard!" jitters are usually misplaced. If one feels that way often about syntax needing to follow semantics, then I suggest Lisp or Racket or Nim or something where you don't have to wait for compiler teams.

It may be for "other Python reasons" that Smith claims you cannot get "safety" without prog.lang support, but Python is really just one lang and famously not good at safety in general. libdill [1] did this pretty manually as a C library (with, sure, probably a lot of other "C problems") back in 2016 - years before this article (which yes, mentions it in a snarky footnote). Pretty sure the ideas are all ancient. The same footnote mentions some Rust crates you could evaluate if that's what you like. Core concept systematization may well date back to Henry Ford's assembly lines (or earlier, but probably not, say, Aristotle!).

So, ok..Maybe it's catching on now in more application areas/stdlibs/a prog.lang or 3. I'd even agree that's probably a good thing. I never thought otherwise. I don't think I said anything to suggest that I did.

-----------------------

Look - I'm glad "fork-join, abstraction, and clean nesting" seem to have new fans in Smith, in @tialaramex, and in the world. To many, it may seem like "news". To many others, it is not "news" at all. Nor is the challenge of getting users to use safer, higher-level constructs when "cheats" are "always kinda possible" or maybe more sadly "what some popular intro/demo code uses". Bad examples propagating are a problem - not only in software even. There are surely interesting questions around the topic. I never challenged that, either.

Best wishes

[1] http://libdill.org/structured-concurrency.html


> Not only did I never advocate for broad usage, I explicitly said I didn't

It's not about "broad usage". You said it should be "available" and it isn't in any practical sense.

> removing it does not produce the major savings you imagine (relative to discouraging/making rare).

For setjmp/longjmp C++ already took the major savings you're thinking they won't get. Hence the Undefined Behaviour. If they wanted to not have undefined behaviour for go-to they would need to substantially re-engineer the language and that would have terrible performance. So that's not what they did.

The C++ goto keyword is completely de-fanged. Suppose you've got a loop in a scope with a complicated, expensive to destroy object. A 1960s go-to person would anticipate that if they use goto to leave the loop they are thereby skipping the expensive destruction - however in C++ it doesn't do that, the compiler performs the expensive operation to clean up properly because you are leaving scope.

> All I did was say history supported his side point and you started jumping all over me.

History does not support this claim for go-to, as I've explained. You cannot actually use the go-to feature in modern languages; some offer a de-fanged goto keyword that does some local structured control flow change instead, and many don't provide this at all. No, that Python hack is just a hack: it's not actually implementing go-to in a meaningful sense, which is why their last example hits a recursion limit instead of just looping forever as go-to would.

> Just use a higher-level API?

The article's point is that this delivers much poorer results. I actually began trying to write an example showing how hard it would be to write structured control flow as a library in FLOW-MATIC and it hurt too much as I realised function calls don't exist and I need to begin again with that perspective, FLOW-MATIC has this idea of executing a range of instructions, but the instructions don't specify that range, the "caller" does, which today seems utterly backwards. I can't do it. Nobody would do it. At scale the resulting large programs would be - as the article observes - spaghetti code. Instead new languages were developed with structured control flow.

You've mentioned that you don't "advocate" Rust but I'll further assume you don't use it or maybe don't know it well. Historically Rust provided a mechanism that let you say well, see this object X exists now, and I just made a thread, I got a thread handle, and when I leave scope that thread handle gets joined, so, logically the thread can't exist longer than object X right? And thus it's safe to use X from inside the thread knowing it exists. That is inherently unsound and was abolished before Rust 1.0 (if you're not a Rust programmer the simple explanation is Rust has no problem with you just leaking the thread handle and then it won't get joined so the thread may outlive X)

Language-level structured concurrency would let you provide this sort of guarantee and take advantage of it in your programs. Rust is getting a very small taste of this, in the form of scoped threads in 1.53 as mentioned elsewhere in the HN threads, but the article's position, and I think it's made very well, is that doing this everywhere at the language level is desirable.

That is, rather than "just don't call it" the argument is "just don't provide it" as we saw with go-to. Are they correct? I think they might be. But it was frustrating to see several people in this thread insist that actually go-to isn't gone, citing things like longjmp which are far less powerful and explicitly have Undefined Behaviour in the circumstances discussed anyway.


That should say Rust 1.63, not Rust 1.53


I never said "should". Really. All I said was descriptive not normative. I explicitly gave examples where trade-offs could shake out various ways. My strongest positive claim in this whole subthread might be "lack of common use is all that is needed for the benefits (as long as the use is easily identified by either humans, compilers, or both)." I can re-qualify that as "most benefits" if you insist. You use qualified "maybe they're right" language yourself. I don't think we're even disagreeing on the only claim I made.

In the interests of clarity, when I said "available but discouraged", I meant to refer to what you are calling "completely defanged gotos". This is the real origin of our cross-talk (and maybe any you have with others). Others do not use the term "local structured control flow" for local goto - in fact I've never heard that before. You also let "defanged" do too much work. "local goto" is by no means universally considered harmless just because it is not "as bad as FLOWMATIC". Local goto was very much in Dijkstra's 1968 _Goto Considered Harmful_, for example - the OG _Harmful_ article. (FWIW, Dijkstra's earlier 1965 work proposed structured concurrency notation with "parbegin .. parend" extensions to Algol 60.) I know people that hate "return" & "break" as well for being unstructured jumps. Human language is cooperative - you may need to adjust how you discuss local gotos.

My "largely tracks" history was intentionally a rough thing. I know (local) goto statements are becoming less common, but they're not quite like punch-card column rules in Fortran 77. D is a relatively recent (late-90s) language that kept goto. [1] V is brand new and it has it. [2] I'm sure there are many modern examples, and more still that can cobble it together like pygoto. Herculean lengths are usually not undertaken to block a pygoto/longjmp.

Since you say longjmp is "far less powerful", I suspect you have a misimpression. You can change whatever program counter is in the jump buffer and go to any (executable) address in your virtual memory that can handle the transition. One of the other things in the jump buffer is the stack pointer. Bad ideas for most can be useful in more expert hands. E.g., people built green threads libs around longjmp before SMP/later multi-core. [3] With many stacks. Any "yield point" anywhere in a tangle of global cross-green thread control flow could become a valid jump-back target. Some early JVMs used this, too, IIRC. I don't see this as particularly less powerful than FLOWMATIC's jumps. It kind of seems more powerful to me, but power can be hard to quantify. In context, it was also defined well enough to provide cooperative multi-threading for many users. That gnu lib is still "available" to use in C++ and probably many other things in 2022. Any "modern" language with a C backend or maybe just C FFI can also probably call longjmp. Sure, maybe not desirable. I'm not really advocating anything except maybe clarity of argument/language and awareness of properties.

Sorry..I do not know/track Rust well enough to use examples from its historical twists & turns. "Retain hairy low-level, but provide nicer high-level things" is a really common layering and all this "keep the go statement" subthread was about, IMO.. There are also warnings not only errors. Such layering is probably elsewhere in Rust, but I do not know and declare no "should". Difficulty of writing "while" or function call stacks/whatever in FLOWMATIC is simply not relevant to such highly prog.lang-and-feature-specific questions, since various other kinds of abstraction power have also increased since FLOWMATIC.

[1] https://dlang.org/spec/statement.html#goto-statement

[2] https://github.com/vlang/v/blob/master/doc/docs.md#goto

[3] https://www.gnu.org/software/pth/pth-manual.html


> lack of common use is all that is needed for the benefits

This is already wrong, though. To deliver the benefits, modern languages abolished the go-to feature; it's gone, as the article explains and as Dijkstra anticipated. Early on in his letter Dijkstra observes that with procedures the use of a textual index as go-to's destination won't work. I used the phrase "local structured control flow" to emphasise that the defanged goto in a language like C++, unlike this go-to idea, isn't just a jump: the compiler is going to emit all the extra work needed to alter control flow here, just as it would for break or return.

> Human language is cooperative

I have tried very hard to refer everywhere to the idea Dijkstra is talking about as go-to rather than goto for exactly this reason. The article takes pains to explain the distinction too. Co-operation is not one sided.

You cite D, which inherits the defanged goto from C but further constrains it (in D the goto can't cross an initialisation, so D's compiler needn't handle that), and then you cite V, a language whose main recent claim to fame is that it can take the game Doom written in C, transpile it to V, and then transpile that back to C. It's not a surprise to find V has lots of C warts preserved exactly as is to deliver this capability.

> cobble it together like pygoto

Pygoto is a cute hack, but it is not actually the go-to feature. Here's how pygoto "works": It takes your entire program, it snips off the lines before the one you want to "go to" and then it asks the Python interpreter to execute the resulting text with your current global state and exits with any resulting return code.

This is why, as I already mentioned, it runs into a recursion limit when you write a program which goes back on itself. If you actually had go-to this would merely be an infinite loop, but Python doesn't, hence the recursion.

> You can change whatever program counter is in the jump buffer and go to any (executable) address in your virtual memory that can handle the transition.

The longjmp feature that's actually supported is far less powerful as I explained. What you're doing here is relying on implementation details as if they were features from the language or supplied library. There's no practical difference between this strategy and just writing inline assembler to do the jump, in a complicated program it probably just won't work for a variety of reasons and nobody is interested in helping you figure out why.

> Retain hairy low-level, but provide nicer high-level things

... provision of the "hairy low-level" in this case makes the nicer high-level things harder to use effectively. That's what the article is explaining. Provision of this capability is harmful, which gets us right back to Dijkstra's letter.


As I said, others complain about break & return. Defanged to you is not defanged to others no matter how often/consistently you say so. The statement & many of its claimed problems persist. Call the remaining problems claws not fangs. Safety/clarity is also hard to quantify. Persistence+present rarity shows abolishing was not needed for use to fade.

Portability of longjmp can mean over (at least) CPUs, libc's, OSes, assembler backends (gas vs Intel syntax). Semantic hair splitting about what difference achieves "practical", what support (by who,what,when) achieves "actually" or other squishy traits adds no insight. Sorry you're frustrated, but I don't think there's anything more to say on goto we haven't both said >=1 times. We'll just have to agree to disagree on goto.

--

On a far more practical note (that makes the goto analogy even more dubious and even less worth debating), one problem with this <=1965 approach that is charming you lately is non-uniformity of job sizes (in time). Since the block waits for the last of N tasks to finish, utilization is capped by the slowest finisher.

This "must wait for last of N" is at least part of why this has been more popular in parallelism settings like OpenMP than in network concurrency settings. As one simple, heuristic model, the distribution of the sample max is the N-th power of the distribution function. [1] So, e.g., F^N = 1/2 => F = (1/2)^(1/N), i.e. the median of max(N) sits at the (1/2)^(1/N) quantile of the base distribution. E.g., 0.5^0.01 ≈ 0.993 for N=100. The 99.3rd percentile of WAN latencies can be very long indeed (and often even minimal job chunks require facing that).

Often you won't know ahead of time who the slowpokes will be to "launch those in the background". You might test "in-data-center" and have utilization crash "cross region". It might only be "intended for in-data-center" and be misused or some in-data-center host may break. In a parallel setting, maybe some data is in L3, some in RAM, others on NVMe, others still on spinning rust or across a network + any of that other uncertainty. Disparity almost always grows with N. There are mitigations (e.g. elastic parallelism to at least keep things busy if possible, multiple pools & task migration, etc.), but duration disparity can be a real problem - things that work in tests can "perf fail" elsewhere.

The point here is that complexities & sensitivities to deployments/context exist in this replacement idea that simply did not for "goto". Technical debt acquired by "total removal" could cost more than you save by simplicity in a few ways. These things are not easy to quantify. Maybe this is all obvious. But maybe not. I did not see such mentioned in any of the 3 discussions so far (in searches for text near "last", "wait", "long", "slow", "idle" or "util", anyway) across over 300 comments.

[1] https://en.wikipedia.org/wiki/Extreme_value_theory


> one problem with this <=1965 approach that is charming you lately is uniformity of job sizes

Er, no. I can imagine Dijkstra may have faced similarly confused people, who just couldn't understand how (for example) a loop feature should replace their use of a go-to for conditionals. Dijkstra does not propose a single new language feature to replace go-to one-for-one, and indeed makes clear in his letter that although this has been proven technically possible it's obviously the Wrong Thing™, likewise the replacement for go statements isn't a single "must wait for last of N" feature.

Just as modern programs typically have loop constructs, and conditional constructs, and procedure constructs even though a go-to statement could have expressed all these and more, the idea is that you could have several different concurrency mechanisms, each better suited for its purpose than the "go statement" and similar features today.

I think we'd want semi-short-cutting concurrency, for example. Suppose there are computations F1, F2, F3, F4. In serial programs we can imagine writing f1() && f2() && f3() && f4() to have each computation happen only if the previous computation was unsatisfactory in some way, because we have the short-cutting && operator. But as a concurrency primitive you might like a way to perform F1, F2, F3 and F4 so that perhaps all four might happen; however, once you have a satisfactory answer from any of them the computation overall stops. Unlike the short-cutting && operator this would not promise that F4 doesn't run if the others succeed, but it would promise not to expend further computation after the answer is discovered.

Now, of course you can build this from the more powerful "go statement" primitive, but that's not the point here, you could build the short-cutting && operator from the go-to primitive too. The point is that having a sufficient arsenal of such structures provides clearer concurrent programming which empowers us to write more complex concurrent programs that are correct.


>> one problem {i.e. "insufficiency of arsenal" in your lingo}

> Erm, no [..needless snark, insinuates I'm confused.. proceeds to largely restate Smith's article and himself, though a little better this time; Good job! but does not solve the problem if a user wants even "most" of the N answers..]

I suppose I could have instead said "exist in this one replacement idea[..]could cost more than you save either in other constructs or in mitigations" or something similarly guarded/defensive. Seems you probably would have picked yet another strawman or willfully obtuse reading to attack rather than argue in good faith.

If you do not have positive "arsenal addition" suggestions to solve the essence of the problem I mentioned -- which I am pretty sure many care about -- whatever gripe you might have with how I describe it then I don't think we have anything more to say to each other here.


> If you do not have positive "arsenal addition" suggestions to solve the essence of the problem I mentioned [...]

To the extent that you're still looking for the drop-in replacement you're going to of course be disappointed, this is why that Python hack doesn't do what you thought it did. Composition is what makes the higher level constructs worth having despite the fact they aren't drop in replacements and don't deliver the raw power of the simpler idea.

I encourage you to think about such higher level, composable, structures. You can imagine them as a library calling your more powerful primitive if you like, but don't be surprised if that doesn't turn out to be the reality.


> Python hack doesn't do what you thought it did

Your rate of misreading & over-attribution with the seeming intent to inflame is high. Trying to be enough of a troll to rollback a bit the perception Rust has a welcoming community?

If you/anyone is curious about a multi-prog.lang project starting from structure and later building arsenals in the senses here, there are worse places to start than the 25+ year history of https://en.wikipedia.org/wiki/OpenMP . There may be records to be found of board meetings/rationales discussing trade-offs & limitations.


Here's what you wrote:

> You can cobble together "unstructured" jumps on top of the structured exception system as in pygoto.

As I explained, all pygoto is actually doing is recursively invoking Python's interpreter. It's a clever hack, but you can't actually use this as go-to, which is why the trivial loop doesn't do what you'd expect.

> Trying to be enough of a troll to rollback a bit the perception Rust has a welcoming community?

I'm sure Rust's "welcoming community" can point you to more agreeable people if that's what you're looking for.

> the 25+ year history of https://en.wikipedia.org/wiki/OpenMP

I am an old man, and this sort of thing makes me feel older every day. Yes I am aware of OpenMP, but I learned this stuff with MPI since OpenMP didn't exist until I was a graduate.

Did you notice, in reading that 25+ year history (it would be about 30 years for MPI), a gradual trend? That things everybody was sure were just too abstract and high level to be useful in 1995 are today mainstream, and of course modern versions of these APIs must offer them? Complexity is inevitable, and so ease of composition becomes more important, and the structured concurrency primitives are easier to compose.

Just the other week, I was eating lunch with some people from the floor above, who look after our supercomputers and they were moaning about a high value user who has sloppily left some old spin-when-done code in the version they're actually running. He's likely bringing in enough revenue to justify the supercomputer on his own, but it'd be nice if his code would correctly detect "I'm done, exit" and let other jobs run without manual intervention. He hasn't done this on purpose because he's evil, he just did what was easiest in the 1990s tools he's using. Structured concurrency could have made it easier for his program to have the same performance and exit cleanly with no effort.


It's not the same. Anything in the nursery can cause cancellation of all async work. Cancellation is safe to unwind in all workers by scope exit rules.

Completion is just a type of cancellation.

Spawn 10 workers to look up "Dog" in 10 different dictionaries; the first one to get the answer wins. This is hard to do without language/runtime cancellation support.

Note below is a practical lib to get close to this https://blog.labix.org/2011/10/09/death-of-goroutines-under-... .. https://github.com/go-tomb/tomb/tree/v2


You can easily cancel worker goroutines in Go using contexts and their associated cancel functions (listen on ctx.Done() in a select statement).


> This is hard to do with out language/runtime cancellation support.

It is trivial if the workers can communicate using a shared flag that represents "Dog is found."


This was also my immediate thought, wait groups and a channel for any error would work just like the nursery. However I still see the advantage for this new approach as a provider for sane defaults. Right now you need to structure the concurrent routines yourself and are free to mess up. Nurseries should make it the other way around. I'm not sold on that but may give it a chance.


At best, waitgroups are to structured concurrency as gotos are to loops/ifs (structured programming) -- you can implement the latter in terms of the former, but the structured part is better (even if/because it is less powerful), as the history shows.


Waitgroups are not the same thing as they cannot be dynamically sized in the same way nurseries can.


Not sure what you mean by this. You can dynamically add to a wait group using the Add() method.


So I slightly misremembered the subtlety: it's more forgiving than I thought, in that Add can be called even while another goroutine is blocked in Wait, as long as some goroutine is still executing in the group (i.e. the counter hasn't yet reached zero).
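A small illustration of that rule (a sketch, with my own names): a counted goroutine may call wg.Add to register a child before its own wg.Done runs, even while another goroutine is blocked in Wait, because the counter never touches zero in between.

```go
package main

import (
	"fmt"
	"sync"
)

// runDynamic grows the WaitGroup while Wait may already be blocked.
// Safe because each wg.Add(1) happens before the caller's own
// wg.Done(), so the counter stays above zero until all work is done.
func runDynamic() int {
	var wg sync.WaitGroup
	results := make(chan int, 8)
	var spawn func(depth int)
	spawn = func(depth int) {
		defer wg.Done()
		results <- depth
		if depth < 3 {
			wg.Add(1) // dynamic: register a child while still counted ourselves
			go spawn(depth + 1)
		}
	}
	wg.Add(1)
	go spawn(1)
	wg.Wait() // blocks while the group is still growing
	close(results)
	sum := 0
	for d := range results {
		sum += d
	}
	return sum
}

func main() {
	fmt.Println("sum of depths:", runDynamic())
}
```

What remains forbidden is calling Add after the counter has hit zero while a Wait is in flight; that is the race the sync docs warn about.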


I have been using go professionally for a while and I’ve found it to be quite remarkable and I wanted to share a few hints to people so they can find those remarkable parts faster than I did.

To get a better sense of go there are five essential concurrency features:

1. Go statements are a nice syntax to run background micro threads (as mentioned in this article)

2. Go channels pass messages across those threads as fixed-size queues, and, powerfully, a send blocks until the item is accepted (immediately for unbuffered channels, or once the buffer is full for buffered ones). That blocking on send is key: it provides backpressure, preventing the flow of concurrency from infinitely filling queues that are never consumed!

3. Go select statements (not switch statements!) let you watch multiple channels at once in a thread-safe way. These are essential for using channels properly and they help manage almost all channel flow (consumer thread progression, error handling, done detection, etc.)

4. Go context objects let you clean up background threads based on multiple criteria, server errors and timeouts being the most common. The best hint that you are in a concurrency-friendly function is usually that it takes a context as an argument for cleanup purposes.

5. Go wait groups let you wait for all the go statements spawned to finish before proceeding (simple, but essential at times)

I know it’s hard to learn the entirety of a language without using it daily, but I encourage people to try out go to experience these five parts together. Go is truly excellent at expressing some hard concepts well. That doesn’t make it easy — concurrency isn’t easy — but it is easier with go constructs than without.
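For anyone who wants to see the five pieces in one place, here is a small sketch (all names and numbers are mine) that combines go statements, a buffered channel, select, a context, and a WaitGroup; workers square jobs and exit cleanly whether the queue closes or the context times out:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// run wires the five features together: goroutines, a buffered channel,
// select, a context for cleanup, and a WaitGroup for joining.
func run() int {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second) // 4: cleanup on timeout
	defer cancel()

	jobs := make(chan int, 3) // 2: fixed-size queue; sends block when it's full
	done := make(chan int)

	var wg sync.WaitGroup // 5: wait for all workers to exit
	for w := 0; w < 2; w++ {
		wg.Add(1)
		go func() { // 1: a go statement spawns a worker
			defer wg.Done()
			for {
				select { // 3: select watches several channels at once
				case j, ok := <-jobs:
					if !ok {
						return // queue closed: no more work
					}
					done <- j * j
				case <-ctx.Done():
					return // timed out or cancelled: clean exit
				}
			}
		}()
	}

	go func() { // producer fills the queue then closes it
		for i := 1; i <= 3; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	sum := 0
	for i := 0; i < 3; i++ {
		sum += <-done
	}
	wg.Wait()
	return sum
}

func main() {
	fmt.Println("sum of squares:", run())
}
```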


> Go statements are a nice syntax to run background micro threads (as mentioned in this article)

That's literally the opposite of what the article says. Which btw isn't about Go specifically, but about concurrency in general.


Well, go statements are about as nice as you can get with old-school concurrency primitives. The author of the article believes we need completely new concurrency primitives.


Yes, you may agree or disagree with that, and please do share your thoughts, but the discussion should stay on topic. The parent comment goes on a lengthy tangent why they like Golang that is completely unrelated to the article.


Apologies if it didn’t come across, but the point I was trying to make is that these five features together are the elegant part of go concurrency. So the comparison of go statements alone to other concurrency approaches is missing the best parts of the language (contexts and channels)

For example, if you use go statements without go context you can’t clean up background threads with error handling, and if you don’t use channels + select statements together then you can’t synchronize across threads easily. These features have to be used together and are in my experience (C++/game development) much better than most other async approaches.

All that said, these features are quite low level and not obvious when first learning (hence my explanation above) so higher level libraries definitely need to exist (and do exist). It’s just these libraries are much easier to write in go because these primitives are quite powerful.


Related:

Go Statement Considered Harmful (2018) - https://news.ycombinator.com/item?id=26509986 - March 2021 (82 comments)

Notes on structured concurrency, or: Go statement considered harmful - https://news.ycombinator.com/item?id=16921761 - April 2018 (230 comments)


In related news:

"Considered Harmful" Essays Considered Harmful https://meyerweb.com/eric/comment/chech.html


I can't take a hypocritical article seriously.


Brilliant solution, terrible name.

Took me a while to get it — child processes belong in nurseries. Bad abstraction, because the key here is processes. Lots of things have child nodes.

And what happens in nurseries? Growing? Maybe they were thinking watching — babysitting and it’s a cultural terminology difference.

But it would be just as silly to call a thread monitor/manager a babysitter as a nursery.

Like I said, it’s a great concept and a valuable abstraction, but I fear it will need a better name to take off.


> Took me a while to get it — child processes belong in nurseries. Bad abstraction, because the key here is processes.

Abstractions have nothing to do with their names. They are not good/bad based on what they are called. You might be conflating metaphors/analogies with that.


I hear the term "threadset" in other discussion about structured concurrency, i think "threadset" would be a better name.


A nursery is just an errgroup (https://pkg.go.dev/golang.org/x/sync/errgroup). I almost never have to use the `go` keyword directly, only through errgroups. Now I can see that it's because `go` is usually too low level to be used on its own. Not sure I agree with removing `go` entirely though.


The article does not make clear whether the cases where Go struggles with concurrency are also cases where structured concurrency improves the situation.

Pretty sure "sync.WaitGroup is too hard to reason about" is not a real issue people are having.

AFAIK most of the challenges occur within that structured block, so to speak. Robust communication between concurrent processes is the hard part, not managing their basic lifecycle.


After a long time experimenting with a lot of patterns, i found the "operationqueue & operation" building blocks from objective-c the most versatile and powerful construct. They let you do all those things the other alternatives i've tried often fall short :

- you can cancel them

- you can pass a pointer to the operation from places to places

- you can set dependencies between operations so that one doesn't start until another is finished

- you can set its execution priority (on the queue)

Syntax may not be the best, and there may be a few problems with encapsulation (an operation can do any kind of memory manipulation), yet i keep getting back to them whenever i have to do serious and robust work.


So 4 years later where is Trio? Looks like OP does not contribute to the lib anymore?


https://github.com/python-trio/trio

AFAIK it's a very popular and active library still.


Well, maybe they're a thought leader. It's up to others to implement their greatest ideas.


Read the article but honestly don’t fully understand it. How does using this not end up with one big nursery started somewhere in your main, passed down everywhere, and basically scoped to the entire app lifetime? Getting you back in the same spot as before.

My (rust) code using Tokio starts a bunch of tasks in main that live for the entire lifetime of the app. They’re independent and communicate over channels with each other – and possibly the outside world.

Hard to see what problems this causes that Trio can fix. But maybe I’m zooming in on the wrong use case?


I think the idea is that you can do that for tasks that are meant to run forever. It's global scope. But you use smaller scopes for tasks that are supposed to shut down. In Go terms, it's like having an automatic waitgroup so you can't leak goroutines.

It doesn't sound like you need it for what you're doing.


Kind of a contrived example but let’s say you’re going to kick off a job that downloads a “Post” and associated data to display it, like a “Profile” and “Comments” (with sub comments, profiles, etc) but you decide that hitting an unhandled error in the Profile loading stage should cancel loading all the rest of the sub-jobs you’ve kicked off. Nurseries do this for you, you scoped the whole chunk of concurrent operations and if an unhandled exception bubbles up to the nursery supervisor, you can cancel all the subsequent parts of it. Then you can handle that issue if you choose to, or just ignore loading that post. If you scope that to the whole program, you’d have to handle that failure in the whole program. If you scope these things more finely, you can just cancel and retry or ignore that chunk, without bleeding that logic down into the stack and increasing complexity.


I can see how this works, but my personal experience is that concurrent code that is finely scoped is easier to reason about than larger-scoped concurrency already anyways. Hardly a controversial statement, I guess. So if nurseries help me reasoning about finely scoped stuff even better that's great of course, but only solves part of 'the problem'. And maybe not the most interesting part.

Just quickly running a task to perform a handful of stuff concurrently for a single purpose (like doing a few network requests and packing the results) is hardly where I encounter big issues. The compositional behavior of Futures really helps here I think. A bunch of `Future<Result<A, Error>>` go a long way.


I’ve liked using Kotlin’s coroutines which put in place the behavior described in the article, and I like them in comparison to unscoped concurrency that exists in Go and other languages by default, or the addition of a context that needs to be passed to every function, because I don’t like it polluting the type signature of the functions.

I actually have found Rust's futures/results to be more kludgy than Scala's, where I could flatMap in a fairly nice syntax provided I was fine with using transforms.

Do you pass an early cancellation variable to your future returning rust functions?


I don't see the link between Golang's go statement and goto, except that they both cause a fork in control paths. Go's go statement is not bad.

I wrote a userspace M:N scheduler which multiplexes N lightweight threads onto M kernel threads. It currently preempts lightweight threads and tight loops round robin fashion but I could implement channels between lightweight threads and implement CSP.

I created a construct for writing scalable systems concurrently called Crossmerge. It allows blocking synchronous code to be intermingled with while (true) loops and still make progress and join as a nursery does. There is a logical link and orthogonality between blocking and non-blocking and sometimes you need both in the same program.

https://github.com/samsquire/ideas4#118-cross-merge https://GitHub.com/samsquire/preemptible-thread

I added a multiconsumer multiproducer RingBuffer ala LMAX disruptor to the M:N thread multiplexer and it handles IO in a separate thread. If I add epoll and io_uring I could also handle asynchronous IO.

My goal is to add an LLVM JIT and then I have an application server.

I write a lot of concurrency and parallelism in ideas4

https://github.com/samsquire/ideas4


> I don't see the link between golangs go statement and goto except they cause a fork in control paths.

This is a lot of why he goes into the history of "old-school goto" and its problems in the post. One of the issues with "old-school goto" is that you could never be sure if you called into a function whether it would actually return out of that function, or end up "goto"-ing somewhere else completely different, without going into it and inspecting it. Similarly, one of the issues with calling a function in Golang is that you can't be sure it didn't spin off a goroutine which is still doing some random work or other.

I mean, yeah, you shouldn't just randomly fork off a goroutine which retains references to state passed into a function without clearly documenting it. But there's nothing to stop you from doing it.


> But there's nothing to stop you from doing it.

And the crucial point is that because you can do it languages have to have weaker guarantees because they have to assume you could do it.


I feel like the article understates Erlang's approach to structured concurrency. Yes, you can spawn processes willy-nilly, but there's a strong emphasis in OTP on constructing supervision trees such that child processes don't outlive parents (that being the main problem which Trio, too, seems to address). Even at a lower level, using `spawn_link` instead of `spawn` would readily address that issue.


We have stayed away from Go exactly for this reason. The resulting entropy with Go is much higher.

This is a good article for sure. But it's just my opinion that Martin Sustrik, who originally coined the term structured concurrency, explained the concept at much greater depth than this article does https://250bpm.com/blog:71/


> https://stackoverflow.com/questions/55273965/how-to-know-if-...

Q&A's like this have contributed to me avoiding Go.


Out of curiosity, what do you think is wrong with the answers that were provided?

Goroutines are purely managed by the Go runtime in userspace. They never ‘just disappear’.


> Out of curiosity, what do you think is wrong with the answers that were provided?

I can't speak to the accuracy. I didn't imply that the answers somehow affect my opinion of Go, but both the questions AND accepted answers.

Somehow Go managed to release with a fork that was worse than PHP's first stab, and that's impressively incompetent. Goroutines are the same as js promises without the native state management...and the detachment of declaration and startup? Jesus that's prone to abuse/bugs. Coroutines are a horrific display of the base language on their own. Yes you can create a custom/library wrapper, but that's a leaky abstraction, at best.

There are things to be admired about Go's choices, as a language, but there is enough to deride that I avoid bothering to even toy with it.


Indeed. It would seem to me that the code calling "isAlive" would serve the same purpose as the panic handler, or a simple flagging mechanism.


The answers are over complicating things though. There are built in libraries that make it trivial to do this. A waitgroup is relatively low level and gives you the function you can call in your defer to signal completion. An errgroup builds on the waitgroup and gives you start & wait primitives, similar to the nursery discussed in the HN post.


Rust will soon have thread::scope() which will be quite awesome for structured concurrency.


What about async/await?

It seems to be a safe pattern under the criteria stated in the article, as long as all async calls are awaited. This pattern guarantees the flow will return only after all async tasks have finished.

The article says modern languages allow domesticated forms of goto (statements like break, continue, return...) that are scoped at function level. What about exceptions and exception handling? An exception thrown in a function and caught in a parent works as a wild goto. The railway pattern comes to my mind.


> What about async/await?

> It seems to be a safe pattern under the criteria stated in the article as long as all async calls are awaited.

Which means it's not unless that awaiting is mandatory in all cases.

Which it is not because pretty much all such systems have a `spawn`-type construct (when it's not the plain default as it is in Javascript, or C# IIRC) which just forks off an independent task. Your `await` acts as a `join`, but that's still just a join, which is optional, and which most systems have (except go, because goroutines don't have handles).

The argument of structured concurrency is the binding of mandatory joining (or awaiting) to the lexical scope, in the way structured flow bound jumps to lexical scopes.


Async/await is a different concept from structured concurrency. It just allows two async tasks to run on the same worker thread, but it doesn't directly give you the means to spawn them. That's what goroutines, task spawn methods or structured concurrency give you.

At least in rust, calling an async function won't do anything by itself. It must be spawned or awaited. However, I wouldn't be surprised if other languages spawned on every async call and the await was just a join handle.


This might help with some of the issues with go statement mentioned in the article:

https://blog.labix.org/2011/10/09/death-of-goroutines-under-... https://github.com/go-tomb/tomb/tree/v2


> few languages still have something they call goto, but it's different and far weaker than the original goto. And most languages don't even have that. What happened? This was so long ago that most people aren't familiar with the story anymore, but it turns out to be surprisingly relevant. So we'll start by reminding ourselves what a goto was, exactly, and then see what it can teach us about concurrency APIs.

Exceptions? :p


Exceptions only propagate upwards the call stack, which can be equivalently done with return statements and some logic.


Hmm

With return you can jump only 1 level higher at once


Plus exceptions don't return to the call location - control jumps down to the catching block. Exceptions are not goto, but they are not like regular function returns either. Some weird thing in between.


I was struck by a partial similarity to Ada Tasks. Anyone with more knowledge able to contrast the Nursery paradigm with task/rendevous in Ada?


I am no expert on Ada, but it was probably inspired by Dijkstra's 1965 Cooperating Sequential Processes with his "parbegin/parend" additions to Algol 60.

Or perhaps by the wildly popular Communicating Sequential Processes by Hoare in 1985 (the 3rd most cited thing in Computer Science for over 20 years after publication) which called the control brackets "cobegin/coend" in Hoare's section 7.2.2. [1]

[1] https://en.wikipedia.org/wiki/Communicating_sequential_proce...


I like the nursery idea, but the supervisor idea - that someone might silently give you a special nursery that restarts failed tasks - sounds like a bad idea. Because the function you pass it to might pass it to someone else that passes it to someone else that passes it to someone else that really doesn't want failures retried.


What confused me most about the article was that the code examples (and the author's library) were not in Go.


Because the article isn't about go. It's about structured concurrency. Goroutines were just a convenient placeholder for the current unstructured concurrency primitives most languages have (and it fit their goto/go narrative)


Bit unclear (the article wastes much space talking about goto), but I _think_ the author has re-invented Occam's PAR construct.

Edit: I see someone else noted this below. I had always assumed that golang's concurrency model was somewhat influenced by Occam, via ALEF (Phil Winterbottom).


Interesting article; although goto statements do seem worse than go statements, as they can make simple things hard.

I wonder if we'll ever just come to the conclusion that doing many things at once is difficult no matter what we do, at least when shared data is involved...


TL;DR - the article seems to invent its own terminology for what many others might know as (roughly) "OMP parallel for" or a fork-join pattern [1] or even in POSIX shell a "a & b & c & wait" pattern. IIRC, the Python multiprocessing module calls it "imap_unordered". The "map" part of "map-reduce". Etc. I'm sure it has many names in both parallel and concurrent contexts which (at this level) share some control flow concepts.

When this pattern applies, it is indeed easier to use than less structured alternatives and error handling & resource clean-up & API design are perennial topics, but I'm not sure we need even more diverse terminology. For example, what the article dubs "nursery", most call a "pool" or "group". "Nurseries" are declared a new thing in a boldfaced paragraph with innovation claims only softened later or in footnotes. I think this is why a lot of pushback happens, but also see the funny [2] mentioned elsethread [3].

I did love the domesticated-wolf comparison pictures, though. :) (And I say that earnestly, not snarkily.)

[1] https://en.wikipedia.org/wiki/Fork%E2%80%93join_model

[2] https://meyerweb.com/eric/comment/chech.html

[3] https://news.ycombinator.com/item?id=31956840


The nursery concept is being added to core Python 3.11 asyncio as the `TaskGroup`.


Just use elixir or Erlang already.

You get all of these things with the simplicity of uncolored functions, plus additional guarantees and customizability around the blast radius of crashes.


Please see related TLA+ thread: https://news.ycombinator.com/item?id=31956018

I've been seriously turned off from Go (after dedicating years of my life to it :(). It is fundamentally flawed IMHO, a house of cards built on quicksand.

I wonder what we'll think of Go in 5, 10, 25 years. Within 50 years I kinda doubt many humans will give a crap about it at all.


I've never had an issue with data race conditions in go. Can I ask what you did to lose faith?

The answer, for me, is always channels. What you put around the channels is the challenging part.

Channels are built to pass information between goroutines. If you are doing this via any other signaling, just don't. You can loop over channels for long running sequential reads, you can test them to see if they're ready to read or ready to write.

Honorable mention to context as well, extremely useful structure.
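To make the advice above concrete, here is a minimal sketch (my own example, not from the comment) of the two patterns mentioned: ranging over a channel for long-running sequential reads, and testing readiness with a non-blocking select. The `produce` helper is hypothetical.

```go
package main

import "fmt"

// produce sends n values on a channel and closes it, so the
// receiver's range loop terminates cleanly when the sender is done.
func produce(n int) <-chan int {
	ch := make(chan int)
	go func() {
		defer close(ch)
		for i := 0; i < n; i++ {
			ch <- i
		}
	}()
	return ch
}

func main() {
	// Long-running sequential reads: range until the sender closes.
	sum := 0
	for v := range produce(5) {
		sum += v
	}
	fmt.Println(sum) // 10

	// Non-blocking readiness test: select with a default case.
	ready := make(chan int, 1)
	ready <- 42
	select {
	case v := <-ready:
		fmt.Println("ready:", v)
	default:
		fmt.Println("not ready")
	}
}
```

The close/range pairing is what makes "loop over a channel" safe: the loop ends only when the sender signals completion by closing.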


I read the Uber article linked in parent.

"Data Race Patterns in Go", posted 21 days ago: https://news.ycombinator.com/item?id=31698503


Even though I'm extremely careful, I'm sure I've done one or more of the anti-patterns they outline where the outcome is completely incorrect program behavior compared to my intentions as the architect of the program.

A good language wouldn't encourage or even compile with such nonsense.

---

Isn't it funny how Go offers channels but the stdlib never uses them at all?


> Isn't it funny how Go offers channels but the stdlib never uses them at all?

    go/src $ grep -r 'chan ' | grep -v test | wc -l
    402

    go/src $ grep -r 'Mutex' | grep -v test | wc -l
    362


Yes, that Uber article hit close to home.

About 80% of these problems can be covered with new linter rules. Go as a language already strongly depends on linter rules. I mean, there are a few must-haves already, and adding a few more won't make much difference. This isn't optimal, but it's not a deadly flaw of the Go ecosystem.

The remaining 20% need solid test coverage (using -race). You are right, these absolutely need attention from the Go language creators/architects: these are the kinds of promises that shouldn't have been made for v1.0.


Hmm, the std lib uses channels for a variety of scheduling hooks: contexts, timers, and OS signals, for example. That's maybe just a little narrower than how Go users can use them; they aren't really for transport, but for synchronization (just as useful for no or small payloads as for large ones).

It doesn’t similarly expose mutexes that package clients have to fiddle with.


I have similar concerns about goroutines. The good thing is that in practice 99% of microservices in line-of-business applications don't need concurrency, so goroutines don't cause too many problems.


I didn't read all of the article. There is something to be said about being able to get your piece across without it becoming story-time. I think the author could have benefitted from getting their point across with fewer words.

I think I understand the gist of where the author wants to go, and I do agree that nurseries sound like a good idea. From observing myself and people around me, I think people learn Go (concurrency) in several phases. I won't be so arrogant as to presume I understand how to always structure goroutines and channels correctly, now. But looking back it was "easy", then it got harder, then it got easier again as my understanding of how to structure things evolved. But I won't say that it is ever easy. Concurrency isn't easy. And it is important to keep in mind that stuff that is supposed to help you won't necessarily make everything easier. Concurrency is still fundamentally hard.

The hard bit is to actually design this idea in a way that doesn't make Go ugly or which creates bifurcations in how you use the language. If something causes a bifurcation I would actually claim that it isn't worth doing. There is value in leaving languages alone rather than complicate and potentially make them messy. Languages become harder to use when you have to start deciding on what parts of the language you want to use. Adding stuff to a language with established practices is much harder than designing a language to begin with.

Also, syntax matters. I didn't actually mind that Java involved a lot of typing because the syntax wasn't that complex or particularly ugly. It wasn't hard to grasp when reading. Just horribly verbose. But comparing how people I met thought about Java in relation to other languages taught me something important: a surprising number of programmers live at the surface, only caring about syntax, and don't actually understand what the syntax really does.

If someone can implement the idea I'll have a look. But if it complicates, bifurcates or makes Go exhibit less developer empathy, it probably shouldn't be inflicted on Go. Perhaps it can be applied elsewhere.

And I actually have a recommendation for where to start: scrap the posting and attempt a concise description first.


This opinion is just gross. Always do your required reading kids.

- nobody uses semicolons, that's a huge red flag because the tooling will literally remove these symbols from your code. did you even run any Go code? clearly not

- goroutines are not threads, and galloping past that obvious chasm is also telling: you're not really sure what these magical things are but, somehow, you know they're bad. This is a flawed line of reasoning that was never going to convince me

- because you don't know how goroutines and the userspace scheduler work, you skip over all the benefits they provide, and thus nurseries give me nothing more than a hand-holding experience less effective than the existing tools I have. Thank you for wasting my time because I was genuinely holding out for _any_ kind of empirical reasoning. My mistake.

- As proof of the author's research negligence: Not a single mention of channels. Not one. Homework was clearly not completed

I'm all for debating a language's efficacy and tradeoffs, but not when the opposing side has no idea how anything works. Then it's not a debate, it's just the ignorant proclaiming a need to interfere with others. Go away, please.


- Channels are woefully inefficient. I usually use the stdlib as a good reference for performant code, and I don’t see any use of channels in there.

- If you’re not using channels because they are slow, now you’re letting some nebulous “context” bleed into every function to handle closure and cancellation. This is basically provided implicitly by the nursery concept, but stapled on to every function in go because it was an afterthought, then google started using it, so it’s everywhere.

- The unscoped nature of goroutines does make the program more complex to reason about.


I think you read a different article, or read stuff into it that the author was not talking about.

This is about contrasting structured vs unstructured concurrency. The language, the threading model, how threads communicate and coordinate ... none of them are germane to the issue.

Wherever you have a fork without a join in the same scope, it is unstructured and gives rise to some concurrency issues.

Structured concurrency promotes composition, similar to ordinary function composition. For ordinary functions, the caller's control flow is logically blocked from proceeding until the callee is done. Similarly, in a structured concurrent setup, a function is logically blocked from proceeding until the tasks it spawned have finished and returned to the spawning scope. This makes it easier to reason sequentially. "finish doing a bunch of tasks, then do this, then do that" etc. The key word here is 'finish'. Not 'spawn a bunch of tasks in the background'. One can then reason about the effect of the function and all its nested tasks as a single unit.
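The fork/join-in-the-same-scope discipline described above can be sketched in plain Go with nothing but the standard library (a minimal illustration of mine, not code from the article; `processAll` and its uppercase "work" are stand-ins):

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// processAll forks one goroutine per item, but the join (wg.Wait)
// happens in the same scope as the fork: the function cannot return
// while any goroutine it spawned is still running. Callers can
// therefore reason about it like an ordinary blocking function.
func processAll(items []string) []string {
	results := make([]string, len(items))
	var wg sync.WaitGroup
	for i, item := range items {
		wg.Add(1)
		go func(i int, item string) {
			defer wg.Done()
			results[i] = strings.ToUpper(item) // stand-in for real work
		}(i, item)
	}
	wg.Wait() // join: no spawned goroutine outlives this scope
	return results
}

func main() {
	fmt.Println(processAll([]string{"fork", "join"}))
}
```

Nothing in the language enforces this shape, which is the article's point: a bare `go` statement with no `wg.Wait()` in the same scope is the unstructured case.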

The most elegant example is Occam, where SEQ and PAR are language constructs. (For those interested: http://www.transputer.net/obooks/72-occ-046-00/tuinocc.pdf)


This is not a go vs non-go debate. This is a structured concurrency discussion. Threads and goroutines are both forms of concurrency so they can be discussed together.


1. Complaining about syntactic style

2. Inferring that the author doesn't know what threads really are when they could just be using slightly different terminology than yourself (i.e. terminology where "threads" are not narrowly defined as "OS threads")

3. Just based on the bad inference in (2), plus complaining that they did not appreciate goroutines enough, plus complaining about "hand holding"

4. Proof that the author is incompetent: they didn’t even mention this one particular concurrency construct. Cherry-pick ergo preconceived conclusion

Unsolicited inference on my part: you were peeved that the author said `go` was harmful (even though the author also said this is a general problem), and so you needed to concoct some half-assed arguments in order to justify making a reply.


Are you being facetious or did you just not read the post? (I need to ask, because it's not at all obvious!)



