
I was this person for my dad's care, heading back to Canada from California after his third ambulance ride to the hospital and subsequent discharge a few hours later. Turns out his first doctor at the ER had correctly identified the life-threatening condition he had developed, and when the shift changed the new physician ignored the handoff instructions and sent him home. If I hadn't pushed to get the ER to look again he would be dead now. I think the syndrome described is real, and enough doctors are bad enough at practicing medicine that it saves lives.


Pardon, but I don't think you fit the mold if you were right? Or if you were saying that you got lucky (right for the wrong reasons), then that wouldn't mean much about doctors?


I think the Daughter From California will be perceived and treated the same way by care providers regardless of whether they end up being right, and regardless of their actual vs perceived motivation, so from that perspective I think it still fits.

Doctors are just people - they don't appreciate it when somebody parachutes in to question their work, and they make mistakes like anybody else.


Yeah, you drive things with https://developer.mozilla.org/en-US/docs/Web/API/window/requ... since it's dynamic
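
Roughly like this (assuming the truncated link is requestAnimationFrame; `render` is a stand-in for whatever actually updates the view each frame):

    // Minimal sketch, assuming a browser environment.
    function render(elapsedMs: number): void {
      console.log(`frame, ${elapsedMs.toFixed(1)}ms since the last one`);
    }

    let last = performance.now();
    function frame(now: number): void {
      render(now - last);           // drive the update from elapsed time
      last = now;
      requestAnimationFrame(frame); // schedule the next repaint
    }
    requestAnimationFrame(frame);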



That's sick, great work.


It might be worthwhile to expand a bit in the docs on the problems you see with JS concurrency (why you think it sucks, maybe some examples) and then show how ts-chan fixes those problems.


That's a good idea, thanks :)

Honestly, I've been struggling to come up with examples that aren't extremely contrived, but are still self-contained enough to easily demonstrate. It might actually be easier to just document patterns, though.


Agreed. I don't know Go, so I still don't understand what this is trying to solve.


I think y'all have very fair points, for the record.

Unfortunately it is a lot easier to write documentation for an audience that shares the same context / background / experience. The README was written with an audience in mind consisting primarily of those familiar with Go (or more convoluted "communicating sequential processes" implementations), who were frustrated that things that are very easy in Go are so much harder in JS. It's not something I considered deeply, but I was imagining that it was unlikely that someone would be searching for "channels in JS" without a base level of understanding.

I work/have worked with some pretty talented people, but (in the past) I've found it difficult to convey the value of the sorts of patterns that `ts-chan` is intended to enable, to those without first-hand experience with such patterns.

Documentation is hard :P


I’m pretty sure you could articulate how exactly this “better” pattern works versus the default “bad” one. Right now the readme is a pile of illegible jargon to me as a non-go person.


Sure, I intend to give it a shot.

I will say though, I personally get the impression that attempts at "concurrency" in JavaScript (in production code) are quite rare, which I attribute to how difficult it is.

That is to say, I don't know if there really _is_ a "default pattern".


> That is to say, I don't know if there really _is_ a "default pattern".

Here:

    const results = await Promise.all([task1(), task2()]);

Could you give a side by side comparison (with and without ts-chan) so we can better understand what kind of problem it is attempting to solve?

My understanding was that Go channels / CSP solve concurrency in a multithreaded environment where reads/writes need to be coordinated, but since JavaScript is single threaded, I'm not sure I understand why they would be useful in JavaScript. In JavaScript concurrent tasks can simply communicate by writing/reading shared variables.


I am working on better examples, but they are going to take me a while, at the rate I'm currently going.

To be clear, `ts-chan` is not intended to target any use case already addressed by promises or async/await.

You mentioned CSP so I'll assume you've got context re: that topic. I believe I understand your point re: synchronisation between threads, which is fair, but I'd point out that race conditions still exist in JavaScript - I'd even say they are common, at least in my experience. It is easiest to maintain the integrity of the internal state of complex data structures when only a single logical process can mutate that state at a time.
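
A trivial illustration of the kind of race I mean - no threads involved, just an await in the middle of a read-modify-write:

    // Sketch of a "lost update": both calls read `balance` before either
    // writes, because the await yields control between the read and the write.
    let balance = 100;

    async function withdraw(amount: number): Promise<void> {
      const current = balance;                     // read
      await new Promise((r) => setTimeout(r, 10)); // any async step (I/O, etc.)
      balance = current - amount;                  // write back a stale value
    }

    async function main(): Promise<void> {
      await Promise.all([withdraw(30), withdraw(50)]);
      console.log(balance); // 70 or 50, never the "expected" 20
    }
    void main();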

Example in a similar vein: Firewall daemon that accepts commands over RPC, and performs system configuration, in a linear, blocking fashion, to avoid blowing things up (say it runs `iptables` and/or `nft` commands, under the hood). It would be trivial to have a select statement, with a channel per command (or just one, perhaps), receiving the input payload. In JS, the response would probably be via callback, rather than a ping-pong channel recv then send, or the like.

It wasn't a firewall daemon (although it did interact with firewalld and more), but that's exactly a pattern I've implemented in Go, for a past employer. I don't imagine anyone is keen to implement such a thing in JavaScript, but it's a pattern that applies to anything that mutates state, especially if that state is fragile or complex.
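
To make the shape concrete, a rough sketch (the command format, `runIptables`, and the callback-style reply are all made up for illustration):

    type Command = { args: string[]; done: (err?: Error) => void };

    const pending: Command[] = [];
    let wake: (() => void) | null = null;

    // Called by the RPC layer: enqueue a command and wake the worker.
    function submit(args: string[], done: (err?: Error) => void): void {
      pending.push({ args, done });
      wake?.();
    }

    async function runIptables(args: string[]): Promise<void> {
      // stand-in for e.g. child_process.execFile('iptables', args)
      console.log('iptables', args.join(' '));
    }

    // Single consumer loop: only one command mutates system state at a time.
    async function worker(): Promise<void> {
      for (;;) {
        while (pending.length === 0) {
          await new Promise<void>((resolve) => { wake = resolve; });
          wake = null;
        }
        const cmd = pending.shift()!;
        try {
          await runIptables(cmd.args);
          cmd.done();
        } catch (err) {
          cmd.done(err as Error);
        }
      }
    }
    void worker();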


IME, race conditions are quite rare and pretty easy to solve in JS, because the flow of code execution is only susceptible to being interrupted at known locations (async function calls). Here's an example of how you could solve the problem you mentioned in a few lines of JavaScript:

    function createRunExclusive() {
      let runningTask = Promise.resolve();
      return (asyncFn) => {
        // Chain after whatever is already queued; the catch ensures one
        // rejected task doesn't wedge the chain for every later caller.
        const result = runningTask.catch(() => {}).then(() => asyncFn());
        runningTask = result.catch(() => {});
        return result;
      };
    }

    // Example usage:
    // The idea is that any command that should not overlap should use the same "runExclusive" function
    const runExclusive = createRunExclusive();
    function handleIpTablesCommand() {
      runExclusive(async () => {
        await doSomethingWithIpTables();
      })
    }

Although it's probably best to just use one of the queue libraries on npm. This one for example: https://www.npmjs.com/package/p-queue
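
With p-queue the same idea looks roughly like this (assuming a recent release; `doSomethingWithIpTables` as above):

    import PQueue from 'p-queue';

    // Stand-in for the doSomethingWithIpTables used above.
    async function doSomethingWithIpTables(): Promise<void> { /* ... */ }

    // concurrency: 1 makes the queue behave like runExclusive above:
    // at most one iptables operation runs at a time.
    const queue = new PQueue({ concurrency: 1 });

    async function handleIpTablesCommand(): Promise<void> {
      await queue.add(() => doSomethingWithIpTables());
    }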


Hey, that's a neat little trick to implement locking in JS, thanks.

Perhaps I oversimplified my example - it also involved handling interruptions (certain system events), maintaining a lifecycle (set up and tear down), and allowing a certain subset of operations to proceed while one of several other operations was in progress. That last requirement was due to it using shell scripts to configure the system, and needing to extract runtime and configuration information from the main daemon.

Still though, thanks very much for your comments, I've enjoyed reading them.


> It is easiest to maintain the integrity of the internal state of complex data structures when only a single logical process can mutate that state at a time.

I agree, and this is exactly what the JS event loop provides. So I don't understand ts-chan.


An operation may take longer than a single tick of the event loop, and may have its own rules regarding state transitions.

To be clear, I'm not saying "don't do any communication by sharing state", just that there are use cases where it's possible to make it much simpler to reason about.

As an example, you might control the state of "making a HTTP request to perform a search", within the frontend of a single page app that has a map, search filters, and results.

One strategy is to use a buffered channel (1 element), and, when the search filters are updated, drain then re-send the request to the channel.

The logic processing these requests would then just need to sit there, iterating on / receiving from the channel. It could also support cancellation, if that was desired.

(I'd imagine the results would be propagated via some other mechanism, e.g. to a store implementation)
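
A rough sketch of that one-slot-mailbox shape without ts-chan, to make it concrete (the Filters type, the /search endpoint, and publishResults are all made up):

    type Filters = { query: string };

    let slot: Filters | null = null;    // the 1-element "buffer"
    let wake: (() => void) | null = null;

    // Called whenever the UI changes the filters: drain (overwrite) and send.
    function requestSearch(filters: Filters): void {
      slot = filters;
      wake?.();
    }

    // Single loop: at most one in-flight search; anything submitted
    // meanwhile just replaces the slot and is picked up next iteration.
    async function searchLoop(): Promise<void> {
      for (;;) {
        while (slot === null) {
          await new Promise<void>((resolve) => { wake = resolve; });
          wake = null;
        }
        const filters = slot;
        slot = null; // receive
        const res = await fetch('/search?q=' + encodeURIComponent(filters.query)); // hypothetical endpoint
        publishResults(await res.json());
      }
    }

    // Stand-in for whatever store/state mechanism consumes the results.
    function publishResults(results: unknown): void {
      console.log(results);
    }
    void searchLoop();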


Sounds like a generator then?


Generators have lots of really nice uses, yep.

I'm not sure what specifically you were imagining, but I've added an example of how "vanilla JS" can achieve fan-in, using an AsyncGenerator: https://github.com/joeycumines/ts-chan/blob/main/docs/patter...

It uses one of the patterns suggested in a comment chain above, which I think is pretty neat, and wasn't one that readily occurred to me: https://news.ycombinator.com/item?id=38163562
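
The rough shape of that kind of fan-in, for anyone who doesn't want to click through (this is a sketch of the general Promise.race-based merge, not the linked example verbatim):

    async function* fanIn<T>(...sources: AsyncIterable<T>[]): AsyncGenerator<T> {
      const iters = sources.map((s) => s[Symbol.asyncIterator]());
      // One in-flight next() per source, tagged with the source's index.
      const pending = new Map<number, Promise<{ i: number; r: IteratorResult<T> }>>();
      iters.forEach((it, i) => pending.set(i, it.next().then((r) => ({ i, r }))));
      while (pending.size > 0) {
        const { i, r } = await Promise.race(pending.values());
        if (r.done) {
          pending.delete(i); // this source is exhausted
        } else {
          pending.set(i, iters[i].next().then((res) => ({ i, r: res })));
          yield r.value;     // emit as soon as any source produces a value
        }
      }
    }

Usage is then just `for await (const v of fanIn(a(), b(), c())) { ... }`.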

I'm not making a case for using ts-chan for any situation where a simple generator-based solution suffices. I wouldn't call the example solution (in my first link) simple, but it's something I'd personally be ok with maintaining. Like, I'd approve a PR containing something similar without significant qualms, _if_ there was a significant enough motivator, and it was sufficiently unit tested. I might suggest `ts-chan` as an alternative, to make it easier to maintain, but wouldn't be particularly concerned either way.

That's all very subjective, though :)


I think it would be useful to generally explain what these primitives do and how they interact with each other. A lot of JS/TS users haven’t used golang, but would appreciate a better solution if they understand it (me included).

Regarding the default vs better, a comparative example with a real concurrent task coded with/out your library would be my preferred way to understand it clearly.


I'll definitely keep that in mind, thanks :)


I somewhat disagree with this take, as I felt the intro and "The microtask queue: a footgun" [1] section in the README does an adequate job of laying out the 'why' and the problems with JS's concurrency model. However, it does presume some understanding of Go's channels, so a more explicit example contrasting ts-chan with native JS concurrency could better clarify its benefits for those less familiar. Granted, there is an /example directory, but the benchmarking complexity muddles the readability. Regardless, upon a quick run-through, it looks to be an A-grade library that seems promising for practical use, plus well-referenced, composed, and quite thorough.

[1] https://github.com/joeycumines/ts-chan#the-microtask-queue-a...


Maybe I'm dumb but that section didn't explain the problem to me in the slightest.


Not necessarily, and after giving it more thought, I somewhat retract my previous comment. You do make a good point; it's explained well, but not in concrete terms without assumptions. So, I'll take a stab at it: The core idea is that async functions A and B can communicate through Chan instances, with the Select class overseeing multiple Chan operations, waiting for one to be ready before proceeding. While ts-chan might seem unnecessary for just two async functions, what if you had to manage 8, 16, 32, or more? At some point, Promise.all won't cut it, and that's where ts-chan comes to the rescue. It defines a protocol for channels to better manage communication between asynchronous functions in JS, offering a structure similar to how goroutines communicate in Go.


The only thing that told me was that the author is used to Go primitives and doesn’t like switching to Javascript.

That might be completely wrong, but it’s the impression I get when someone says that something the rest of the world uses without issue sucks.


I might be completely wrong, but the impression I got from your comment is that you haven't been exposed to many implementations using non-trivial concurrency :)

Fair call though, I guess. It doesn't really matter, but I'm certainly used to TypeScript and JavaScript.


> you haven't been exposed to many implementations using non-trivial concurrency

That is entirely accurate. I struggle to imagine scenarios in which I’d need two parallel routines to communicate with each other.


Maybe because we probably shouldn't be doing that in JavaScript. I primarily use Go and work with goroutines often, but I've never wanted to do anything remotely close to it in JavaScript. If I wanted proper concurrency, I would be using Go, not JS.


Real JS concurrency = Worker threads, which already make use of channels for communication.

Promises are for asynchronous programming, which is not the same thing as concurrency.
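
For concreteness, roughly what that looks like in a browser (the 'worker.js' file and the message shape are made up for illustration):

    // main.ts
    const worker = new Worker('worker.js');
    const { port1, port2 } = new MessageChannel();

    // Hand one end of the channel to the worker (ports are transferable).
    worker.postMessage({ port: port2 }, [port2]);

    port1.onmessage = (e: MessageEvent) => {
      console.log('result from worker:', e.data);
    };
    port1.postMessage({ task: 'hash', payload: 'hello' });

    // worker.js -- receives its end of the channel and replies on it:
    // self.onmessage = (e) => {
    //   const port = e.data.port;
    //   port.onmessage = (msg) => port.postMessage(doWork(msg.data));
    // };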


First instalment of docs/examples complete, feedback appreciated: https://news.ycombinator.com/item?id=38183241


Here's a recent study which provides physiological evidence of aphantasia: https://www.sciencedaily.com/releases/2022/04/220420092150.h...


I'm curious which features highland provides which RxJS doesn't. From what I understand, composable streams from any data source with backpressure support is pretty much the definition of Rx.

Sometimes simplicity is a feature, too, though.


Rx doesn't handle back-pressure or laziness, so it's only really for handling events.


RxJS advocates are unhappy with this comment so I'm going to qualify it a little. Apologies for any misunderstanding...

Rx doesn't handle automatic back-pressure (like Node Streams) but does have mechanisms to avoid overwhelming slow consumers. Rx also has delayed subscription which you can call lazy, but not by turning the stream into a pull-stream (allowing you to sequence actions in the way Highland does).

If any of the above needs further qualification or comment please weigh in on the issue by commenting here... but for now I'll leave it at that. I actually list RxJS in the blogpost because it's a good example!


Coming in 2.3, we will have full capabilities for backpressure. We already have window/buffer/throttle, etc. But I think it's naive to have only one style of backpressure, because many are valid. Here's an example of RxJS and what can be done, which includes several forms of backpressure and, yes, push-to-pull-based models: https://gist.github.com/mattpodwysocki/9010149

Still fleshing it out, but pretty close to calling it complete: https://github.com/Reactive-Extensions/RxJS/tree/master/src/...

We're more than open to pull requests though if anyone thinks we're missing something here.
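
For anyone unfamiliar with the window/buffer/throttle vocabulary, the two common styles look roughly like this - note these are the modern operator names (bufferTime/throttleTime), not the 2.x API discussed above:

    import { interval } from 'rxjs';
    import { bufferTime, throttleTime } from 'rxjs/operators';

    const fast$ = interval(1); // a producer that's faster than the consumer

    // Lossy style: keep at most one value per 100ms window.
    fast$.pipe(throttleTime(100)).subscribe((v) => console.log('throttled', v));

    // Lossless-but-batched style: deliver everything, 100ms at a time.
    fast$.pipe(bufferTime(100)).subscribe((batch) => console.log('batch of', batch.length));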


Since `fastSource.map(slowThing)` automatically pauses the source while the slow thing is processing, how is this different from iteration? It also states that in the case of a non-pausable source it will buffer the data. How does it know when to pause vs when to buffer?

Is there some way to know what kind of source you've got, or are the sources constructed in a way that chooses which behavior you get?


I disagree as fervently as possible; always use reference counting smart pointers. It is substantially harder to guarantee exception safety if you don't make use of them. You'll also be able to program more quickly without devoting extra mental cycles to make sure everything is cleaned up properly.

Using valgrind IN ADDITION is a good idea, but there is no reason to avoid smart pointer memory management.


The best way to deal with exceptions is to disable them. Arbitrary interruptions of control flow will screw up just about any algorithm. Otherwise, you will have to reason about everything using RAII semantics, which work well much of the time. Smart pointers have the same problem. You may believe that everything is cleaned up properly with your smart pointers, until a cyclic reference happens one day.


I run a Windows install natively, as it's far more reliable than any desktop Linux distribution I have ever used. I then run Ubuntu 9.10 in a virtual machine on one of my monitors or in Unity mode or what have you. This works very well for me; when Ubuntu inevitably breaks, it's trivial to revert to a working snapshot, and it doesn't take down the majority of what I'm doing.

Linux still has a long way to go improving usability and user experience, as well as improving the generally poor quality of Linux desktop software.

I really like Linux. I use it daily, and I would like it to succeed. I hope that the things I've mentioned are improved to bring it up to par with newer versions of OS X and Windows.


That's pretty much how I use Linux on my Windows 7 notebook, except I only need to use it once a week or so. I really don't have the time to bother with all the hardware issues I remember facing back in the day, when I did have time to tinker around.

I see Linux's future being in a VM rather than on the desktop. I think more and more "power users" will eventually switch to using virtualized Linux on Windows or Mac, to get the power of Linux without any of the hassle.


Maybe the future of all desktop OSes is to run in a VM. I run Windows in a VM to sandbox it and ease rolling back if the system gets unstable or crufty. I run Linux natively, but like to check out new distros or test upgrades in a VM. It's incredibly easy to install a modern OS in a VM, especially because of the stable virtualized environment. I won't start a flame war about what makes the best host OS, but it certainly doesn't need to be a full-fledged desktop OS. It would be great to keep my own personal desktop VM on an encrypted thumbdrive and walk up to any machine, plug it in, and run it.


That was the first solution I had for #1. I'm currently at the following, but I think there has to be a better one, because it's not clear that this one would even be possible.

G places the amulet in his safe, attaches his padlock and locks it with his key, then sends it to K, keeping his key. K receives G's safe, attaches his own padlock in addition to G's (there is a "large clasp", no mention of how large or how many padlocks it could support) and sends it back to G without sending his key. G removes his own padlock, and sends the safe back. K removes his padlock and has the amulet.


I think your current solution is clearly the right one: It gives me the 'duh' moment.

Edit: But in real life the first solution might work better. Hard to imagine the henchmen who would be sure to spot you if you left your room even for a moment but would take no notice of servants shuffling back and forth three times with a [double-]padlocked safe.


The reason it is unsatisfactory (I had the same) is that one of the safes is not used, imo.


It's a characteristic of the real world that there are things not used in the solutions to the various problems that need to be solved.

It is one of the most brain-dead characteristics of school and university exams that one is not allowed to have extras - it's somehow "not fair."

I think not using some of the resources provided is perfectly reasonable.


Apples and oranges: if the beauty of the problem is in a clean, elegant solution that does not require all of the pieces, it is not a well-posed riddle, or solution.

That's why the solution to the wolf-sheep-cabbage problem is not "just add more floating stuff so you can bring it all at once, idiot".

The comparison to school and exams appears unrelated to me: there are situations where external aid kills the point (e.g. a calculator for arithmetic in first grade).


Then I must not have explained myself clearly enough. I'm not talking about extra equipment, I'm talking about information that's not needed for the solution to the problem.

Recently (for some definition of "recent") I was setting an exam question. In it I gave various lengths, heights, and so on, and I was told that it was too hard because I gave details that weren't required. On the other hand, if one is given only the information required, there is already an artificial clue - all the information given must be used. That's unrealistic.

I've interviewed people for jobs who performed fantastically well on exam style questions, but when given free-form problems simply didn't know where to start.

For me, whether a solution is clean and elegant is independent of whether you've used all the information given. It's the solution itself that's elegant. It's not only plausible but likely that we have different concepts of elegance - I'm a mathematician. When solving math problems, real math problems such as required to get a PhD or publish a paper, there's no way you're given just the information required and no more.

Along those lines I've been moved to submit another puzzle - you can find it here:

http://news.ycombinator.com/item?id=1014092

