Hacker Newsnew | past | comments | ask | show | jobs | submit | fouronnes3's commentslogin

Very cool article!

I also implemented a spreadsheet last year [0] in pure TypeScript, with the fun twist that formulas also update backwards. While the backwards root-finding algorithm was challenging, I also found it incredibly humbling to discover how much complexity there is in the UX of the seemingly simple spreadsheet interface. Handling selection states, reactive updates, and detecting dependency cycles and gracefully recovering from them is a massive state-machine programming challenge! Very fun project with a lot of depth!

I myself didn't hand-roll my own parser but used Ohm-js [1], which I highly recommend if you want to parse a custom language in JavaScript or TypeScript.

> One way of doing this is to keep track of all dependencies between the cells and trigger updates when necessary. Maintaining a dependency graph would give us the most efficient updates, but it’s often an overkill for a spreadsheet.

On that subject, figuring out an efficient way to do it is also a large engineering challenge, and is definitely not overkill but absolutely required for a modern spreadsheet implementation. There is a good description of how Excel does it in the famous "Build Systems à la Carte" paper, which interestingly treats a spreadsheet as a build system [2].
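To make the dependency-graph idea concrete, here is a minimal sketch (illustrative only, not bidicalc's or Excel's actual code) of deriving a safe recomputation order with Kahn's algorithm; the `deps` map shape and the cell names are assumptions:

```typescript
type Cell = string;

// deps.get(c) lists the cells that c's formula reads.
// Returns an order in which cells can be recomputed so that every
// cell is evaluated after all of its dependencies.
function updateOrder(deps: Map<Cell, Cell[]>): Cell[] {
  const indegree = new Map<Cell, number>();
  const dependents = new Map<Cell, Cell[]>();
  for (const [cell, reads] of deps) {
    indegree.set(cell, reads.length);
    for (const dep of reads) {
      if (!indegree.has(dep)) indegree.set(dep, 0);
      const ds = dependents.get(dep) ?? [];
      ds.push(cell);
      dependents.set(dep, ds);
    }
  }
  // Start from cells with no dependencies (plain values).
  const queue = [...indegree].filter(([, n]) => n === 0).map(([c]) => c);
  const order: Cell[] = [];
  while (queue.length > 0) {
    const cell = queue.shift()!;
    order.push(cell);
    for (const dependent of dependents.get(cell) ?? []) {
      const remaining = indegree.get(dependent)! - 1;
      indegree.set(dependent, remaining);
      if (remaining === 0) queue.push(dependent);
    }
  }
  // Any leftover cells form a dependency cycle.
  if (order.length !== indegree.size) throw new Error("dependency cycle");
  return order;
}
```

For "A1 = B1 + C1" and "B1 = C1", `updateOrder(new Map([["A1", ["B1", "C1"]], ["B1", ["C1"]]]))` yields `["C1", "B1", "A1"]`, and a self-referential sheet throws instead of looping forever.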

[0] https://victorpoughon.github.io/bidicalc/

[1] https://ohmjs.org/

[2] https://www.microsoft.com/en-us/research/wp-content/uploads/...


The idea of backward updating is fascinating but is not generally feasible or computable. What kind of problems can you solve backwardly?

> not generally feasible or computable

You'd be surprised. It really depends on how you define the problem and what your goal is. My goal with bidicalc was to find ONE solution. This makes the problem tractable: when there is an infinity of solutions, the goal is just to converge to any one of them. For example, solving 100 = X + Y with both X and Y unknown sounds impossible in general, but finding one solution is not so difficult. The idea is that any further constraint that would help choose between the many solutions should be expressed by the user in the spreadsheet itself, rather than hardcoded in the backwards solver.
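Roughly, the idea can be sketched like this (a toy version, not bidicalc's actual solver; the step size, tolerance, and iteration cap are arbitrary choices): start from the cells' current values and nudge them downhill on the squared error until the formula hits the edited value.

```typescript
// Toy backwards solver: given the formula f and the cells' current
// values, gradient-descend on the squared error until f(xs) reaches
// the target (or the iteration cap is hit).
function solveBackwards(
  f: (xs: number[]) => number,
  start: number[],
  target: number,
  lr = 0.01,
  maxSteps = 10000,
): number[] {
  const x = [...start];
  for (let i = 0; i < maxSteps; i++) {
    const err = f(x) - target;
    if (Math.abs(err) < 1e-9) break; // close enough to a solution
    for (let j = 0; j < x.length; j++) {
      // numerical derivative of f with respect to x[j]
      const h = 1e-6;
      const bumped = [...x];
      bumped[j] += h;
      const dfdx = (f(bumped) - f(x)) / h;
      // descend on 0.5 * err^2
      x[j] -= lr * err * dfdx;
    }
  }
  return x;
}

// 100 = X + Y starting from the current values X = 10, Y = 20:
// converges to one of the infinitely many solutions, near the start point.
const [x, y] = solveBackwards(xs => xs[0] + xs[1], [10, 20], 100);
```

Which (X, Y) you land on depends on where you start, which is exactly why any preference between solutions belongs in the sheet itself rather than in the solver.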

> What kind of problems can you solve backwardly?

This is the weakness of the project honestly! I made it because I was obsessed with the idea and wanted it to exist, not because I was driven by any use case. You can load some premade examples in the app, but I haven't found any killer use case for it yet. I'm just glad it exists now. You can enter any arbitrary DAG of formulas, update any value, input or output, and everything will update upstream and downstream from your edit and remain valid. That's just extremely satisfying to me.


Have you looked into prolog/datalog? You're dancing around many of the same ideas, including backwards execution, constraint programming, stratification, and finding possible values. Here's a relevant example of someone solving a problem like this in prolog:

https://mike.zwobble.org/2013/11/fun-with-prolog-write-an-al...


> I haven't found any killer use case for it yet

You might dig into an operations research textbook, there are a number of problems solved with linear programming techniques which might make sense for your interface... In fact might be more intuitive for people that way and with commercial potential.


I am not sure if I know what I am talking about or if it counts in this scenario but constraint solvers come to mind. I am mainly familiar with them in a CAD context so I am struggling to think of a use for them in a spreadsheet context. But I think being able to say given these endpoints find me some values that fit could be a very valuable tool.

But like I said I am not sure that I know what I am talking about and I may be confusing backwards calculation with algebraic engines. I would love for algebra solvers to be a first class object in more languages.


I implemented bi-directional solving in a very simple "Proportion Bar" app --- sort of --- one side would calculate at the specified scaling factor (so 100% could do unit conversions), the other would calculate the scaling factor necessary to make the two sides agree.

While the general problem is not always tractable, some of the special cases are pretty important.

Take, for example, backprop in machine learning. The model operates forwards. Then you solve backwards to figure out how to update the terms.
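A toy version of that forward/backward loop, for illustration only (a one-weight model, not real backprop code):

```typescript
// Model: y = w * x, loss L = 0.5 * (y - target)^2.
// The forward pass computes y; the backward pass applies the chain
// rule to get dL/dw and takes one gradient step.
function step(w: number, x: number, target: number, lr: number): number {
  const y = w * x;         // forward
  const dLdy = y - target; // backward: dL/dy
  const dLdw = dLdy * x;   // chain rule: dL/dw = dL/dy * dy/dw
  return w - lr * dLdw;    // update
}

// Repeated steps drive w toward target / x.
let w = 0;
for (let i = 0; i < 200; i++) w = step(w, 2, 6, 0.1);
// w ≈ 3
```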


Speaking from experience, I find budgeting spreadsheets to be a great usecase for this.

I feel you. I don't think I've ever finished reading a sentence that started with "I asked <LLM> and he said..."


I find the consistent anthropomorphization to be grating as well


The "I asked <LLM>" disclosures vary between a) implying the LLM is an expert resource, which is bad, and b) transparently disclosing that an LLM was referenced, which is typically good but more context dependent.

Unfortunately (a) is more common, and the backlash against it has been eroding the community incentive to provide (b).


These are the worst. I'm fine with you dumping your own half formed thoughts into an LLM, getting something reasonably structured out, and then rewriting that in your own voice, elaborating, etc.

But the "This is what ChatGPT said..." stuff feels almost like "Well I put it into a calculator and it said X." We can all trivially do that, so it really doesn't add anything to the conversation. And we never see the prompting, so any mistakes made in the prompting approach are hidden.


My take is orthogonal. Overall, I've become less tolerant of bad-quality token generators of all kinds (including people): tropes, bad reasoning, clunky writing, whatever. But I digress.

If we want a human "on the other end", we've got to get to ground truth. We're fighting a losing battle thinking that text-based forums can survive without some additional identity components.


I work for a political party (not American) and the President is addicted to using ChatGPT for Facebook posts.


The only thing worse is "I asked my AI and he said"

You don't possess an AI, you are using someone's AI


> You don't possess an AI, you are using someone's AI

I'm reasonably sure the instance of Olmo 3.1 running locally on this very machine via ollama/Alpaca is very much in my possession, and not someone else's.


Did you train it? Is it meaningfully different from every other instance of the same model?

No? Then it's not "your" AI, it's an AI that you are using.


> Did you train it?

I didn't hand-wire the transistors and hand-write the software that constitute the computer on which I'm writing this comment, but said computer is rather unambiguously mine and mine alone. Why would the local copy of the LLM on that same computer be any different? Does my coffee mug cease to be mine if someone else happens to have an identical one?


This is usually an "auto-skip" for me as well.


Still preferable to just pasting it without revealing the source. LLMs have become a brain prosthesis for some people which is incredibly sad.


> "I asked <LLM> and he said..."

An alternative I tried was sharing links to my LLM prompts/responses. That failed badly.

I like the parallel with linking to a Google/DuckDuckGo search term which is useful when done judiciously.

Creating a good prompt takes intelligence, just as crafting good search keywords does (+operators).

I felt that the resulting downvotes reflected an antipathy towards LLMs and a perceived lack of taste in using one.

The problem was that the messengers got shot (me and the LLM), even though the message of obscure facts was useful and interesting.

I've now noticed that the links to the published LLM results have rotted. It isn't a permanent record of the prompt or the response. Disclaimer: I avoid using AI, except for smarter search.


Mindshare death is a very large overstatement given the massive amount of legacy C++ out there that will be maintained by poor souls for years to come. But you are right, there used to be a great language hiding within C++, if only the committee had dared to break backwards compat. But even if they did it now it would be too late, and they'd just end up with a worse Rust or Zig.


The biggest problem with C++ is that while everyone agrees there is a great language hiding in it, everyone also has a remarkably different idea of what that great language actually is.


I don't agree there's a great language hiding in C++. My high level objections would be that the type system is garbage and the syntax is terrible, so you'd need a different type system and syntax and that's nothing close to C++ after the changes.

After many years of insisting that "dialects" of C++ are a terrible idea, despite the reality that most C++ users have a specific dialect they use, Bjarne Stroustrup has endorsed essentially the same thing, but as "profiles" to address safety issues. So for people who think there is a "great language" in there: perhaps in C++ 29 or C++ 32 you will be able to find out for yourselves that you're wrong.


There are multiple great languages hiding within it


As proven a few times, it doesn't matter if the committee decides to break something if compiler vendors aren't on board with what is being broken.

There is still this disconnect about how languages under the ISO process actually work in the industry.


The C++ standards committee’s antiquated reliance on compiler “vendors” holds it back. They should adopt maintenance of clang and bless it as the reference compiler.


And you will be the one telling the losers that their compilers and operating systems don't count?

By the way this applies to the C language so beloved on this corner as well.

As it does to COBOL, Fortran, Ada and JS (ECMA is not much different from ISO).


Only on HN do we explain social interactions using network protocol analogies, and not the other way around!


I'm not even sure if this is a sarcastic Dropbox-style comment at this point.


I do all my programming by only making self sustaining full scale universe simulations that contain a copy of myself, so that by the strong anthropic principle the code has already been written.


It is quite powerful that when you see a common keyword in the wrong color you can immediately deduce a syntax error.


Congrats on the launch! This is a very exciting project, because the only decent autodiff implementation in TypeScript was TensorFlow.js, which has been completely abandoned by Google. Everyone uses ONNX Runtime Web for inference, but actually computing gradients in TypeScript was surprisingly absent from the ecosystem since tfjs died.

I will be following this project closely! Best of luck Eric! Do you have plans to keep working on it for some time? Is it a side project, or will you be able to commit to jax-js longer term?


Yes, we are actively working on it! The goal is to be a full ML research library, not just a model inference runtime. You can join the Discord to follow along.


A life hack I'm trying for 2026 is to stop setting an alarm clock in the morning, and set a bedtime alarm instead. Yes, even when I have an important meeting in the morning. This does three things:

- provide a strong incentive to go to bed at the correct time for my body every day, because that's the only way to not oversleep

- enjoy the joy of waking up without an alarm every day

- provide some of this clear thinking time, either at night when I'm sitting in bed not quite super tired yet, or in the morning when I wake up a bit early before everyone else


The most fascinating thing about AI is how in a thread like this one, answers range between 0% and infinity.


To be accurate, it’s between negative gains and infinity.

Personally, I don't trust self-reports for a second anyway. They are bound to be wrong.


To be fair, for my coding at work, AI is “only” like a 2x booster because stuff at work is a lot less greenfield.

