Wow, the Mandelbrot set example really put things into perspective.
Unoptimized code would easily take tens of minutes to render the Mandelbrot set in 640x480x256 on a 486. FractInt (Bert Tyler and the Stone Soup Group's program) was fast, but would still take tens of seconds, if not longer -- my memory is a little hazy on this count.
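For anyone curious what "unoptimized code" actually had to do: the brute-force approach is just the escape-time loop run for every pixel. Here's a rough sketch in C (my own illustration of the naive method, assuming the usual 256-iteration cap and a 640x480 grid -- not FractInt's actual code, which famously leaned on fixed-point integer math and other tricks):

    #include <stdio.h>

    /* Naive escape-time Mandelbrot render: 640x480 grid, up to 256
       iterations per pixel. No periodicity checking, symmetry, or
       boundary tracing -- roughly what "unoptimized code" on a 486
       would have been grinding through. */
    #define WIDTH    640
    #define HEIGHT   480
    #define MAX_ITER 256

    int main(void)
    {
        static unsigned char image[HEIGHT][WIDTH]; /* one palette index per pixel */

        for (int py = 0; py < HEIGHT; py++) {
            for (int px = 0; px < WIDTH; px++) {
                /* map the pixel to the complex plane, roughly [-2.5,1] x [-1.2,1.2] */
                double cr = -2.5 + 3.5 * px / (WIDTH - 1);
                double ci = -1.2 + 2.4 * py / (HEIGHT - 1);
                double zr = 0.0, zi = 0.0;
                int iter = 0;
                /* iterate z = z^2 + c until it escapes or the cap is hit */
                while (zr * zr + zi * zi <= 4.0 && iter < MAX_ITER) {
                    double tmp = zr * zr - zi * zi + cr;
                    zi = 2.0 * zr * zi + ci;
                    zr = tmp;
                    iter++;
                }
                image[py][px] = (unsigned char)(iter & 0xFF);
            }
        }

        printf("done: %d x %d, escape count of centre pixel = %d\n",
               WIDTH, HEIGHT, image[HEIGHT / 2][WIDTH / 2]);
        return 0;
    }

That's about 300,000 pixels, and points inside the set burn the full 256 iterations of floating-point math each, which is why doing it naively on a 486 took minutes.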
Around that time I worked in a shop that had an Amstrad PC2386 as one of our demo machines - the flagship of what was really quite a budget computer range, with a 386DX at 20MHz and a whopping 8MB of RAM (ordered with an upgrade from the base-spec 4MB, but we didn't spring for the full 16MB because that would just have been ridiculous).
Fractint ran blindingly fast on that compared to pretty much everything else we had at the time, and it could display the set on a 640x480 256-colour screen. We kept it round the back and only showed it to our most serious customers, and to our Fractint-loving mates who came round after hours to play with it.
Maybe actually making the interviews less of a hazing ritual would help.
Hell, maybe making today's tech workplace more about getting work done, instead of the series of ritualistic performances the average tech workday has degenerated into, might help too.
Ergo, your conclusion doesn't follow from your initial statements, because interviews and workplaces are both far more broken than most people, even people in the tech industry, would think.
Well, it looks like if companies and startups did their job and hired for proper distributed-systems skills, rather than hazing candidates on the wrong ones, we wouldn't be in this outage mess.
Many companies on Vercel don't think to have a strategy for being resilient to these outages.
I rarely see Google, Ably, and the others that are serious about distributed systems go down.
After reading this post, I fear that LLMs and their ilk will make humans terrible at reading and comprehension and impair their ability to think, much as the advent of a car-first society led many people into a sedentary lifestyle, to the detriment of their health.
Great, but why?
For some things I want to think, but for others I want the information with the subjectivity taken out of it. I think it depends on intention.
For newspapers and other sources with known biases, I think there's value.
As with many things, information rarely exists in a vacuum. In this case, if we don't think with intention about the framing of such an article, then we've already outsourced part of our thinking to the authors who intend to shape it.
You're concerned about the author's bias contaminating your thinking, so the solution is to outsource your thinking to the LLM, because it's impossible for one of them to have any sort of bias at all.
This, instead of actually thinking for yourself and examining your own biases - or those of the people who wrote what you read.
Stripping something down to the objective parts isn't that hard for an LLM, as it's all about language. Sure, they can and do have biases, although in this case it's a relative matter, and undoubtedly the Guardian is well known as left wing (in case somehow that isn't obvious just from looking at this article). So I'd say it's more steps forward than backwards.
It's not either/or. Removing subjective fluff from such language is itself a function of thinking for oneself.
Using an LLM to remove bias doesn't mean you then need to say "OK, and now it's 100% objective".
I recommend Chomsky on the subject, who, for instance, purposely speaks in a monotone so as not to infuse emotion into what he's saying.
Enjoy thinking what somebody else decided for you.
I still don't know what your point is.
Don't use LLMs to remove bias because they also have bias? Is it just nihilism for the sake of nihilism?
If so, I can kind of get that, but then it leads nowhere, right? If something doesn't work perfectly, I'd still use it as long as it's better than the alternative. I see it as a matter of relativism.