mangamadaiyan's comments

... and bear more load as well.


Wow, the Mandelbrot set example really put things into perspective.

Unoptimized code would easily take tens of minutes to render the Mandelbrot in 640x480x256 on a 486. FractInt (written by Bert Tyler and the Stone Soup Group, with contributions from Ken Shirriff) was fast, but would still take tens of seconds, if not longer -- my memory is a little hazy on this count.
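
For anyone curious, the expensive part is the per-pixel escape-time loop. A minimal Python sketch of the idea (not FractInt's actual code; FractInt's famous speed came largely from integer arithmetic and tricks like periodicity checking):

    def mandelbrot_iterations(cx, cy, max_iter=256):
        """Escape-time count for one pixel: iterate z = z*z + c until |z| > 2."""
        zx, zy = 0.0, 0.0
        for i in range(max_iter):
            if zx * zx + zy * zy > 4.0:  # |z|^2 > 4 means the orbit escapes
                return i
            zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        return max_iter  # never escaped: treat the point as inside the set

    # 640x480 pixels, each mapped into roughly [-2.5, 1.0] x [-1.2, 1.2];
    # up to 256 iterations of floating-point math per pixel is why
    # naive code crawled on a 486.
    image = [[mandelbrot_iterations(-2.5 + 3.5 * x / 640, -1.2 + 2.4 * y / 480)
              for x in range(640)]
             for y in range(480)]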


Around that time I worked in a shop that had an Amstrad 2386 as one of our demo machines - the flagship of what was really quite a budget computer range, with a 386DX20 and a whopping 8MB of RAM (ordered with an upgrade from the base spec 4MB, but we didn't spring for the full 16MB because that would just be ridiculous).

Fractint ran blindingly fast on that machine compared to pretty much everything else we had at the time, and it too could display fractals on a 640x480x256-colour screen. We kept it round the back and only showed it to our most serious customers, and to our Fractint-loving mates who came round after hours to play with it.

It still took all night to render a Lyapunov set.
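
There's a reason for that: the Lyapunov computation is far heavier per pixel than the Mandelbrot loop, since each pixel needs a warm-up pass and then hundreds of logistic-map iterations with a logarithm at every step. A rough Python sketch of the per-pixel exponent, assuming the classic "AB" forcing sequence:

    from math import log

    def lyapunov_exponent(a, b, seq="AB", warmup=100, n=400):
        """Lyapunov exponent of x -> r*x*(1-x), with r alternating
        between a and b according to the forcing sequence seq."""
        x = 0.5
        for i in range(warmup):  # discard the transient
            r = a if seq[i % len(seq)] == "A" else b
            x = r * x * (1.0 - x)
        total = 0.0
        for i in range(n):
            r = a if seq[(warmup + i) % len(seq)] == "A" else b
            x = r * x * (1.0 - x)
            # log|f'(x)|, guarded against log(0)
            total += log(max(abs(r * (1.0 - 2.0 * x)), 1e-12))
        return total / n  # negative: stable; positive: chaotic

Each (a, b) pixel is then coloured by the sign and magnitude of this exponent.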


Transputers were a 1980s CPU innovation that didn't live up to their original hype, and they have little to no connection with Transmeta.


Indeed. It is the enigma of success in an industry with no franchise value.


Maybe actually making the interviews less of a hazing ritual would help.

Hell, maybe making today's tech workplace more about getting work done instead of the series of ritualistic performances that the average tech workday has degenerated to might help too.

Ergo, your conclusion doesn't follow from your initial statements, because interviews and workplaces are both far more broken than most people, even people in the tech industry, would think.


Well, it looks like if companies and startups did their job of hiring for real distributed-systems skills, rather than hazing candidates on the wrong ones, we wouldn't be in this outage mess.

Many companies on Vercel don't seem to have a strategy for staying resilient to these outages.

I rarely see Google, Ably, and others who are serious about distributed systems go down.


There was a huuuge GCP outage just a few months back: https://news.ycombinator.com/item?id=44260810


> Many companies on Vercel don't seem to have a strategy for staying resilient to these outages.

But that's Vercel's job, and it looks like they did it pretty well. They rerouted away from the broken region.
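
The general idea of routing around a broken region is simple enough to sketch at the application level too. A minimal Python sketch with an invented list of regional endpoints (illustrative only, not Vercel's actual topology or mechanism):

    import urllib.request
    import urllib.error

    # Hypothetical regional endpoints, tried in order of preference.
    REGIONS = [
        "https://iad1.example-app.com",
        "https://sfo1.example-app.com",
        "https://fra1.example-app.com",
    ]

    def fetch_with_failover(path, timeout=2.0):
        """Try each region in turn; return the first successful response body."""
        last_error = None
        for base in REGIONS:
            try:
                with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                last_error = err  # region unhealthy; try the next one
        raise RuntimeError(f"all regions failed: {last_error}")

In practice a platform does this at the routing layer with health checks, so individual apps never see the broken region.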


After reading this post, I fear that LLMs and their ilk will make humans terrible at reading and comprehension, and will impair their ability to think, much as the advent of a car-first society led many humans into a sedentary lifestyle, to the detriment of their health.


Great, but why? For some things I want to think, but for some things I want the information with the subjectivity taken out of it. I think it depends on intention. For newspapers and other sources with known biases, I think there's value. As with many things, information rarely exists in a vacuum. In this case, if we don't think with intention about the framing of such an article, then we've already outsourced part of our thinking to the authors who intend to shape it.


You're concerned about the author's bias contaminating your thinking, so the solution is to outsource your thinking to the LLM, because it's impossible for one of them to have any sort of bias at all.

This, instead of actually thinking for yourself and examining your own biases, or those of the people who wrote what you read.

Right. Good luck!


If you look for problems, you'll find them.

Stripping something down to its objective parts isn't that hard for an LLM, since it's all about language. Sure, they can and do have biases, but in this case it's a relative matter, and the Guardian is undoubtedly well known as left wing (in case that somehow isn't obvious just from looking at this article). So I'd say it's more steps forward than backward. It's not either/or. Removing subjective fluff from such language is itself part of thinking for oneself. Using an LLM to remove bias doesn't mean you then have to declare "OK, now it's 100% objective". I recommend Chomsky on the subject, who for instance purposely speaks in a monotone so as not to infuse emotion into what he's saying.

Enjoy thinking what somebody else decided for you.


I think you just proved my point - but hey, to each their own.

Good luck, again!


I still don't know what your point is. Don't use LLMs to remove bias because they also have bias? Is it just nihilism for the sake of nihilism? If so, I can kind of get that, but then it leads nowhere, right? If something doesn't work perfectly, I'd still use it if it's better than the alternatives. I see it as a matter of relativism.


Your claim is that LLMs are better at "x" than the alternatives, for some value of x.

My point is that your claim is unsubstantiated.


The article, you mean?


Also, where's it going to _go_?


They've explained why. Now why do you think the economics are not horrible?


I have the same question as my sibling comment. Where did they get the numbers?


Question 1: Is this indeed what is most valued at the moment?

Question 2: Do you think this will ever become valuable?

