
I can't believe anyone would say this, it is basically the opposite of reality.

CPUs and GPUs have come a long way in two decades. Even CPUs are incredibly fast, but between memory allocations, memory layout, multi-threading and just weeding out terrible algorithms, most software can probably be sped up by 100x.
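To make the "weeding out terrible algorithms" point concrete, here is an illustrative sketch (the function names are mine, not from the thread): the same deduplication task written with an O(n²) scan and with an O(n) hash set. On large inputs the difference alone can reach the orders of magnitude mentioned above.

```python
def dedupe_quadratic(items):
    # O(n^2): list membership (`x not in seen`) scans the whole
    # list for every element.
    seen = []
    for x in items:
        if x not in seen:
            seen.append(x)
    return seen

def dedupe_linear(items):
    # O(n): a hash set makes each membership test O(1) on average,
    # while a separate list preserves the input order.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

Both return the same result; only the second stays fast as the input grows.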

> The high-performance software industry is low-key carrying the hardware architecture industry on its back.

That doesn't even make sense. Do you think GPUs aren't more powerful than 20 years ago, and that the difference between Quake 3 and The Order: 1886 is software?

> On my Nokia 3.1, I had to wait for internet syncing. The difference between slow and fast software far outstrips that of slow and fast hardware.

Right - doesn't this contradict everything you just said?



These are steady, predictable, iterative improvements. We have nothing groundbreaking like in the 1970s-1990s. Of course I'm not saying modern hardware is slow, but it's not progressing like it used to.


That is both not true and not what you said at first. Transistor density has continued to rise, but there is only so much you can do when people mostly run JavaScript on a single thread.

There were a lot more breakthroughs in the early days of electronic components, too, because everything was new.

CPUs are a world away from where they were two decades ago, but people don't notice because typing into a Facebook window still lags. That is an incredibly superficial, nonsensical way to judge what has really happened at the hardware level.


If you only have a single core (common on battery-powered devices), then you should only run a single thread. Excess parallelism doesn't make things go faster and uses more memory. The important thing is avoiding unnecessary waits on non-CPU resources like I/O.
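A minimal sketch of that last point, assuming the waits are I/O-like: with `asyncio`, a single thread can overlap several waits instead of serving them back to back. The `fetch` coroutine here is a hypothetical stand-in for a network call.

```python
import asyncio
import time

async def fetch(delay):
    # Stand-in for a network request; asyncio.sleep yields the one
    # thread back to the event loop instead of blocking it.
    await asyncio.sleep(delay)
    return delay

async def main():
    # Three 0.1 s "requests" overlap on a single thread: total wall
    # time is ~0.1 s rather than ~0.3 s, with no extra threads.
    start = time.perf_counter()
    results = await asyncio.gather(fetch(0.1), fetch(0.1), fetch(0.1))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
```

Nothing here adds parallelism; it only removes the unnecessary sequential waiting.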


This reads like a reply to a different comment. The person above was saying there haven't been any advancements in CPUs or GPUs, which is obviously not true. No one said anything about battery-powered devices, and even those have had multiple cores for many years now - a result of transistors still shrinking.


They're multicore devices but often only one core is free. Depending on how important you are, the rest are busy doing other things and you're not going to get scheduled.


This thread was someone saying there haven't been any advances in CPUs and GPUs in general.

You seem to be in a completely different, goalpost-shifting context: some mythical mobile device that has only one core, but actually has multiple cores, but all of them are pegged at 100% all the time. This doesn't make sense in any context, since it isn't even true, and it has nothing to do with what you are replying to.



