How might software development have unfolded if CPU speeds were 20x slower?
118 points by EvanWard97 on April 9, 2024 | 154 comments
I was pondering how internet latency seems to be just barely sufficient for a decent fast-paced online multiplayer gaming experience. If human cognition were, say, 20x faster relative to the speed of light, we'd be limited to playing many games only with players from the same city. More significantly, single-threaded compute performance relative to human cognition would effectively be limited to the equivalent of 300 MHz (6 GHz / 20), which I suspect makes it a challenge to run even barebones versions of many modern games.

This led me to wondering how software development would have progressed if CPU clock speeds were effectively 20x slower.

Might the overall greater pressure for performance have kept us writing lower-level code with more bugs while shipping fewer features? Or could it actually be that having all the free compute to throw around has comparatively gotten us into trouble, because we've been able to just rapidly prototype and eschew more formal methods and professionalization?




I feel like every time CPU speeds double, someone comes up with a Web UI framework that has twice as much indirection. With 20x slower compute, we might not have UIs that fire off an event and maybe trigger an asynchronous network request every time you type a character in a box, for example.

Windows 95 could do a decently responsive desktop UI on an 80386. Coding was a lot less elegant in one way - C code that returns a HWND and all that - but with the number of levels of indirection and abstraction these days, we've made some things easier at the cost of making other things more obfuscated.


I saw a talk about tigerbeetle the other day - which is a small, fast database for handling financial transactions that apparently runs orders of magnitude faster than Postgres. The database binary has no dependencies and compiles to 500 KB. Its authors were joking they could distribute it on floppy disks if they wanted.

It’s written in Zig, not C. But that style of programming is still available to us if we want it. Even in more modern languages.

Honestly I’m really tempted to try to throw together a 90s style fantasy desktop environment and widget library and make some apps for it. There’s something about that era of computing that feels great.


Keep in mind, the thing that matters most for tigerbeetle’s speed is that they are domain specific. They know exactly what the data looks like upfront. They don’t have to be general purpose.

The style of programming does work for general purpose computing, but their requirements enable a significant % of “orders of magnitude faster than postgres”.


While your take makes sense, I feel like even general purpose programming frameworks should be orders of magnitude faster for what they are doing.


Definitely! It's just your benefits aren't going to be "all" the orders of magnitude speedups, just some of them. And I agree, that's nothing to shy away from.


Yeah; and we have plenty of other general purpose databases which can run laps around postgres and sqlite.

There's plenty of optimisations postgres and friends leave on the table. Like, the postgres client driver could precompile queries and do query planning on behalf of the database. It would introduce some complexity, because the database would need to send information about the schema and layout at startup (and any time it changed). But that would remove a lot of unnecessary, repeated makework from the database process itself. Application servers are much easier to scale than database servers. And application servers could much more easily cache the query plan.
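(For context, the closest thing that exists today is server-side prepared statements, where the parse/plan cost is amortized across executions but the planner still runs inside the database process - the proposal above would push that planning into the client driver. A minimal libpq sketch of the current baseline; the connection string, table, and query are made-up placeholders and error handling is mostly omitted:)

    #include <libpq-fe.h>
    #include <stdio.h>

    int main(void) {
        PGconn *conn = PQconnectdb("dbname=app");  // hypothetical connection string
        if (PQstatus(conn) != CONNECTION_OK) return 1;

        // Parsed and planned once on the server...
        PQclear(PQprepare(conn, "get_user",
                          "SELECT name FROM users WHERE id = $1", 1, NULL));

        // ...then executed repeatedly without re-planning.
        const char *params[1] = { "42" };
        PGresult *res = PQexecPrepared(conn, "get_user", 1, params,
                                       NULL, NULL, 0);
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
            printf("%s\n", PQgetvalue(res, 0, 0));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }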

Weirdly, I think part of the problem is economic. I think there's a lot of very smart people around with the skills to dramatically improve the performance of databases like postgresql. Across the whole economy, it would be way cheaper to pay them to optimize the code than needing companies everywhere to rent and run additional database servers. But it might not be worth it for any individual company to fund the work themselves. Without coordination (ie, opensource funding), there's not enough money to hire performance engineers, and the work just doesn't happen.


Would odbc/jdbc protocols have to change to support client-planned queries?


Back when Java had just introduced Swing to supplement/complement AWT, I remember you had a set of components (that you could even style in different ways with themes), a fairly object-oriented approach, and with the open-source MiG layout manager (that still exists today) a powerful way of laying out forms with constraints to adapt to changing screen/window/font sizes. I feel like UI framework progress from Windows 3.1 to Swing+MiG was much greater than anything I've seen since.


I feel that way about the old Interface Builder for macos, back in the Xcode 3.x days. VB6 was pretty good too.


Yeah, the interface builders back then are also unsurpassed to this day. Much as I have my issues with VB as a language, the way you could build a GUI with a GUI was amazing. Better still, it worked back on school computers that of course didn't have any "programming" tools, but did come with MS office and the full VB "experience" installed for some reason.


Visual Studio's wpf editor and Avalonia are beyond vb6 interface builders!


Just curious, what is preventing people from creating something similar to VB6 and Delphi in modern days? Different underlying frameworks? But I'm sure VB6 still sort of works on Windows 10 (VBA).


The big problem is that nobody wants yet another platform-specific UI framework. Even Microsoft is using electron (or something like it) to build Microsoft Teams.

If you build a cross-platform UI framework, it'll probably end up looking a bit ugly and custom on every platform (eg Java Swing). Making a cross-platform UI toolkit that looks native everywhere is an insane amount of work, because different platforms have very different UI toolkits, with different built-in components and different platform conventions. This problem becomes 10x harder if you want to make it work on mobile as well.

Some people try anyway - and bless their cotton socks. But electron (and arguably Qt) are the closest we've got. And even then, most electron apps seem to be full of custom components.


Thanks, that makes sense. Where can I find a Windows-specific (or Linux-specific) one? I actually don't care about cross-platform. Desktop is dominated by Windows anyway.

QT is not bad and I used it for a small project. But it follows the native look, so I'm not sure how to go back to a native Win2000 style.


That's not really the cause. As you say, Electron apps don't look native either. The issues are elsewhere.


In theory, nothing will stop you. In practice, nothing stopped GAMBAS (https://gambas.sourceforge.net/en/main.html), which can do a lot of what VB did.

Problem is, the multitude of platforms, screen sizes, capability sets etc. that we must support in practice make a very compelling case for a unified runtime, which ended up being the current web tech turducken clusterfuck we have today.

If the CPUs were slower, however... one can only wish.


Depends what you mean by "similar to VB6/Delphi".

The VB6/Delphi era (called at the time RAD for Rapid Application Development) was oriented around proprietary UI and component frameworks. They were planned out a long way in advance and solved a lot of problems up front, but, they were tied to Windows and then you had to buy the dev tools on top.

These days people usually want to write web apps if they aren't doing mobile. The web as a UI framework is basically evolved rather than designed. The features you need to support RAD don't exist. Moreover the web UI community would view them as bad practice!

RAD tools were typified by a few things:

1. A standard way to define components that is statically reflectable (so you can populate property and event editors), and get documentation on the fly, etc.

2. MVC: a standard way to separate code from layout.

3. Layout management that's integrated with the component hierarchy.

4. A standard toolbox of high quality, high effort components.

5. A marketplace for components (which means they need to be distributable in binary form).

The web has none of this and isn't going to any time soon. For example, there's no agreement on how components work. React has one or more different solutions, Web Components is a different one, Vue yet another. MVC has been abandoned: React doesn't know anything about that. Code and layout are intermingled together. JavaScript is a dynamically typed, largely reflection-free language. You can't automatically work out what the props are for a React component short of parsing the source code. You can't easily distribute proprietary components either because there's no way to enforce licensing (so a lot of users wouldn't pay, in reality).

And then there's the components. Browsers don't provide good enough components out of the box, unlike Windows-oriented RAD tools, so devs have to invent their own. Then React makes things worse by conflating theming with the component library. The result is that every web app has its own unique set of widgets with its own unique set of limitations and bugs, very few of which ever reach a high quality level. Widget development is expensive, which is why RAD tooling centralized it and supported a marketplace, so devs could amortize out the cost of writing widgets across many projects. The whole React/Jetpack Compose approach basically spurns all of that and says if you want to change the way your app looks you get to redevelop the widgets from scratch. And everyone does want that, so there are floods of duplicated work that's open sourced and then abandoned, but nothing ever gets critical mass.


Thanks. It looks complicated. I'm learning Windows desktop development and will take a look at QT first. I used it for a small project years ago and the experience was OK.

> 1. A standard way to define components that is statically reflectable (so you can populate property and event editors), and get documentation on the fly, etc.

This was what I really loved about Visual Basic back in the day. Everything is within clicks -- even with the legacy IDE it still feels slick.


> Honestly I’m really tempted to try to throw together a 90s style fantasy desktop environment and widget library and make some apps for it. There’s something about that era of computing that feels great.

SerenityOS might be exactly what you're looking for. Join the community and make some apps, it's great (both the community and the OS/dev experience)!


Is there any way to run something like "serenity's WM and file manager, but on my favorite linux distro?" Or is it all more tightly coupled than that?


It's a Unix like OS with great separation of concerns. There probably is a way - their browser runs on Linux - but I don't think anyone has done it yet. Could be a fun project to look into!


In 2007 I put together a bunch of servers running openSUSE 10.2. They run some specialized application software that's version-dependent and is unlikely to ever be upgraded. The working network is not connected to the internet, so there's no reason to upgrade the OS.

They're full KDE installs, and openSUSE 10.2 wasn't a lightweight distribution in its day. But now, even running in VMs, UI and network response are noticeably snappy, and are sparing of resources by modern standards.


Do you think you'd write your 90s style widget set and desktop environment in Rust, or do you think Rust itself tends toward bloat (in non-embedded applications)? I know you use Rust in other projects, which is why I'm asking about that language specifically.


Please do. I'm also wondering why no one created something similar to VB6 and Delphi.


  if [ "$os" = "Darwin" ]; then
    arch="universal"
    os="macos"
  elif [ "$os" = "Linux" ]; then
    os="linux"
  else
    echo "Unsupported OS."
    exit 1
  fi
This is nothing to emulate.


Why not? Does everything really need to be built in a cross platform way? I feel like that’s how we got into this mess.


Yes, bugs are often found by running the same code on similar but not identical platforms. We've had this conversation multiple times.

https://news.ycombinator.com/item?id=28978086

What mess are we in because of POSIX?


I’m talking about the mess of electron and friends. I’ve spoken to engineers who work on it. From their point of view, it’s the only sane way to build software that works across multiple operating systems.

The nice thing about windows and macOS back in the day was that the programs we ran were all written with the same native UI toolkit. All the controls matched between applications. Applications were small and efficient - binary sizes and memory usage were in the megabyte range. Any program written like that today starts up instantly, and is incredibly responsive.

But last I checked, Hello world in electron is ginormous. It uses about 100mb of memory. It takes time to start up. Vscode and Spotify are great but they don’t look or feel native. It is legitimately great that people now ship apps for Linux. But we’ve lost platform cohesion in the trade.

So, so what if tigerbeetle is written for Linux? I’m ok with the developers choosing not to pay the cross platform tax.


That's just the codepath from bootstrap.sh that downloads the binaries. I don't know if it will compile and run on, say, FreeBSD. It's quite possible no one tried.


I did, but there are literal compile-time checks for various platforms.

    pub fn monotonic(self: *Self) u64 {
        const m = blk: {
            if (is_windows) break :blk monotonic_windows();
            if (is_darwin) break :blk monotonic_darwin();
            if (is_linux) break :blk monotonic_linux();
            @compileError("unsupported OS");
        };


That genuinely seems platform-specific. Should be easy enough to write a patch to add support for $other_system.


According to the talk I watched, tigerbeetle makes heavy use of io_uring on linux - which isn't part of POSIX.

Adding FreeBSD support should be pretty easy. If it supports darwin, it'll probably already have an implementation built on top of kqueue (which macOS shares with FreeBSD). It's probably just a case of wiring it up to use kqueue when built for FreeBSD.


Yes, that was my impression as well when I looked at it yesterday. The monotonic_linux() bit that's quoted is platform-specific because it uses CLOCK_BOOTTIME with clock_gettime(), but that seems supported on BSD systems as well. It's probably just that no one tried to run it, and no one spent any effort on it. I can't find a single mention of "BSD" in the issue tracker.


> I can't find a single mention of "BSD" in the issue tracker.

Sounds like this is something you can help with. If you care, open an issue.


> Honestly I’m really tempted to try to throw together a 90s style fantasy desktop environment and widget library and make some apps for it. There’s something about that era of computing that feels great.

If you're interested in that, you should definitely check out Serenity OS!


VB6 had IntelliSense. Sure, it was much less powerful than today's, but you had a project with a dozen classes, and when you typed the name of an instance variable and then hit ".", it would immediately show you a list of accessible members of the corresponding class and update the preselected member as you started typing. This was absolutely immediate, on a 133 MHz machine with 32 MB RAM. Even on 66 MHz it was still usable.

I remember switching to VB.NET on a 600mhz machine and the IDE was a sluggish piece of garbage.


Every IDE is a sluggish piece of garbage after Delphi 7.


I'm curious if you've used Lazarus and what you think about it. I have not used enough similar IDEs to compare it (or maybe just forgotten how it was) and didn't have access to Borland tools back in the day. One crazy thing I remember about Lazarus is how it can compile the IDE quite fast (given its codebase size) when installing plug-ins.


Unfortunately no, had no use case for it in an industry dominated by Python and C++. Wouldn't be surprised if Lazarus is very fast and pleasant to use, and I think the problem with more popular IDEs lies in underlying technology: C++ is traditionally very slow to compile and analyze, and many popular languages are too dynamic for reliable autocompletion to be feasible. When I briefly worked with C# and Java in 2000s the IDEs were very much fully functional (compared to C++ at the time), but sluggish enough to be unpleasant without a powerful PC which I did not have — and Delphi ran fast on cheap hardware.


I'm used to newer computers, so Visual Studio 2008 was my favorite Visual Studio. With VS 2010 and later, Microsoft decided to scrap the existing Win32 based UI and rewrite it in WPF, which made it take 20 seconds to start instead of less than a second. Even after it's started, it's far less responsive than 2008. If only Microsoft offered a "Visual Studio 2008 but with modern standards support" product...


It’s an evolving economics problem. Very few (I mean super extremely few) people really know how the web UI works without a framework and there is no incentive to know this because the more skilled a person becomes in that regard the less employable they become.

As a former JavaScript developer of 15 years, I can see that things are changing in this regard with hiring slowing down, but there remains close to no incentive to be good at any of this. Even if you are good at this and defy all extreme expectations by finding a job that values bleeding edge performance over hiring, you won’t be paid any more for the rarity of your talent/experience, so why bother?


Doesn't it make you the go-to person when something goes wrong, and someone has to debug it?

My own experience from a time at a SpringBoot/iBATIS/Hibernate etc. shop (so server-side not UI side) is that it's all well and good running on "rocket fuel" as the consultants say, until something goes wrong. Then at some point you need the person who understands HTTP and SQL and other antique stuff to diagnose the problem, even if you can fix it at the abstraction layer.

One of the problems we had one day, was that a particular script "just wasn't working" and it turned out the file was being sent correctly, but with the content-type set to "text/html" by mistake because someone's "clever trick" interfered with SpringBoot's "magic" so the content-type detection wasn't running the way you'd want. Easy to fix, but you need to know what's going on in the first place.

Over time I also developed a feel for code smells of the form `return SomeItemDAO.fetchAll().size()` which runs just fine on the test server with a few thousand items, then you deploy it to prod where there's tens of millions and the database is on a different machine. It turns out SELECT COUNT is a thing!


> Doesn't it make you the go-to person when something goes wrong, and someone has to debug it?

Assuming anyone cares enough. Alternative approaches include ignoring the problem, working around it, or changing business requirements so that the feature isn't available anymore.

There are a lot of people in the industry that just throw up their hands at roadblocks they can't solve and say something is not technically possible. These people include "tech leads" or other high-level software roles at companies.


> These people include "tech leads"

I have known a tech lead, himself a Java developer, who could not compile Java code to a class/jar file that could be added to a Docker layer in a pipeline.


There are people who do care about that, but it’s astonishingly rare. Just consider it abandoned knowledge. Most of the people who continue to care about these things do so in open source contributions far away from their employment.

As proof: occasionally somebody will post something on HN about performance (asking for guidance, showing off a refactor, claiming performance is a critical must, whatever). I show them how to achieve load times of less than 100ms on an OS-like GUI with full state restoration, and the feedback is always the same: either they can’t hire anybody who is capable of following that guidance, or the juice isn’t worth the squeeze. Nothing in the guidance is challenging. It’s just not framework bullshit.


> they can’t hire anybody who is capable of following that guidance

This is the main constraint when designing some solutions. Oftentimes I know the best path (or that something deemed impossible is completely doable), but it might be arcane knowledge and I would be the only one to know how it would work and be responsible to keep it working.

Bus factor of one strikes this right away, unless there are no alternatives.


Again, it’s an economics problem. More specifically what you are addressing is not an availability problem, as everyone most commonly believes, but a selection problem.

Seriously, think about this logically. The compile target of the browser is the DOM. What is it these developers are so deathly afraid of: the DOM. Yes, it is raw emotional fear processed in the amygdala, qualified with poorly formed bullshit excuses. So, what about this induces fear? It’s not the technical challenge, because it’s not that challenging and easily taught to non-developers. I know this from experience. That’s how I know it’s a selection problem.


It’s kinda like adding lanes on the freeway: for a time traffic is nice but people get used to it and then more people start driving and it becomes just as congested as it was before.



Yes, but Jevons paradox only says usage goes up. That usage could have gone to things that help with productivity of the end user rather than going to sluggish frameworks.


Good observation. I believe the name for that is "induced demand".


No, that's not what induced demand is.

Induced demand is the idea that if you upgrade things, people will use them more (duh). (And they will need upgrading again shortly.)

The post you're replying to is just about how the benefits of improvements fall off with time.


I do wonder about this often. Are these new UI frameworks iterating upwards towards some unknown featureset archetype, where if given another 10-20 years, the designers will say "Okay, we've reached diminishing feature returns, let's pack it up and optimize"

Or, are we simply re-inventing the wheel each time, where the set of features over the last few decades really hasn't changed that much, or just cycles through featureset phases?


I would say, to go with the metaphor, that none of our wheels are close to round yet, and every now and again someone invents a new wheel that has a different kind of bump.

Concretely, "shadow DOM", reactive (web) components and js frameworks etc. etc. are all ways of trying to get a set of rich UI components (like we're used to from desktop applications) into an environment that was originally built for static text pages, and has been expanded by patchwork ever since. DOM updates are slow and cause flickering because the original model was you submit a form and the entire page reloads; updating part of the page in response to an asynchronous request or a local event wasn't in the original design space, and every solution we've tacked on so far sucks in a slightly different way. Not helped by the complexity of being able to dynamically change the layout and size of every single element - CSS is incredibly powerful, but setting a button to have a 1px border only on hover for example has a tendency to make other page elements "jiggle" in ways you don't want.


> Or, are we simply re-inventing the wheel each time, where the set of features over the last few decades really hasn't changed that much, or just cycles through featureset phases?

The problem space is much larger now than it used to be. 20 years ago, you didn't care about things like responsive design, accessibility, or fractional scaling; even internationalization was basic/non-existent. There's a long tail of features (often extremely complex, like accessibility) which are not immediately obvious to an English speaker with normal vision staring at a standard-sized display.


Microsoft specifically cared about all that, way more than 20 years ago - Windows, for all the criticism it takes, from early days had internationalisation (they wanted to sell across the world after all), accessibility features, ways to adapt to different screen and DPI sizes etc. I think Explorer windows since at least Win 98 did something responsive-ish in that various side bars/panels would disappear if you made the window narrow enough. Of course, the icons would rearrange themselves into fewer or more columns as you resized the window too.


I would expect some overhead, but none of the features you describe seem to justify the performance hit apps have taken.


Performance is a feature just like any other. Customers are looking for a good combination of features, including performance. At some point, the feature (in this case performance) is good enough, and it doesn't make economical sense to improve it.


> Or, are we simply re-inventing the wheel each time,

This is how it has been since I've been in the industry (2005). It's part of why I got out of web development; it felt like a whole lot of relearning how to do the same thing every few years. At first, there were incremental feature gains, but after a while, it felt like newer frameworks or approaches were a functional step back (e.g., NoSQL; the fact that it eventually came to stand for "not only SQL" tells all).


"It doesn't matter how much faster you make the hardware, the software guys will piss it away and then some more." - Unknown

Edit: Thanks to the other comments, I can see it's a crude re-stating of Wirth's Law [0]

0. https://en.wikipedia.org/wiki/Wirth%27s_law


Jevons paradox applied to compute

https://en.m.wikipedia.org/wiki/Jevons_paradox


And it's called Wirth's law: "What Andy [Grove] gives, Bill [Gates] takes away".

https://en.wikipedia.org/wiki/Wirth%27s_law


I write C++ for high-performance Windows desktop applications that are used on a wide variety of form factors. This means that I still optimize a lot of things, such as what happens when a user edits a property in an edit box. How can that edit be minimized? How do I make sure that commands operate in less than a second? How can we hide latency when a long execution time can't be avoided? 99% of the time, optimizations are about doing less, not doing something faster or with lower-level code. You'll never write faster code than code that doesn't run.

I think the GPU would do a lot more work in most applications than it does today. If a process needs to be super fast, when applicable, I write a compute shader. I've written ridiculous compute shaders that do ridiculous things. They are stupidly fast. One time I reduced something from a 15 minute execution time to running hundreds of times per second. And I didn't even do that good of a job with the shader code.


Sidenote: MSVC had an optimization option for Windows 98, I think it was /OPT:WIN98 (equivalent to /FILEALIGN:4096). It sets the "file alignment" of sections in the portable executable (PE) file to 4096 bytes (0x1000) = 1 memory page -> noticeably more efficient because it's a direct copy operation. It's sacrificing space (padding empty space with thousands of zeros) for time (one copy operation instead of many to shuffle data into memory pages).

(The file alignment still defaults to a 512-byte (0x200) sector size which means the inefficiency is there today even though you may not notice it in isolation, but the "sector"/buffer size has been at least 4096 bytes since 2011. [2])

> The /FILEALIGN option can be used to make disk utilization more efficient, or to make page loads from disk faster. [Assuming it matches the page size = 4096 bytes.] [1]

> All hard drive manufacturers committed to shipping new hard drive platforms for desktop and notebook products with the Advanced Format sector formatting [4096-byte or greater] by January 2011. [2]

[1] https://learn.microsoft.com/en-us/cpp/build/reference/fileal...

[2] https://en.wikipedia.org/wiki/Advanced_Format
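For anyone who wants to try it, the alignment can be passed through the compiler driver or given to the linker directly; these are illustrative command lines, not any specific project's build flags:

    cl /O2 main.cpp /link /FILEALIGN:4096
    link main.obj /FILEALIGN:4096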


People like you remind me that I'm still an amateur at all this :)


Tangential but funny story from some years ago: I did the same on a virtual reality app (Qt, Oculus SDK), so we're talking a multi-threaded renderer, tons of background activity; we even spawned a mini helper server to process tasks and did custom hacking (registry, window flags) to override Windows features to make the app snappy. I distinctly remember spending weeks on startup time to get the app to consistently drop the user into a session within 250-500 ms, even from cold launch, which involved something like a mini page file to capture state and other things - only for my boss at the time to come and say the app was "too fast": users couldn't see the splash screen, so we added a random(1.f, 3.f) second sleep...


LOL, you just can't make all the people happy all of the time, right? I have done similar things with timers, for the same reasons.


> One time I reduced something from a 15 minute execution time to running hundreds of times per second

That's too good a story not to have just a little more detail. Are you willing to share more?


Sure. It was a fairly complicated image processing algorithm, but not necessarily something that you would want to go through a lot of trouble to implement on the GPU. At least not until you're desperate. And I should add, the results are pretty boring. It doesn't even generate anything interesting.

I read the paper that described the algorithm and implemented code on the CPU, thinking, quite stupidly, that it would be fast enough. Not fast, but fast enough. Nope. Performance was utterly horrible on my tiny 128x128 pixel test case. The hoped-for use cases, data sets of 4096x4096 or 10000x10000 were hopeless.

Performance was bad for a few key reasons: the original data was floating point, and it went through several complicated transformations before being quantized to RGBA. The transforms meant that the loops were like two lines total, with an ~800 line inner loop, plus quantization of course (which could not be done until you had the final results). In GLSL there are functions to do all the transformations, and most of them are hyper-optimized, or even have dedicated silicon in many cases. FMA, for example.

So I wrote some infra to make it possible to use a compute shader to do it. And I use the term 'infra' quite loosely. I configured our application to link to OpenGL and then added support for compute shaders. After a few days of pure hell, I was able to upload a texture, modify the memory with a compute shader, and then download the result. The whole notion of configuring workgroups and local groups was like having my pants set on fire. Especially for someone who had never worked on a GPU before. But OpenGL, it's just a simple C API, right? What could go wrong? There's all these helpful enumerations so the functions will be easy to call. And pixel formats, I know what those are. Color formats? Oh this won't be hard.

But once everything was working, it only took a few more days to make the compute shader work. The hardest part was reconfiguring my brain to stop thinking about the algorithm in terms of traversing the image in a double nested for loop - which is what you would do on the CPU. Actually, the first time I wrote it, that's what I did, in the shader. Yes, I actually did that. And it wasn't fast at all. Oh man, it felt like I was fucked.

But in the end, it could process the 4096x4096 use case at 75 FPS, and even better, when I learned about array textures, I found that it could do even more work in parallel. That's how I got it from 15 minutes to hundreds of frames per second.
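For anyone curious what the workgroup setup actually looks like, here's a bare-bones sketch of dispatching a GLSL compute shader over an image from C++. This is not the code from the story above; it assumes an OpenGL 4.3+ context is already current, a loader such as GLAD provides the declarations, and it skips all error checking. The kernel itself is a placeholder transform.

    #include <glad/glad.h>  // or whichever OpenGL 4.3 loader you use

    // One invocation per pixel, grouped into 16x16 workgroups.
    static const char* kComputeSrc = R"(
        #version 430
        layout(local_size_x = 16, local_size_y = 16) in;
        layout(rgba32f, binding = 0) uniform image2D img;
        void main() {
            ivec2 p = ivec2(gl_GlobalInvocationID.xy);
            vec4 v = imageLoad(img, p);
            imageStore(img, p, sqrt(v));  // stand-in for the real transform
        }
    )";

    void run_compute(GLuint tex, int w, int h) {
        GLuint cs = glCreateShader(GL_COMPUTE_SHADER);
        glShaderSource(cs, 1, &kComputeSrc, nullptr);
        glCompileShader(cs);  // check the compile log in real code
        GLuint prog = glCreateProgram();
        glAttachShader(prog, cs);
        glLinkProgram(prog);

        glUseProgram(prog);
        // Bind the RGBA32F texture to image unit 0 for read/write access.
        glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
        // One workgroup per 16x16 tile of the image.
        glDispatchCompute((w + 15) / 16, (h + 15) / 16, 1);
        // Make the image writes visible before reading the texture back.
        glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
    }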


Do you happen to have any pointers or recommendations regarding C++ for desktop applications? Especially towards state-management and user-interaction?

I am primarily doing game development and HPC; I am decently familiar with C++, but desktop UI has been a pain point for me so far. Most GUI tools I write in C++ are using ImGui, or they are written in C#.


Desktop UI is painful. It doesn't help that Microsoft seems to have quite a few competing UI frameworks and technologies these days.

1. What is your goal? Do you need to run on Windows and Linux? QT isn't bad, although I personally think the UI looks a little weird. It is definitely highly opinionated and parts of it are quite strange IMHO. There's probably lots of jobs writing with QT, which might be a nice side bonus from learning the framework.

2. Do you need a totally custom UI? If so, I would stay with ImGui. You might find Windows UI development extremely frustrating, especially that you have to owner draw a lot of stuff to get a really custom UI. That can be an extremely difficult and terrible experience, and I don't recommend it to anyone who isn't already an expert at it.

3. State management? You mean like the state of the UI? Is a button pressed? Could you be more specific?

4. User interaction? This is such a broad area. Could you be more specific? Like filtering mouse and keyboard messages? Windows has several APIs for this.

EDITED TO ADD: In my experience, which is significant, either use a GUI framework and operate within its capabilities, or draw everything yourself. In Windows, your life will become exceedingly difficult if you use a framework when you want to do a lot of custom components, or if you want a lot of custom look/feel. If it were me, I would draw everything myself. People don't need the consistency of the Windows UI anymore, provided you stick with common and well-known metaphors like text boxes and property editors, etc.


> I would draw everything myself.

I wouldn't be too fast to recommend this. I have quite a lot of experience with Qt[1], and I managed to get a good look and feel across different operating systems. Yes, you'll need to customize Qt Quick components yourself. But that's easy. Also, Qt is improving its support for native components; they now support native dialogs and file pickers in Qt Quick[2]. Another important thing is that you can always extend your app using open source libraries - for example, qwindowkit allows you to create native frameless windows.[3]

I highly recommend Qt. And related to this post, you can write some extremely responsive and fast applications with it.

[1] https://www.get-plume.com/

[2] https://doc.qt.io/qt-6/qtlabsplatform-index.html

[3] https://github.com/stdware/qwindowkit


You're right. QT is a good choice if you are willing to work within the bounds of the framework. QT is definitely not a good choice if you want to make a lot of customizations. If you want to make an app like Spotify, don't use QT.


Spotify just uses an embedded Chromium (CEF, so essentially the same thing as electron); if you want your app to look like an electron app, use that. I think it's hideous.


Why exactly? With QML it’s incredibly easy to create custom, animated and complex UIs.

Also, if I remember correctly there was a time when Spotify was written in Qt.


When you write your own UI, you get used to quickly + easily being able to create custom elements that do/behave exactly as you want and even iterate on those elements to get the best user experience.

Let's say I want to control pan/tilt/zoom/focus/aperture/etc. of a remote camera. If I ask, let's say, an expert in UI framework Z to do it, it will take them 10x longer to create a very painful experience using standard elements with poor input latency, so someone actually trying to set up a camera over/undershoots everything, but it technically "ticks every box". The path to create a better experience just isn't really there and it is difficult to undo/change all the boilerplate/structure, so version 1 isn't improved for years because it took so long to create the first iteration.


Sounds like a very specific example. But I'm still unconvinced - what will make this particular UI slow using Qt? The camera view? My Qt note-taking app is faster and more responsive than native apps like Apple Notes and best-in-class Bike Outliner. Both in loading speed (4x) and resizing (with word-wrapping) of a large text file (War and Peace).


Maybe the user wants to do real time exposure/color correction, so you want to minimise the number of frames from the moment of the input to seeing the output. To do it properly the user also would want to see analysis graphs on the screen on the same frame that's being displayed? And do this for 10 cameras at once?

Maybe your definition of "fast" for a large text file is War and Peace, and mine is multiple 1/10/100 GB text files that you want to search/interact with in realtime at a keystroke level.

I've probably written 100+ completely different "very specific examples" in very different industries, because that's where you can create much better experiences.

Generally your expectations are based around what you get as standard from the library, but if you want to get a much better experience then it immediately becomes a lot more difficult.


I believe you have proved my point that you're speaking of very niche examples. Even Sublime Text won't load a 100 GB file instantly on a normal machine. And I consider it a very well-made app. While of course there might be apps that will load such files instantly, they are highly optimized for such a task. My point is that Qt is more than enough to replace all those Electron and other web-based apps, while performing as well as or even better than native apps.

At the end of the day, Qt can also be just a wrapper for your highly optimized engine - for example, NotepadNext[1] is using the very performant Scintilla engine while its UI is written in Qt. From my (unscientific) tests, it's even vastly faster than Sublime Text.

BTW, I'm not saying that rendering and creating your own UI is always a bad idea. Many people do it because it's fun and challenging, or to push the boundaries. That's what Vjekoslav Krajačić is doing with Disk Voyager - writing a file explorer in C from scratch[2][3]. But for many people, that's too much. I believe Qt C++ with QML is the best combo for most people, for most applications.

[1] https://github.com/dail8859/NotepadNext

[2] https://diskvoyager.com/

[3] https://www.reddit.com/r/SideProject/comments/103b9fy/disk_v...


If everything you do stays in the rails of QT you're going to be fine. But you try to do something simple like load a 2GB file and everything starts to fall to pieces, then you're going to assume the people that wrote this are super clever and that a 2GB file is too complicated, too niche, too hard of a problem, it needs to be "highly optimised", etc.

The reality is QT/whatever program/framework is doing 100 things you don't care about when loading/rendering a file. If we only care about 1 thing, we can do that much better because we don't care about the 100 other things; the naïve code we wrote in 5 minutes outperforms the standard element by a factor of 1,000.


Is there a way to get a Win95 look in Win10 QT?


I'm building a complex greenfield app in WPF, so your "Desktop UI is painful" comment does not resonate with me at all. They will have to drag me back to web development kicking and screaming. I absolutely love building UIs in this framework. No blockers, no bullshit. So fluid and easy.

Not to mention, the exact same paradigm translates to the other Microsoft desktop/mobile/x-platform frameworks, so if you insist that WPF is "old" or out of date, everything you build can be ported/refactored quite easily to the newest framework(s).

I have built non-trivial desktop apps in every framework except QT, and you would have to pry WPF from my cold dead hands.


I also use WPF. Like you, I love it. But it still can be painful if you're doing non-trivial things. Sometimes desktop UI is painful. Sometimes it requires a lot of work to deliver a perfect user experience.


The primary goal is building tools that other (typically less tech-savvy) people can use to create various types of content (often video game related) and to semi-automate repetitive tasks. An example here would be our texture selection / marking tool [1]. As a more advanced example, think of an editor found in most modern game engines, like Flax Engine [2].

Windows is the primary target for these tools, but I'd really like them to be also available on Linux to lessen our Windows dependency. I've used Qt in the past, before they introduced Qt Quick. I also heard about complicated licensing changes when they moved to Qt6, which made a lot of KDE devs worry. And stuff like not being able to download Qt without an account; or the framework coming with everything and the kitchen sink nowadays, when I am only interested in desktop UI: no networking, no JavaScript-like scripting language, etc.

I don't want to build a complete UI system from the ground up, but there are certain points where I'd like to be able to customize things, like adding new widgets and having some way to render 2D things without needing a graphics API surface -- think HTML canvas. I feel like ImGui does a pretty good job here, giving you drawing primitives.

For state management I am mostly concerned with the life-time, ownership, and connections between objects. Where other languages, like C#, don't really have to worry about this due to garbage collection, in C++ you typically want things to be more strictly organized. I'd prefer a UI framework to facilitate object life-time management in a streamlined manner. Like, if it opts to use shared_ptr for everything, that's fine, but it also needs to prevent me from accidentally building cycles and provide a way to dump the dependence graph so I can see directly why a certain object is retained (and by whom).

To clarify the difference between C# and C++ here, think about how the implementation of an observer pattern is vastly more complicated in C++ to be safe as object life-time is not managed automatically for you. Copy & move semantics only adds to this in terms of complexity.
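To make that concrete, here's a hypothetical minimal sketch (not any particular framework's API) of the kind of lifetime-aware signal I have in mind: each subscription holds a weak_ptr to its owner, so a destroyed observer can never be called back, without needing a garbage collector.

    #include <functional>
    #include <memory>
    #include <vector>

    // Dead owners are skipped and pruned on the next notification,
    // so there are no dangling callbacks and no manual disconnects.
    template <typename... Args>
    class Signal {
    public:
        void connect(const std::shared_ptr<void>& owner,
                     std::function<void(Args...)> fn) {
            slots_.push_back({owner, std::move(fn)});
        }

        void notify(Args... args) {
            for (auto it = slots_.begin(); it != slots_.end();) {
                if (auto alive = it->owner.lock()) {
                    it->fn(args...);
                    ++it;
                } else {
                    it = slots_.erase(it);  // observer destroyed: drop its slot
                }
            }
        }

    private:
        struct Slot {
            std::weak_ptr<void> owner;
            std::function<void(Args...)> fn;
        };
        std::vector<Slot> slots_;
    };

    // Usage sketch (MyView is a made-up observer type):
    //   Signal<int> valueChanged;
    //   auto view = std::make_shared<MyView>();
    //   valueChanged.connect(view, [v = view.get()](int x) { v->setValue(x); });
    //   valueChanged.notify(42);  // safe: once 'view' dies, its slot is skipped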

State management and user interaction are closely related here, as almost all user interaction results in state modification. Looking at HTML/JS frameworks, some leverage a 2-way data binding approach, where others bind data only 1-way and use events for the other way. In immediate mode GUIs I am updating the underlying state directly -- practically having the view and model tightly coupled. Here I'd like for a framework to be explicit about what is happening, without being too cumbersome to extend a UI with new functionality. E.g. I don't like signals that can be used across the whole code-base, where suddenly a function executes and you have no idea what originally triggered it. On the other hand, having to handle and forward every basic event from one component to its parent isn't an option either. If that makes any sense.

[1] https://github.com/ph3at/image_tool

[2] https://flaxengine.com/features/editor/


> I don't want to build a complete UI system from the ground up, but there are certain points where I'd like to be able to customize things, like adding new widgets and having some way to render 2D things without needing a graphics API surface -- think HTML canvas. I feel like ImGui does a pretty good job here, giving you drawing primitives.

Yes, Qt may not be super friendly with this. However, it is perfectly possible. Qt lets you integrate an "external canvas" that you can render with your favourite graphics API (e.g. OpenGL) and integrate it in the Qt Quick scene (or widgets if you prefer). For example, I did this with my notetaking application, Scrivano [1], for handwriting, where the main canvas is a separate OpenGL view that renders content using Skia, while the rest of the UI is standard Qt Quick.

[1] https://scrivanolabs.github.io
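In case it helps anyone reading along, a rough sketch of that "external canvas" pattern with QQuickFramebufferObject; it assumes the Qt Quick scene graph is running on the OpenGL backend, the class and registration names are made up, and it is not the Scrivano code:

    #include <QQuickFramebufferObject>
    #include <QOpenGLFramebufferObject>
    #include <QOpenGLFunctions>

    // Renders into an FBO that Qt Quick then composites like any other item.
    class CanvasRenderer : public QQuickFramebufferObject::Renderer,
                           protected QOpenGLFunctions {
    public:
        void render() override {
            initializeOpenGLFunctions();
            glClearColor(0.15f, 0.15f, 0.15f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            // ... raw GL / Skia / custom drawing goes here ...
        }
        QOpenGLFramebufferObject *createFramebufferObject(const QSize &size) override {
            return new QOpenGLFramebufferObject(size);
        }
    };

    class GLCanvas : public QQuickFramebufferObject {
        Q_OBJECT
    public:
        Renderer *createRenderer() const override { return new CanvasRenderer(); }
    };

    // Registered with e.g. qmlRegisterType<GLCanvas>("App", 1, 0, "GLCanvas"),
    // then placed in QML alongside ordinary Qt Quick items.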


> Might the overall greater pressure for performance have kept us writing lower-level code with more bugs while shipping less features?

Are you living in the same world as the rest of us? Nowadays programs are shipped with plenty of bugs, mostly because patching them afterwards is "cheap". In the old days that wasn't as cheap.

So having lower-powered computers would have made us write programs with fewer features, but also fewer bugs. Formal coding would be up, and instead of moving fast and breaking things, most serious businesses would be writing Coq or Idris tests for their programs.

Bootcamps also wouldn't be a thing, unless they were at least a couple of years long. We'd need people knowing about complexity, big O, defensive programming, and plenty of other things.

And plenty of things we take for granted would be far away. Starting with LLMs and maybe even most forms of autocomplete and automatic tooling.


1. Get it working

2. Get it right

3. Make it fast ... pretty

Weird observation, but from personal experience a good percentage of development stops at 1, with periodic blips to 3 when issues pop up (of course with an eventual rewrite coming when new people come onboard), as a consequence of not focusing on 2 due to how we build today.


One way to answer this question is to look at the software produced when clock speeds were 20x slower.

The limitations, and features we had then are a minimum starting point.

So I'm thinking around the era of a 486 100 MHz machine. We'd have at least that (think multi-player Doom and Quake era as a starting point).

We had Windows, preemptive multithreading, networks, the internet, large hard drives; pretty much the bare bones of today.

Of course CPU-intensive things would be constrained. Voice recognition. CGI. But we'd have a lot more cores, and likely more multi-threaded approaches to programming in general.


Right, clock speed seems like a distraction to me.

A 20x reduction really isn't that significant in a historical context. Gray beards here have seen CPU performance increase by 200x or more over their computing careers since the late 80s or early 90s. And that is ignoring multicore/SMP gains.

I found this nice figure trying to summarize CPU performance trends over many decades: https://www.researchgate.net/figure/CPU-performance-Historic...

Prognostication depends on other unstated assumptions about the market or fundamental technological limitations. Generally, I'd say that if the single CPU core trend was more flattened, we would have seen more emphasis on parallel methods including SIMD, multicore, and the kinds of GPGPU architecture we're already familiar with.

The kind of programming model that is at the heart of CUDA, OpenCL, etc is exactly what the high-performance numerical computing researchers were using back in the late 80s to early 90s when computers were much slower. They were simply applying it to exotic multi-socket SMP machines and networks of computers, rather than arrays of processors on a single massive chip.


Even modern cellphone chips are far more than 20x the speed of a 100 MHz 486 outside of extremely pathological workloads. At minimum we’re still talking 64-bit chips.

However, IMO simply thinking in terms of actual chips that existed isn’t that interesting. What would computing look like if the PIII had been a 12-core CPU at 500 MHz? That’s a little closer to 5% of modern chips, and something nobody worked with.

Alternatively, what would the 486 era have looked like with gigabytes of RAM and an SSD?


I can't buy into the idea of 486s but also SSDs. Why doesn't the speed limitation of CPUs extend to controllers, buses, SoCs, transistor sizes, etc.? If the 2 GHz CPU is now 100 MHz, then presumably the memory bus is no longer 100 MHz, but 5 MHz.


I think the basic assumption is some kind of change to the laws of physics, and thus transistor frequency scaling; otherwise it's effectively just asking what it was like in the past. So dropping 6 GHz 64-bit chips to 300 MHz doesn't imply everything else is the same and we're just using 32-bit PII-era hardware.

Similarly, rather than NVMe 2 TB SSDs at 6000 MB/s we could have 2 TB SSDs at 300 MB/s. Which then opens the door for even more extreme differences.

If the “PIII had been a 12-core CPU at 500 MHz”, that’s quite odd by historic standards.


Since people brought up the "a few decades earlier was like that" response:

Old software on older hardware was «responsive» because the libraries it used came with far fewer built-in capabilities (nice UI relayout, nice font rendering, internationalization, UI scaling). Also, less code means less memory, and swapping to a rotating disk meant huge slowdowns when hit, so being memory hungry was just not an option.

People who remember fast software are mostly people who could afford to renew their computer every year or so at top-20% prices, and don't realize that the merely inconvenient sluggishness of a 6-7 year old computer today would have been impossible to imagine back then.

For the «let's imagine the current day from that past», I would say we would be mostly in the same place, without AI, with much less abundance of custom software, and more investment in using and building properly designed software stacks. E.g., we would have a proper few UI libraries atop the web/DOM and not the utter mess of today, and many more native apps. Android might not have prevailed as it has; it relied a lot on cheap CPU improvements for its success.

Still, safe languages like Rust would have emerged; the roadblock of fixing compiler performance would have slowed things down a bit, but interest would have emerged even faster and stronger.


I'm not sure I understand the premise, because CPU speeds were 20x slower. Just go back a decade or two.

They weren't some halcyon days of bug-free software back then, quite the opposite.


Software wasn't bug-free, but it was responsive.


Not all of it was, some of it was very laggy and slow. Indeed the whole OS would frequently freeze up.


Sure, but nowadays all of it is laggy and slow. I cringe every time I'm faster than Slack, a text chat program.


Yep exactly. If we're talking about a cutting-edge app, I get the sluggishness.

But IRC used to respond instantly. Feels like apps are doing roughly the same thing but more slowly despite having computers that are orders of magnitude faster.


Be careful not to mix slow CPUs with not having SSDs. An OS freezing up is almost always because something is broken or because it's waiting for an HDD to spin up.


It may seem responsive if you run old software on modern hardware.

It was always slow on contemporary hardware. On affordable PCs Win 3.1 was so slow you could see it redrawing windows and menus. Win 95 was so resource hungry, people wrote songs about it (https://www.youtube.com/watch?v=DOwQKWiRJAA). XP seemed fast only at the end of its very long life, due to Longhorn project failing and delaying its famously shitty successor.

It wasn't just Windows. Classic MacOS for most of its life could not drag windows with their contents in real time. Mac OS X was a slideshow before 10.4, and Macs kept frequently beachballing until they got SSDs.


> Classic MacOS for most of its life could not drag windows with their contents in real time

There was shareware you could install which would do it though! Even on a 25 MHz 68030 it was surprisingly usable (more usable than the passive matrix LCD at least) https://www.youtube.com/watch?v=4cQo29SIIgU

It got a bit slower in color on an external display https://www.youtube.com/watch?v=peWIysrf7DY


That wasn't a hardware limitation. BeOS was outperforming Windows and Mac on the same hardware. If JLG hadn't demanded too much money, Apple would have merged with BeOS (and probably be a distant memory by now but that's a separate issue)

https://www.youtube.com/watch?v=cjriSNgFHsM&t=350s


Like when you were booting windows and waiting an extra 3 minutes for your single processor CPU to finish all the startup tasks before doing anything else?


Windows 3.1 was based on async programming, and every now and then it wasn't responsive.


Wasn't Windows 3.1 or 95 also the one where things looked like they were going faster if you jiggled the mouse?


Yes it was Windows 95, probably the original release, as a lot of things were improved in OSR2: https://www.extremetech.com/computing/294907-why-moving-the-...


That's why I use vim!


Just go 9 years back actually. Computers were 20x slower 9 years ago according to Moore's law


Moore's law doesn't say anything about the growth rate of performance vs time.


More C/C++ based business apps that run locally. Cloud would be less relevant. No large browser engines, which means a lot less JS and of-course no Electron :)


There would be way more language and tool development around more efficient languages because more people would be required to use them. So C and C++ would probably be in a totally different state. There would be a huge amount of hate for people who use some C++ derivative with a framework on top of it using some program that allowed it to easily run on multiple systems.


Good points. Somehow typing latency might actually be better, lol.

V8 might just invent like 3 more execution engines though, 1 of which uses an external TPU (open source though!) to run code JITed to HVM (Higher Order Virtual Machine) that everyone is eventually compelled to adopt, one can't be too sure JS will lose. /s


Everything would be exactly the same, except 8.64 years later.

Moore's law says that CPU speeds double every 2 years. 2 years * log2(20) = 8.64 years, so we'd just be 8.64 years late; that's it, literally no reason for anything to be any different apart from that.

95% of comments seem to completely overlook this fact and go into deep explanations about how everything would be different. It's pretty surprising that even a pretty sciencey community like Hacker News still doesn't get exponentials.


Moore's Law actually refers to the density of transistors on a chip doubling approximately every two years, not CPU speed. Yes, CPUs still get exponentially faster, but this mostly affects multi-core throughput; single-thread performance improves far more slowly. For human perception and interactivity, this makes a huge difference. So it stands to reason that algorithms would have been parallelized earlier and better, but I doubt that the timeline would just be shifted a few years.


Arrogant and factually incorrect, a dangerous combination.

But apparently you also didn't get the question - hardware would stay slow, but software would continue evolving, the question is how, given the hardware constraints. It would definitely not be "exactly the same, except 8.64years later".


Still think mine is even more general :)

Obligatory XKCD: https://xkcd.com/435/

My comment: https://news.ycombinator.com/item?id=39977838


I'd probably be out of a job because we wouldn't be doing this crap in software.

You wouldn't have people wasting CPU cycles on pointless animation. You'd have people thinking about how long it takes to follow a pointer. You'd have people seriously thinking about whether Spectre and Meltdown and subsequent bugs really need to be worked around when it costs you 50% of the meager performance you still have.

I might ask if everything else is 20x times slower too. GPU speeds, memory bandwidth, network bandwidth.


We'd still be using triple-DES to protect data, arguing that the NIST estimate of the time to break it was still far enough out. And hash functions would be like the CRC32 in Ethernet frames, not the modern stuff.

CISC computers which did more in parallel per instruction would be common because they existed for concrete reasons: the settling time for things in a discrete logic system was high, and you needed to try to do as much as possible inside that time. (That's a stretch argument; they were what they were, but I do think the DEC 5-operand instruction model in part reflected "god, what can we do while we're here" attitudes.) We'd probably have a lot more Cray-1-like parallelism where a high-frequency clock drove simple logic to do things in parallel over matrices, so I guess that's GPU cards.


20x more compute isn't much in terms of cryptographic security concerns, no? Ah, but triple-DES was recently deprecated.

Definitely sounds right that we'd get an earlier, heavier emphasis on parallelism and hardware acceleration. I'm guessing the slower speed of causality also applies to propagation delay and memory latencies, so there wouldn't be new motivation for particular architectural decisions beyond "God please make this fast enough for our real-time control systems or human interaction needs".

If we got deep learning years or decades earlier, that also seems scary for AI existential risk, as we are just barely starting to figure out how the big inscrutable matrices work, and that's with the benefit of more time people have had to sound the alarm bells and attract talent and funding for AI interpretability research.


Fun fact: 3DES is slower than modern standards, namely AES.


Only 20x? I started my career programming on a mainframe system with a 1 MHz memory cycle time (think of this as its 'clock speed') - it had 3 megabytes of memory and supported 40 timeshare users (on terminals) and batch streams. At one point we upgraded by adding 1.5 MB; it cost $1.25M.

Compared to a modern CPU it was maybe 5000x slower; the early VAX systems that Unix ran on were maybe 6 times faster.

People certainly wrote smaller programs. We'd just stopped using cards, and carrying around more than a box (1,000 cards) was a chore. You spent more time thinking about bugs (compiling was a lot slower, jobs went in a queue, and you were sharing the machine with others).

But we still got our work done: more thinking and more waiting.


One thing to consider is that the resolution and colour space of your computer's display also depend on the available clock speed, so if you reduce that by a factor of 20, you'll also have to reduce the number of pixels in your display by the same factor. So we'd have worse displays as well as worse compute.
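
Rough numbers to illustrate (a sketch; the ~20% blanking overhead and the chosen resolutions are my assumptions, not from the thread):

    def pixel_clock_mhz(width, height, fps, blanking_overhead=1.2):
        # Active pixels per frame times refresh rate, padded roughly 20%
        # for horizontal/vertical blanking intervals.
        return width * height * fps * blanking_overhead / 1e6

    print(pixel_clock_mhz(1920, 1080, 60))       # ~149 MHz (the standard 1080p60 clock is 148.5 MHz)
    print(pixel_clock_mhz(1920, 1080, 60) / 20)  # ~7.5 MHz budget if everything were 20x slower
    print(pixel_clock_mhz(640, 480, 60))         # ~22 MHz, roughly VGA territory (real VGA: 25.175 MHz)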

As with all else - just look back to computers about 20 years ago, and that'll give you a good idea of what it'd be like. I guess the main difference is that we might have still been able to miniaturise the transistors in a chip as well as we do now, so you'd still have multi-core computers, which they didn't really do very often 20 years ago.


Don’t ICs have faster internal PLLs than the advertised clock speed? As long as those signals don’t need to move too far.

They could probably figure out a less efficient parallel bus with lots more leads rather than the pixel, line, and frame sync we have now, at least once we moved on from CRTs (I don't know how those work w.r.t. phosphors). It'd change the cost tradeoffs and mean more chips nearer the display, but not really set us back, as long as other components kept up. I.e. PCIe line rate is developing much faster than display size/framerate/bandwidth, so the limiting factor is panel development and connection standards.


HN answer:

To stick with your analogy: there would be more optimization, and the rate of releasing stuff would be slower because it would have to be tested. That's it. Remember cartridge-based console games? How many patches or day-one updates did you have to install there? How many times would they crash or soft-lock themselves? People tested more and optimized more because there were constraints.

Today we have plenty of resources and thus you can be wasteful. Managers trade waste for speed. If you can make it work unoptimized, ship a 150 GB installer and an 80 GB day-one patch, do it NOW. Money today, not when you're done making it "better" for the user.

Sci-Fi answer: We wouldn't be playing the same type of games. Why would we have to rely on something like our current representation of graphics? If cognition were 20x faster and more powerful, we probably wouldn't need those abstractions but would have found a way to dump data into the cognition stream more directly.

I think the idea that 20x faster cognition would just mean "could watch a movie at 480fps" is too limited. More like you could play 24 movies per second and still understand what's going on.


For the Sci-Fi answer, our language would be optimized for extremely fast communication; maybe making sounds with our mouths alone would be way too inefficient. We probably would have easily made stuff that caught up with our cognition. The current hardware and software is more a representation of human limits than of other limits.

I think the "wasteful" framing is not correct. It's wasteful not to use a resource when other, more restricted resources can be substituted with the plentiful one. Of course the allocation of current resources can be debated, but that isn't caused by the extra CPU performance, storage, and RAM being available.


I think there are plenty of ways to make far better use of the hardware we currently enjoy. If you don't focus on web-based stuff, but go with just what's possible in a Win32 environment, for example... it was all there in the late 1990s: VB6, Delphi, Excel, etc.

We've had quite a ride from 8-bit machines with toggle switches and not even a boot ROM, nor floating point, to systems that can do 50 trillion 32-bit floating point operations per second, for the same price[1].

Remember that Lisp, a high level language, was invented in 1960, and ran on machines even slower than the first Altair.

The era of "free money" is over, as is the era of ever more compute. It's time to make better use of the silicon, to get one last slice of the pie.

[1] The Altair was $500 assembled in 1975, which is $2900 today. I'm not sure how best to invest $2900 to get the most compute today. My best guess is an NVidia RTX 4080.


No electron apps.


A lot more chess games online instead.

Probably higher IQs, as the IQ-lowering social media we use would barely work.


Looking at average benchmarks, current consumer CPUs are about 20x faster than in 2007-2008 [0]. That means games like Call of Duty 4 and Crysis. Likely not much more online chess than today. And in TFLOPS, the RTX 4090 is about 20x faster than the GTX 970 from just 10 years ago. But it's easy to overlook that progress if you just look at the performance of the average app.

[0] https://www.cpubenchmark.net/year-on-year.html
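
Quick sanity check of that GPU ratio, using commonly quoted peak FP32 spec-sheet figures (approximate numbers supplied by me, not taken from the benchmark link):

    gtx_970_tflops = 3.9    # GeForce GTX 970 (2014), peak FP32, roughly
    rtx_4090_tflops = 82.6  # GeForce RTX 4090 (2022), peak FP32, roughly
    print(rtx_4090_tflops / gtx_970_tflops)  # ~21x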


Sounds right to me. Without being able to rely so much on flashy visuals and low latency, games would've had to be somewhat more strategic and intellectual to sell (although I imagine graphics would eventually catch up, given its fitness for parallel processing). Even if brain-rotting visual spectacles were just pushed 7 years down the line, they'd probably still have a more sophisticated flavor that might be cemented with time (e.g. this counterfactual TikTok might have given users much more direct control over their feed algorithm).


We had DOOM, Quake, and Fallout 2 long before CPUs got within even 20x of today's speed.


Everything would be optimized for efficiency, size, and speed, like it was in the early days, with sparks of creativity in finding ways to achieve O(n).

I think the only solution to the problem is to keep memory and disk space very low.
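
As a toy illustration of that kind of O(n^2)-to-O(n) win (a generic sketch, not tied to anything specific in the thread):

    def has_duplicate_quadratic(items):
        # Compare every pair: O(n^2) time, no extra memory.
        return any(items[i] == items[j]
                   for i in range(len(items))
                   for j in range(i + 1, len(items)))

    def has_duplicate_linear(items):
        # Trade a little memory for an O(n) single pass over the data.
        seen = set()
        for x in items:
            if x in seen:
                return True
            seen.add(x)
        return False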


Look no further than developers on Ethereum, who are still doing shit like voluntarily writing assembly for basic software to account for compute constraints. I can say from some brief experience, it’s a reality that I’m glad we don’t all occupy.


Easy: we would enjoy the software practices that were common with compiled languages until the mid-2000s, when people started using scripting languages for application development instead of OS scripting tasks, with Zope, Django, Rails and friends, ending up in monstrosities like Electron despite the failure of Active Desktop and XUL.


That’s not a hypothetical, is it? Given Moore’s Law, just look back a decade or so and you’ll get a sense of what software development was like when CPU speeds were 20x slower. And if you take it even further, looking back six decades or so, you’ll see things like the Story of Mel that would never happen in software development today.


Maybe software would have been more efficient at doing the same things, and software developers would still begin with an understanding of the hardware and what's happening at a lower level (assembly) before sending it instructions in an interpreted language.

The sharding of the developer role has made things more inefficient in some ways.


At any time there are platforms with 20x more or less speed or space than the average, from tiny embedded processors through PCs and on up to clusters and mainframes. So, to see what a 20x slower computing platform would be like, you can look at development for small, power-limited devices.


I think a better question would be “how fast would our software be if it was programmed by people who didn’t waste all that cpu power on frameworks, terrible algorithms, and layer after layer after layer of cruft”


AAA games would still look like Quake. The web would be much more static.


I think what matters is less the speed itself than how fast we get there.

If major hardware wins arrived once a decade instead of every year, things would be much better, because there would be real pressure to make software efficient.


We wouldn't have AI.


Problem is one of mentality, imho.

See e.g. the countless HN posts "hey look! I've used X to do Y" showing off some cool concept.

The proper thing would be to take it as that: a concept. Play with it, mod it, test varieties.

Like it? Then take the essential functionality and implement it in a resource-efficient manner using appropriate programming language(s). And take a looong, hard look at "is this necessary?" before forcing it onto everyone's PCs/mobile devices.

But what happens in practice? The proof-of-concept gets modded, extended, and integrated as-is into other projects, resource frugality be damned. GHz CPUs & gobs of RAM crunch through it anyway, right? And before you know it, Y built on top of X is a staple building brick that 1001 other projects sit on top of. Rinse & repeat.

A factor of 20 is 'nothing', and certainly not the issue here. Just look at what was already possible (and done!) when 300 MHz CPUs were state of the art.

Wirth's law very much applies.


Didn't we have just that? I'm sure there's plenty of history that can fill in any gaps you may have about what it was like.


Not to be a jerk, but it's a question of allocation of resources, which is basically what capitalism does: the compute was used because it existed.

If there is a prolonged economic slowdown (not a crash, please!), then resources will be allocated to optimizing CPU cycles, and all the hype-based development will have fewer resources allocated to it.

For some of us it can be an imperative to fight for efficiency, but we shouldn't take an all-or-nothing approach. Know its advantages and disadvantages and work within that knowledge framework.


We wouldn't have software that does the same things, at the same speed, but on 10,000x faster hardware.


We're definitely prioritizing features, more applications, and more use cases over optimization. If CPUs were 20x slower, we'd probably still see quite a few of the things that are possible right now, but with a lot more well-optimized custom solutions rather than bloated frameworks.

And in some cases, multi-threading would be the only way to do things, whereas right now single-threaded file copy, decompression, or draw calls are largely a thing because single-threaded is way easier to do and there is no need to change it outside professional applications.

Also, some things might actually be better than they are right now. Having to wait for pointless animations to finish before a UI element becomes usable should not be a thing. If there was no CPU performance for this kind of nonsense, they wouldn't be there.

Please don't conflate clock speed with performance. An Athlon 5350 from 2014 is >20x slower single-threaded than a Core i9-14900K, yet it's 2 GHz vs. 5.8 GHz. Architecture, cache, and memory speed matter A LOT.
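
Taking the comment's own figures at face value, here's a quick sketch of the gap that clock speed alone can't explain (the 20x number is the claim above, not something I've measured):

    clock_ratio = 5.8 / 2.0        # i9-14900K vs. Athlon 5350 clock speeds
    claimed_perf_ratio = 20        # the single-thread gap claimed above
    print(claimed_perf_ratio / clock_ratio)  # ~6.9x left to IPC, caches and memory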


> internet latency seems to be just barely sufficient

What? I played Quake 1-3, TFC over 56k with 300ms latency, on a CPU at least 20x slower than modern CPUs. Tribes 2 with 63 other players. Arguably more fun than the prescriptive matchmaking in games these days.

Games are a product of their environment. You don't let a pesky thing like lag stop people from having fun.


This would be modern version of Steampunk :)


Ask HN:


Same as now. Driven by incompetent management.


One word: DELIBERATELY.


One thing that slows down our machines is all the trackers that run in the background as we browse the web. Surveillance capitalism FTW! /s


> If human cognition were say, 20x faster relative to the speed of light

What would that even mean, being 20x faster than the speed of light? What does it imply?


'Relative to' rather than 'faster than', as in the speed of light being 20x slower or human perception and reflexes being 20x faster, or some mix of the two. If people were thinking way faster then the lag would be unbearable, and there would be no way around it.



