Optimizing for the hardware you are on is demonstrably an effort and skill issue. Everyone understands that with enough time and engineers, any piece of software could be optimized better. If only we had large volumes of inexpensive "intelligence" to throw at the problem.
This is one of my back-of-mind hopes for AI. Enlist computers as our allies in making computer software faster. Imagine if you could hand a computer brain your code, and ask it to just make the program faster. It becomes a form of RL problem, where the criteria are 1) a functionally equivalent program 2) that is faster.
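To make that concrete: here is a minimal sketch (in Python, with entirely hypothetical names like `candidate`, `reference`, and `test_inputs`) of what such an objective might look like, assuming functional equivalence is approximated by a finite test suite:

```python
import time

# Hypothetical "make it faster" objective: reward is zero unless the
# candidate matches the reference on every test input, otherwise it is
# the measured speedup over the reference.
def reward(candidate, reference, test_inputs):
    # 1) Functional equivalence, approximated by a finite test suite.
    for x in test_inputs:
        if candidate(x) != reference(x):
            return 0.0

    # 2) Speed: time both implementations over the same inputs.
    def bench(fn):
        start = time.perf_counter()
        for x in test_inputs:
            fn(x)
        return time.perf_counter() - start

    return bench(reference) / bench(candidate)
```

The hard part, of course, is the equivalence check: a handful of test inputs is a very weak stand-in for "functionally equivalent", which is exactly what the rest of this thread digs into.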
This is what I was thinking, too. For so long, the default mode of operating a software company has been:
"Developer time is so expensive, we need to throw everything under the bus to make developers fast."
The kinds of things often thrown under the bus: optimizations, runtime speed, memory footprint, disk image size, security, bug fixing, code cleanliness/lint, and so on. The result is crappy software written fast. Now imagine some hypothetical AI (that we don't have yet) that makes developer time spent on the project trivial.
Optimistically: There might be time for some of these important software activities.
Pessimistically: Companies will continue to throw these things under the bus and just shit out crappy software even faster.
My favorite part of this phenomenon is every company that interviews developers on data structures and algorithms, then puts out a calculator app that takes half a gigabyte of storage and nearly as much RAM to run.
I have not had to use Windows in ages, but every time I touch it I am amazed that it takes something like 10-15 GB for a bare installation of the latest version, while it does about the same amount of work XP was able to do in under 1 GB. Yes, I am aware assets are a thing, but has usability increased as a result of larger assets?
You can, with some programming languages, require a proof of this (see: Rocq, formerly Coq).
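The comment above names Rocq; purely as an illustration of the idea, here is a minimal sketch in Lean (a comparable proof assistant) of two list-summing functions, one naive and one accumulator-based, with a machine-checked proof that they agree on all inputs. All names here are invented for the example:

```lean
-- Naive recursive sum.
def sumSimple : List Nat → Nat
  | []      => 0
  | x :: xs => x + sumSimple xs

-- Tail-recursive "optimized" sum using an accumulator.
def sumFast (xs : List Nat) : Nat :=
  go 0 xs
where
  go (acc : Nat) : List Nat → Nat
    | []      => acc
    | x :: xs => go (acc + x) xs

-- Key lemma: the accumulator just adds on at the end.
theorem go_eq (acc : Nat) (xs : List Nat) :
    sumFast.go acc xs = acc + sumSimple xs := by
  induction xs generalizing acc with
  | nil => simp [sumFast.go, sumSimple]
  | cons x xs ih => simp [sumFast.go, sumSimple, ih, Nat.add_assoc]

-- Functional equivalence of the two implementations, for every input.
theorem sumFast_eq_sumSimple (xs : List Nat) :
    sumFast xs = sumSimple xs := by
  simp [sumFast, go_eq]
```

This works nicely for toy functions; the pain starts when the "fast" version leans on hardware tricks or low-level code, which is roughly where the replies below pick up.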
I think a more interesting case might be showing functional equivalence on some subset of all inputs (because tbh, showing functional equivalence on all inputs often requires "doing certain things the slow way").
An even more interesting case might be "inputs of up to a particular complexity in execution" (which is... very hard to calculate, but likely would mean combining ~code coverage & ~path coverage).
Of course, doing all of that w/o creating security issues (esp. with native code) is an even further out pipe dream.
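A cheap, very partial stand-in for "equivalence on some subset of all inputs" is property-based testing: generate random inputs up to a bounded size and check that the two implementations agree on those. A minimal sketch using Python's `hypothesis` library (function names invented for the example):

```python
from hypothesis import given, strategies as st

# Reference implementation and a hand-"optimized" candidate.
def sum_slow(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_fast(xs):
    return sum(xs)

# Check agreement only on a bounded slice of the input space:
# lists of at most 100 smallish integers.
@given(st.lists(st.integers(min_value=-10**6, max_value=10**6),
                max_size=100))
def test_equivalent_on_small_inputs(xs):
    assert sum_fast(xs) == sum_slow(xs)
```

This says nothing about inputs outside the sampled space, so it's an approximation of equivalence rather than anything you could call a proof.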
I'd settle for something much simpler, like "we can automatically vectorize certain loop patterns for particular hardware if we know the hardware we're targeting" from a compiler. That's already hard enough to be basically a pipe dream.
A program isn't even guaranteed to be functionally equivalent to itself when compiled on the same hardware with the same compiler, etc. Undefined behaviour can do what it wants. (And implementation-defined behaviour also has a lot of leeway.)
However, if you stick to only defined behaviour, they are 'functionally equivalent', assuming your compiler doesn't have a bug.