Spreadsheets didn’t replace accountants; they made them more efficient. I don’t personally believe AI will replace software engineers anytime soon, but it’s already making us more efficient. Just as Excel experience is required to crunch numbers, I suspect AI experience will be required to write code.
I use ChatGPT every day for programming, and there are times when it’s spot on and more times when it’s blatantly wrong. I like to use it as a rubber duck to help me think and work through problems. But I’ve learned that its output requires as much scrutiny as a good code review. I fear there’s a lot of copy and pasting of wrong answers out there. The good news is that, for now, they will need real engineers to come in and clean up the mess.
Spreadsheets actually did put many accountants and “computers” (the term for people who tallied and computed numbers, ironically a fairly menial job) out of business. And it’s usually the case that a disruptive technology’s benefits are not evenly distributed.
In any case, the unfortunate truth is that AI as it exists today is EXPLICITLY designed to replace people. That’s a far cry from technologies such as the telephone (which, by the way, put thousands of Morse code telegraph operators out of business).
It is especially sad that VC money is currently being spent on developing AI to eliminate good jobs rather than on developing robots to eliminate bad jobs.
Many machinists, welders, etc. would have asked the same question when we shipped most of American manufacturing overseas. There was a generation of experienced people with good jobs who lost them, and white-collar workers celebrated it. Just Google “those jobs are never coming back” and you’ll find a lot of heartless comparisons to the horse and buggy.
Why should we treat these office jobs any differently?
Agree - also note that many office jobs have been shipped overseas, and also automated out of existence. When I started work there were slews of support staff booking trips, managing appointments, typing correspondence, and copying and typesetting documents. For years we laughed at the paperless office - well, it's been here for a decade and there's no discussion about it anymore.
Interestingly, at the same time as all those jobs disappeared and got automated, there were surges of people into the workforce. Women started to be routinely employed for all but a few years of childbirth and care, and many workers came from overseas. Yet white-collar unemployment didn't spike. The driver for this was that the effective size of the economy boomed with the inclusion of Russia, China, Indonesia, India, and many other smaller countries in the western sphere/economy post cold war... and growth from innovation.
US manufacturing has not been shipped out. US manufacturing output keeps increasing, though its overall share of GDP is dropping.
US manufacturing jobs went overseas.
What went overseas were those areas of manufacturing that were more expensive to automate than to staff with low-paid workers elsewhere.
With respect to your final question, I don't think we should treat them differently, but I do think few societies have handled this well.
Most societies are set up in a way that creates a strong disincentive for workers to want production to become more efficient, other than at the margins (it helps keep your job safe if your employer is marginally more efficient than average).
Couple that with a tacit assumption that there will always be more jobs, and you have the makings of a problem if AI starts to eat away at broader segments.
If/when AI accelerates this process you either need to find a solution to that (in other words, ensure people do not lose out) or it creates a strong risk of social unrest down the line.
The plan has always been to build the robots together with the better AI. Robots ended up being much harder than early technologists imagined, for a myriad of different reasons. It turned out that AI is easier, or at least that is the hope.
Actually I'd argue that we've had robots forever, just not what you'd consider robots, because they're quite effective. Consider the humble washing machine or dishwasher. Very specialized, and hyper effective. What we don't have is Generalized Robotics, just like we don't have Generalized Intelligence.
Just as "Any sufficiently advanced technology is indistinguishable from magic", "Any sufficiently omnipresent advanced technology is indistinguishable from the mundane". Chat GPT will feel like your smart phone which now feels like your cordless phone which now feels like your corded phone which now feels like wireless telegram on your coal fired steam liner.
No, AI is tremendously harder than early researchers expected. Here's a seminal project proposal from 1955:
"We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer. “
GP didn't say that AI was easier than expected, rather that AI is easier than robotics, which is true. Compared to mid-century expectations, robotics has been the most consistently disappointing field of research besides maybe space travel, and even that is well ahead of robots now.
I am not working in that field, but as an outsider it feels like the industrial robots doing most of the work on TSMC's and Tesla's production lines are, on the contrary, extremely advanced. Aside from that, what Boston Dynamics or the startups making prosthetics have come up with is nothing short of amazing.
If anything software seems to be the bottleneck for building useful humanoids...
I think the state of the art has gotten pretty good, but still nowhere near as good as people thought it would be fifty years ago. More importantly, as of a year ago AI is literally everywhere, hundreds of millions of regular users and more than that who've tried it, almost everyone knows it exists and has some opinion on it. Compare that to even moderately mobile, let alone general, robots. They're only just starting to be seen by most people on a regular basis in some specific, very small geographical locations or campuses. The average person interacts with a mobile or general robot 0 times a day. Science fiction as well as informed expert prediction was always the opposite way around - robots were coming, but they would be dumb. Now it's essentially a guarantee that by the time we have widespread rollout of mobile, safe, general purpose robots, they are going to be very intelligent in the ways that 20 years ago most thought was centuries away.
Basically, it is 1000x easier today to design and build a robot that will have a conversation with you about your interests and then speak poetry about those interests than it is to build a robot that can do all your laundry, and that is the exact opposite of what all of us have been told to expect about the future for the last 70 years.
Space travel was inevitably going to be disappointing without a way to break the light barrier. Even a century ago we thought the sound barrier was impossible to penetrate, so at least we are making progress, albeit slowly.
On the bright side, it is looking more and more like terraforming will be possible. Probably not in our lifetimes, but in a few centuries' time (if humanity survives).
I think the impact of AI is not between good jobs vs. bad jobs but between good workers and bad workers. For a given field, AI is making good workers more efficient and eliminating those who are bad at their jobs (e.g. the underperforming accountant who makes a living doing the more mundane tasks, whose job is threatened by spreadsheets and automation).
I think AI, particularly text-based AI, seems like a cleaner problem.
Robots are derivative of AI, robotics, batteries, hardware, compute, and societal shifts. It appears our tech tree needs stable AI first; then we can tackle the rest of the problems, which are either physical or infrastructure.
I would guess that "they" are "the capitalists" as a class. It's very common to use personal pronouns for such abstract entities, and to describe them as behaving in a goal-driven manner. It doesn't really matter who "they" are as individuals (or even if they are individuals).
More accurate would be something like "reducing labor costs increases return on capital investment, so labor costs will be reduced in a system where the economy organizes to maximize return on capital investment". But our language/vocabulary isn't great at describing processes.
Thanks. Not putting this onto you, so I'll say "we/our" to follow your good faith.

What is "coming for our jobs" is some feature of the system; but it is a system of which we presume to be, and hope to remain, a part, even though ultimately our part in it must be to eliminate ourselves. Is that fair?
Our hacker's wish to "replace myself with a very small shell script and hit the beach" is coming true.

The only problem I have with it, even though "we're all hackers now", is that I don't see everybody making it to the beach. But maybe not everybody wants to.
Will "employment" in the future be a mark of high or low status?
The problem is that under the current system the gains from automation or other increased productivity do not "trickle down" to the workers who are replaced by the AI/shell script. Not even to those who create the AI/shell script.
The "hit the beach" part requires that you hide the shell script from the company owners, if by hitting the beach you don't mean picking up empty cans for sustinence.
> Will "employment" in the future be a mark of high or low status?
Damn good question.
Also, +1 for beach metaphor.
My (ignorant, evolving) views on these things have most recently been informed by John and Barbara Ehrenreich's observations about the professional-managerial class.
An interesting view is that people would still "work" even if they weren't needed for anything productive. In this "bullshit jobs" interpretation, wage labor is so critical for social organization and control that jobs will be "invented" even if the work is not needed for anything, or is actively harmful (and this is already going on).
> But I’ve learned that its output requires as much scrutiny as a good code review. I fear there’s a lot of copy and pasting of wrong answers out there. The good news is that, for now, they will need real engineers to come in and clean up the mess.
Isn't it sad that real engineers are going to work as cleaners of AI output? And in doing this they are in fact training the next generation to be more able to replace real engineers...
We are trading our future income for some minor (and questionable) development speed today.
AI might help programmers become more rigorous by lowering the cost of formal methods. Imagine an advanced language where simply writing a function contract, in some kind of Hoare logic or using a dependently-typed signature, yields provably correct code. These kinds of ideas are already worked on, and I believe are the future.
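As a rough sketch of the shape of the idea (a runtime-checked approximation, not a static proof, and every name here is invented for illustration; real tooling such as Dafny discharges these conditions with a prover):

```typescript
// Hypothetical contract-first style in TypeScript: the pre/postconditions
// below are only checked at runtime, but they show what a human-written,
// machine-checked "spec" for a generated function body could look like.

function requires(cond: boolean, msg: string): void {
  if (!cond) throw new Error(`precondition failed: ${msg}`);
}

function ensures(cond: boolean, msg: string): void {
  if (!cond) throw new Error(`postcondition failed: ${msg}`);
}

// Contract: given a sorted array, return an index of `target`, or -1.
function binarySearch(xs: readonly number[], target: number): number {
  requires(xs.every((x, i) => i === 0 || xs[i - 1] <= x), "xs is sorted");

  let lo = 0, hi = xs.length - 1, result = -1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (xs[mid] === target) { result = mid; break; }
    if (xs[mid] < target) lo = mid + 1; else hi = mid - 1;
  }

  ensures(result === -1 || xs[result] === target, "returned index holds target");
  ensures(result !== -1 || !xs.includes(target), "-1 only when target absent");
  return result;
}

binarySearch([1, 3, 5, 8], 5); // 2
```

The appeal is that the contract stays short and human-audited, while the body, however it was generated, cannot silently violate it.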
I'm not convinced about that. Writing a formal contract for a function is incredibly hard, much harder than writing the function itself. I could open any random function in my codebase and with high probability get a piece of code that is < 50 lines, yet would need pages of formal contract to be "as correct" as it is now.
By "as correct", I mean that such a function may have bugs, but the same is true for an AI-generated function derived from a formal contract, if the contract has a loophole. And in that case, a simple microscopic loophole may lead to very very weird bugs. If you want a taste of that, have a look at how some C++ compilers remove half the code because of an "undefined behaviour" loophole.
Proofreading what Copilot wrote seems like the saner option.
That is because you did not use contracts when you started developing your code. Likewise, it would be hard to retrofit structured programming onto assembly code that was written without this concept in mind.
Contracts can be quite easy to use, see e.g. Dafny by MS Research.
I think this is longer off than you might expect. LLMs work because the “answer” (and the prompt) is fuzzy and inexact. Proving an exact answer is a whole different and significantly more difficult problem, and it’s not clear the LLM approach will scale up to that problem.
Formal methods/dependent types are the future in the same way fusion is, it seems to be perpetually another decade away.
In practice, our industry seems to have reached a sort of limit in how much type system complexity we can actually absorb. If you look at the big new languages that came along in the last 10-15 years (Kotlin, Swift, Go, Rust, TypeScript) then they all have type systems of pretty similar levels of power, with the possible exception of the latter two which have ordinary type systems with some "gimmicks". I don't mean that in a bad way, I mean they have type system features to solve very specific problems beyond generalizable correctness. In the case of Rust it's ownership handling for manual memory management, and for TypeScript it's how to statically express all the things you can do with a pre-existing dynamic type system. None have attempted to integrate generalized academic type theory research like contracts/formal methods/dependent types.
I think this is for a mix of performance and usability reasons that aren't really tractable to solve right now, not even with AI.
> If you look at the big new languages that came along in the last 10-15 years (Kotlin, Swift, Go, Rust, TypeScript) then they all have type systems of pretty similar levels of power, with the possible exception of the latter two which have ordinary type systems with some "gimmicks".
Those are very different type systems:
- Kotlin has a Java-style system with nominal types and subtyping via inheritance
- TypeScript is structurally typed, but otherwise an enormous grab-bag of heuristics with no unifying system to speak of
- Rust is a heavily extended variant of Hindley-Milner with affine types (which is as "academic type theory" as it gets)
Yes, I didn't say they're the same, only that they are of similar levels of power. Write the same program in all three and there won't be a big gap in level of bugginess.
Sometimes Rustaceans like to claim otherwise, but most of the work in Rust's type system goes into taming manual memory management which is solved with a different typing approach in the other two, so unless you need one of those languages for some specific reason then the level of bugs you can catch automatically is going to be in the same ballpark.
> Write the same program in all three and there won't be a big gap in level of bugginess.
I write TypeScript at work, and this has not been my experience at all: it's at least an order of magnitude less reliable than even bare ML, let alone any modern Hindley-Milner based language. It's flagrantly, deliberately unsound, and this causes problems on a weekly basis.
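A minimal sketch of the kind of deliberate unsoundness I mean (the names are invented; this type-checks even under strict settings, because parameters of methods declared with method syntax are compared bivariantly):

```typescript
interface Animal { name: string }
interface Dog extends Animal { bark(): void }

interface Shelter {
  // Method syntax: parameters are checked bivariantly even with
  // strictFunctionTypes on (that flag only covers function-typed properties).
  admit(a: Animal): void;
}

const dogShelter: Shelter = {
  // Accepted, although Dog is narrower than Animal.
  admit(d: Dog) { d.bark(); },
};

// Type-checks fine, crashes at runtime: the argument has no bark().
dogShelter.admit({ name: "Mittens" });
```

That exemption is intentional, to keep common patterns ergonomic, and it's exactly the kind of pragmatic unsoundness that bites later.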
Thanks, I've only done a bit of TypeScript so it's interesting to hear that experience. Is the issue interop with JavaScript or a problem even with pure TS codebases?
So is the off-the-cuff, stream-of-consciousness chatter humans use to talk. We still manage to write good scientific papers (sometimes...), not because we think extra hard and then write a good scientific treatment in one go without edits, research, or revisions. Instead we have a whole text structure, a revision process, standardised techniques of analysis, searchable research data collections, critique and correction by colleagues, and searchable previous findings all "hyperlinked" together by references, plus social structures like peer review. That process turns out high-quality, high-information work product at the end, without a significant cognitive adjustment for the humans doing the work, aside from just learning the new information required.
I think if we put resources and engineering time into trying to build a "research lab" or "working scientist tool access and support network" with every intelligent actor involved emulated with LLMs, we could probably get much, much more rigorous results out the other end of that process. Approaches like this exist in a sort of embryonic form with LLM strategies like expert debate.
I think the beauty of our craft on a theoretical level is that it very quickly outgrows all of our mathematics and what can be stated based on that (e.g. see the busy beaver problem).
It is, honestly, humbling and empowering at the same time. Even a hyper-intelligent AI will be unable to reason about arbitrary code. Especially since current AI - while impressive at many things - is a far cry from being anywhere near good at logical thinking.
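To make the busy beaver point concrete, here is the standard statement (a sketch, independent of any particular machine formalisation):

```latex
% BB(n): the maximum number of steps any halting n-state Turing machine
% takes when started on a blank tape.
%
% BB eventually dominates every total computable function:
\forall f \ \text{total computable} \quad \exists N \quad
\forall n \ge N : \ \mathrm{BB}(n) > f(n)
% Sketch of why: if some computable f bounded BB, we could decide the
% halting problem by simulating an n-state machine for f(n) steps and
% declaring it non-halting if it had not stopped by then.
```

So no reasoner, human or artificial, can have a uniform procedure for deciding what arbitrary programs do.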
I think the opposite! The problem is that almost everything in the universe can be cast as computing, and so we end up with very little differentiating semantics when thinking about what can and can't be done. Busy beavers is one of a relatively small number of problems I am familiar with (probably there is a provably infinite set of them, but I haven't navigated it) which are uncomputable, and it doesn't seem at all relevant to nature.
And yet we have free will (ok, within bounds, I cannot fly to the moon etc., but maybe my path integral allows it), and we see processes like the expansion of the universe that we cannot account for, and we infer things like quantum gravity as well.