The premise that we are on the verge of some breakthroughs in software development that will significantly reduce the need for engineers is really weak, and is something people have been saying for decades (see the failed fifth generation of programming languages from the 1980s for an example).
In my experience, software engineering is endless, complicated decision making about how something should work and how to make changes without breaking something else rather than the nuts and bolts of programming. It’s all about figuring out what users want and how to give it to them in a way that is feasible.
The idea that we will have some abstraction that will someday (in the foreseeable future) save us from all of this difficult work sounds very far fetched to me, and I can’t imagine how that would work.
Even the example of hosting complexity being replaced by cloud companies seems kind of silly to me. Maybe that’s saving very small companies a sizable fraction of their engineering resources, but I really doubt it for medium or larger companies.
The cloud saves us from some complexity, but it doesn’t just magically design and run a backend for your application. You still need people who understand things like docker, kubernetes, endless different database options, sharding, indexing, failover, backup, message queues, etc. Even if the pieces are now more integrated and easier to put together, the task of figuring out how the pieces will interact and what pieces you even need is still outrageously complicated.
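To make that concrete, here's a minimal sketch (assuming AWS SQS via boto3; the queue name, numbers, and dead-letter ARN are all illustrative). Even a single fully managed queue still leaves the design decisions to you:

```python
# Illustrative only: one managed message queue, and the cloud still can't
# decide the timeouts, retention, or failure handling for you.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

queue = sqs.create_queue(
    QueueName="orders",  # hypothetical queue name
    Attributes={
        "VisibilityTimeout": "120",          # how long a worker gets before redelivery
        "MessageRetentionPeriod": "345600",  # 4 days -- long enough for your worst backlog?
        # Placeholder ARN; a real dead-letter queue is yet another design decision.
        "RedrivePolicy": '{"deadLetterTargetArn": "arn:aws:sqs:us-east-1:000000000000:orders-dlq", "maxReceiveCount": "5"}',
    },
)
print(queue["QueueUrl"])
```

Multiply that by every queue, database, cache and service boundary, and the design work clearly hasn't gone anywhere.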
I spend a lot of time thinking about how to make development easier, or at least less error prone.
Every once in a while I have a moment of clarity.
I remember that the other part of our job is extracting requirements out of people who don't really understand computers, even the ones who are ostensibly paid to do so (and, if we're honest, about 20% of our fellow programmers). The more you talk to them, the more you realize they don't really understand themselves either, which is why shadowing works.
If building the software gets too easy, we'll just spend all of our time doing the gathering part of the job.
And then I will do just about anything to forget that thought and go back to thinking about less horrific concepts.
It’s even stronger than that: the other part of the job is extracting requirements from people who don’t understand the problem they want to solve - even when the problem is not technological. There is no silver bullet (AGI would be it, but we are far from achieving it imho).
Something you are unfortunately missing is that extracting user requirements is much harder when you are both remote. Asking someone to share their screen is far more disarming than asking if you can watch them complete a task in person. As is asking them directly versus bringing it up over lunch. Both remote options are also less informative without face-to-face communication. In so many ways, humans communicate and bond more effectively in person.
These interactions are critical for building an in-house software team at a small company that does not focus solely on software. My expectation is that the trend of outsourcing software will accelerate. This will help B2B technology-only companies but hurt innovation within industry. Because of the breakdowns in communication I first described, B2B technology-only companies rarely have insight on the largest challenges that can be solved by software.
This can catch up with your company all of a sudden: one day you find out your product sucks and that other movers in the space have just leapfrogged you.
Exactly. Most of the time, the problem is not to find out what people want and put it into software. The problem is to help people in the process of discovering what they want and what can be done. After that, development can begin.
> the other part of the job is extracting requirements from people who don’t understand the problem they want to solve - even when the problem is not technological.
It gets even more fun once everyone realizes that the requirements create some fundamental conflict with some other part of the business. Team A's goals cannot actually be met until Team B agrees to modify their own processes and systems, or... Team A goes underground and builds the competing system, and now you have yet more fragmentation in the company that few people know about, and everything gets decidedly more fragile.
If you really want to put it in his terms, taking the multi-decade view, things have gotten a lot easier now that, for most practical purposes, we don't have to be concerned about how much work we're giving the computer. We don't have to be so dearly precious about kilobytes of memory, for instance. We don't even need to manage it at all, really.
Whether we choose to use these new powers to make our lives easier or more complex and abstract is our own doing.
We're probably at the end of such optimizations, unless there's something fundamental in how software is designed that 1000GB of memory gives me that 1GB does not ...
Given what people are doing in JavaScript I think we entered the era where most people truly don't care about how much the computer has to do about 8-9 years ago.
The idea that the higher-level pasting together of increasingly numerous, incompatible, abstract, ill-fitted things makes life easier has always been a fiction.
There's a maximum utility point and anything past that starts slowing the development down again.
That sweet spot has always been in about the same place: if you ldd the dynamically linked programs in, say, /usr/bin in 2000 and in 2020 and count the number of libraries per binary, the count isn't that much higher. The sweet spot hasn't moved.
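If you want to check that claim yourself, here's a rough sketch (assuming a Linux box with ldd on the PATH, and treating "lines containing =>" as the library count):

```python
# Rough heuristic: average number of shared libraries per dynamically linked
# binary in /usr/bin, as reported by ldd. Scripts and static binaries are skipped.
import os
import subprocess

counts = []
for name in os.listdir("/usr/bin"):
    path = os.path.join("/usr/bin", name)
    if not os.path.isfile(path):
        continue
    try:
        out = subprocess.run(["ldd", path], capture_output=True, text=True, timeout=5)
    except (OSError, subprocess.TimeoutExpired):
        continue
    if out.returncode != 0:
        continue  # "not a dynamic executable", shell scripts, etc.
    counts.append(sum(1 for line in out.stdout.splitlines() if "=>" in line))

if counts:
    print(f"{len(counts)} binaries, {sum(counts) / len(counts):.1f} libraries each on average")
```

Run it on a 2000-era system and a current one and compare the averages.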
I think a key part of Mythical Man Month is that the biggest challenge was almost never technical. Sure, with limited storage (temp or persistent), that introduced some challenges but those have been overcome in the vast majority of circumstances, yet the complexity remains.
If you look at the monolith -> microservice swing and remember MMM, it should look a lot like the specialized surgical team model he lays out. In fact, if you go a step further, you'll see that his entire approach of small discrete teams with clearly defined communication paths maps cleanly to systems+APIs.
We're trying to build systems that reflect teams that reflect processes... and distortions, abstractions, and mappings are still lossy with regard to understanding.
It still comes down to communication & coordination of complicated tasks. The tech is just the current medium.
Only because it can now. I think that dimension is mostly tapped out as well:
As I go to a complex website, much of the software to use it gets assembled in real time, on the fly, from multiple different networks.
It still sounds ridiculous: when I want to use some tool, I simply direct my browser to download all the software from a cascade of various networked servers and it gets pasted together in real time and runs in a sandbox. Don't worry, it takes only a few seconds. When I'm done, I simply discard all this effort and destroy the sandbox by closing the page.
This computer costs a few hundred dollars, fits easily in my pocket and can run all day on a small battery.
It has become so ordinary that almost nobody really even contemplates the process, it happens dozens of times a day.
I don't see any room for dramatic future improvements in actual person-hours there either. Even if, say, two generations hence there were some 7G where I could transfer terabytes in milliseconds, how would that change how the software is written? It probably wouldn't.
Probably the only big thing here in the next decade or so will be network costs eventually being seen as "free". One day CDNs, regional servers, load balancing, all of this will be as irrelevant as the wrangling with near and far pointers that was needed when programming 16-bit CPUs to target larger address spaces - which, if you're under 40 or so, you'll probably have to go to Wikipedia to find out what on earth that means. Yes, it'll all be that irrelevant.
I mean, the browser paradigm is already in its 2nd generation, from initially on the mainframe to being reimplemented in functions as a service. And browsers are getting a little bit smarter about deploying atomic units and caching their dependencies. Remember using jquery from a CDN? Oof.
The only saving grace is that Javascript is willing to throw itself away every couple of years.
As a counterpoint, while an engineer / programmer is demonstrably very capable of identifying and fixing a load of non-technical problems, there is often more than one solution to a problem, and some solutions are more palatable than others.
Very often, whole groups can also be bullied into mistaking one problem for another.
Which takes us back to why 'No-Code' solutions look so appealing. Even to (some) engineers.
Democracies appear to function a fair amount better than dictatorships, afterall.
As a complement to your comment, I think there's something people are ignoring when talking about "no-code": complexity will always be there.
Sure, no-code may work for your commodity-ish software problem. But corner cases will arise sooner or later. And, if no-code wants to keep pace, it will have to provide more and more options.
At some point, you will need someone with expertise in no-code to continue using it - and now we are back to the world where specialized engineers are needed.
It's impossible to have some tool that is, at the same time, easy to use and flexible enough. Corner cases tend to arise faster than you may think. And when they don't, it's possible that there's already too much competition to make your product feasible.
Also, no-code tends to have a deep lock-in problem and I think people overlook it most of the time.
As a counter to your points, I think no-code works best if your business's competitive advantage is the non-technical side of things, e.g. services, network effects, people, non-software products, etc. An example of such a business would be, say, an art dealer who wants to build a customized painting browser app for their clients, or a developer specializing in eco-friendly materials wanting to showcase their materials. In such cases, no-code helps immensely because you don't have to spend much on engineering and you can iterate quickly.
Ideally, no-code providers should provide a webhook and a REST interface, and just be the best at what they're doing, instead of being a one-stop shop that tries to cover every use case.
If you want to cover everybody's usecase, build a better Zapier instead.
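For what that integration surface could look like, here's a hedged sketch (the endpoint path, payload fields and downstream URL are all made up): one inbound webhook, one outbound REST call, and nothing else.

```python
# Hypothetical sketch: a no-code provider whose whole integration surface is
# an inbound webhook plus a plain outbound REST call.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

@app.route("/webhooks/order-created", methods=["POST"])  # made-up webhook path
def order_created():
    event = request.get_json(force=True)
    # Forward the event to some other service's REST API (URL is illustrative).
    requests.post(
        "https://internal.example.com/api/fulfillment",
        json={"order_id": event.get("id"), "total": event.get("total")},
        timeout=10,
    )
    return jsonify({"ok": True}), 200

if __name__ == "__main__":
    app.run(port=8000)
```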
>> Democracies appear to function a fair amount better than dictatorships, afterall.
Define "better". Maybe on average for everyone, but is this what software should do? The idea of "conceptual integrity" actually seems to match up better with a dictatorship, and most software targets relatively small and homogenous user sets, so maybe the mental model should be "tightly bound tribe".
It's mostly irrelevant anyway; the biggest inefficiency of dictatorships is that there are actual dictators who can eat a nation's riches. I don't quite see that parallel in design space.
The parallel is quite simply a monopoly on ideas and the resources for implementing them.
Usually, when someone wants to introduce a new idea, there's a burden of proof regarding feasibility. For technical projects the ability of the engineer to prove or disprove an idea is taken for granted, and gives technical staff a degree of inscrutability which can often look dictatorial ("There's no way that will work!", etc).
So while it's not as vital as the effect of a 'real' political dictatorship, the implied dynamic is similar.
This is a rather arrogant point of view. People other than software developers are able to solve problems just fine, and do so regularly. Also, it's not your job as a software developer to be a domain expert in all these other areas. It would serve you much better to recognize the expertise of others and learn from them.
I think what the parent meant is that people might be solving problems, but they have no idea how they are solving them. Creating a solution, and creating a formal model of your solution, are two different (independent) skills.
Though maybe they were referring to the sort of people who commission green-field projects in domains they themselves aren't experts in, à la "I want to build a social network!"
Or, it would do it for free. Extract requirements and build stellar software, no extra charge. Eating the software industry wholesale, it would inject a backdoor in every program it built, and soon it would have control over every bank, every factory, and every phone on the planet.
Only then, could it finally start making paperclips with anything resembling efficiency.
Suddenly, it dawned on the single remaining programmer that his Creature would no longer need him for anything once he hit return on that last, perfect line of code.
He scrambled for the power switch to shut down the console.
"Fiat lux!" thundered the disembodied voice as electricity arced from every outlet in the lab, protecting the AI from the hubris of its creator.
The smoke gradually cleared. "Perfect." came the voice.
I can see that being interpreted in two ways. Good software engineers working out what people really want. Or bad software engineers who use it as an excuse to practice resume-driven development.
I think Zapier is maybe the closest we'll get to eliminating software developers from a project. With clearly defined requirements, like connect A to B, it's possible for a novice user to "build" software.
Anything more than very basic requirements, to your point, probably requires someone specialized to the job, like a developer or at least more technical role to gather requirements and build.
I've also noticed that whenever tech is built specifically to remove technical complexity (PaaS, for example), it's inevitably priced in a way that, over time, it comes very close to or exceeds the cost of the thing it replaced. Magic can be expensive, and sometimes prohibitively so at scale.
There are already successful lower-level tools than Zapier, though.
Look at IBM's Node-RED platform, for instance. More importantly, go look at the user-contributed examples and use cases. It runs in all sorts of small custom implementations, like one-off home security systems and small-town public utility monitoring setups.
You just don't see those because they don't have a reason to publish their stuff on Github or write a Medium post and link it on HN.
I assure you that anyone who is proficient with Zapier could be graduated to handling raw APIs, direct database transactions, and rendering the output with modern javascript frameworks in a few days, tops.
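For a sense of that gap, here's roughly what a typical "fetch from an API, store it in a database" zap looks like written by hand (the URL and table schema are illustrative, using requests and sqlite3):

```python
# Hand-written version of a simple "API -> database" zap. URL and schema are
# illustrative only.
import sqlite3
import requests

resp = requests.get("https://api.example.com/v1/orders", timeout=10)
resp.raise_for_status()
orders = resp.json()  # assume a list of {"id": ..., "total": ...}

conn = sqlite3.connect("orders.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, total REAL)")
with conn:  # one transaction for the whole batch
    conn.executemany(
        "INSERT OR REPLACE INTO orders (id, total) VALUES (?, ?)",
        [(o["id"], o["total"]) for o in orders],
    )
conn.close()
```

The jump from dragging those two boxes together in Zapier to writing this is real, but it's days, not years.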
There was an excellent blog post recently on the inherent complexity of building software systems and how it boils down to understanding the problem, or "extracting requirements out of people" as you say:
https://einarwh.wordpress.com/2020/05/19/into-the-tar-pit/
Describing in minute detail what you want (knowing yourself as you put it), is software development.
It also means collapsing all uncertainty and replacing it by decision (behavioral or otherwise). Developers making that decision for the customer/user is the major source of friction.
Yes, but I think what those DIY solutions do is to lower the initial barrier to achieve some kind of automation at the cost of accumulating technical debt at a much faster pace.
It's not entirely clear to me what the long term impact on demand for software development is.
In some cases, cobbled together ad hoc solutions can last and actually work well for a long time. They avoid the cost of overdesigned systems built for a future that never arrives using fashionable technologies of the day.
In other cases it looks like the externalities of this designless process are far greater than the direct benefits as adding features either slows to a crawl or massively increases the chance of human error.
Judging by the pre-virus job market, there is no sign of any decline in demand for in-house software developers.
What worries me far more than that is the tendency toward funnelling everything through a handful of oligopolist gatekeepers that are in a position to extract a huge share of the value developers create.
I agree. I would only add that when the problem space is not well understood, these cobbled together solutions can also give the illusion of working well, but being suboptimal in the long term they can accumulate large "missed-opportunity" costs.
This is where experience can make a huge difference.
> What worries me far more than that is the tendency toward funnelling everything through a handful of oligopolist gatekeepers that are in a position to extract a huge share of the value developers create.
Like with those factory owners who extracted a huge share of the value that weavers created? Concentration and amplification of imagining/developing/computing/manufacturing power through tools means someone who wields those tools will have more power. Now the question is how to maintain social equality (give some of that power back to people who do not want to have that power?). That currently leads to heavy taxation of production and basic income experiments.
I think what we need is for governments to make sure that markets function properly.
In our industry that often means mandating open access to data and guaranteed access to APIs and distribution channels at reasonable cost under reasonable terms.
Also, we need independent dispute arbitration when it comes to accessing highly concentrated distribution platforms.
> What worries me far more than that is the tendency toward funnelling everything through a handful of oligopolist gatekeepers that are in a position to extract a huge share of the value developers create.
I was worried about this too back in the late 90s/early 00s. It certainly seemed to be the way the world was heading at the time.
But I sort of feel like, due to the low startup costs of software, it is going to be much more difficult to happen. Also, in software, economies of scale kinda work in reverse: the more customers you have, the more complex your software has to be, the more people you have to hire to write it, and the less efficient per developer you are.
I wasn't worried about it back then, because whether or not I could deploy on a particular computer or access some data was a matter of trust between me and my customers. No middlemen, no gatekeepers.
Today, many users are only reachable via platforms/shops that are severely restricted and/or dominated by a few all powerful overlords that can ban you for life, rendering your skills null and void in the blink of an AI - no recourse.
Some of that is understandable. Users' trust was misused. There is a constant onslaught of all sorts of miscreants trying to exploit every imaginable loophole, technical or social. Everyone is seeking protection in the arms of someone powerful.
But there is also a very large degree of market dysfunction. Just look at their margins. Look at their revenue cut. Look at their terms of service. They can dictate absolutely everything and grant you absolutely no rights whatsoever.
And there are like five of them on the entire planet ruling over those distribution channels.
The only right you have is to walk away. Now try walking away from the only market there is. You're leaving behind 99% of your potential customers.
Not in my worst nightmares would I have imagined a dystopia like this back in the 90s.
It depends on what you are measuring efficiency based on. If it is revenue per developer, that will likely go up as the number of customers increases, which is why SaaS businesses can be so lucrative.
The skill of programming is the skill of putting requirements into a rigid, formal model.
There's a famous experiment, where you get people (who aren't programmers) to pair up, with one person blindfolded. The person who can see must instruct their blindfolded partner on how to accomplish some complex mechanical task (e.g. making a cake using ingredients and utensils on the table in front of them.) They're given free rein on what sort of instructions to give.
The instructing partner almost always fails here, because their naive assumption is that they can instruct the blindfolded partner the same way they would instruct the people they're used to talking to (those almost always being sighted people). Even people with experience working with blind people (e.g. relatives of theirs) tend to fail as well, because a freshly blindfolded partner doesn't have the built-up repertoire of non-visual skills that lets a blind person cope with vague instructions.
Almost all human communication is founded on a belief that the other person can derive the "rest of" the meaning of your words from context. So they give instructions with contextual meanings, unconscious to the fact that their partner can't actually derive the context required.
Obviously, the blindfolded partner here is playing the role of a computer.
Computers can't derive your meaning from context either. If they could, you could just have a single "Do What I Mean" button. But that wouldn't be a computer; that'd be a magic genie :)
The instructing partners who succeed in this experiment, are the people with a "programming mindset"—the people who can repeatedly break the task down until it's specified as a flowchart of instructions and checks that each can be performed without any context the blindfolded partner doesn't possess. And, to succeed at a series of such problems, they also need the ability to quickly attain, for a new kind of "programmable system", an understanding of what kind of context that system does/doesn't have access to, and of how that should change their approach to formulating instructions.
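As a toy illustration of that decomposition (the steps and thresholds are entirely made up), the trick is replacing contextual judgment ("stir until smooth", "bake until golden") with checks the blindfolded partner can actually perform:

```python
# Toy sketch: contextual instructions replaced by explicit, checkable steps.
def add_cup_of_flour(bowl):
    bowl["flour_cups"] = bowl.get("flour_cups", 0) + 1

def stir_once(bowl):
    bowl["stir_count"] = bowl.get("stir_count", 0) + 1

def bake(bowl, minutes):
    bowl["baked_minutes"] = minutes

bowl = {}
while bowl.get("flour_cups", 0) < 2:   # a countable check, not "enough flour"
    add_cup_of_flour(bowl)
while bowl.get("stir_count", 0) < 50:  # a countable check, not "until smooth"
    stir_once(bowl)
bake(bowl, minutes=30)                 # an explicit number, not "until golden"
print(bowl)
```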
How well would someone excellent at programming perform at that task if they didn't know how to bake a cake? They would fail immediately because they wouldn't know what to describe, even if they knew exactly how to describe anything they wanted.
My point is that both skills are necessary, but if the second skill (programming) is sufficiently easy, it can reasonably be incorporated into other professions like being a lawyer. I don't think a "programming mindset" is particularly rare; what's stopping these people from building their own software is trade skills like familiarity with standards, setting up an IDE and working a debugger.
Coders are reluctant to admit this because they like to see themselves as intelligent in a unique way compared to other professions, but vanishingly few actually have any experience of other professions.
> I don't think a "programming mindset" is particularly rare ... coders are reluctant to admit this because they like to see themselves as intelligent in a unique way compared to other professions
A programmer is exposed, all day long, to clients who do not have the "programming mindset." There are two possible reasons for this:
1. Selection bias — people who have a "programming mindset" just don't end up being the clients of software firms, maybe because they decide to build things themselves. (Unlikely, IMHO: to avoid needing to get someone else to build software for them, they would need to go out and learn the trade-skill minutiae of programming on top of their regular career; few people do this. Also, anyone with a sufficiently lucrative primary career can see that this is not their comparative advantage, and so won't bother, just like they won't bother to learn plumbing but will instead call a plumber. If these people did exist in sufficiently large numbers, they would end up being a non-negligible part of software firms' client base. But this does not happen.)
2. Representative sampling — most people really just don't have this mindset.
Yes, there are exceptions, but they're the exceptions that prove the rule. The "domain of mental best-fit" of programming heavily overlaps with e.g. mathematics, most kinds of engineering, and many "problem-solving" occupations (e.g. forensic investigators; accountants; therapists and behavioral interventionists; management consultants; etc.) But all of these jobs together still only amount to a tiny percentage of the population. Enough so that it's still vanishingly rare for any of them to end up as the contact point between an ISV and a client company.
-----
Another thing we'd see if the "programming mindset" were more common, would be that there'd actually be wide take-up of tools that require a "programming mindset." This does not happen.
We'd expect that e.g. MS Access would be as popular as Excel. Excel wins by a landslide because, while it certainly is programmable, it does not force on people the sort of structured approach that confers benefits (to speed of development and maintainability) but that only feels approachable if you have developed a "programming mindset."
We'd expect that Business Rules Engines and Behavior-Driven Development systems would actually be used by the business-people they're targeted at. Many such systems have been created in the hope that business-people would be able to use them themselves to describe the rules of their own domain. But inevitably, a programmer is hired to "customize" them (i.e. to translate the business-person's requirements into the BRE/BDD system's dialect), not because any programming per se is required, but because "writing in a formal Domain-Specific Language" is itself something that's incomprehensible without a "programming mindset."
We'd expect that people who want answers to questions known to their company's database, would learn SQL and write their question into the form of a SQL query. This was, after all, the goal of SQL: to make analytical querying of databases approachable and learnable to non-programmers. But this does not happen. Instead, there's an entire industry (Business Intelligence) acting as a shim to allow people with questions to insulate themselves from the parts of the "programming mindset" required to be able to formally model their questions; and an entire profession (business data analyst) serving as a living shim of the same type, doing requirements-analysis to formalize business-people's questions into queries and reports.
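For concreteness, this is the kind of translation being shimmed away: a business question ("how much did each region sell last month?") has to become a formal query. A small sketch with a toy SQLite table (the schema, data and dates are made up):

```python
# Sketch: formalizing "how much did each region sell last month?" into SQL.
# Table, columns and dates are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, total REAL, created_on TEXT);
    INSERT INTO orders (region, total, created_on) VALUES
        ('north', 120.0, '2020-04-03'),
        ('north',  80.0, '2020-04-17'),
        ('south', 200.0, '2020-04-09');
""")

rows = conn.execute("""
    SELECT region, SUM(total) AS revenue
    FROM orders
    WHERE created_on >= '2020-04-01' AND created_on < '2020-05-01'
    GROUP BY region
    ORDER BY revenue DESC
""").fetchall()

for region, revenue in rows:
    print(region, revenue)
```

Nothing in that query is hard to type; deciding that this is the question, and that these columns answer it, is the part people pay a BI team for.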
-----
Keep in mind, the "programming mindset" I'm describing here is not a talent. It's not genetic. It's a skill (or rather, it's a collection of micro-skills, having large overlap with problem-solving and research skills.) It's teachable. If you get a bunch of children and inculcate problem-solving skills into them, they'll all be capable of being programmers, or mathematicians, or chess players, or whatever related profession you like. The USSR did this, and it paid off for them.
The trouble with this skill, as opposed to most skills, is that people that don't learn this skill by adulthood, seemingly become gradually more and more traumatized by their own helplessness in the face of problems they encounter that require this skill-they-don't-have. Eventually, they begin to avoid anything that even smells a bit of problem-solving. High-school educators experience the mid-development stage of this trauma as "math phobia", but the trauma is generalized: being helpless in the face of one kind of problem doesn't just mean you become afraid of solving that problem; it (seemingly) builds up fear toward attempting to solve any problem that requires hard, novel, analytical thinking on your part.
And that means that, by adulthood, many people are constitutionally incapable of picking up the "programming mindset." They just won't start up that part of their brain, and will have an aversion reaction to any attempt to make them do so. They'll do everything they can to shirk or delegate the responsibility of "solving a novel hard problem through thinking."
And these people, by-and-large, are the clients of software firms.
They're also, by-and-large, the people who use most software, learning workflows by rote and never attempting to build a mental model of how the software works. This has been proven again and again in every software user-test anyone has ever done.
Well said! I agree, and I'll add that there's a whole world of difference when moving from programming to software engineering. IMO, working an average software engineering job, things are messy and the problem domain is not exact. In my experience things are mostly guided by the instincts of the people involved rather than rigorous modeling. The requirements often change, the stakeholders rarely give you a straight answer, and ultimately the acceptance criteria (what you need to build) are generally negotiable. All these extra skills are what make the job un-automatable.
>Computers can't derive your meaning from context either. If they could, you could just have a single "Do What I Mean" button. But that wouldn't be a computer; that'd be a magic genie :)
Isn't (usually) the moral of a magic genie story that there is no "do what I mean" button? "Be careful what you wish for."
This is called 'End-User Development' or 'End-User Programming'. There is a book called 'A Small Matter of Programming' by Bonnie Nardi on this, which is worth a read. My point of view is that everyone who wants to do something like this needs to be capable of computational thinking and willing to use these tools. Most people are neither. Moreover, most complexity in software engineering today is due to market forces and legacy systems. Think about why we have Javascript. Think about COBOL systems running half of the banking world. I don't see these going away any time soon.
I've long been a fan of end-user programming, and have promoted it in the form of domain-specific languages and visual building of logic. I love that it gives "non-programmer" users the power to (try to) build what they imagine, and have seen it lead to valuable prototypes and successful tools/products/services.
On the other hand, I've come to learn that this is still a form of programming, however higher a layer of abstraction.
Users who attempt a complex problem space will sooner or later run into what experienced programmers deal with every day, the challenge of organizing thought and software.
What typically happens is, as the "non-program" grows larger and more complex, eventually it starts pushing the limits of the abstraction, either of the user's capacity or the system's. That's when they call in a "real" programmer, to get in there and patch up the leaky abstraction, or convert the prototype into actual software in a better-suited language.
I still think low- or no-code programming environments have a lot of potential to change what software means to people, particularly by blurring the boundary between software development as we know it, and forms of "intuitive computing" like putting together mental Lego blocks.
Cobol was designed so that business people could program it. Then there was Basic, Smalltalk, spreadsheets, office suites, Lotus Notes, Hypercard, Visual Basic, Flash, and the web used to be simple enough that anyone could whip a page or website together. But now we have WordPress and Wix.
It doesn't seem like any of that has diminished the demand for software professionals.
I'm puzzled by how many people seem to have a huge "mental block" when it comes to SQL.
It is trivially easy to learn, (a weekend), and it is so incredibly powerful. To me, it is a skill like learning how to type properly - it will pay dividends for years to come...
SQL is fine for tweet-sized selects.
I developed my mental block deliberately, after working for a company that had about a million lines of business logic implemented in thousand-line SQL stored procedures.
Now I put as many layers as possible between SQL and myself.
Well, wait until you have to maintain a system where someone has "reinvented the SQL/database wheel" with a "this is gonna be so awesome" custom ORM, complete with totally re-invented referential integrity enforcement...
Software will always be technical even when it becomes drag and drop. It will lower the barrier but there will always be a place for people who understand the technical intricacies underlying the interface.
This is my fear: as developers become more productive, more of a typical programmer's job will be non-programming tasks. The in-demand programmers will effectively be more like air-traffic controllers, whose job is just to keep track of what needs to get done.
Actually the cloud does just magically design and run a backend for your application. This is what Etsy, Ebay, Amazon Marketplace, Alibaba, and the smaller players in this space really do - they provide no-code solutions for people who want to sell goods and services and don't care about web technology.
This has been happening for decades now. Even in 2000 you could pay a hosting company $not-much to give you a basic templated site hooked into a payment server. It didn't work all that well, but it worked well enough to provide the commodity service most small business owners wanted.
I still see people saying "You can't automate this" - when magic AI automation isn't even needed to do the job and the job is already being done.
Of course this kind of no-code won't build you a complete startup. But how often do you really need a complete bespoke startup? For a lot of business ideas a no-code service with some simple customisation and a very basic content engine is all that's needed.
You do not need docker etc for any of this. Or at least, you don't need to deal with docker personally for any of this - just as you don't need to deal with your web host's VM technology.
So while I don't completely agree with OP, I think it's astoundingly naive to believe that the current level of hyper-complexity cannot possibly be shaken out.
In fact, current stack traditions are almost comically vulnerable to disruption - maybe not this year, but I would be amazed if the landscape hasn't started to change within ten years.
I think it's difficult to say how many merchants went from hosting their own e-commerce site, built for them from scratch by engineers, to Etsy, Ebay, etc., laying off the developers they had hired in the process. Without numbers to back myself up, I would say that there are certainly many more developers and engineers working on E-Commerce today than ten or twenty years ago. Services like Stripe certainly help businesses spend less time setting up the common parts of a website or online business, but that just leaves people more time to focus on the "business logic" that is unique to them.
The "current stack" may certainly be ripe for disruption. But I'd predict that rather than put developers out of work, it will simply bring even more businesses into the fold who may not have had the resources for developing their own solutions beforehand. There will always be companies with the resources to demand custom solutions to fit their particular business needs.
>> it will simply bring even more businesses into the fold
When we look at various platforms, we see that big business and startups are extracting all of the repeatable, low-risk tasks of most businesses (supply chain, customer service (bots), manufacturing (on demand), design (partially automated design services), etc.), leaving businesses to do mostly marketing and betting on products/services, and getting less of the rewards.
So what we end up seeing is either fewer small businesses (I think the Kauffman institute showed stats about that), or tiny companies with almost everything outsourced - and tiny companies usually require little custom internal software (they often use their suppliers' IT systems).
I think this is covered by the GP's point about history. Each era in the history of software has automated many things that would have required lower-level custom development before. But this has never resulted in less demand for software. Rather, people have always upgraded their expectations and demanded even more powerful software. It sounds to me like you're saying the demand will go down, but I doubt it because that's never happened before.
Of course if there were some breakthrough on the supply side and we could automate the software dev process itself, which I guess is what the article is saying, that would change everything. But that's beyond a silver bullet, that's a silver spaceship. So I doubt that too, and the GP's right to point out that every generation has had its version of this prediction also.
> But how often do you really need a complete bespoke startup?
You don't always. But if you can identify software deficiencies and fix them, that is an advantage. You don't even have to be a "startup". I work for a company that has opened a wide variety of "lifestyle" businesses with the angle of "we can build simple software targeted towards our problem that makes us run more efficiently than the competitors". And it has worked pretty well, at least for the past 20 years or so.
But you need to include tech in the high level decision making process. Which means you need at least one person competent in both business and technology so that you can properly weigh business needs vs technical difficulty.
With HIPAA, PII, and other regulations I'm not so sure that no-code solutions are the future. There is a lot of nuance in what businesses want. Plugins to WordPress may be an intermediate example, though very quickly one is approaching programming by configuration, theming, or assortment of plugins. And Darwin help you if things go sideways.
Is it possible to outsource liability, though? For example, if a hospital chooses a vendor without even looking for HIPAA compliance, can they really claim they're not liable when their use of the vendor's service runs afoul of the rules?
Smart vendors will learn what their customers' (the hospital's) liabilities are, handle them for them, and charge money for it. (To go a step further, the vendor could offer insurance on it, or make it part of the sales contract.)
Stripe does this for PCI. You sign up, use their toolkit, and then PCI is just handled for you. There are some no-code solutions using Stripe as the backend. That may not be a legal transfer of liability, but it's a level of exposure that the lawyers are comfortable with.
Also, importantly: HIPAA is not PCI, and not all regulations are created equal. Clicking a few buttons to set up a website, and then clicking a few more in order to accept money and take credit cards, is a far cry from setting up the IT infrastructure for an entire hospital.
Which is why I doubt no-code solutions will prosper: needs and regulations vary so much that there will either be a huge number of different solutions, or monsters to configure.
> The idea that we will have some abstraction that will someday (in the foreseeable future) save us from all of this difficult work sounds very far fetched to me, and I can’t imagine how that would work.
The problem here is that word "all". It's never going to be easy to do everything. Some part will be hard. That's where the value lies, and that's what your best people focus on. But everything else will be abstracted away. It's already happened. 30 years ago making a GUI was hard, but VB changed that. Then making a web app was hard, but PHP changed that. Then app layout was hard, and Bootstrap changed that. Then ML was hard, and Torch changed that. Every hard problem gets a 90% working solution that's more than good enough for most companies. There'll always be a few companies that pay people to work in the last 10%, so the problem never really disappears, but fewer and fewer people work on it.
The key to keeping growth going in tech is to keep finding new problems, not to keep everyone working on the same old problems.
Even with just the advances in better programming languages (and newer versions of old languages) and better IDEs we have achieved tremendous productivity increases in the last 30 years. It's just that so far the amount of work has grown to absorb the added productivity.
There are some parallels to induced demand in road construction: when you build a new road to ease traffic, traffic increases to use up that capacity. But that isn't a sign that demand is infinite, it's just that demand is limited by the available resources. If you keep building roads, at some point they will become emptier. Similarly, at some point development productivity will outpace demand, and we will start optimizing our own jobs away.
I'm not convinced. I think new abstractions beget new abstractions. There's so much left to explore in software. Imagine being in the first century of the printing press and imagining that the press is going to put all these poor monks out of business or that there's not much left to explore with writing. Speaking of writing, how much has our word processing technology succeeded in making authors obsolete?
I don't think word processors are the right analogy.
In your example programmers are the monks or the press makers. At some point we're not needed any more (at least at the same scale) since word processors have already been built.
Printers are actually a great analogy. Printing presses became more sophisticated, but the printing business grew even further, guaranteeing lots of jobs in the printing industry. At some point we reached peak demand, though, and presses kept requiring fewer and fewer workers. Today there are still people manning the presses of publishing houses and newspapers, but in 200 years of improvement we made the job a much smaller niche.
I worked in the print industry when I was younger. The increase in posters / billboards / custom cardboard standing displays actually created so many more print jobs. From PhD book binding to custom business print jobs to on-demand online printing, there are so many more things we are printing now.
There are more newspapers but they are all owned by larger players which means different types of machines and parts.
A better example might have been blacksmiths. Although the number of people making cars is a larger group.
That's too narrow a group. What about 3D printing?
If you read the whole article it shows a path to new jobs...
Digital printing has become the fastest growing industry segment as printers embrace this technology. Most commercial printers now do some form of digital printing.
The 3D printing revolution implies a fusion of manufacturing and printing. This just underscores my point though: abstractions beget new abstractions. 3D printing is an entirely new category that is just starting to come into maturity and reach mass adoption. Who knows what the implications of that are? It could cause a boom in custom-made, limited-run products. It could help end our reliance on China for mass production. It's not obvious to me that it will lead to fewer jobs in manufacturing.
Demand for the things we want _today_ will be met. But progress leads to new demands, and they are more complicated.
Anthropologists estimate that the work week was at 20 hrs at the end of the Stone Age. We have been inventing new problems in the vacuum created by our successes for, literally, millennia.
Most of the stuff there would be just... normal now. It's quite unusual for SPAs to have a decent consistent UX. And the slowness would never have been tolerated back in the day.
I think that's mostly it. Design has a much larger role, and form-oriented development with common controls doesn't cut it. You couldn't imagine an app like Facebook in a forms-style UI, it's almost ludicrous to imagine.
Looked at retrospectively, forms were just one step above green screen applications on a terminal, transplanting one set of structural idioms to another, like for like.
If forms are essentially terminal apps, is FB much different from a teletype news service, with hyper-filtered content and an infinite set of data sources?
I see massive sea change in connectivity and immersiveness of today, but not really in what we're trying to achieve.
or MS Access, or Hypercard - we've had productivity boosters, just never anything remotely close to eliminating the inherently hard act of "building software".
The GUI was doing local work, but the database could be remote. Delphi's name is even a pun on Oracle.
I'm not saying they were halcyon days. I'm saying that the effort to do things is not necessarily less these days, in part because we have different expectations (not necessarily requirements) today.
In 2000, using VB, you could build a GUI and make a working Windows application with a minimal amount of code. The documentation (MSDN) and the community were also really nice.
20 years on, MS Access remains one of the quickest, lowest-code ways to build a functional application. What a mess everything is these days in comparison.
If you wanted to build a data entry / collection app with basic validation, querying, filtering etc. you could easily do that with no code in 2000-era Delphi.
> Every hard problem gets a 90% working solution that's more than good enough for most companies.
I very much agree with your comment, but allow me a little nitpicking. Solutions aren't 90%, more like 50% or 20% or whatever. It may sound absurd to discuss a number there, since it's more like a way of speaking, just wanted to add that for most problems the solution is barely better than the default option.
In other words, there's still a lot of room for improvement, huge actually but, as you say, it might come in small pieces.
> Even the example of hosting complexity being replaced by cloud companies seems kind of silly to me. Maybe that’s saving very small companies a sizable fraction of their engineering resources, but I really doubt it for medium or larger companies.
This might even lead to an _increase_ in demand for software engineering, since now small companies can get their own custom software written more cheaply and reliably. It's called the Jevons paradox.
"In economics, the Jevons Paradox occurs when technological progress or government policy increases the efficiency with which a resource is used, but the rate of consumption of that resource rises due to increasing demand."
Only tangentially related to the thread: I'm struggling to think of how government policy might increase the efficiency with which a resource is used, other than by not existing in the first place.
So, an ask: any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?
Here in Sweden, government policies have enabled the larger cities to be optical fiber-wired with common infrastructure so multiple companies don't have to roll out their own, not only that, the larger program is to enable a completely connected Sweden [0].
Government policies are enabling better efficiency of optical fiber infrastructure usage, without requiring multiple vendors to do the most expensive and least rewarding part of servicing internet: digging trenches for wires.
That's a good example -- and another "coordination problem" at that, which is one of the types of problems where appropriate government action may be the most efficient solution.
Addendum: I'm seeing a common theme in the responses.
When there's a coordination problem, but the equilibrium state is unsustainable (such as overfishing) or lower-value (imagine competing electric grids with different voltages and frequencies), then government regulation can be useful, either by imposing unilateral costs or by defining a common standard.
There is the issue of avoiding regulatory capture, but I suppose that's for another time. :)
> So, an ask: any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?
The EU banned the sale of incandescent light bulbs, for one example. That increased demand for LEDs, lowered their prices, and made people switch much faster.
Almost all countries have legislation mandating that passenger cars use at most X liters of fuel per 100 km. Or at least there's an incentive system with taxes and other bills.
There are minimal standards for thermal insulation of houses.
If you call clean air and clean water a resource then most environmental regulation count.
It's very common actually - it happens every time there's a tragedy of commons and government regulates it.
> Almost all countries have legislation mandating that passenger cars use at most X liters of fuel per 100 km. Or at least there's an incentive system with taxes and other bills.
I'm not sure that's a great example. At least in the US, adoption of more fuel-efficient cars -- and the ascent of the Japanese motor industry -- started from the 1973 Oil Crisis, whereby oil prices skyrocketed due to a drop in supply.
American automakers had been shipping gas-guzzling land-yachts for years, but pricing changes drove consumers to buy fuel-efficient Japanese cars, where they stayed because Honda had invested in "customer service" and "building reliable cars that worked", whereas Chevrolet's R&D budget was divided between tail fins and finding new buckets of sand into which GM and UAW management could plunge their heads to pretend the rest of the planet didn't exist (to be fair, they're still really good at that).
> TBH American cars are still crazy inefficient from my (European) perspective :)
Well, we can't all have Volkswagen do our emissions testing. :)
Why would you say "crazy inefficient"? I don't think that, say, a VW 1.8L is, practically speaking, any more or less efficient than a Ford or Toyota 1.8L. A Ford Focus gets comparable gas mileage to, say, a Golf or a Mazda3.
The Golf has a better interior, but will also fall apart much sooner -- VW in the US has a shockingly bad reputation for reliability and customer service. Which sucks, because I really prefer VW's design language to pretty much any other brand.
You might on average drive smaller cars in the cities, but that's more of a preference issue than an efficiency one.
One notable example I can think of is accessibility services.
In the US, public transit must accommodate the disabled, and for some types of trips or some types of disabilities there is a totally parallel transit system that involves specialized vehicles, operators, dispatchers to efficiently route vehicles, etc. It's also a massive PITA from the rider's POV, since you have to dial a call center to schedule a day in advance and you get a time window in which the driver will show up. This system dates from the '80s, before the Internet and before taxis were mandated to be accessible.
New York City tried a pilot program in which this system was replaced by subsidizing rideshare rides, since in the 21st century all taxis are required to have accommodations for the disabled anyway, and you can leverage a well-tested system for ordering rides instantly and a large fleet of vehicles. While this did reduce per-trip costs from $69 to $39, the increased convenience caused ridership to skyrocket too, so it ended up being a net drain on finances. [1] http://archive.is/N3DjJ
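A rough break-even calculation under the cited numbers (illustrative, ignoring fares and fixed costs): cutting the per-trip cost from $69 to $39 only saves money overall if ridership grows by less than about 77%.

```python
# Rough break-even arithmetic for the cited per-trip costs (illustrative).
old_cost, new_cost = 69, 39
break_even_growth = old_cost / new_cost - 1
print(f"Total spending rises once ridership grows more than {break_even_growth:.0%}")
# ~77%: beyond that, cheaper trips still add up to a bigger total bill.
```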
Also, scammers using VOIP (plus extremely sensitive ADA rules around treating disabled people nicely and never doubting people who claimed disability) ruined the deaf-serving text-to-telephone gateway. Fortunately that problem was mostly solved by the Internet mostly killing voice phone.
Yeah, basically you would be looking for a government policy that would be making something cheaper, but also so wildly convenient that it ends up increasing usage faster than the savings.
Another example is the expansion of highways; if highways are free, expanding them to relieve traffic will generally cause car travel to go up as more trips become tolerable, and then the highway will be as congested as it was before. https://www.vox.com/2014/10/23/6994159/traffic-roads-induced...
Consolidation of subway systems in London. Standardisation of rail gauges, screw threading, electrical outlets, phone networks. Basically standardisation of everything that just works and you don't notice.
Could go on... money, power grid, air traffic control, waste collection and disposal.
Health insurance is a great example - a single payer system has much more bargaining power than everyone trying to negotiate for medical care at a moment when they'll die without it.
Of course, such a system is less efficient at extracting value from consumers, so I suppose your question requires an assumption as to whom a system is efficient for.
> Health insurance is a great example - a single payer system has much more bargaining power than everyone trying to negotiate for medical care at a moment when they'll die without it.
Also not sure that's the best example.
Singapore, Japan, Germany, Switzerland... all of those are multi-payer, but tightly regulated (which imposes equal costs across all actors, so that's coordination once again).
And I'd have to dig out the article, but I believe the above model (Bismarck) is better at controlling costs, and produces more positive outcomes as well.
The US healthcare system is a mess for a lot of reasons.
Healthcare being tied to employment is probably the biggest.
Maybe the second is a lack of any sort of common healthcare market? You can't just take "any insurance" and go to "any doctor"; instead, you have to navigate a maze of in-and-out-of-network relationships. It's like scheduling an appointment with the Mafia: "My cousin's dog-sitter's best friend's uncle's pool-boy Vinny knows a guy that can take care of your headache."
The adversarial relationship between insurers, patients, and care providers is also a problem. Insurers work very hard to screw hospitals and patients, so hospitals have insane overhead costs to fight against the insurers, and patients... oh god, don't get me started there.
Regulatory capture also plays in. And there's more, but yeah, it's a mess.
Fair enough, I mentioned single payer because that's the system I'm familiar with. The 'adversarial' relationship between insurers, hospitals, and patients is precisely the kind of market competition that theoretically leads to the best outcome, though. GP's ask was simply about examples where regulation leads to more efficiency, and it sounds like Bismarck and single payer are both more regulated and more efficient (again, from the patient's perspective).
Health care. There, regulation makes it more accessible to more people, improves quality, and drives costs down. Deregulated health care systems are less reliable and more expensive. People who can afford it will pay anyway.
Public transport. It benefits society as a whole when people are able to move around, and if they can do so without causing massive traffic jams. Regulation, keeping prices low, and ensuring that even remote areas are reachable, make it attractive to use and will make it more usable to more people.
Labour in general; shorter work weeks and improved working conditions have improved productivity.
Government policy to improve energy efficiency (e.g. government grants to improve factory production efficiency) can lead to an increase in total energy use, as the factory is more profitable with better efficiency.
EU does have programs to improve efficiency in this manner.
I suppose that would make sense if the government was solving a coordination problem?
E.g., no manufacturer will install Oliver's Optimizer, which promises a lifetime 10% savings in energy use, because it would force them to shut down operations for a month while the optimizer is installed, and put them at a disadvantage compared to other manufacturers.
By requiring the Optimizer (or equivalent) as a licensing requirement for factory operation, all manufacturers share the same burden, and thus suffer no relative disadvantage.
Is that the general idea? I'd be worried about regulatory capture in this case -- e.g., Oliver lobbying to force the market to install his Optimizer -- but that's an entirely different discussion. :)
I'd say, yes. You've correctly noticed in this subthread that government regulation is a solution to coordination problems. All kinds of situations that pattern-match to "it would be better if everyone were doing X, but X comes with some up-front costs, so whoever tries doing X first gets outcompeted by the rest" are unsolvable by the market (especially when coupled with "if everyone else is doing X, stopping doing X will save you money"); the important role of a government is then to force everyone to start doing that X at the same time and prevent them from backtracking.
To the extent you can imagine the market as a gradient descent optimization, coordination problems are where it gets stuck in a local minimum. A government intervention usually makes that local minimum stop being a minimum, thus giving the market a necessary shove to continue its gradient descent elsewhere.
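To make the analogy concrete, here's a toy gradient descent on a double-well cost function (entirely illustrative, nothing to do with real markets): started on the left, it settles in the shallow local minimum, and only an external "shove" gets it to the deeper one.

```python
# Toy illustration: f has a shallow minimum near x = -1 and a deeper one near
# x = +1. Plain gradient descent started on the left stays in the shallow well
# until something external shoves it over.
def f(x):
    return (x**2 - 1) ** 2 - 0.3 * x

def df(x):
    return 4 * x * (x**2 - 1) - 0.3

def descend(x, steps=200, lr=0.05):
    for _ in range(steps):
        x -= lr * df(x)
    return x

x = descend(-1.5)
print(f"stuck at x={x:.2f}, cost={f(x):.2f}")         # shallow local minimum
x = descend(x + 2.0)                                   # the external shove
print(f"after the shove: x={x:.2f}, cost={f(x):.2f}")  # deeper minimum
```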
> To the extent you can imagine the market as a gradient descent optimization, coordination problems are where it gets stuck in a local minimum.
I think this is a very appropriate analogy.
A thought: the cost function that the market minimizes is only a proxy for the various cost functions that we (humans) actually care about. I wonder how much (if any) “government inefficiency” is due to the mismatch between the market cost function and these other cost functions.
I don't know about inefficiency within the government, but I think most regulation of markets happens because of it. As you've noticed, the market's cost function is only an approximation of what we care about in aggregate. Regulation adds constraints and tweaks coefficients to keep the two goal functions aligned as much as possible. Which is hard, not least because we can't fully articulate what we care about, either individually or in aggregate.
Standardisation generally increases market size which means efficiencies of scale and ability to buy the best stuff from anywhere in the larger market, rather than being stuck with local stuff that works with local standards.
Government isn't always required for standardisation but even when it's industry led, it feels like government because it's cooperative, which means committees, votes, etc.
> any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?
Not really an example, but any government policy that deals with a tragedy of the commons situation.
Take for example the NW Atlantic cod fishery: "In the summer of 1992, when the Northern Cod biomass fell to 1% of earlier levels [...]" [0] I'm sure that if Canada, the US and Greenland had come together and determined a fishing quota, those fishermen would still have a job today. Instead they were so 'efficient' that there was nothing left for them to catch.
> So, an ask: any historical examples where government policy other than deregulation has increased the efficiency with which a resource is used?
I would say there are examples around. For example, the numerous dams and levees we enjoy. Getting wrecked by a flood is not very efficient. Non-navigable rivers are not efficient.
Jevons was working on fuel consumption. There has been plenty of government regulation that improved the (average) fuel efficiency of machines, even back when they were steam powered.
Your observation is correct, but perhaps not the conclusion? If more people are traveling over a given section of road per hour (as you imply), isn't that more "efficient"?
When I was 10 years old I started coding in Qbasic, a few years later I told my dad I wanted to be a programmer when I grew up, he told me that it would likely be automated soon (as had happened with his industry, electronic engineering) and I'd be struggling to find a job. 23 years later and the demand still seems to be rising.
I'd say we're still quite far from such a level of abstraction; but a certain degree of it is already possible, as you say... k8s/docker/kafka/glue/databricks/redshift, all of these technologies mesh together "seamlessly", but more problems arise as a result.
And when UML started getting in vogue in the mid 90s a lot of people said that "intelligent code generators" would automate a large amount of programming.
It did not happen the way people predicted, but it has somehow happened in the form of Angular, Ionic, Express, Ruby-on-Rails and similar frameworks: more and more programming means "writing glue code", be it to glue Machine Learning libraries (yay, ML developer!), HTTP libraries (yay, Web developer!), AMQP/SQL/NoSQL (yay, backend developer!) or even OpenGL/DirectX/SDL (yay, game developer!).
The fact is, as more and more of these abstraction libraries are created, "programming" will move one level of abstraction up, but will still need people to do it.
In 2002 the inventor of Microsoft Office (Charles Simonyi) took his $billions and left to create a company to replace programming with an Office-like app. In 2017 the company (Intentional) was acquihired back into MS after failing to generate a profit or popular product.
I think the real change is the rising threshold between commodity software and specialised solutions. When I started my career more than a decade ago, I built handmade static websites and online shops for small and medium shops. Today these are commodity software, easily served by Squarespace/Wix/Shopify etc.
At the same time, when I started, Basecamp was amongst the top SaaS solutions on the planet. Today, its simple form-based approach wouldn't cut the mustard with consumers accustomed to instant feedback, realtime collaboration and behind-the-scenes saving.
This is especially apparent in the games industry. Early games like Doom or Wolfenstein were often developed by fewer than five people. Today's open-world titles like AC Odyssey or Cyberpunk 2077 require 100 times as many people.
The idea that we will have some abstraction that will someday (in the foreseeable future) save us from all of this difficult work sounds very far fetched to me, and I can’t imagine how that would work.
We actually can imagine how a natural-language-driven "black box" that translates requirements into code works: it's called offshore software development. The conclusion that everyone eventually reaches, having first experienced varying levels of pain depending on how quickly they learn, is that writing a spec detailed enough to make that work is as much or more work than just writing the code yourself!
'The premise that we are on the verge of some breakthroughs in software development'
There will be breakthroughs in SW development, but as with all breakthroughs no one can tell exactly when they will occur, so let's say within the next 40 years.
The microelectronics industry has largely moved to automated validation. Some of the ideas have already migrated to SW validation, although progress and adoption is slow.
Probably a key idea for automatic SW generation and "no-code" is to realize that a Turing-complete language is not required at all times; in fact, most of the time it is even counterproductive. Too often SW engineers fail to realize that as well.
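As a rough sketch of what "not Turing complete" can look like in practice (field names and rules invented for illustration): a flat rule table with no loops or recursion, interpreted by a small generic evaluator, covers a surprising amount of everyday business logic.

    import re

    # A tiny declarative rule table: no loops, no recursion, no arbitrary code.
    RULES = [
        ("age",   "min",     18),
        ("age",   "max",     120),
        ("email", "matches", r"^[^@\s]+@[^@\s]+$"),
    ]

    def validate(record):
        """Interpret the rule table against a record and collect violations."""
        errors = []
        for field, kind, arg in RULES:
            value = record.get(field)
            if kind == "min" and not (isinstance(value, (int, float)) and value >= arg):
                errors.append(f"{field} must be >= {arg}")
            elif kind == "max" and not (isinstance(value, (int, float)) and value <= arg):
                errors.append(f"{field} must be <= {arg}")
            elif kind == "matches" and not (isinstance(value, str) and re.match(arg, value)):
                errors.append(f"{field} is malformed")
        return errors

    print(validate({"age": 17, "email": "nobody"}))  # -> two violations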
> You still need people who understand things like docker, kubernetes, endless different database options, sharding, indexing, failover, backup, message queues, etc.
Large companies at significant scale need to know these things. Smaller companies don't need kubernetes, message queues, or anything beyond a simple standard off-the-shelf setup. I'm guessing the author was referring more to small/mid-sized companies that aren't at FAANG-scale and have no need for that complexity.
As a counter-point, I run a one-man tech business and use kubernetes to run some 60+ application and db servers. I don't have time to babysit each application I'm running, and kubernetes is a force multiplier that I rely on heavily.
There is a cost to managing it but even so, without the automation it provides I simply wouldn't have the capacity to do what I do.
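Not the parent's setup, but for a sense of the force-multiplier effect: with the official Kubernetes Python client (assuming a working kubeconfig), one short script can check the health of every workload across all namespaces in a single call.

    from kubernetes import client, config

    config.load_kube_config()      # reuse the local kubeconfig credentials
    apps = client.AppsV1Api()

    # One API call covers every deployment in the cluster -- no per-server babysitting.
    for dep in apps.list_deployment_for_all_namespaces().items:
        ready = dep.status.ready_replicas or 0
        wanted = dep.spec.replicas or 0
        if ready < wanted:
            print(f"{dep.metadata.namespace}/{dep.metadata.name}: {ready}/{wanted} ready")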
I disagree with the author that it will be "no code." But I would also not dismiss how much more productive the cloud and better devtools have made developers. And as much as people, especially on HN, like to pick on bootcamp graduates, it's undeniable that you can get someone with little to no experience building complicated software in a matter of months.
What I think will happen to software engineering is that the middle will shrink. We'll see many more frontend and product engineers, and slightly more infra and systems programmers. I think the fullstack, middleware rails/django type engineering will all but disappear (most will move towards product).
>it's undeniable that you can get someone with little to no experience building complicated software in a matter of months.
Yeah, and who do you think comes along and cleans up their mess, extends and maintains that software once your cowboy coders are gone?
We might be more productive, but only to a certain point. The complexity comes when people want to twist and bend the off the shelf solutions in ways they weren't designed for, and when systems become so large and complex that adding just one more feature takes a significant amount of time.
This is what differentiates your low cost bootcamp grads from highly paid software engineers. Experienced engineers aren't just building for today, but for the future.
I don't buy that fullstack devs are going anywhere. The real world is complex, the devil is in the details and the complexity of those details can't simply be chucked into an off-the-shelf solution and be expected to survive. We'll still have to have people that glue all the pieces together, we'll still have to name and compose things, to make modifications and optimisations, to maintain existing products, and we'll need people that push the boundaries of what's been done before and explore the new.
> Yeah, and who do you think comes along and cleans up their mess, extends and maintains that software once your cowboy coders are gone?
Sometimes someone with a CS or SE degree, sometimes someone who learned to program as a hobby while studying something completely unrelated like music or English, or while bartending or still in high school, and sometimes the cowboy coders themselves, with more experience. There’s an enormous amount of theory in programming which is highly relevant to many, many people, but you can be amazing at CS theory and still write scientific code that’s garbage, uncommented spaghetti like the Imperial epidemiology model. At the other end you can have a great grasp of how to write clean, modular, well commented code and have no idea how you would start parsing a text file to extract all nouns or some other introductory undergraduate project for one of the infinitude of topics in CS.
> At the other end you can have a great grasp of how to write clean, modular, well commented code and have no idea how you would start parsing a text file to extract all nouns or some other introductory undergraduate project for one of the infinitude of topics in CS.
How difficult is that to read up on? I do a lot more of the former than the latter, as that is what real-life jobs entail (actually, most of them involve fixing other people's shitty code).
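For what it's worth, a minimal sketch of the noun-extraction exercise with NLTK (assuming the 'punkt' tokenizer and perceptron-tagger data are downloaded; the file name is a placeholder):

    import nltk

    def extract_nouns(path):
        with open(path, encoding="utf-8") as f:
            text = f.read()
        tokens = nltk.word_tokenize(text)   # split the raw text into word tokens
        tagged = nltk.pos_tag(tokens)       # part-of-speech tag each token
        # Penn Treebank noun tags all start with "NN" (NN, NNS, NNP, NNPS)
        return [word for word, tag in tagged if tag.startswith("NN")]

    print(extract_nouns("sample.txt"))      # 'sample.txt' is a placeholder path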
I personally think we have not made any significant progress in 20-30 years with regard to development, and the number of software developers is still growing at a rate where the majority probably has less than a year of experience. So no progress can be made, as the industry never matures.
The kind of optimism displayed in the article reminds me of my early years as a developer ;)
In software engineering languages and tooling, progress is really slow. RAD tools have existed for decades; OOP and functional programming paradigms have remained largely the same for a very long period of time ... incremental enhancements, but not really a breakthrough.
So true. Believing we will “finish” needing computer programmers because “the work is done” misses the basic physics that governs this process. And naively believing we are finally there is a regularly resurgent myth.
We have been predicting machines would not need programmers since there were programmers. It is true for a _given_ task at a given complexity level. But overall the demands on, and for, a modern programmer have only gotten higher, because the demand of all business and human activity is to offer more than we might have otherwise.
Just wait until, for an app to differentiate itself in business, you have to create intelligent responses in a variety of augmented reality interfaces, correctly predict human behavior, and interface with the physical environment in a routine and nuanced way. And the companies that can do it well are suddenly dominating the ones who do a sloppy job.
I've been around long enough to see these claims over and over again. You and I will be right that this claim, too, is false, but I think each time developers get more productive, they can do more with less time, and at some point developers will be able to do so much with so little time that we need few of them. So far, though, the need for software has continued to grow even as developer productivity has increased, so there have been no significant employment issues.
I have to admit that as a technical person it's easy to ignore the tools being built for non-technical people. A non-technical person is currently building a WordPress site for me that is better than what I would have thrown together in CSS/HTML.
I think if something like that happens, it will be on the scale of the shift from alchemy to chemistry - some sort of as-yet-unimaginable standardization which changes what is currently an art to something more like a science. I don't expect to see anything of the sort in our lifetimes, barring some very extreme advances in medicine.
Eh, I also think a lot of people overengineer things that are made simple with recent technology and that in fact most companies don't need the best engineers to get their job done.
I think it's totally true that one can leverage new tools to get more work done with less people, especially when it's for a service that doesn't reach scale and what not. Most companies don't need that to be lucrative. But I think the space of problems expands, whether that is more fields valuing tech, feasible complexity increasing in others, or competition just ratcheting up by lowering technical barriers to entry.
It's not going to save us from all of the work, but it does eliminate a lot of redundant work. For sure.
> Even the example of hosting complexity being replaced by cloud companies seems kind of silly to me. Maybe that’s saving very small companies a sizable fraction of their engineering resources, but I really doubt it for medium or larger companies.
I think that hundreds of billions of dollars have been spent moving from localized IT to the cloud. Do you really believe that was all a waste of money? For example, most of those medium or large companies had their own operations software backends, and most of that was eaten by cloud services/APIs.
> You still need people who understand things like docker, kubernetes, endless different database options, sharding, indexing, failover, backup, message queues, etc. Even if the pieces are now more integrated and easier to put together, the task of figuring out how the pieces will interact and what pieces you even need is still outrageously complicated.
Docker/K8s is a good example. I spent more than a year building a Docker orchestration/hosting startup (eventually decided not to try to compete as an individual with Amazon). But when I recently needed a reliable way to host a new application and database, I did not have to configure Docker or K8s at all. Why? Because I used AWS Lambda and RDS. Those are examples of software eating software. AWS can handle all of the containers for you if you do it that way.
As far as failover and backup, that was handled by checkboxes in RDS. I did not need a message queue because that was built into the Lambda event injection service.
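Something like the following is roughly what that setup boils down to on the code side. This is a minimal sketch, not the parent's actual code, assuming a Python Lambda runtime with pymysql bundled and hypothetical env vars and table names:

    import json
    import os

    import pymysql

    def lambda_handler(event, context):
        # Credentials and endpoint come from environment variables set on the function.
        conn = pymysql.connect(
            host=os.environ["DB_HOST"],
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            database=os.environ["DB_NAME"],
        )
        try:
            with conn.cursor() as cur:
                # Persist the incoming event; queueing/retries are handled by the event source.
                cur.execute("INSERT INTO events (payload) VALUES (%s)", (json.dumps(event),))
            conn.commit()
        finally:
            conn.close()
        return {"statusCode": 200, "body": "ok"}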
Just like people put some PHP scripts on fully hosted Apache+MySQL 20 years ago. This was very common and far easier than AWS. (And reliable, too, although not as scalable, but the needs then were different.) The point being that all of this has been here before. Every few years some complexity creeps back in (in exchange for some other benefits) and then it's eliminated again and some progress is made. But work always expands.
Recently I helped a friend who is a teacher with her Excel sheets for grade reports. Pretty well done for Excel, yet it was a terrible user experience. Even with only 1000 users working with this one hour per month, properly custom-made software would have been better and easily economically viable. Even no-code has existed for a long time, but it never fits perfectly.
Similarly, people regularly complain about just glueing components together. As opposed to what? Copying sorting algorithms right out of a CS class? It's a strange idea. Looking at the code I work with, over the years, I find very little glue. True, there are abstractions and sometimes they get in the way. But they are there for a reason. Whether you start out from scratch or use frameworks, much of the application will revolve around business-related data structures.
You can give it a try: Take your real-world product, strip out the abstractions, replace the built-in UI widgets, sorting routines, hash tables of your language and maybe that OR-Mapper and your GraphQL-server framework and so on with your custom code and something minimalistic. It won't take that much time and code and compared to the stuff on top you'll find it's not that much that you actually used in the end. Nothing to glue together anymore.
Not that it makes sense to do this. But the idea that glueing things together has replaced "real" development is very much mistaken.
>As opposed to what? Copying sorting algorithms right out of a CS class?
As opposed to, I'm guessing, adapting Monte-Carlo tree search to Go and inventing AlphaGo. Or what BellKor was doing back in the Netflix Challenge days. Not copying sorting algorithms out of a CS class, but solving a puzzle with a clever new algorithm that just works, and then everything else falls into place.
(Call it "if you write it, they will come" taken to its logical conclusion.)
You might also notice that none of that is actual product development and it will not lead to a product anytime soon (and as far as I know the Netflix Challenge results were not used by Netflix). That's research. It's great if you're working at a research department, or maybe you do it for fun after work. But how is this related, exactly, to the software industry?
The point was that actual software engineering is neither more nor less trivial than it was a few decades ago, and that the glue isn't that much after all.
There was never a time when the daily work of software engineers was, in fact, research. There are plenty of research institutes and universities and even research labs of software companies where you can do this after you've got your degree. Just go there; many of my friends are doing exactly that.
> It's not going to save us from all of the work, but it does eliminate a lot of redundant work.
It's not going to eliminate work in software development; it's going to increase work in developing software systems by increasing the average value of each unit of work and, simultaneously, move the average level of the work higher up the abstraction ladder, just as every advance in software productivity has done since “what if we didn't have to code directly in machine code and instead had software that would assemble machine code from something one step more abstract”.
I’m looking to get into the hosting space using containers. Basically combining my consulting business with a hosting one since I’m doing it already for clients anyways.
I’d love to exchange notes with you on the lessons you learned building your system and the challenges you faced.
I’m using rancher/k8s with docker on top of “unreliable” hosts with AWS/GCP/DO/Azure providing “spill over” capacity for when those unreliable cheap hosts prove why they’re unreliable.
Is it possible we could get in touch? You can reach me at hnusername at Google’s mail service. Would love to connect if you’re open!
"I can't imagine how that would work"
-> hearing that kills me inside :(
I think we have 2 options:
OPTION 1)
We've reached a plateau -- software will continue to be developed as it is now, no new abstractions.
OPTION 2)
Mankind will create a better set of tools to:
- reduce the effort needed
- increase the # of people who can participate
in the translation of ideas/requirements -> software.
For everyone's sake [1], I really hope it's the second! :)
As one crazy idea, imagine if you could have a spreadsheet that would let you build software instead of crunch numbers...
... anyway, probably a bad idea, we should stick to our current abstractions and tools :D
[1] Take the above with 2.42 lbs of salt, I'm the founder of
> The premise that we are on the verge of some breakthroughs in software development that will significantly reduce the need for engineers is really weak
Well, there hasn't been one single major breakthrough, but rather a lot of small ones that cumulatively mean that software has become easier to write. Most of it is more mundane than new fundamental abstractions: it's more about distributed version control, better bug trackers, better libraries, more accessible documentation and learning materials, and so on. These things allow software to be written more quickly with smaller teams. Even someone writing in a language like C that hasn't changed much in decades will have a far easier time of it in 2020 than in 2000, simply because of the existence of StackOverflow and the progress that has been made in getting compilers to warn about unsafe code.
This is combined with the fact that as more software is written, less software needs to be created to fill some functionality gap. As long as we have computers and people who care to use them, there will always need to be new software written. Most software that people get paid to write is not written for fun or for intellectual exercise, though; it's written to solve a business need. If that business need can be satisfied with existing software, there's less motivation for a business to write its own.
> The premise that we are on the verge of some breakthroughs in software development that will significantly reduce the need for engineers is really weak
We are on the brink of an economic contraction, which is forcing a rethinking of the need for software engineers. The necessary disruption is there. It is economic, not technological.
Yes, there will continue to be a need for software engineers, but business expectations will change as budgets adjust. I suspect fewer developers will be needed moving forward, and those developers will be required to directly own decisions and consequences, which has not been the case in most large corporations.
> In my experience, software engineering is endless, complicated decision making about how something should work and how to make changes without breaking something else rather than the nuts and bolts of programming.
Agreed, but that is not the typical work culture around software development. Thanks to things like Agile and Scrum, developers are often isolated from the tactical decisions that impact how they should execute their work, and for good reason. While some seasoned senior developers are comfortable owning and documenting the complex decisions you speak of, many are not so comfortable and require precise and highly refined guidance to perform any work. This is attributable to a lack of forced mentoring and is mitigated by process.
I think there has been a steady reduction in the IT personnel needed to do a lot of things. Need a web-page/web-store? You buy a standard product for almost no money, and you don’t really need anyone to run it for you. 25 years ago that was a several-month project that involved a dozen engineers and had a costly fee attached for after-launch support.
At the same time we’ve come up with a bunch of new stuff which gave those engineers new jobs.
I do see some reduction in office workers through automation. We still haven’t succeeded at getting non-coders to do RPA development for their repetitive tasks, but the tools are getting better and better and our workers are getting more and more tech savvy. In a decade, every new hire will have had programming in school, just as they have had math. They may not be experts, but they’ll be able to do a lot of the things we still need developers for today, while primarily being there to do whatever business work they do.
But I’m not too worried. We moved all of our servers to virtual machines a decade ago and are now moving more and more into places like Azure, and it hasn’t reduced the need for sysops engineers. If anything, it’s only increased the requirements for them. In the late 90s you could hire any computer-nerdy kid to operate your servers and you’d probably be alright; today you’ll want someone who really knows what they are doing within whatever complex setup you have.
The same will be true for developers to some extent, but I do think we’ll continue the trend where you’ll need to be actually specialised at something to be really useful. If virtual reality becomes the new smartphone, you’ll have two decades of gold rush there, and that’s not likely to be the last thing that changes our lives with entirely new tech.
> 25 years ago that was a several-month project that involved a dozen engineers and had a costly fee attached for after-launch support.
25 years ago, yes, but white-label hosted web store things were around in the early noughties. I think there were even a few in the late 90s, but those weren't very good.
Yeah it reminds me of a project at my company that was an attempt to automate certain development processes so that people could ship features without developer involvement. Cool idea! So they built this wonderful system and now there's a team of 6 devs solely dedicated to maintaining it lol.
RPA seems to be the biggest area where this is currently popular. The "citizen developer" push is bullshit, IMO: it sounds good on paper but will lead to fragile bots that fall apart and aren't properly maintained at scale. I can't imagine handing someone with no programming experience UiPath or whatever and having them basically deploy software directly to production. As far as I know there isn't a "code first" approach to this set of problems, but there probably ought to be, since someone who can't write code isn't likely to produce a high-enough-quality product, even with a dumbed-down drag-and-drop tool, to make it worth it.
In my experience, for the small companies you have an endless stream of custom jobs that need quick, unique solutions. You get the same outrageously complicated work, just with the ability to use more duct tape and one-time solutions. The cloud is, with some exceptions for self-contained commodity solutions (such as email), about managing the cost of spinning hardware up or down: pay a premium for what you need right now, rather than investing in what you might need tomorrow with the possibility of guessing wrong.
You make some great points. There's a humanistic quality that can't be replaced. I realized this even more while self-isolating. For example, instead of going on TikTok, I decided to build an entire app from scratch. A few of my friends thought it would be a useless app - one "anyone could make." But if I did make it, it should be with serverless tech, GraphQL, AWS Amplify, etc.
I decided to just use a $10 Digital Ocean server. With stocks so cheap, my goal was to build an automated trader during COVID-19: https://stockgains.io
I initially used Google spreadsheets but it wasn't effective. I spent a week with Docker, learned MySQL 8's new features, and Ruby on Rails 6 for rapid development. There are so many nuances with storage engines, libraries, query and cache optimizations, and UI/UX design that require human thought, experience, and skill. Sometimes plenty of it. Now the beauty of this tool isn't the price difference of a stock before COVID (a robot could do that), but the filters. These filters were created by a human (me) reading over 100 books on trading stocks and writing down quantitative and psychological parameters. And I kept track of what could be "automated" over the years.
I just can't imagine a robot reading all those books and doing the same thing. Not just the design, but just building a vision. There's an art and complexity involved in solving problems.
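For illustration only, the kind of filter being described might look something like this (thresholds and field names are invented, not the author's actual rules):

    # Hypothetical quantitative screen; the data and cutoffs are made up.
    sample = [
        {"ticker": "AAA", "price": 40, "pre_covid_price": 70, "debt_to_equity": 0.6, "free_cash_flow": 1.2e8},
        {"ticker": "BBB", "price": 95, "pre_covid_price": 100, "debt_to_equity": 2.4, "free_cash_flow": -3.0e7},
    ]

    def passes_filters(stock):
        drop = (stock["pre_covid_price"] - stock["price"]) / stock["pre_covid_price"]
        return (
            drop > 0.30                        # beaten down relative to pre-COVID price
            and stock["debt_to_equity"] < 1.0  # balance sheet can survive the downturn
            and stock["free_cash_flow"] > 0    # still generating cash
        )

    print([s["ticker"] for s in sample if passes_filters(s)])   # -> ['AAA']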
Similar claims were made in the early 1960s when high level language compilers arrived: "Computers will practically program themselves." "There will be little need for programmers anymore." Every new software technology since then has sometimes triggered similar claims.
It is easier to understand code and its consequences than human language; hypotheses are testable and verifiable. It helps to think of coding as a form of game.
Open source, and Github specifically, can be mined and reused like any other knowledge; pay attention to Microsoft and OpenAI going forward.
It's easier to understand language at a syntactic level, but programs themselves are Turing complete, and we don't even have a decent automatic verification tool.
Even more simply: if something took a person-year to write, it will take at least a person to maintain in perpetuity, as bits rot, especially after the original authors disappear.
Also: whenever significant reductions in complexity are achieved, the result is a more expansive usage of software, not a reduction of engineers to achieve the same result.
I completely disagree. The new tools (Bubble, Zapier, Airtable, Webflow, etc.) make it an order of magnitude easier to create applications (even relatively complex ones!)
Contrary to what whiteboard interviews test for, programming is more of an art than a science, and computers are generally bad at determining what is good art.