I am curious how you evaluate a new development product with respect to ROI. What characteristics should a product have to satisfy your criteria for considering it?
Example: I have JSON-based RPC so that systems from other vendors can talk to mine (enterprise backends written in C++). It works like a charm and has been doing so for years. Here comes this architecture astronaut and tells me that I should do GraphQL, and proceeds to explain to me how powerful and cool it is and how everybody and his cat uses it. So on the downside I will waste gobs of time and money; on the upside, zilch, because nobody gives a shit. The problem was already solved for us years ago, so buzz off. And that guy could not give a single example of how it could help me. Just spreading FUD about existing things.
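For flavor, here's roughly what "JSON-based RPC so other vendors can talk to mine" boils down to: a sketch of method-name dispatch, with the method name and error shape following JSON-RPC conventions. The handler names are hypothetical, and a real service would parse and serialize actual JSON with a library (e.g. nlohmann/json) rather than pass raw strings around:

```cpp
#include <functional>
#include <map>
#include <string>

// Minimal sketch of JSON-RPC-style dispatch. Each handler receives
// the raw params payload and returns a result payload. The error
// code -32601 ("Method not found") comes from the JSON-RPC 2.0 spec.
using Handler = std::function<std::string(const std::string&)>;

class RpcServer {
    std::map<std::string, Handler> handlers_;
public:
    void register_method(const std::string& name, Handler h) {
        handlers_[name] = std::move(h);
    }
    // Look up the method by name and run it; unknown methods
    // get a JSON-RPC-style error object back.
    std::string call(const std::string& method, const std::string& params) {
        auto it = handlers_.find(method);
        if (it == handlers_.end())
            return R"({"error":{"code":-32601,"message":"Method not found"}})";
        return it->second(params);
    }
};
```

The point being: this is dead simple, any vendor in any language can speak it, and it's been boring and stable for a decade.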
There are numerous opposite examples, where I see that a new tech, lib, or tool actually saves me time and money. I pay quite a few dollars for software tooling. But if a tool does not offer a perpetual license, it is a no-go for me.
This is all that matters to me. On desktop, for example, I skipped the .NET bandwagon and stayed with Delphi for my GUI desktop products. They worked 20 years ago and they work the same now. A single 10MB self-updating exe with zero deployment issues, and free from the numerous limitations imposed by UWP. The competitor's product is a 1GB package with a crapload of problems, and every update turns into a nightmare for customers. In my case, all the time is spent on creative stuff that brings me new customers and money instead of feeding someone else. Sure, it costs me a few hundred a year, but that is peanuts.
Thank you for taking the time to answer. You sound like a good, rational engineer. Things that work don't need an unnecessary splash of coolness.
At my previous job, I worked for 15 years on the development of a complex business system. It included desktop apps, mobile apps, web apps, on-premises and cloud. Throughout the years, we introduced many then-cutting-edge technologies for new products within the system, some before they were cool. But the products that were already done and working fine, we kept supporting with their original technology for the lifetime of the product.
The point is that many new tools and technologies bring a very limited value to the finished working products.
Now, I am a maker of the new development tools. So, I am eager to push them to the world, but wouldn't like to be perceived as an "architecture astronaut". Your opinion helps in understanding how and why engineers choose new tools and technologies.
"Coolness" is not in what I use inside my product, but in how "cool" customers think it is because of features, robustness, price, etc. It feels very "cool" to me when my products work and serve customers.
>"Now, I am a maker of the new development tools."
This is a part where I spend money. Good tools are very valuable as they directly save me time / money.
>"The point is that many new tools and technologies bring a very limited value to the finished working products."
Even for new ones. For example, my servers are modern C++. In theory I should be using Rust or Go for new ones if I listened to the chorus. Guess what: modern C++ works just fine for me and produces stellar results, hence no reason for me to switch. I do some toy projects with new languages and tech to get a grip and stay aware, just in case.
35 years ago, the same question was asked, but in a different context. It was thought that with BASIC and spreadsheets there would be no need for any more serious coding.
Even more so 25 years ago when various GUIs became ubiquitous.
About 20 years ago, Flash and Dreamweaver were promising the same.
Not exactly. Visual Basic and MS Access were pretty powerful. Something remade in the cloud, with collaboration and versioning, that combines those with spreadsheet-like intuitiveness could get us closer than ever. Airtable is maybe leading so far.
I tend to agree with the author of the article. Somewhere along the way, the term devops changed its meaning. Initially, devops symbolized a way of working and a multi-disciplinary team. Nicely captured in [1]:
> Under a DevOps model, development and operations teams are no longer “siloed.” Sometimes, these two teams are merged into a single team where the engineers work across the entire application lifecycle, from development and test to deployment to operations, and develop a range of skills not limited to a single function.
Today, devops is often used to describe an engineering role very similar to the classic system administrator. I guess this is mostly because it sounds "more modern" and is more attractive.
But, the comments in this thread made me aware that there indeed is a role that's focused only on serving developer needs. I guess we could call an engineer a devops engineer if their job is only to build and maintain system tools for developers.
I have an anecdote. Back when I was studying CS, on the exam for the computer architecture class we got a problem to solve: "Define the instruction set for your imaginary CPU, and then use it to write a program for managing an elevator."
They expected us to imagine a simple instruction set with 7-10 instructions. But one clever guy wrote this: "My CPU has only one instruction: manage_elevator. The code for elevator management is: manage_elevator." He passed with an A.
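The conventional answer the examiners expected might look something like this sketch: a handful of invented opcodes driving elevator state. The instruction names and semantics are made up for illustration, not from any real ISA:

```cpp
#include <vector>

// Toy "imaginary CPU" for the exam problem: a few invented
// instructions operating on an elevator's floor and doors.
enum Op { UP, DOWN, OPEN_DOORS, CLOSE_DOORS, HALT };

struct Elevator {
    int floor = 0;
    bool doors_open = false;
};

// Execute a program of toy opcodes against the elevator state,
// stopping at HALT or at the end of the program.
Elevator run(const std::vector<Op>& program) {
    Elevator e;
    for (Op op : program) {
        switch (op) {
            case UP:          ++e.floor; break;
            case DOWN:        --e.floor; break;
            case OPEN_DOORS:  e.doors_open = true; break;
            case CLOSE_DOORS: e.doors_open = false; break;
            case HALT:        return e;
        }
    }
    return e;
}
```

The clever guy's version collapses all of this into one opcode, which is exactly the joke.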
I guess that is how a non-programmer would imagine a programming language.
Or an engineer. I briefly worked with a team doing radar systems; their CPU basically had every mathematical function you could want available as an assembly instruction. Want to work with complex numbers? It's in the instruction set. Want to convert between coordinate representations? It's in the instruction set.
AWS isn't a developer tools company. It is an ops tools company, in particular an enterprise ops tools company. Their customers are IT managers and system administrators at large companies. That explains 1 and 2.
Famously, AWS is organized as a huge bunch of two-pizza teams. Essentially, it's a huge incubator for internal startups. That's how they manage to churn out new features so frequently and to try out and discard unsuccessful products. It's also why their tools look so damn inconsistent and why you never know what works with what.
Regardless of money, they can't make the tools better without sacrificing something. And that is a space for competitors: work on developer-centric tools for small and medium-sized companies.
As I harped on during the AWS outage, when you combine this fact with the disruption of stack ranking and forced attrition, how does it lend itself to long-term stability for something that is approaching utility-level importance for the economy?
Utility companies are staid and boring. That's a GOOD thing.
If AWS doesn't restructure, then what does it do? Re-re-reimplement all the internal bespoke systems that run AWS? It doesn't matter; their employees will turn over, and the bitrot in those systems sets in after, what, about four years?
Back when I was studying CS in the early 90s, it wasn't obvious at all that I would work with a DB at any point in my career. I loved the subject, and I passed with an A*. But I thought I wasn't going to see it again, because I didn't plan to work for a bank or some large enterprise.
Then, within about two years, everything changed. Suddenly, every new web project (and the web itself was also novel) included a MySQL DB. That's when the idea of the three-tier architecture was born. And since then, a few generations of engineers have been raised who can't think of a computer system without a central DB.
I'm telling this because in microservices I see an opportunity to rethink that concept. I've built and run some microservices-based systems, and the biggest benefit wasn't technical but organizational. Once the system was split into small services, each with its own permanent storage (when needed) of any kind, the teams were free to develop and publish code on their own. As long as they respected the communication interfaces between teams, everything worked.
Of course, you have to drop, or at least weaken, some of the ACID requirements. Sometimes that means modifying a business rule. For example, you can rely on eventual consistency instead of strict consistency, or on replenishing the data from external sources instead of durability.
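As one concrete shape eventual consistency can take between services, here's a sketch of a grow-only counter (a simple CRDT): each service increments its own slot, and replicas converge by merging with a per-node maximum. The service names are hypothetical, and a real system would exchange this state over the network rather than in memory:

```cpp
#include <algorithm>
#include <map>
#include <string>

// Grow-only counter: each node tracks only its own increments,
// and merging two replicas takes the element-wise maximum, so
// replicas converge no matter the order in which they sync.
struct GCounter {
    std::map<std::string, long> counts;  // node id -> local increments

    void increment(const std::string& node, long n = 1) {
        counts[node] += n;
    }
    // Absorb state received from another replica.
    void merge(const GCounter& other) {
        for (const auto& [node, n] : other.counts)
            counts[node] = std::max(counts[node], n);
    }
    // The logical total is the sum over all nodes' slots.
    long value() const {
        long total = 0;
        for (const auto& [node, n] : counts) total += n;
        return total;
    }
};
```

The trade-off is exactly the one above: between syncs, each service's view of the total lags, and the business rule has to tolerate that.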
Otherwise, I agree with the author that if you are starting alone or with a small team, it's best to start with a monolith. With time, as the team gets bigger and the system becomes more complex, your initial monolith will become just another microservice.
I also started with a ZX81. For a while, I didn't have a cable for the tape recorder. A couple of magazines came with my ZX, and inside there were listings of games in BASIC. So, to play a game, I had to retype the code every time I restarted the computer. After a while, I started changing the code to see what would happen.
Yep, it was a great time to be growing up as a kid, wasn't it? All through those years of home computers: ZX81-->VIC-20-->C=64-->Amiga 500-->Amiga 1200 (I still have that one).
Progressing from BASIC to Z80 to 6502 to Motorola 68000 to C and so on. Learned Turbo Pascal and COBOL in college. Such nostalgia! :)
Thanks. That's exactly the kind of answer I'd hoped to get. Precise and actionable.
Admittedly, I'm a fan of serverless. I believe serverless will soon become the dominant form of cloud computing. I think it is just a matter of the immaturity of the platforms and the lack of tools. While I can't do much about the platforms, I can try to build better tools.