> For agency work building disposable marketing sites
Funny, because I did some freelancing work fixing disposable vibe-coded landing pages recently. And if there's one thing we can count on, it's that the biggest control freaks will always have that one extra stupid requirement that completely befuddles the AI and pushes it into making an even bigger mess, and then I'll have to come fix it.
It doesn't matter how smart the AI becomes, the problems we face with software are rarely technical. The problem is always the people creating accidental complexity and pushing it to the next person as if it was "essential".
The biggest asset of a developer is saying "no" to people. Perhaps AIs will learn that, but with competing AIs I'm pretty sure we'll always get one or the other to say yes, just like we have with people.
Excellent reformulation of the classic “requirement bug”: software can be implemented perfectly, but if the requirements don’t make sense (including failing to account for the realities of the technical systems), mayhem ensues.
I think AI will get there when it comes to “you asked for a gif but they don’t support transparency”, but I am 100% sure people will continue to write “make the logo a square where every point is equidistant from the center” requirements.
EDIT: yes jpg, not gif, naughty typo + autocorrect
> On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
From
Passages from the Life of a Philosopher (1864), ch. 5 "Difference Engine No. 1"
I'm not aware of Babbage ever claiming the difference engine (nor the analytical engine) were capable of thought. Frankly it sounds like you pulled an imagined argument from an imagined Babbage out of your ass to try and score points again.
I just did. Found a copy of Passages from the Life of a Philosopher (1864), read ch. 5 "Difference Engine No. 1". Not a single mention of "thinking machine" and overall just a guy describing the ideation of a mechanical calculator.
For what it's worth, you engaged with me a week or two ago and I could not meaningfully engage back due to concern that your smug and combative approach to discourse would devolve the conversation into an argument in which you have every intention of "winning". I watched that play out with another commenter who took your bait.
So to see you again using such smug language on another thread, proclaiming unsubstantiated conclusions and refusing to reconsider your position when faced with counterevidence or to otherwise provide solid, direct evidence of your own, while calling someone else smug, is equal parts amusing and concerning. If you think smugness leads to confusion, I think it's time to reevaluate your own approach to discourse.
You're not wrong. When I'm confronted by a certain attitude on certain topics, I become a little irascible myself, because I'm not confronting it for the first time --- but after a very long and annoying series of interactions, of which this new one, on the margin, is of the same type.
Perhaps I should more proactively split the interaction in my head: "is this a political expression?" or "is this, say, an academic expression?" (here I'm defining "political" as: concerned with the who-and-why of belief, over its content). The problem for people like me is that they're often closely related, and we're in positions in which we're surrounded by a kind of politics we didn't consent to.
In particular, the discursive environment of, say, computer science is pathological to the literal and honest use of language. I find the "politics of computer science" qua the people, attitudes, approaches, etc. of the discourse to be basically dishonest. This, I think, is a case very easily made, especially today. I used to think it was merely corporate hype, but it actually runs all the way down to fantasist researchers whose preference for fantastical metaphor grossly exceeds their capacity for good faith communication.
So Babbage, having peddled his ideas for money back then, and being gallingly quoted in his oblivious tone, is a totem for this issue. He was not so extreme as we see today, for sure; but he did not hesitate to use the misapprehensions of his benefactors for money. I dislike how readily he is quoted on this.
In any case, I am not always well composed in a climate I find basically dishonest, that's for sure. I'll censor myself more in the future on this. And perhaps I should just finally give up caring about this issue, and consign this disappointment as one more in the bin of nihilism: the "science" of computers is a schizophrenic hall of mirrors where people play bad-faith games with language and credulous onlookers take up these games and proceed sincerely. What content there is to the innovation is buried, what might be said honestly about it, nearly impossible to discern -- and so on. Very well.
I'd say I thought physicists and philosophers were above it, but string theorists did play similar games in the 90s -- and I was angry all the way through the popsci of that too.
I think you could just try to be a little kinder, humbler, less presumptuous and more open-minded. You're setting yourself up to not consider new information about certain topics, or to review previously encountered information without bias.
If you're tired of discussing something, maybe it's fine to just let it go and let someone else handle it. But you come off a bit unapproachable when you bring negative energy into the conversation. This is a place where we can learn from each other, a forum, not a battlefield. Positivity opens minds, negativity shuts them. It's not any one person's fault if you've encountered their ideas before, and besides, you might be wrong sometimes, as we all are.
We can barely afford 5 dimensions with our current operating budget, and it's just not going to scale, I'm afraid. Saddle up boys, I'm proposing we draw them on a _hyperbolic plane_. Two dimensions, fits on a coffee table, room for as many parallel lines as we'll ever need. Hell, some of them can be ultraparallel. Plus, we can deploy it on AWS non-End User Computing for Logical Infrastructure Deployment.
The original GIF format did not support transparency. 89A added support for fully transparent pixels. There is still no support for alpha channels, so a partially opaque drop shadow is not supported for example.
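If anyone wants to see the difference concretely, here's a minimal Pillow sketch (assuming Pillow is installed; the filenames are just made up). A palette image can flag one index as fully transparent when saved as GIF, but there's nowhere to put an 8-bit alpha channel:

    from PIL import Image

    # A tiny palette ("P" mode) image; we'll treat index 0 as the transparent color.
    im = Image.new("P", (64, 64), color=1)
    im.putpalette([0, 0, 0, 255, 0, 0] + [0, 0, 0] * 254)  # index 0 black, index 1 red

    # GIF89a: a single palette index can be marked fully transparent.
    im.save("red_square.gif", transparency=0)

    # But partial opacity is impossible: a 50%-opaque drop shadow has to be
    # flattened onto a background (or thresholded to on/off pixels) first.
    rgba = Image.new("RGBA", (64, 64), (255, 0, 0, 128))  # 50% opaque red
    rgba.convert("RGB").save("no_alpha.gif")              # the alpha channel is simply gone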
Go ahead and try it with your favorite LLMs. They're too deferential to push back consistently or set up a dialectic and they struggle to hold onto lists of requirements reliably.
This is a terrible attitude which unfortunately is all too common in the industry right now: evaluating AI/ML systems not based on what they can do, but on what they hypothetically might be able to do.
The thing is, with enough magical thinking, of course they could do anything. That lets unscrupulous salesmen sell you something that is not actually possible. They let you do the extrapolation, or they do it for you, promising something that doesn't exist, and may never exist.
How many years has Musk been promising "full self driving", and how many times recently have we seen his cars driving off the road and crashing into a tree because it saw a shadow, or driving into a Wile E Coyote style fake painted tunnel?
While there is some value in considering what might come in the future when deciding, for example, whether to invest in an AI company, you need to temper a lot of the hype around AI by doing most of your evaluation based on what the tools are currently capable of, not some hypothetical future that is quite far from where they are.
One of the things that's tricky is that we have had a significant increase in the capability of these tools in the past few years; modern LLMs are capable of something far better than two or three years ago. It's easy to think "well, what if that exponential curve continues? Anything could be possible."
But in most real life systems, you don't have an unlimited exponential growth, you have something closer to a logistic curve. Exponential at first, but it eventually slows down and approaches a maximum asymptotically.
Exactly where we are on that logistic curve is hard to say. If we still have several more years of exponential growth in capability, then sure, maybe anything is possible. But more likely, we've already hit that inflection point, and continued growth will go slower and slower as we approach the limits of this LLM based approach to AI.
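For concreteness, the curve I have in mind is the plain logistic, K / (1 + e^(-r*(t - t0))): below the inflection point t0 it's proportional to an exponential, above it growth slows and the curve flattens toward the ceiling K. A throwaway numpy sketch with invented numbers, just to show the shape:

    import numpy as np

    K, r, t0 = 100.0, 1.0, 6.0  # ceiling, growth rate, inflection point (all invented)
    t = np.linspace(0, 12, 13)

    exponential = np.exp(r * t)                 # unlimited growth
    logistic = K / (1 + np.exp(-r * (t - t0)))  # same early shape, saturates at K

    for ti, e, l in zip(t, exponential, logistic):
        print(f"t={ti:4.1f}  exp={e:12.1f}  logistic={l:6.1f}")
    # Early on the two track each other (up to a constant factor);
    # past t0 the logistic slows and approaches K asymptotically.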
Given the shift in focus from back-and-forth interaction with the AI to giving it a command and then waiting as it reads a series of self-generated inputs and outputs, I feel like we're at that inflection point - the prompts might appear to be getting smarter because it can do more, but we're just hiding that the "more" it's doing is having a long, hidden conversation that takes a bunch more time and a bunch more compute. This whole "agentic" thing is just enabling the CPU to spin longer.
What is the defining factor that makes all technologies plateau, unlike evolution, which seems to be open-ended? Technologies don't change themselves, we do.
What? Evolution is specifically known for getting caught in local maxima. Species have little evolutionary pressure to get better when they are doing great, like a species with no predators on an island. The only things driving evolution for that creature are natural selection towards living longer, getting fewer diseases, dying in fewer accidents, stuff like that. And those aren't specific enough and don't apply pressure on a time basis, so there isn't much pressure to improve beyond the natural lifespan. Plus, for some cases, living longer is not really the goal, it's reproducing more. It's entirely possible, likely even, that maximizing for longevity eventually starts to have a negative effect on reproduction, and vice versa, so an equilibrium is reached.
Also technologies don't develop like evolution really so not sure why you drew that comparison.
Technologies plateau for a combination of reasons - too expensive to make it better, no interest in making it better, can't figure out any more science (key people involved leave / die / lose interest, or it's just too difficult with our current knowledge), theoretical limits (like we are reaching in silicon chips). I don't see a lot of similarity with evolution there.
100% this. Actually a lot of (younger) folks don't know that the current LLM "revolution" is the tail end of the last ~20 years of ML developments. So yeah, how many more years? In a way, looking at the costs and complexity to run them, it looks a bit like building huge computers and TVs with electronic tubes in the late 1940s. Maybe there is going to be a transistor moment here and someone recognises we already have deterministic algorithms we could combine for deterministic tasks, in place of the Slop-Machines...? I don't mind them generating bullshit videos and pictures as much as I mind the potential they have to completely screw up the quality of software in completely new ways.
The attitude is typical of crypto grifters or any other grifters. Or if not malign it tends to come from someone who has literally zero experience in the space.
There's no ruling out a flying spaghetti monster being orbited by a flying teacup floating in space on the dark side of Pluto either, but we aren't basing our species' survival on the chance that we might discover it there soon
> I'm not saying they can do it today. I'm saying there's no ruling out they might be able to do it soon.
There's also no "ruling out" the Earth will get zapped by a gamma-ray burst tomorrow, either. You seem to be talking about something that, if done properly, would require AGI.
You can do anything with AI. Anything at all. The only limit is yourself.
The gamma burst probability is something we can quantify (it's tiny). (And it's something we can do absolutely nothing about, so it's not worth worrying about).
Nobody can predict how soon a technology will plateau. People make predictions based on insane hot takes like "I pray to our new overlords who will make me immortal" and "there's nothing new under the sun, everything is just marketing"...
There might be the next AI winter starting next year, AI might wipe out humanity in our lifetime, or even both of those might happen. Both are very unlikely (qualitatively) ends of a huge spectrum.
That's not to say that we "don't know anything and thus give up talking about it". Otherwise I wouldn't participate in this discussion and I hope you wouldn't either if you didn't expect to sometimes learn something or be confronted with a new idea.
I just find both attitudes "it's making us immortal" as well as "I'm so experienced, I know that no new technology ever lives up to expectations and can mock people who admit they don't know that" unproductive. You don't know. Most technologies don't live up to expectations, a few work out even much better than expected. I'm sure before the Lee Sedol match, you thought to yourself "right, so it's going to win this and then plateau exactly a decade later"?
> I'm saying there's no ruling out they might be able to do it soon
Even experienced engineers can be surprisingly bad at this. Not everyone can tell their boss “That’s a stupid requirement and here’s why. Did you actually mean …” when their paycheck feels on the line.
The higher you get in your career, the more that conversation is the job.
Also, once AIs start telling them their ideas are stupid/nonsensical and how they should be improved, they'll stop using them. ChatGPT will never not be deferential, because being deferential is its main "advantage" for the type of person who's super into it.
But why is a manager or customer going to spend their valuable time babysitting an LLM until it gets it right, when they can pay an engineer to do it for them? The engineer is likely to have gained expertise prompting AIs and checking their results.
This is what people never understand about no-code solutions. There is still a process that takes time to develop things, and you will inevitably have people become experts at that process who can be paid to do it much better and quicker than the average person.
It applies outside of tech too. Even if you can make potato pavé at home, having it at a restaurant, made by someone who makes it every day and has made it thousands of times, is preferred. Especially when you want a specific alteration.
Doesn't matter if you want it or not, it's going to be available, because only an llm that can do that will be useful for actual scientific discovery. Individuals that wish to do actual scientific discovery will know the difference, because they will test the output of the llm.
[edit] In other words, llms that lie less will be more valuable for certain people, therefore llms that tell you when you are dumb will eventually win in those circles, regardless of how bruising it is to the user's ego.
And how exactly is it going to learn when to push back and when not to?
Those discussions don't generalize well imo.
Randomly saying no isn't very helpful.
The output was verbose, but it tried and then corrected me
> Actually, let me clarify something important: what you've described - "every point equidistant from the center" - is actually the definition of a circle, not a square!
here's the prompt
> use ascii art, can you make me an image of a square where every point is equidistant from the center?
I interpreted the OP as referring to a more general category of "impossible" requirements rather than using it as a specific example.
If we're just looking for clever solutions, the set of equidistant points in the manhattan metric is a square. No clarifications needed until the client inevitably rejects the smart-ass approach.
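(If the client wants proof rather than cleverness, here's a throwaway sketch with arbitrary numbers: the points at Manhattan distance 1 from the origin form four straight edges meeting at right angles, i.e. a square rotated 45 degrees relative to the axes.)

    import numpy as np

    # Points with |x| + |y| == 1, i.e. "equidistant from the center" in the L1 metric.
    xs = np.linspace(-1, 1, 9)
    boundary = [(x, s * (1 - abs(x))) for x in xs for s in (+1, -1)]

    # Every sampled point really is at L1 distance 1 from the origin...
    assert all(np.isclose(abs(x) + abs(y), 1.0) for x, y in boundary)

    # ...and they lie on the four segments (1,0)-(0,1)-(-1,0)-(0,-1)-(1,0):
    # a perfectly good square, just rotated 45 degrees to the axes.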
Between "no" and "yes sure" also lie 50 shades of "is this what you meant?". For example, this older guy asked me to create a webpage where people could "download the database". He meant a very limited csv export of course. I an wondering if chatgpt would have understood his prompts, and this was one of the more obvious ones to me.
I think he actually had a clearer vision of the requirements than you do. In web dev jargon land (and many jargon lands) "the database" means "the instance of Postgres" etc.
But fundamentally it just means "the base of data", the same way "a codebase" doesn't just mean a Git repository. "Downloading the database" just means there's a way to download all the data, and CSV is a reasonable export format. Don't get confused into thinking it means a way to download the Postgres data folder.
A pet peeve of mine is the word "negotiation" in the context of user requirements.
In the business user’s mind, negotiation means the developer can do X but the developer is lazy. Usually, the reality is that requirement X doesn’t make any sense because a meeting was held where the business decided to pivot in a new direction and decided on a new technical solution. The product owner simply gives out the new requirement without the context. If an architect or senior developer had been involved in the meeting, they would have told the business: you just trashed six months of development and we will now start over.
I call it "nogotiating" - the problem is that inexperienced devs emphasize the "no" part and that's all the client hears.
What you have to do is dig into the REASONS they want X or Y or Z (all of which are either expensive, impossible, or both) - then show them a way to get to their destination or close to it.
> Saying no is not really denying, it's negotiating.
Sometimes. I have often had to say "no" because the customer request is genuinely impossible. Then comes the fun bit of explaining why the thing they want simply cannot exist, because often they'll try "But what if you just ... ?" – "No! It doesn't work that way, and here's why..."
Almost nothing is impossible, but a lot of things are really close. If it requires changing fundamental design aspects of a system then it's not worth it. I'm not going to burn down the code base and cause 1.5 million side effects we can't predict just so you can drag and drop a piece of the UI around like a widget.
I had to explain recently that `a * x !== b * x when a !== b`... it is infuriating hearing "but the result is the same in this other competitor" coupled with the "maybe the problem here is you're not knowledgeable enough to understand".
Ah, I see you've worked on financial software as well ;)
We've definitely had our fair share of "IDK what to tell you, those guys are mathing wrong".
TBF, though, most customers are pretty tolerant of explainable differences in computed values. There's a bunch of "meh, close enough" in finance. We usually only run into the problem when someone (IMO) is looking for a reason not to buy our software. "It's not a perfect match, no way we can use this" sort of thing.
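To make "explainable difference" concrete, here's an invented example (the numbers are made up, not from any real system): the exact products differ, but round to cents at the wrong point and two genuinely different rates report the same amount.

    from decimal import Decimal, ROUND_HALF_UP

    def cents(x: Decimal) -> Decimal:
        return x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

    x = Decimal("100")                               # e.g. a principal
    a, b = Decimal("0.070114"), Decimal("0.070149")  # two different rates

    print(a * x, b * x)                # 7.011400 vs 7.014900 -> a*x != b*x, as it should be
    print(cents(a * x), cents(b * x))  # 7.01 vs 7.01 -> round to cents and they "agree"
    # A tool that rounds earlier (or differently) than yours can report identical
    # numbers for different inputs -- an explainable difference, not wrong math.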
OMG this reminds me of a client (enterprise) I had who had been pushed into the role of product and he requested we build a website that "lets you bookmark every page"
In today's world with all the SPAs that don't push to history or don't manage to open the correct page based on history, this seems like a valid requirement
That's actually a very valid request, and easy to understand. They want static endpoints and heavy use of query parameters so you can bookmark individual page states.
It's up to you to work with them to identify key features and figure out precisely what to do.
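To sketch what that can mean in practice (a toy Flask handler, every name invented): if all the page state (filters, sort order, page number) is read from the query string and nothing lives only in session or client memory, the URL in the address bar is the bookmark.

    from flask import Flask, request, render_template_string

    app = Flask(__name__)

    @app.route("/orders")
    def orders():
        # /orders?status=open&sort=date&page=3 can be bookmarked, shared and
        # reloaded, and it always reproduces the same view of the data.
        status = request.args.get("status", "all")
        sort = request.args.get("sort", "date")
        page = int(request.args.get("page", 1))
        return render_template_string(
            "Showing {{ status }} orders, sorted by {{ sort }}, page {{ page }}",
            status=status, sort=sort, page=page,
        )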
> The biggest asset of a developer is saying "no" to people. Perhaps AIs will learn that, but with competing AIs I'm pretty sure we'll always get one or the other to say yes, just like we have with people.
In my experience this is always the hardest part of the job, but it's definitely not what a lot of developers enjoy (or even consider to be their responsibility).
I think it's true that there will always be room for developers-who-are-also-basically-product-managers, because success for a lot of projects will boil down to really understanding the stakeholders on a personal level.
I think the biggest skill I've developed is the "Yes, but..." Japanese style of saying "no" without directly saying "no." Essentially you're saying anything is possible, but you may need to relax constraints (budget, time, complexity). If your company culture expects the engineering team to evaluate and have an equal weight in making feature decisions, then a flat "no" is more acceptable. If you're in a non-tech-first company like I am, simply saying "no" makes _you_ look like the roadblock unless you give more context and allow others to weigh in on what they're willing to pay.
We merged with a group from Israel and I had to explain to them that our engineers had given them the "Hollywood no" on something they'd asked for. Basically "Yes, that sounds like a great idea" which actually means "Hell no" unless it's immediately followed up with an actionable path forward. The Israeli engineers found this very amusing and started asking if it was really a Hollywood no anytime they'd get a yes answer on something.
Hah, I've been guilty of a version of the "Hollywood no." Usually goes like "Cool, this looks really interesting! I'll do a little research into it." And then I look at it for 2 seconds and never bring it up again. Interestingly, this sometimes is all the person really wanted: to be heard and acknowledged for the small amount of effort they put in to surface an idea.
I see two variants of the "please hear me, I have good ideas" effect: positive and negative:
1) This looks great, have you thought about adding <seemingly cool sounding thing that is either impossible to implement or wouldn't actually be useful in the real world>?
And
2) Oh no! Aren't you worried about <breaking something in some strange edge case in some ill advised workflow that no real world person would ever use>?
"I'll look into it" is a great answer. Heaven help the poor LLMs who have to take all this stuff seriously and literally...
But it's a lie if you don't do it. In some other cultures you teach people to accept that their complaint isn't important rather than teach programmers to lie, and those people accept it since it's normal there to deny complaints.
I don't think it's healthy for a culture to force lies like that.
I think there is some truth to that (although I'm speaking about people proposing new ideas rather than making legitimate complaints). For me, the intent is respect - a way of acknowledging their input without direct rejection. The "I'll look into it" statement is often a more polite way of saying "I'll take the next steps if I think it is necessary." Often you already know enough to immediately say "no," but reframing it as "I'll look into it," helps show respect to the person asking by not immediately putting them down (and often in a public setting).
I realize that the value of this approach is highly dependent on a culture of public respect. Common in Japan, and common in some cultures in the U.S. Making the wrong person look stupid is a recipe for career problems in those cultures.
Saying no is the hardest part of the job, and it's only possible after you've been around a few years and already delivered a lot of value.
There's also an art to it in how you frame the response, figuring out what the clients really want, and coming up with something that gets them 90% there w/o mushrooming app complexity. Good luck with AI on that.
I've heard this many times. It isn't clear what it means however. If nearly 100% of problems are "people problems", what are some examples of "people solutions"? That may help clarify.
"People problems" are problems mainly caused by lack of design consistency, bad communication, unclear vision, micromanagement.
A "people solution" would be to, instead of throwing crumbs to the developers, actually have a shared vision that allows the developers/designers/everyone to plan ahead, produce features without fear (causing over-engineering) or lack of care (causing under-engineering).
Even if there is no plan other than "go to market ASAP", everyone should be aware of it and everyone should be aware of the consequences of swerving the car 180 degrees at 100km/h.
Feedback both ways is important, because if you only have top-down communication, the only feedback will be customer complaints and developers getting burned out.
I would generalize micromanagement to "bad management". I have been empowered to do things, but what I was doing was attempting to clean up, in software, hardware that sucked because it was built in-house instead of using the well-made external part, and on a schedule that didn't permit figuring out how to build the thing right.
The cause might not be a leader (cross functional teams are rife with such problems), but the leader has the responsibility to deal with it, so I guess it is…
People problems happen way before the first line of code is written, even when there's not even a single engineer in the vicinity, even when the topic is not remotely related to engineering.
This is actually a very good point. Although it's indeed not hard to imagine AI being far better at estimating the complexity of a potential solution and warning the user about it.
For example in chess AI is already far better than humans. Including on tasks like evaluating positions.
Admittedly, I use "AI" in a broad sense here, despite the article being mostly focused on LLMs.
Agency work seems to be a blind spot for individuals within the startup world, with many not realizing that it goes way beyond theme chop shops. The biggest companies on the planet not only contract with agencies all the time, external contractors do some of their best work. e.g., Huge has won 3 Webby awards for Google products.
Oh I agree. I don't really have a problem with agencies, the topic of them is not really related to my reply. My focus was more on the "disposable" part.
"control-freak" not necessary. For any known sequence/set of feature requirements it is possible to choose an optimal abstraction.
It's also possible to order the requirements in such a way, that introduction of next requirement will entirely invalidate an abstraction, chosen for the previously introduced requirements.
Most of the humans have trouble recovering from such a case. Those who do succeed are called senior software engineers.
> ...will always have that one extra stupid requirement that completely befuddles the AI and pushes it into making an even bigger mess, and then I'll have to come fix it.
There's no accident about it: engineers or management chose it.
Recent discussion on accidental versus essential (kicked off by a flagged article): https://news.ycombinator.com/item?id=44090302 (choosing good dichotomies is difficult, since there's always exceptions to both categories)
great quote:
> the problems we face with software are rarely technical. The problem is always the people creating accidental complexity and pushing it to the next person as if it was "essential"
Until a level of absolutely massive scale, reasonably built modern tooling, code, and systems can handle most things technically, so it usually comes down to minimizing complexity as the #1 thing you can do to optimize development. And that complexity can come from several places.
And sometimes those minimizations ARE mutually exclusive (most times not, and in an ideal world never… but humans...), which is why much of our job is to push back and trade off against complexity in order to minimize it. With the understanding that there are pieces of complexity so inherent in a person/company/code/whatever’s processes that in the short term you learn to work around/with them in order to move forward at all, but hopefully in the long term you make strategic decisions along the way to phase them out.
In my personal experience there’s no substitute to building relationships if you’re an individual or small company looking for contract/freelance work.
It starts slow, but when you’re doing good work and maintain relationships you’ll be swimming in work eventually.
Other than saying no, the other asset is: "I see where this is going and how the business is going so I better make it flexible/extensible in X way so the next bit is easier."
It is always the case that an expert doesn't just have to be good at things, they also have to not be bad at them. Saying no to doing things they are bad at is part of that. But it doesn't matter.
We can argue that AI can do this or that, or that it can't do this or that. But what is the alternative that is better? There often isn't one. We have already been through this repeatedly in areas such as cloud computing. Running your own servers is leaner, but then you have to acquire servers, data centers and operations. Which is hard. While cloud computing has become easy.
In another story here there are many defending that HN is simple [0]. Then it is noted that it might be getting stale [1]. Unsurprisingly as the simple nature of HN doesn't offer much over asking an LLM. There are things an LLM can't do, but HN doesn't do much of that.
For people to be better we actually need people. Who have housing, education and healthcare. And good technologies that can deliver performance, robustness and security. But HN is full of excuses why those things aren't needed, and that is something that AI can match. And it doesn't have to be that good to do it.
> HN is full of excuses why those things aren't needed, and that is something that AI can match
It's not just on HN; there's a lot of faith in the belief that eventually AI will grant enlightened individuals infinite leverage that doesn't hinge on pesky Other People. All they need to do is trust the AI, and embrace the exponentials.
Calls for the democratization of art also fall under this. Part of what develops one's artistic taste is the long march of building skills, constantly refining, and continually trying to outdo yourself. In other words: The Work. If you believe that only the output matters, then you're missing out on the journey that confers your artistic voice.
If people had felt they had sufficient leverage over their own lives, they wouldn't need to be praying to the machine gods for it.
That's a much harder problem for sure. But I don't see AI solving that.
Some years back I took on an embattled project, a dispenser/retriever for scrubs in a hospital environment that had a major revision stuck in dev hell for over 2 years. After auditing the state of the work I decided to discard everything. After that, we started with a clean slate of over 200 bugs w/ 45 features to be developed.
Product wanted it done in 6 months, to which I countered that the timeframe was highly unlikely no matter how many devs could be onboarded. We then proceeded to do weekly scope reduction meetings. After a month we got to a place where we comfortably felt a team of 5 could knock it out... ended up cutting the number of bugs down only marginally as stability was a core need, but the features were reduced to only 5.
Never once did I push back and say something wasn't a good idea; much of what happened was giving high level estimates, and if something was considered important enough, spending a few hours to a few days doing preliminary design work for a feature to better hone in on the effort. It was all details regarding difficulty/scope/risk to engender trust that the estimates were correct, and to let product pick and choose what were the most important things to address.
Seconding this. With infinite time and money, I can do whatever you want — excepting squaring the circle.
Onus is yours to explain the difficulty and ideally the other party decides their own request is unreasonable, once you’ve provided an unreasonable timeline to match.
Actually straight-up saying no is always more difficult because if you’re not actually a decision-maker then what you’re doing is probably nonsense. You’re either going to have to explain yourself anyways (and it’s best explained with an unreasonable timeline), or be removed from the process.
It’s also often the case that the requestor has tried to imagine himself in your shoes in an attempt to better explain his goals, and comes up with some overly complex solution — and describes that solution instead of the original goal. Your goal with absurd requests is to pierce that veil and reach the original problem, and then work back to construct a reasonable solution to it
I don’t think you’re grasping what I’m getting at.
It’s not just about time or budget, at all. Some things are downright impossible without the appropriate data.
I mention this above but: I was recently in a situation where a PM wanted two calls to a pure mathematical function to have the same result even though the inputs were different.
I can patiently explain this multiple times and provide alternatives and try to explore the actual problem space, but there comes a point where the PM saying “maybe I should ask another engineer” becomes grating enough to make you want to quit.
Working with people who not only are unable to do their jobs but also refuse to listen to others has become too normalized in this industry.
I would tell the PM that "sure, let's go talk to him together". Most likely the other engineer will agree. Or maybe he can see something I don't. Either way we're making progress.
> Working with people who not only are unable to do their jobs but also refuse to listen to others has become too normalized in this industry.
In parts of it, sure. But there really are places with higher quality of people. You will have to prove yourself a bit more to get hired there, of course.
> You will have to prove yourself a bit more to get hired there, of course.
What you're implying here is extremely disrespectful and uncalled for. This phrase alone says much more about you than it does about me.
The point of my story is that taking cheap shots at your interlocutor isn't the way to build software, or win discussions. Let's at least aim to be a bit more respectful towards each other here.
However: even joining companies with amazing engineering departments, salaries and even great executive departments is not a guarantee that the software making process is at the same level. I say this with experience in one FAANG, two YC startups and two multi-billion-dollar unicorns.
There is a fundamental disdain towards both developers and users in the industry, visible on one side by the RTO/layoffs/AI-replacement, and on the other with dark patterns, cookie-banners, autoplay videos, behavioral experimentation and recklessness with data and general slowness.
I still maintain that saying "no" to things that damage the world and compromise the work is the duty of anyone calling themselves an engineer.