It is tough, though. I'd like to think I learnt how to think analytically and critically. But thinking is hard, and oftentimes I catch myself trying to outsource it almost subconsciously. I'll read an article on HN and think "Let's go to the comment section and see which opinions are on offer", or one of my first instincts after encountering a problem is to google it, and now to ask an LLM.
Most of us are also old enough to have had a chance to develop taste in code and writing. Many in the younger generation lack the experience to distinguish good writing from LLM drivel.
I think you are making a couple of very good points but getting bogged down in the wrong framework of discussion. Let me rephrase what I think you are saying:
Once you are very comfortable in a domain, it is detrimental to have to wrangle a junior dev with low IQ, way too much confidence, but encyclopedic knowledge of everything, instead of just doing it yourself.
The dichotomy of Junior vs. Senior is a bit misleading here: every junior is uncomfortable in the domain they are working in, but a Senior probably isn't comfortable in all domains either. For example, many people I know with 10+ years of SE experience aren't very good with databases and data engineering, which is becoming an increasingly large part of the job. For someone who has worked 10+ years on Java backends and is now attempting to write Python data pipelines, coding agents might be a useful tool to bridge that gap.
The other thing is creation vs. critique. I often let my code, writing and planning be reviewed by Claude or Gemini, because once I have created something, I know it very well, and I can very quickly go through 20 points of criticism/recommendations/tips and pick out the relevant ones. And honestly, that has been super helpful. Used that way around, Claude has caught a number of bugs, taught me some new tricks and made me aware of some interesting tech.
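For illustration, here is a minimal sketch of how such a critique pass could be scripted. It is not the commenter's actual setup; it assumes the anthropic Python SDK is installed, an ANTHROPIC_API_KEY is set in the environment, and the model name is a placeholder to swap for whatever you have access to:

    # Sketch: send a diff to Claude for critique, then skim the numbered points.
    # Assumes: pip install anthropic, ANTHROPIC_API_KEY in the environment,
    # and that the model name below is still available (placeholder).
    import subprocess
    import anthropic

    # Diff the current branch against main; adjust the base branch as needed.
    diff = subprocess.run(["git", "diff", "main"],
                          capture_output=True, text=True).stdout

    client = anthropic.Anthropic()
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",   # placeholder model name
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": "Review this diff. List bugs, risky changes and missed "
                       "simplifications as numbered points, no praise:\n\n" + diff,
        }],
    )
    print(reply.content[0].text)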
I think one thing to look out for is "deliberately" slow models. We currently use basically all models as if we needed an instant answer, but many of these applications do not have to run that fast.
To relay a possibly apocryphal anecdote: a colleague told me how his professor friend was running statistical models overnight because the code was extremely unoptimized and needed 6+ hours to compute. He helped streamline the code and got it down to 30 minutes, which meant the professor could run it before breakfast instead.
We are completely fine with giving a task to a Junior Dev for a couple of days and seeing what happens. Now we love the quick feedback of running Claude Max for a hundred bucks, but if we could run it for a buck overnight? That would be quite fine for me as well.
I don't really see how this works, though. Isn't it the case that longer "compute" times are more expensive? Hogging a GPU overnight is going to be more expensive than hogging it for an hour.
Nah, it'd take all night because it would be using the GPU for only a fraction of the time, splitting the time with other customers' tokens and letting higher-priority workloads preempt it.
If you buy enough GPUs to serve 1000 customers' requests in a minute, you could run 60 requests for each of those customers in an hour, or you could run a single request each for 60,000 customers in that same hour. The latter can be much cheaper per customer if people are willing to wait. (In reality it's a big N x M scheduling problem, and there are tons of ways to offer tiered pricing where cost and time are the main tradeoffs.)
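To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch, assuming a fixed fleet that can serve 1000 requests per minute regardless of how they are scheduled:

    # Back-of-the-envelope version of the capacity argument above.
    REQUESTS_PER_MINUTE = 1000                  # assumed fleet-wide capacity
    HOURLY_CAPACITY = REQUESTS_PER_MINUTE * 60  # 60,000 requests per hour

    # Interactive tier: 1000 customers sharing the fleet all hour.
    interactive_customers = 1000
    per_customer = HOURLY_CAPACITY // interactive_customers  # 60 requests each

    # Batch tier: everyone waits up to an hour for a single request.
    batch_customers = HOURLY_CAPACITY                        # 60,000 customers

    print(f"{interactive_customers} interactive customers: "
          f"{per_customer} requests/hour each")
    print(f"{batch_customers} batch customers: 1 request/hour each, "
          f"on the same hardware")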
Why wouldn't it? I have yet to hear one convincing argument for how our brain isn't working as a function of probable next best actions. When you look at how amoebas work, then at animals that are somewhere between them and us in intelligence, and then at us, you see a very similar kind of progression to that of current LLMs: from almost no model of the world to a pretty solid one.
That's not even playing devil's advocate: many other animals clearly have consciousness, at least if we're not solipsists. There have been many very dangerous precedents in medicine where people have been declared "brain dead" only to wake up and remember.
Since consciousness is closely linked to being a moral patient, it is all the more important to err on the side of caution when denying qualia to other beings.
I've been experimenting with Claude, and feel like it works quite well if I micromanage it. I will ask it: "Ok, but why this way and not the simpler way?" And it will go "You are absolutely right" and implement the changes exactly how I want them. At least I think it does. Repeatedly, I've looked at a PR I created (and reviewed myself, as I'm not using it "in production") and found some pretty useless stuff mixed into otherwise solid PRs. These things are so easily missed.
That said, the models, or more precisely the tools surrounding them and the craft of interacting with them, are still improving at a pace where I now believe we will get to a point, within a matter of years, where "hand-crafted" code is the exception.
The issue is that by introducing hyperbole, the meaning changes completely. Take these statements:
1. I want peace.
2. a) Therefore I need to be strong enough to deter any attack.
2. b) Therefore I need to be so strong that all my enemies fear me.
2. a) is sound. Nobody attacks if they believe the cost is higher than the benefit. ("Believe" is doing the heavy lifting here; most wars start when countries' beliefs about costs and benefits are misaligned.)
2. b) is incompatible with 1. Either you believe that a stronger party does not necessarily attack weaker parties, in which case peace could also be maintained without supremacy, or you believe that supremacy leads to wars, but then your own pursuit of supremacy cannot be in the name of peace.
Unless, of course, you're a race supremacist, who believes you're so much wiser and more moral than anyone else that only you can be trusted with unchecked power. An idiotic and immoral position to take.
Wrong. 2b is compatible with 1. You can have peace without military supremacy, for a time at least. But you can guarantee peace with military supremacy. That's the difference.
You are at your enemy's mercy without it. They may conquer you on a whim, and there's not a thing you can do about it.
I would much prefer that military supremacy be in the hands of the wise and moral; there's nothing idiotic or immoral about that (indeed, the opposite is idiotic and arguably immoral).
On the contrary, Figma's value proposition is increased by LLMs. Current coding assistants are like idiot-savant junior devs: they have relatively low reasoning capabilities, way too much courage, lack taste and need to be micromanaged to be successful.
But they can be successful if you spell out the exact specifications. And what is Figma if not an exact specification of the design you want? Within a couple of years the Frontend Developer Market might crash pretty hard.
There were many daggers making the Perl Community bleed:
1. Enterprise Development
Java et al. led to a generation of developers working further from the kernel and the shell. Professionalization of the field led to increased specialization, and most developers had less to do with the deployment and management of running software.
Tools also got much better, requiring less glue and shifting the glue layer to configs or platform-specific languages.
Later on, DevOps came for the SysAdmins, and there's just much less space for Perl in the cloud.
2. The rise of Python
I would put this down mostly to universities. Perl is very expressive by design; in Python there's supposedly only "one right way to do it". Imagine you're a TA grading a hundred code submissions: in Python, everyone probably did it in one of three ways; in Perl, the possibilities are endless. Perl is a language for writing, not reading.
3. Cybersecurity became a thing
Again, this goes back to readability and testability. Requirements for security started becoming a thing, and Perl was not designed with that in mind.
4. The Web was lost to Rails, PHP, then SPAs
I'm less clear on the why of that, but Perl just wasn't able to compete against newer web technologies.
You could write good-quality, secure code in Perl, but the level of dynamism in the implementation and the fact that there's only the one main implementation mean there's not much hope of quality static analysis.