This is incredibly shallow, and borders on a kind of delusion. Even if you agree that all labour _could_ be automated, it's highly debatable whether you'd want it to be. A lot of human society is for humans and by humans, and that is a good thing. We are social animals; automating every task is, to say the least, not desirable.
This point is becoming self-evident in the broader populace's rejection of generative AI. Maybe I am in a bubble, but it seems that everyone except tech bros (those who have something to gain from adoption) hates generative AI. Creating things is what makes us humans enjoy life. No one (real) wants to automate the humanities.
I agree, though it's hard to tell what people outside the tech sphere actually think about the tech - many seem pretty positive about ChatGPT.
What concerns me is that it's not widely enough understood that there's a frantic, cultish mindset around this tech in SV and wider tech circles - a mindset that wants to see society upended in order to justify the investments being made.
I'm not sure that would necessarily have helped. My regular SIM was on another network (giffgaff), which was down. I went out and bought a pay-as-you-go SIM on yet another network (EE) - no dice there either. Not sure what the issue was, but it seemed to have knock-on effects on other networks in affected areas like mine.
giffgaff is O2, and O2 shares a parent company with Vodafone (Virgin Media O2) and also shares masts. EE and Three probably share infrastructure too, particularly in rural areas.
At least for me, Three was definitely working (South West)
Vodafone & O2 / VM are separate entities, I think. In fact, Three is being merged into Vodafone. Regardless, it's hard to know what a good backup would be - maybe Starlink ;)
Which country do you propose Russians ought to move to so that they are no longer complicit in some other government's malfeasance? Russia is not the only country whose residents this logic would apply to.
Isn't 'agentic search' just another form of RAG? Information still gets retrieved and added to the prompt, even if the 'prompt' is levels down in the product and not visible to the user.
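A minimal sketch of that equivalence, with toy stand-ins (DOCS, retrieve, agentic are all invented for illustration, not any product's API): whether retrieval happens in one fixed step or inside an agent's tool-calling loop, the end state is the same - retrieved text spliced into the prompt the model actually sees.

```python
# Toy corpus standing in for a vector store or web search index.
DOCS = {
    "billing": "Invoices are generated on the 1st of each month.",
    "refunds": "Refunds are processed within 5 business days.",
}

def retrieve(query: str) -> list[str]:
    """Toy keyword retriever; a real system would use embeddings or search."""
    return [text for key, text in DOCS.items() if key in query.lower()]

def build_prompt(query: str, context: list[str]) -> str:
    """Retrieved text is spliced into the prompt - this is the RAG step."""
    return "Context:\n" + "\n".join(context) + "\n\nQuestion: " + query

def agentic(query: str, rounds: int = 2) -> str:
    """'Agentic search': the loop may call retrieve() several times as a
    tool, but each round still just appends retrieved text to the prompt."""
    context: list[str] = []
    for _ in range(rounds):
        hits = [h for h in retrieve(query) if h not in context]
        if not hits:
            break  # a real agent would let the model decide when to stop
        context.extend(hits)
    return build_prompt(query, context)
```

With a single relevant document, the agent loop and classic one-shot RAG produce an identical prompt; the difference is only who decides when to retrieve.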
I think this article misses the point. I don't think many engineers are averse to collaborating, advocating for their work, and using soft skills. But politics also entails lots of complex human relationships - favouritism, rivalries, jealousy, egomania, etc. - and these play out in different ways in different orgs depending on how people and teams are incentivised and arranged. The net result for the engineer trying to navigate this can be a lot of wasted energy and frustration. Not always, of course, and some orgs really don't have much of a problem. I suspect it's more common in larger orgs with a traditional corporate culture.
I agree, and I think the intent behind the code is the most important piece of missing context. You can sometimes infer intent from code, but code is usually just a snapshot of an evolving intent.
I've started making sure my codebase is "LLM compatible". This means everything has documentation, and the reasons for doing things one way and not another are recorded in the code itself. Funnily enough, I do this documentation work with LLMs.
E.g. "Refactor this large file into meaningful smaller components where appropriate, and add code documentation on what each small component is intended to achieve." The LLM can usually handle this well (with some oversight, of course). I also tell the LLM in its instructions.md to document each change, and why, in the code.
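As a hypothetical sketch, such standing instructions might look something like this (the wording is mine, not a quote from any real file):

```markdown
# instructions.md (excerpt)

- When you change code, add or update a comment at the change site
  explaining what the code is for and why this approach was chosen
  over the obvious alternatives.
- When refactoring, split large files into smaller components and
  document what each component is intended to achieve.
```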
If the LLM does create a regression, I also ask it to add a comment to guard against future regressions - "Important: do not do X here as it will break Y" - which again seems to help, since next time the LLM will see the warning right there in the portion of code where it matters.
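An illustrative sketch of what such a guard comment can look like in practice - the function, the IDs, and the "stripping zeros breaks billing" constraint are all made up for the example:

```python
def normalise_user_id(raw_id: str) -> str:
    """Canonicalise user IDs before they are used as cache keys.

    Intent: IDs arrive from two upstream systems, one uppercase and
    one lowercase; every lookup must use one canonical form.
    """
    # Important: do not strip leading zeros here - billing exports rely
    # on fixed-width IDs, and trimming them broke reconciliation once.
    return raw_id.strip().lower()
```

The docstring records the intent; the inline comment records the regression, right where a future edit (human or LLM) would otherwise reintroduce it.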
None of this verbosity in the code itself is harmful to human readers either, which is nice. The end result is a codebase that is much easier for LLMs to work with.
I suspect LLM compatibility may become a metric we measure codebases by as we learn more about how to work with them. Right now, LLMs themselves often produce code that is poorly LLM-compatible, but with some extra documentation in the code itself they can do much better.