It's your Claude speaking to their Claude, which is fair, but it makes this whole discussion a bit dumb, since we're basically watching two bots argue with each other.
> Animals in general avoid violence between conspecifics
That seems to mostly be true only for species that don't already dominate their environment. Orcas, for example, attack each other when they enter each other's territory, as do ants. Humans dominate most land animals today, so those animals have probably lost most of that intraspecies violence: humans already kill enough of them that killing each other is no longer a benefit.
In other words, it requires a tremendous amount of effort to fully communicate your tastes to the AI. Not everybody wants to expend the time or mental effort doing this! (Once we have more direct brain/computer interfaces, this effort will go down, but I don't expect it will be fully eliminated.)
This is the second time in two days I've seen a subthread here with folks seemingly debating whether or not defining and communicating requirements counts as work if the target of those requirements is an LLM system.
I'm confused as to why this is even a question. We used to call this "systems analysis" and it was like... a whole-ass career. LLMs seem remarkably capable of consuming that analysis's output, but they're not even close to the first software systems sold as being able to take requirements and turn them into working code (for various definitions of "requirements" and "working").
I'm also skeptical that direct brain interfaces would make this any less work; I don't think "typing" or "English" are the major barriers here, any more than "drafting" is the major barrier to folks designing their own cars and houses... Any fool thinks they know what they need!
At some point, just an idea will be enough for your Neuralink to spawn an agent that creates 1000 different versions of your idea, along with variants that mimic your tendencies. There will be no effort, only choice.
As both a software engineer and a creative, I absolutely do not want 1,000 versions of what I am trying to make generated for me. I don't care if it's free or even cheap. I want to make things.
I know this is a concept deeply alien to a lot of HN's userbase, but I did not get into programming or making art to have finished products; a finished product is a necessary outcome, and lovely when reached, but ultimately I derive my enjoyment from The Process. The process of finding a problem a user has, and solving it.
And yes I'm sure Claude could do it faster than me (and only at the cost of a few acres of rainforest!) but again, you're missing the point. I enjoy the work. That is not a downside to me.
Deciding between 1000 different versions is a lot of effort, IMO. With manual coding you're mostly making one decision at a time, which, when you think about it, is easier. It just requires foresight, which comes from experience.
What do you mean? It is grounded in the text it was fed; the reason it said that is that humans have said that, or something similar to it, not because it analyzed a lot of information about LLMs and thought up that answer itself.
LLM can "think" but that requires a lot of tokens to do, all quick answers are just human answers or answers it was fed with some basic pattern matching / interpolation.
> I think the disagreement doesn’t lie in this concept, but rather in whether an LLM can be used by someone who’s willing to put in effort to assist them in doing so, rather than just having it do it for them
No, you misunderstood here. People aren't saying "it is harder to learn in the future", the issue is "it will be harder to make sure that someone will learn in the future".
Currently you need an engineering degree and experience to do engineering work. But if, in the future, a lot of people get their degree and experience just by calling an LLM for every problem, those engineers will not understand what they are doing at all. Before, someone with that experience would have solved a lot of problems manually on the job, and that experience made them an expert. The same person solving those problems by calling an LLM and pasting in the answer will be just as ignorant as someone with no experience.
Most such people today didn't want to become engineers out of curiosity; they just wanted a job. In the future, all such people will use LLMs and never learn. They make up the main part of our workforce, so it is a scary prospect that we will no longer be able to force them to learn things properly, since LLMs let them do basic tasks without learning.
If you argue that there are plenty of people who learn for fun, you would be wrong. Extremely few people learn enough in their own time to contribute meaningfully to, for example, mathematics; it isn't enough to matter. People learn those fundamentals primarily because they are forced to for a degree they need for a job; if they weren't forced to learn and pass tests, they would happily do the job without any knowledge or skills.
The standard dictatorial takeover of a democracy is to keep the elections and the presidency but add a supreme leader above the president, similar to what Iran, Russia, and China are doing. So Trump would no longer be president; he would be supreme leader, joining what the other world powers are doing.
And the military. Which side the majority of soldiers support matters a lot, since they have the final say when leaders cannot agree. Trump does a lot to gain favor with the military; Democrats don't do much for them.