> tptacek wasn't making this argument six months ago.
Yes, but other smart people were making this argument six months ago. Why should we trust the smart person we don't know now if we (looking back) shouldn't have trusted the smart person before?
Part of evaluating a claim is evaluating the source of the claim. For basically everybody, the source of these claims is always "the AI crowd", because those outside the AI space have no way of telling who is trustworthy and who isn't.
If you automatically lump anyone who makes an argument that AI is capable - not even good for the world on net, just useful in some tasks - into "the AI crowd", you will tautologically never hear that argument from anywhere else. But if you've been paying attention to software development discussion online for a few years, you've plausibly heard of tptacek and kentonv, e.g. from prior work. If you haven't heard of them in particular, no judgement, but you gotta have someone you can classify as credible independently of their AI take if you want to be able to learn anything at all from other people on the subject.
Part of being on Hacker News is learning that there are people in this community - like tptacek - who are worth listening to.
In general, part of being an effective member of human society is getting good at evaluating who you should listen to and who is just hot air. I collect people who I consider to be credible and who have provided me with useful information in the past. If they start spouting junk I quietly drop them from my "pay attention to these people" list.
That system depends on pulling road funding from states that don't follow the rules. Technically, any state can opt out by forgoing highway funding. Since the federal government isn't handing states large amounts of AI funding, it can't use the same lever here.
In the few days following the election, there was a flood of conservative posters all over the place. After about a week, they all disappeared and Reddit returned to its usual politics. I think the difference you are seeing is an atypical amount of conservatism, not the other way around. Most people who voted for Harris still do not think that the lack of a primary was the issue.
Probably not, but as someone who didn't vote for either major party and isn't a conservative, it was glaringly obvious that ramming through, without lube, someone who totally dive-bombed the prior primary meant skipping the sanity check that a primary would have provided.
The strongest candidate for either party to field would be an incumbent President, especially one who has already beaten the other party's frontrunner. They have the advantage of celebrity, a record and the bully pulpit. The second strongest candidate would logically be an incumbent Vice President.
The Democratic Party may have been a shitshow but Harris was the best possible option once Biden was no longer in contention. And the margin between her and Trump turned out to be slim, so a Harris win wouldn't have been impossible.
Harris was pretty much the only option. The primary was already over, and there were real questions about who could spend the campaign funds after Biden stepped down.
That said, I really blame her loss on her and the Biden campaign more than anything. They chased hard after disaffected Republican voters at the expense of the base. They failed to win those voters and lost some of their base voters.
I've noticed a very clear, material change even on this site: a comment with a conservative viewpoint used to get downvoted into oblivion, and now I seem to see far more diversified opinions. Which is great, I want that.
Some of the people who have done the worst things in history have been well put together people. The man who is ruthless and puts himself before everything oftentimes ends up successful, wealthy, and with plenty of resources to take care of himself and the people he chooses. Does that make him a good person?
One of the most important, time-tested values is that of responsibility and honor. That means doing the right thing with the power that you do have, both by yourself and by others, even if it hurts you. We are each responsible for the environment (natural and man-made) that we inhabit, and to that extent it is our duty to help others and ourselves.
We have been given many, many resources at our disposal, and we bear the responsibility to use them well. Too often in our society we shirk that responsibility with the excuse "well, it's not our problem".
I will try to save someone if his life is in danger. I will try to help a stranger if I can and if helping him does not cause harm to others.
But I am only motivated to help individuals. I don't plan to change societies, help social groups, invade countries, or dictate policies and doctrines, because that is what some people mean by "taking care of the world".
I began to have a profound mistrust and dislike for activists, ideologues, social warriors, fighters for "a good cause", and revolutionaries. Their actions usually end in loss of freedoms and bloodbaths.
Some of the most horrific atrocities have been committed by people trying to "take care of the world".
> We have been given many, many resources at our disposal, and we bear the responsibility to use them well.
You should use "I" rather than "we" and I would agree. I've been given the gift of life in my children and I do everything for them. Fortunately I have resources to spare and try to take care of my family and neighbors as well, and I suggest you do the same.
The best people I know do good in both local and global ways. It's not necessary to choose one or the other. I don't disagree with your examples, but I notice that they say nothing about donating money to World Vision or putting solar panels on your roof, for example. Replace these with causes you believe are good.
This might be unfair, but I'd summarise what you said as "living a charitable life, but only for people within 50km of your house", and I think it's fairly obvious that "living a charitable life, mostly for people within 50km of your house, but also you give $50 a month to an international charity and you try to generate a bit less carbon dioxide" is better for the world, better for you because you don't have to harden your heart, and wouldn't harm most people's ability to look after themselves.
I agree that it's possible to be too neurotic about this and do what Sam Bankman-Fried did. It's also possible to be a little better than average at caring for the world without much cost to yourself. I don't understand why anyone would have a problem with the latter.
I do, and I do so with the knowledge that this is a responsibility that has been placed on me, and others, by the gifts that have been given to me. I help others and contribute to society, as is my duty, and I expect others to do the same. I also expect the same responsibility, trustworthiness, and honor of those who have been given power.
The job loss depends on the average speedup, however. If the AI is only effective in 10% of tasks (the basic stuff), then that 3x improvement goes down to 1.3x.
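For what it's worth, the back-of-the-envelope version of that arithmetic is basically Amdahl's law. A rough sketch, where p is the fraction of work the AI actually handles and s is the speedup it gives on that fraction (both symbols are mine, not from the parent comment):

```latex
% Overall speedup when a fraction p of the work gets a local speedup of s:
\text{overall speedup} = \frac{1}{(1 - p) + p/s}
% When p is small, the (1 - p) term dominates, so the overall gain stays
% close to 1x no matter how large s is.
```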
That's such an economic fallacy that I'd expect the HN crowd to have understood this ages ago.
Compare the average productivity of somebody working in a car factory 80 years ago with somebody today. How many person-hours did it take then and how many does it take today to manufacture a car? Did the number of jobs between then and now shrink by that factor? To the contrary. The car industry had an incredible boom.
An efficiency increase does not imply job loss, since the market size is not static. If cost is reduced, things that weren't viable before suddenly become viable, and the market size can explode. In the end you can end up with more jobs. Not always, obviously, but there are more examples than you can count which show that.
This is all broadly true, historically. Automating jobs mostly results in creating more jobs elsewhere.
But let's assume you have true, fully general AI. Further assume that it can do human-level cognition for $2/hour, and it's roughly as smart as a Stanford grad.
So once the AI takes your job, it goes on to take your new job, and the job after that, and the job after that. It is smarter and cheaper than the average human, after all.
This scenario goes one of three ways, depending on who controls the AI:
1. We all become fabulously wealthy and no longer need to work at all. (I have trouble visualizing exactly how we get this outcome.)
2. A handful of billionaires and politicians control the AI. They don't need the rest of us.
3. The AI controls itself, in which case most economic benefits and power go to the AI.
The last historical analog of this was the Neanderthals, who were unable (for whatever reason) to compete with humans.
So the most important question is, how close actually are we to this scenario? Is it impossible? A century away? Or something that will happen in the next decade?
> But let's assume you have true, fully general AI.
That's a very strong assumption, and a very narrow setting that is one of the counterexamples.
AI researchers in the 80s were already telling us that AI was just around the corner, five years away. It didn't happen. I wouldn't hold my breath this time either.
"AI" is a misnomer. LLMs are not "intelligence". They are a lossy compression algorithm of everything that was put into their training set. Pretty good at that, but that's essentially it.
Currently, AIs emulate a less-skilled, junior developer. They can certainly get you up and running, but adding junior developers doesn't speed up a lot of projects. What we are seeing is people falling into the "mythical man-month" trap, where they believe that adding another coding entity will reduce the amount of work humans do, but that isn't how most projects work out.
To put it simply, it doesn’t matter if AI does 80% of the work if that last 20% takes 5x longer. As long as you need a human in the loop who understands the code, that human is going to have to spend the normal amount of time understanding the problem.
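A quick worked version of that claim, under the generous assumption that the AI's 80% costs the human nothing at all and the remaining 20% takes 5x the usual effort:

```latex
% Total effort relative to doing the whole job by hand:
\underbrace{0.8 \times 0}_{\text{AI-done portion}}
  + \underbrace{0.2 \times 5}_{\text{human portion, 5x slower}} = 1.0
% i.e. no net time saved, even before counting review overhead.
```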
Indeed. My roommate has just been put on a new project at his workplace. No AI involved anywhere. But he inherited a half-done project; the code is even 90% done. He is spending so much time trying to understand all that existing code and noting down the issues he'll need to fix. It's not just completing the remaining 10%. It's understanding, fixing, and partially reworking the existing 90%. Which he has to do, since he'll be responsible for the thing once released. It's approaching the point where just building it from scratch on his own would have been more time-efficient.
It seems to me that LLM output creates a similar situation.
Yeah but AI coding does speed up some simple tasks. Sometimes by a lot.
But we have to endure these tedious self-congratulatory "mwa ha well it's still not as good as my code" posts.
No shit. Nobody is saying AI can write a web browser or a compiler or even many far simpler things.
But it can do some very simple things like making basic websites. And sure it gets a lot of stuff wrong and you have to correct it, or fix it yourself. But it's still usually faster than doing everything manually.
This post feels like complaining about cruise control because it isn't level 5 autonomy. Nobody should use it because it doesn't do everything perfectly!
> This post feels like complaining about cruise control because it isn't level 5 autonomy.
It's nothing like that, because cruise control works reliably. There is never a situation where cruise control randomly starts going 90mph or 10mph while I have it set to 60mph. LLMs on the other hand...
This is why I disagree with people who argue (as you did) "it really does speed up simple tasks". No it doesn't, because even for simple tasks I have to check its work every time. In less than the time it takes me to do that, I could've written the code myself. So these tools slow me down, they don't speed me up.
> In less than the time it takes me to do that, I could've written the code myself.
This hasn't been my experience at all. At worst you skim the code and think "nah that's total nonsense, I'll write it myself from scratch", but that only takes a few seconds. So at worst it wastes a few seconds.
Usually, though, it spits out a load of stuff which definitely requires fixing up and tweaking, but which is still way faster than doing it all yourself.
Obviously it depends on the domain too. I wouldn't ask it to write a device driver, or UVM code, or whatever. But a website interface? Sure. "Spawn a process in C and capture its stdout"? Definitely. There's no way you are doing that faster by hand.
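For concreteness, this is roughly the boilerplate that example boils down to; a minimal sketch using popen, where the "ls -l" command is just a placeholder:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Spawn a child process via the shell, with a pipe attached to its stdout. */
    FILE *child = popen("ls -l", "r");   /* placeholder command */
    if (child == NULL) {
        perror("popen");
        return EXIT_FAILURE;
    }

    /* Read the child's stdout line by line and echo it. */
    char line[4096];
    while (fgets(line, sizeof line, child) != NULL) {
        fputs(line, stdout);
    }

    /* pclose waits for the child and returns its exit status. */
    int status = pclose(child);
    return (status == -1) ? EXIT_FAILURE : EXIT_SUCCESS;
}
```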
Honestly, I'm not sure if there is any correspondence between an AI and a particular skill level of developer. A junior developer won't know most of the things an AI does; but unlike an AI, they can be held accountable for a particular assignment. I feel like AI is more like "a skilled consultant who doesn't know that much about your situation and refuses to learn more than the bare minimum, but will spend an arbitrary amount of time on self-contained questions or tasks, without checking the output too carefully." Which is exactly as useful yet infuriating as it sounds.
I’ve seen the opposite: plenty of examples of very productive teams with really no standout “10x engineer”, and several examples of unproductive teams purely due to poor team decision making. IME, productivity is a measure of past investment, not current skill.
The AP News was just kicked out of press conferences for not using the government-preferred term for the Gulf of Mexico. The new director of the FBI is pledging to go after members of the press he doesn't like. The US has been jumping headfirst into the "bad speech isn't free" direction over the past month.