He knows for sure because he's omnipotent. HN commenters are experts on everything. They were right about driverless cars and how they would never come to fruition, and they were right about how useless vibe coding is. This HNer here is right about everything too.
People will comfort you about your emotions and your tears and then tell you you’re wrong to feel sad about it. They’re lying to you. Your instincts about how you feel are completely correct.
The reality is it is a bit irrational to love a SaaS so much that you cry about it. I've been using GitHub as long as you have and I feel nothing for it. To most people, moving off of GitHub is a huge hassle, an annoyance rather than a tragedy.
I think the biggest damage is the project visibility. Everything else is more of an annoyance.
This is an exaggeration. But there are things China can do that are legal in the name of national security. I would say it’s just as extreme as what the US would do to Snowden if he came back.
Everything in the universal language of math is defined as an expression or formula.
All proofs are based on this concept.
To translate this into programming, think about what programming is. Rather than being a single-line formula, a program is a series of procedures:
1. add 1
2. add 3
3. repeat.
In functional programming you get rid of that, and instead you think from the perspective of: how much of a program can you fit into a single one-liner? A single expression? Think map, reduce, list comprehensions, etc.
That is essentially what functional programming is: fitting your entire program onto one line, OR fitting it into a math expression.
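A minimal Python sketch of the contrast described above. The step values (add 1, add 3) come from the list earlier; the repeat count of 5 is my own illustrative assumption:

```python
from functools import reduce

# Imperative style: a series of procedures mutating state step by step
total = 0
for _ in range(5):
    total += 1   # step 1: add 1
    total += 3   # step 2: add 3

# Functional style: the whole program as a single expression
total_fp = reduce(lambda acc, step: acc + step, [1, 3] * 5, 0)

assert total == total_fp == 20
```

Same result either way; the functional version is just the procedure folded into one expression.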
The reason why you see multiple lines in FP languages is because of aliasing.
m = b + c
y = x + m
is really:
y = x + (b + c)
This is also isomorphic to the concept of immutability. By making things immutable, you're just aliasing part of the one-liner...
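In Python terms (using the names m, b, c, x, y from the example above, with arbitrary values of my own choosing), the aliased form and the substituted one-liner are the same expression:

```python
b, c, x = 2, 3, 10  # arbitrary example values (my assumption)

# Aliased form: name an intermediate result and never mutate it
m = b + c
y = x + m

# Substituted one-liner: inline the alias back into the expression
y_inline = x + (b + c)

assert y == y_inline == 15
```

Because m is never reassigned, substituting its definition into y changes nothing, which is exactly the immutability-as-aliasing point.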
So functional programming, one line programs, formulas and equations in math, and immutability are essentially ALL the same concept.
That is why Lean is functional: because it's math.
I think those of us who have years of experience under our belts are safe. If we're older, the knowledge is ingrained, and atrophy of this knowledge will be limited by the fact that it's already "imprinted" onto our brains.
Our futures are safe in this sense; in fact it's even beneficial, as we may be the last generation to have these skills. Humanity's future, on the other hand, is another open question.
When people communicate, they speak in terms of the overwhelming generality of reality. There's always at least one guy who is an extreme exception.
I can tell you this, the person you're replying to comes from the overwhelming majority/generality. You, on the other hand, are that one guy.
Of course even my comment is a bit general. You're not "one" guy literally. But you are an extreme minority that is small enough such that common English vernacular in software does not refer to you.
Not a compliment. I'm saying you're speaking from an incredibly obscure perspective because you took what the other person said way too literally and pedantically.
Your intent here is irrelevant. ;) I know why my perspective is the correct one here, and the fact that it is "incredibly obscure" only supports my findings.
The majority of software sucks and the majority of "software engineers" suck too. It's not a herd one should ever want to be associated with.
It's because HN is not really full of smart people. It's full of people who think they're smart and take pride in the idea that they're pretty intelligent.
ChatGPT equalizes intelligence. And that is an attack on their identity. It also exposes their ACTUAL intelligence which is to say most of HN is not too smart.
How can you ask this question on a post titled "Amateur armed with ChatGPT solves an Erdős problem"???? Are you looking for some randomised control trial? omg
God, do people not read my posts? I wrote this: "It also exposes their ACTUAL intelligence which is to say most of HN is not too smart."
These types of people need citations for the time of day. They don't know how to debate or discuss in abstract terms. Reality freezes over if no scientific papers exist on the topic.
> These types of people need citations for the time of day. They don't know how to debate or discuss in abstract terms. Reality freezes over if no scientific papers exist on the topic.
Oh man you have captured the exact emotion I had. These people need randomised control trials to prove any inane thing lmaoo. Reddit brained I tell you
Idk, going out on a limb and guessing the folks who hang out on erdosproblems.com aren’t run-of-the-mill dumbasses. The prompt, if you look at it, is actually quite clever. Not as clever as the proof. But far from the equalization OP posits.
AI equalizes intelligence in the sense that it closes the gap. Not perfectly, not infinitely, but directionally. The distribution compresses. The floor rises faster than the ceiling, so people who used to be far apart end up operating much closer together.
You can already see it in the Erdős example. The person who wrote that prompt wasn’t some random idiot. It took real cleverness to even set it up that way. But the fact that they could get that far, with assistance, is exactly the point. The distance between “amateur” and “expert” shrinks when the tool fills in large parts of the path.
Now extend that forward. Today it’s one clever person, one problem, one careful interaction. As the tooling improves, that same pattern scales. Better reasoning, better search, better guidance. The amount of lift the tool provides increases, which means the gap continues to narrow.
All the supposed “counterpoints” people bring up are already implied in the claim. “Equalize” here obviously means moving closer to equality. Is it NOT obvious that LLMs don't actually equalize intelligence to a level of 100%? Do I actually need to spell that out? If there was nothing at stake, I wouldn't need to.
But instead people latch onto the most absurd version possible, knock that down, and act like they’ve said something meaningful. It’s the same mindset as that guy demanding a formal paper or citation for an observation you can see unfolding in real time. Not because it’s unclear, but because engaging with the actual claim is uncomfortable. It’s easier to distort it into something extreme and dismiss it than to admit the gap is closing.
I’ll agree the top of the stack may have compressed downwards. But that leaves open the possibilities that (a) the ceiling has risen and (b) the floor isn’t really moving, inasmuch as productively engaging with any tool required baseline intelligence.
More pointedly, I don’t think anyone who opposes AI does so because they want to remain the smart kid in the room.
> If there was nothing at stake, I wouldn't need to
You’re on HN buddy. If you measure stakes by how pedantically you’re challenged, everything will rise to existential terms.
When I said stake, I meant HN is especially vulnerable because the stake is the HN community's identity as programmers. Consistently on HN you see articles on IQ voted up. People take pride in their intelligence and programming skills here... and AI is dismantling that identity piece by piece.
It's more than being the smart kid in the room. The future is pointing to a place where programming is just a one-hour tutorial on how to tell AI to do it for you. What happens to you if your entire identity and career were built on being a programmer, as many people's are here? THAT is what is at stake.
Yes, I love living in communism too. Imagine if you had to pay money for it or something. The wealthiest people would get unrestricted access to intelligence while the poor none. And the people in the middle would eventually find themselves unable to function without a product they can no longer afford. Chilling, huh? Good thing humans are known for sharing in the benefits of technological progress equally. /s
His core issue is jealousy and fear. I don't think these types of people are at the top of the intelligence curve (closer to the bottom), but that is orthogonal to my point. What I'm saying is that his personality archetype makes him think (keyword) he's at the top of the intelligence curve, and an equalization means, personally to him, that he's losing his edge.
More specific to HN is the archetype of: "I have spent years honing my craft as an expert programmer; my identity is predicated on being an expert programmer, in which high intelligence is causal and associated positively with my identity." That's why, ironically, most of HN was completely wrong about AI. They were wrong about driverless cars, and they claimed vibe coding was trash. It's the people who think (keyword) they're stupid/average (aka the general public) who got it right... because perceptually they stand to gain from the equalization.
Anyway... this fear and jealousy is not something most humans can admit to themselves. Nobody will actually be able to realize that these emotions drive their thinking. They have to lie to themselves and rationalize a different reality. That's why you get absurdist takes like this.
To everyone reading: it is obvious that ChatGPT does not equalize intelligence to the point of 100%. The statement is obviously not saying that. Everyone knows this. You want proof?
Look at the Declaration of Independence... without getting too pedantic: "All Men are created equal" is not saying all males are 100% equal. Everyone knows this. First off, no one is 100% equal... and second, the statement in a modern context is obviously not referring only to men. It refers to both women and men, and clearly men and women are nowhere near equal.
So if you all know this about the Declaration of Independence... how can you not see the same nuance in "ChatGPT equalizes intelligence"? First ask yourself: do you think you're smart? If you do, then the self-delusion I just described is likely happening to you.
They used ChatGPT Pro to solve it. Over 50% of people in the world couldn't afford ChatGPT Pro ($200/mo) even if they spent more than half of their income on it. [1]
What was that about "spreading FUD about unaffordability"?
They didn't buy ChatGPT Pro themselves. You could've done the same as the students in the article and gotten a free subscription if you were interested in this instead of trolling.
ChatGPT flattened the difference between a top .0001-percentile mathematician and an amateur. That is the definition of making intelligence more available.
You are exaggerating the situation by essentially claiming that since some people can't afford 200 dollars, ChatGPT is not democratising intelligence. It's a bit strange to claim this, because according to you it only becomes affordable when the maximal number of people can afford it. It's a bit childish.
Directionally it is democratising. Are more people able to afford higher level intelligence? Yes.
> ChatGPT flattened the difference between top .0001 percentile mathematician and an amateur
It flattened the difference between a top epsilon percentile mathematician and an amateur with money. It didn't flatten the difference between an amateur with a little money and an amateur with a lot of money. It widened it. That's the part I'm scared about.
You are shrugging this off because it currently isn't that expensive. But we're talking about the massively subsidized price here, which is bound to get orders of magnitude higher when the bubble pops. Models are also likely to get much better. If it gets to a point where the only way to obtain exceptionally high intelligence is with an exceptionally high net worth and vice versa, how is that going to democratize anything?
What you are saying is similar to saying "computers and the internet don't democratise intelligence and access to information because some supercomputers exist". It's pedantic and frankly childish.
"All men are created equal" is obviously not literally saying all humans are 100% equal. Just like how "ChatGPT equalizes intelligence" is not saying ChatGPT equalizes the intelligence of all humans to a level of 100%.
I'm not going to spell out what I meant by: "ChatGPT equalizes intelligence". You can likely figure it out for yourself, because the problem doesn't have anything to do with your reading comprehension. The problem is more akin to self delusion, you don't want to face reality so you interpret the statement from the most absurdist angle possible.
The admins at HN actually noticed this tendency among people and encoded it into the rules: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
It is not “absurdist” to call out a baseless claim that doesn’t take into account over half of humanity, a percentage that will grow even further once investor money inevitably runs out. If your response to that is to wave away more than 4 billion people, then you’re not even trying to look like you care about reality, you’re just trying to make yourself feel better with some made-up nonsense.
You seem to be under the misconception that you somehow “own” ChatGPT or are entitled to the insight it provides. You don’t and you aren’t. You are at the mercy of trillion-dollar private companies that owe you nothing. Their products’ intelligence is not your intelligence. Whatever profits you’re seeing from it, it’s currently losing them money. And when that changes, so will your image of them as benefactors of humanity who make intelligence available to all.
It is fucking absurdist and pedantic when I hear this drivel coming out of the mouth of a hypocrite. You're already part of the privileged few. Every single thing you do, from drinking clean water to writing your bullshit on the internet, is the result of technology being distributed among a top percentage, exactly what you're arguing against. And as a recipient of such benefits, you should have the intelligence to see that even that much matters. Why don't you raise your shit against the assholes who are really making things unequal: internet service providers and their astronomical fees, which don't equalize the world enough for homeless people to have access to the internet. That's society's real problem according to your genius logic... so stop your tirade against AI, as there are bigger fish to fry.
> You seem to be under the misconception that you somehow “own” ChatGPT or are entitled to the insight it provides.
Right now, for the price of a new car, I can definitely get enough hardware to run a local LLM of the quality of ChatGPT at my home. And this is just the status quo. The demand for this technology and the projected improvement in prices point to a future where you can run one for the price of a new computer. Wake up.
But who the fuck cares? Point being is AI is equalizing intelligence and you’re just throwing in tangents and side branches to try to disentangle the obvious general truth which I will repeat: AI is fucking equalizing intelligence and if you don’t agree, you’re absurd.
Oh, if you're so butthurt by this, go ahead and call me names if you want. Hypocrite is not really that much of an insult, and it's true. You called someone absurd as well.
> Then you lecture me about HN guidelines.
Not a lecture. An example of how it's a well-known issue. I'm obviously not a rule follower myself, and your content is not really fit for HN either. Once you flag, the entire conversation is over. I don't really care, but if I were you I'd rather end the argument by being right instead of running away and tattling to the authorities. Up to you.
Maybe the admins come in and block the convo, delete it, and/or ban me. Who knows. I don't care. The fact of the matter is... I'm right, and you know it. Everything I said here is true, and you're ending it this way because you can't face it.
<meta> You're incredibly rude but at the same time... 100% right. On first reading it was quite off-putting, but your conclusions are solid. Emotions take over rationality, and people, just like thinking models, reverse-engineer a logical-sounding explanation for their actions; they don't "expose" their internal chain of thought.
Maybe the models are closer to us than we're comfortable admitting.
> It is not “absurdist” to call out a baseless claim that doesn’t take into account over half of humanity, a percentage that will grow even further once investor money inevitably runs out.
I love the confidence that comes with this claim. You can run open models on your laptop today that are comparable to the best models from two years ago. But sure, spread your FUD about investor money running out.
I have nothing against open weight models, my issue is more with these mega-corporations posing as saviors of humanity. That said, how is your consumer hardware going to out-compete a datacenter when it has more mouths to feed per token than a datacenter? Who is going to give you money to run anything when a machine can do everything you can do?
No matter how you spin it, we humans are now becoming thermodynamically less efficient versions of LLMs. We contribute nothing of value to the system, so economics dictates we have no place in it except as investors. Skill is nothing now, and ownership is everything. So yeah, I'm afraid of the future. Call it FUD or whatever, I don't care.
This was a hardware- and OS-level problem first. All of that had to be solved before higher-level abstractions through languages like Go and JavaScript could tackle it. The author skipped this entirely.
It crept up on us over the last decade, but the US is not the technological powerhouse it once was. It's not just that it's so sad here; overall, America is a declining country, losing dominance along every possible vector.
We remain dominant in aerospace and computer science, but we're losing our edge. And for computer science, aka programming, the techniques are easily learned and replicated, so having an edge here doesn't really mean shit. Not to mention a good portion (aka the majority) of the top CS engineers are either Indian or Chinese.
IQ in the US has also been declining over the last two decades. It's all going down. This article shouldn't be about a contrast between a great country and happiness; it should be about the overall decline of an empire and a new one that may or may not take its place (China).
F-35 isn't a deterrent. Nukes are the deterrent. Iran and Venezuela lacked nukes. North Korea doesn't lack nukes.
The F-35 is just peacocking, but ultimately useless. If these war games were realistic, the game would end on the first move, which is asking the question: "Do they have nukes?" If the answer is yes, the game doesn't even start.
Nuclear weapons are a deterrent against somebody invading the US (or another NATO country), but that doesn't make conventional forces not a deterrent against other kinds of aggression. Many attacks have been made against the US without resulting in nuclear retaliation, like 9/11.
India and Pakistan have nukes and have fought each other recently, so your assertion that "has_nukes() == no_game_start()" is *false*. Nukes, however, probably will deter India from doing the full-Putin into Pakistan.