Not sure that disproves the point :) Most people have never been anywhere close to competing with the top 6 athletes at a high school with ~2k students.
OK, so let's do the math. There are about 25k high schools in the USA. Let's suppose they all have a track team, and assume each has 5 team members who can break 04:30 for 1600m. Sure, at some schools that's too few, but at others it's too many.
That gives us 125k high schoolers in the USA who can break 04:30 for 1600m. There are about 18M high school students, so about 0.7% of the high school population alone can do this.
Assuming there are 4x as many adults who can do this as there are high school students, that gives us slightly less than 0.2% of the total US population capable of it.
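The arithmetic above can be sketched in a few lines. The school counts and the 4x adult multiplier are the comment's own assumptions; the ~330M US population figure is my assumption, since the comment doesn't state one.

```python
# Back-of-the-envelope estimate of sub-4:30 1600m runners in the USA.
high_schools = 25_000          # approx. number of US high schools (assumed)
runners_per_school = 5         # assumed runners per school who break 4:30
hs_students = 18_000_000       # approx. US high school students
us_population = 330_000_000    # approx. US population (my assumption)
adult_multiplier = 4           # assumed 4x as many capable adults as students

hs_runners = high_schools * runners_per_school       # 125,000 high schoolers
hs_share = hs_runners / hs_students                  # share of HS population
total_runners = hs_runners * (1 + adult_multiplier)  # students + adults
total_share = total_runners / us_population          # share of everyone

print(f"{hs_share:.2%} of high schoolers, {total_share:.2%} of the population")
```

Run as written, this prints roughly "0.69% of high schoolers, 0.19% of the population", matching the ~0.7% and just-under-0.2% figures above.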
We just have different ideas of what constitutes "mere mortals." 1 in 150 high school students, or even 1 in 500 from the general population, doesn't sound superhuman to me at all. Talented, yes, but not godlike.
It's kind of an aside in the post, but connecting LLMs and Searle's Chinese Room argument is a brilliant observation. Although there are people who believe LLMs are really thinking, mostly it confirms that the Turing test wasn't the right way to test for this.
Understanding LLMs requires a few pieces of knowledge that are not very common in the industry: not only ML stuff, but I think having worked with NLP in the past helps a lot.
I fucked around with Dissociated Press 30 years ago just to see what would happen if you fed it a combination of random chapters from Alice in Wonderland and the Book of Revelation. This hardly represents insider knowledge.
I don't doubt that number, but it's always a bit baffling to look at the median income in expensive cities. New York city's median household income is $87k, which means that the majority of households are well below the income level it takes to live there.
This baffles me too. I don’t understand how “normal” people let alone lower income people live in places like SF/SV, NYC, etc. The math doesn’t math. Yet these cities have these people and could not function without them.
People making $80-90K can live a lifestyle similar to people making $125K+; they just aren't saving any money. I know people who do this, living their whole lives with less than $5k in the bank.
I worked at a company once that posted H-1B jobs on a piece of paper on a board next to a restroom at the office. That was technically a publicly accessible area (if you had a guest pass).
IMHO, people making claims should provide the evidence for them. One link is behind a paywall and the other clearly states that it is making informed speculations.
I could make all sorts of claims on the spot here. It doesn't create a duty for people reading this thread to go investigate them.
You're so close. Just one more step, and it's an easy one: stop keeping it hypothetical.
<SPOILER>
Then it certainly does not create a duty for people to go investigate, when the only difference is "someone replied telling someone to fact check"
</SPOILER>
You're the one in this thread claiming people are responsible for "going and finding the evidence" of other people's unsourced claims. You could have just not replied since you didn't have something to contribute.
I apologize for not quoting you directly: “Then go get some!” That's what you said in response to there being no evidence. Would you like a link to your comment?
"People are responsible for going and finding the evidence" and "Then go get some!" are not paraphrases of each other. They don't share a single word, or advance a similar idea. I am uncertain linking comments can change that.
I'm not sure what's going on: "People are responsible for going and finding the evidence" and "Then go get some!" are not paraphrases.
The best steelman I can come up with is that you're seeing deep red, so it's hard to see that "Then go get some!" is suggesting he could fact-check his own question instead of asking the room to do it for him.
Which is the opposite of your characterization that I think people are responsible for investigating strangers' unsourced claims. We violently agree, not disagree.
Making this exchange all the curiouser.
Are you inebriated? I only ask because it's unusual to see someone on HN choosing to say obviously incorrect things, aggressively, on purpose, just to talk down to someone. Much less making bullying attempts based on comment history.
Program generation from a spec meant something vastly different in 2007 than it does now. People can generate, and are generating, programs from underspecified prompts. Trying to be systematic about how prompts work is a worthwhile area to explore.
Sure, but Joel isn't saying that's impossible or that people who do that are crackpots. In fact, he was an advocate of writing specs ahead of time [1] - for people.
At the time "generating a program from a spec" was an idea floating around that you could come up with a "spec language" that was easier than regular programming languages but somehow still had the same power and could be compiled directly into a program. That's the crackpot idea that Joel is referencing - but that's not what a spec language used with an LLM is doing.
This is an excellent observation and puts into words something I have barely scratched the surface of. Along with specifications, formal verification is another domain that received the "just automate it" treatment in the before times.
And because formal verification with LLMs is an active area of open research, I have some hope that the old idea of automated formal verification is starting to take shape. There is a lot to talk about here, but I'll leave a link to the 1968 NATO Software Engineering Conference [1] for those who are interested in where these thoughts originated. It goes deeply into the subject of "specification languages" and other related concepts. My understanding is that the historical split between computing science and software engineering has its roots in this 1968 conference.
Might look like it, might also just be survivorship bias. A lot of crackpot ideas hit the wall instead of being a success. We only notice the successes and might think of them as the default, not the exception.
I was commenting from that perspective: basically anything we consider today to be “the way it's done” was once something only crazy people did. I think maybe it was pg who said something like: if you're only working on safe things you'll never have a breakthrough, because if breakthroughs came from safe ideas then there would be more of them. I'm not saying every crazy idea changes the world, but if you want to change the world you need a crazy idea.
LLMs were trained on stuff that people wrote. I get there are "tells", but don't really think people are as good at identifying AI generated text as they think they are...
I wouldn't have picked this article as AI until I got an agent to do some writing for me and read a bunch of it to figure out if I could stand behind it. Now I see the tells everywhere; "It's not this. It's that." is particularly common, and I can't unsee it. (FWIW I rewrote most of the writing it generated, but it did help me figure out my structure and narrative.)
The problem, I think, with AI-generated posts is that you feel like you can't trust the content once you know it's AI. It could be partly hallucinated or misrepresented.
Yeah, but "it's not X. It's Y" is a common idiom that LLMs picked up from people. That's the point i was making. And it's starting to feel like every post has at least one comment claiming that it was AI generated.
Good chunks of the article don't trigger this for me, but I would bet money on the final paragraph involving AI:
> That's not a technical argument. It's a values argument. And it's one that the filesystem, for all its age and simplicity, is uniquely positioned to serve. Not because it's the best technology. But because it's the one technology that already belongs to you.