The reality is that capable LLMs, combined with some form of knowledge retrieval, are coming close to the idealized individual tutor. It's also a daily reminder of what studies have shown: individual tutoring is objectively the most effective way to educate people:
Personal tutoring and coaching are basically mandatory for mastery. Name a professional concert pianist or athlete who doesn't have one. I act as a personal tutor for comp sci students and I'm envious of them. I didn't have one, and I think it really limited my growth.
ChatGPT does tutoring just fine. I've had it draw up a lesson plan for me and execute it with hardly any special prompt engineering at all, just something like: "Please tutor me on French adverbs; please start by asking me a few questions to find out what I already know," and it dialed in fairly well to my level.
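For anyone wanting to reproduce this, here's a minimal sketch of that tutoring prompt wired up for the `openai` Python client. The system prompt wording, helper name, and model choice are my assumptions, not the commenter's exact setup:

```python
# Sketch of a level-adjusting tutoring prompt (hypothetical wording/model).

def tutoring_messages(topic: str) -> list[dict]:
    """Build the chat messages for a tutoring session that starts by
    probing what the student already knows."""
    return [
        {
            "role": "system",
            "content": (
                "You are a patient personal tutor. Adjust the difficulty "
                "of your explanations to the student's demonstrated level."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Please tutor me on {topic}. Please start by asking me a "
                "few questions to find out what I already know."
            ),
        },
    ]

# To actually run a session (requires OPENAI_API_KEY to be set):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",  # assumed model name
#     messages=tutoring_messages("French adverbs"),
# )
# print(reply.choices[0].message.content)
```

The point is that almost all of the "tutoring" behavior comes from the plain-language instruction to probe the student first; no elaborate prompt engineering is needed.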
Thank you; updated indirect references to direct ones. I know it's un-HN, but Jesus Christ am I tired of hearing this person's garbage quoted ad nauseam like a gospel
Jeez bro. This is a pretty intense reaction to a lukewarm and reasonable take. Personally I appreciate an "AI influencer" being down to earth and being willing to say that the technology isn't magic, amidst a huge amount of hype. If you think people are parroting swyx uncritically - that's hardly a criticism of swyx, is it?
I think you should keep reflecting on your realization about how people got swept up in the cryptoasset hype. You can believe this technology is promising and will improve dramatically without being a fanatic. You can disagree without going for the jugular.
> To hear: "the man talks about things he has no clue about with disturbing loudness"
> And turn it into: "that's actually not a criticism of the guy at all!"
Ironically, that's not at all what I wrote. You made statements such as these:
> But they read tweets like this, and they have the typical developer blindspot of not questioning motive enough and they believe it!
> I'm tired of hearing this person's garbage quoted ad nauseam like a gospel
And so I observed:
> If you think people are parroting swyx uncritically - that's hardly a criticism of swyx, is it?
To be clear, if you feel other people are repeating swyx uncritically, that is a criticism of those people.
When it comes to people being loud while not adding knowledge to the discussion: with all due respect, you should consider whether your own house is made of glass before casting that stone.
I apologize if my comments are frustrating for you to read; I am doing my best to give you useful feedback. You behaved in a pretty extreme way, which is not considered acceptable in this community (or most communities). And I take it you knew better:
> that's actually not a criticism of the guy at all!
> that's hardly a criticism of swyx, is it?
"Ironically, that's not at all what I wrote"
The most extreme idea in this thread is thinking anyone would consider your feedback, given the level of coherence you've shown. Maybe give it a rest?
It is well known that LLMs cannot reason transparently, nor can these black boxes explain themselves without regurgitating and rewording their output to sound intelligent; they are confident sophists, no matter what anyone tells you otherwise.
EDIT: This is the context before it was deleted by the grandparent comment:
>> i have yet to see any ai system properly implement individual level-adjusting tutoring. i suspect because the LLM needs a proper theory of mind (https://twitter.com/swyx/status/1697121327143150004) before you can put this to practice.
You're showing why I'm so annoyed by this perfectly!
It's malicious to rope theory of mind into justifying that point because it's just wrong enough.
If the reader doesn't think deeply about why on earth you would ever rope theory of mind into this, their brain will happily go down the stochastic parrot route:
"How can it have theory of mind, theory of mind is understanding emotions outside of your own, the LLM has no emotions"
But that's a complete nerdsnipe.
—
If instead you suspect this person's underlying motivation is not genuine intellectual curiosity, but rather to present a statement that is easily agreed with even at the cost of being wrong... you examine that comment at a higher level:
What is theory of mind adding here besides triggering the typical engineer's well-established "LLMs are over-anthropomorphized" response? Even in psychology it's a hairy concept that is not universally accepted or agreed upon!
Theory of mind gives you two things at the highest level:
- inward regulation: which is nonsensical for the LLM; you can tell it what emotion to output as, and it does not need theory of mind to act angry
- outward recognition: we've let computers do this with linear algebra for over two decades. It's what five of the largest companies in technology are built on...
—
Commentary like that account's is built on being just wrong enough:
You calmly state wild opinions. There are people who want to agree with any calm voice because they're seeking guidance in the storm of <insert hype cycle>. They invent a foothold in your wild statement, some sliver of truth they can squint and maybe almost make out.
Then you gain a following, which adds a social aspect: "If I don't get it, but this is a figurehead, I must be looking at it wrong." Now people are squinting harder.
This repeats itself until everyone has their eyes closed following someone who has never actually said anything with any intention other than advancing their own influence.
They don't care how many useful ideas die along the way; there's no intellectual curiosity to entice them to stumble upon something more meaningful. It just drains the energy out of what should be a truly rewarding time for independent thinking.
There was no need to delete the comment except that it was so trivially shown to be wrong; I didn't chase them to Twitter or anything.
But that's the MO for the tech grifter:
- you herd the few people who are unsure and will listen to any confident voice
- the people who know the most about <insert tech> tend to not like that, but when the herd is small just defer to their confrontations with humility and grace, and use that show of virtue to continue herding
- the more people you herd, the easier it is to get incrementally smarter people to follow: We're all subject to certain blindspots in a large enough crowd
- the more people who follow someone who's clearly wrong, the more annoyed people who are knowledgeable about <insert tech> will get about the grifter
- This makes each future confrontation more heated, so now the heated nature of the confrontation becomes justification to disengage without deferring. Just be confident and continue herding
- rinse and repeat until people who don't follow the grifter gospel are a minority.
—
By then, the actual VC dollars start chasing whatever story their ilk has woven. And eventually it all collapses, because there was no intellectual underpinning: just self-enrichment.
That realization from the crowd exhausts any good will that was left for <insert tech> and the grifters move on to the next bubble.
I share your frustration at those who confidently and prematurely write off rapidly-changing AI tech based on dated examples, cherry-picked anecdotes from the unskilled, and zero extrapolation based on momentum. They do a double disservice to those who trust them: first, by discouraging beneficial work on ripe, solvable challenges, and second, by encouraging complacency about rapid new capabilities that may leave vulnerable people at the mercy of others who were better prepared.
But, not being familiar with the account in question, I don't see those attitudes in that tweet. It reads more as an assessment that "no one has quite nailed this yet" than as defeatism over whether it's possible.
> i have yet to see any ai system properly implement individual level-adjusting tutoring. i suspect because the LLM needs a proper theory of mind (https://twitter.com/swyx/status/1697121327143150004) before you can put this to practice.
But to be perfectly transparent, I'd never respond so harshly to someone for just that tweet, or even that comment.
Instead it's the fact they're currently a synecdoche for the crypto-ization of AI. This person doesn't usually dismiss AI, instead they heavily amplify the least helpful interpretations of it.
—
This is one of the loudest voices behind the new "rise of the AI engineer" movement, in which this author specifically claimed researchers were now obsolete due to the tooling they themselves built: https://news.ycombinator.com/item?id=36538423
Like, I get wanting to make money by capturing value as much as the next person... but basing an entire brand on declaring that the people who are enabling your value proposition are irrelevant just to create a name for yourself is pointlessly distasteful.
The only thing he gained by saying researchers don't matter and understanding Attention doesn't matter is exactly what I described above: a wild opinion that attracted the unsure, pissed off the knowledgeable, and served as a wedge with which he could carve out increasingly large slices of the pie for himself.
Fast forward two months, and now the process has done its thing: the "AI Engineer" conference is being sponsored by the research-driven orgs because they don't want to be on the wrong side of the steamroller.
https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem