vitehozonage's comments | Hacker News

I think I disagree with the article. It is true that if you choose to expend time and energy on something that few people spend effort on, you can become better than most people at that one thing. However, the article seems to be saying that this holds for everything and for everyone, and I disagree with that.

The missing key factor is that you have to find something unpopular and easy which will actually have a payoff if you become an expert. Risky and easier said than done.

If you read a few books on mathematics, do you think you're easily going to become one of the top mathematicians? Many ambitious people try to study math and decades later are disappointed by how they are still mediocre in their field, or simply fail to make it into an academic career. Many PhDs in general, actually.


>If you read a few books on mathematics, do you think you're easily going to become one of the top mathematicians?

No - but you will easily become more educated in math than most people. 99.9% of people couldn't tell you the difference between a derivative and an integral.

It's not about becoming an expert. You don't need to be the best in the world to be usefully good at something.


I have an engineering degree and I can’t remember the difference since I have literally never needed either since school.

Just like the examples in the post itself: sure, you can get better fast, but if no one cares, why bother? Oh, you got a bit higher Elo in chess, nice, what now? Now your average friends don't want to play you and you aren't good enough to beat anyone at the chess club. So either you quit or you have to dedicate yourself.

Same with the shooter game. Do you really want to play a video game competitively? If yes, then go ahead, but you need to do more than just play to get good, which again probably means switching friend circles.


You’re right, you should just be bad at everything. That’s a much better way to live your life.


Just because I don't want to get better at a game doesn't mean I want to be, or am, bad at anything, much less everything.

What I am trying to say is that you should get better at things that matter to you. I am sure you could get better than most people at making sand castles out of feces real fast, but is that something you should do?


>> 99.9% of people couldn't tell you the difference between a derivative and an integral.

> I have an engineering degree and I can’t remember the difference

Really? I realized I forgot the mechanics of computing the closed-form solutions (like you, I used this type of calculus for maybe four years of my life), but the idea of derivatives being the rate of change stuck with me.


And integrals are the sum of things.
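
(For anyone whose memory is similarly rusty, the standard definitions make the contrast explicit: the derivative as a rate of change, the integral as an accumulated sum, and the fundamental theorem of calculus tying the two together.)

```latex
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
\qquad
\int_a^b f(x)\,dx = \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i)\,\Delta x
\qquad
\int_a^b f'(x)\,dx = f(b) - f(a)
```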


The difference is between getting into the top 10% vs getting into the top 0.1% (or add as many zeroes as you want).

Top 10% just takes a bit of intentional work, rather than just chasing dopamine hits you might get as a casual. And that’s why the 90% aren’t losers or suckers… they just have different priorities.


> If you read a few books on mathematics, do you think you're easily going to become one of the top mathematicians? Many ambitious people try to study math and decades later are disappointed by how they are still mediocre in their field, or simply fail to make it into an academic career. Many PhDs in general, actually.

I don't know about being one of the top mathematicians, but I'd argue that actually fully reading a few graduate-level technical books is more than even most PhDs do.

I was once a PhD student in theoretical physics myself and I'd say that we mostly skim over the books or read only the sections that are immediately and obviously relevant to us.

I once read one of the shorter known-to-be-difficult books of my field fully, from cover to cover, and worked out most of the exercises in it. After this exercise, I realized that I immediately had a much better understanding of the somewhat foundational things described in the book than many of the more senior researchers had. And this was a book that everyone in my field knows, but that apparently no one actually reads.

The reason no one actually reads the difficult books, even when half of their job is reading them, is that it's harsh, gruelling work.

So yeah, maybe you won't become the next Terence Tao by reading three or four graduate-level mathematics books, but you can get pretty good if you actually do it seriously, without any cheating or skimming.


While what you said may be true simply by virtue of graduate-level textbooks being so dense, I think GP wanted to imply that "just reading a moderate amount of mathematics" isn't sufficient to get anywhere. I would say that 3-4 entire graduate-level textbooks (which you wouldn't understand anyway without having done the undergraduate work beforehand) is much more than "a moderate amount".


> The missing key factor is that you have to find something unpopular and easy which will actually have a payoff

Why?

I found the article refreshing precisely because it didn't assume you're doing it for the money, or otherwise insisting that a hobby only makes sense if you make a business out of it.

I take this article as a reminder of how little time and effort it takes to achieve basic proficiency in just about anything.

I'm fond of the view that the 80/20 rule applies recursively: you can get 64% of the value for 4% of the effort, or 51% of the value for 0.8% of the effort. Applied to the "deliberate practice" meme, that gives you 51% of the value for 80 hours of deliberate practice. Sounds absurd, but then most people have never done 80 hours of deliberate practice in anything at all.

Personally, I round this up and call it a "10-100-10k framework": 10 hours of deliberate-ish practice is not that big of a sacrifice to pick up some random, specific skill, and you can go surprisingly far with it. 100 hours should give you competence, a good investment for the few things that matter to you daily. 10k is for the stuff you want to be a world-class expert in.
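
The arithmetic behind that, as a quick sketch (plain Python; the 10,000-hour "mastery budget" is just the usual deliberate-practice folklore number, used here for illustration):

```python
# Recursive 80/20: each extra application of the Pareto rule keeps
# 80% of the remaining value for 20% of the remaining effort.
def pareto(n: int, mastery_hours: float = 10_000) -> str:
    value = 0.8 ** n   # fraction of total value captured
    effort = 0.2 ** n  # fraction of total effort spent
    return (f"{value:.1%} of the value for {effort:.2%} of the effort "
            f"(~{effort * mastery_hours:.0f} hours of a 10k-hour budget)")

for n in range(1, 4):
    print(pareto(n))
# 80.0% of the value for 20.00% of the effort (~2000 hours of a 10k-hour budget)
# 64.0% of the value for 4.00% of the effort (~400 hours of a 10k-hour budget)
# 51.2% of the value for 0.80% of the effort (~80 hours of a 10k-hour budget)
```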


> If you read a few books on mathematics, do you think you're easily going to become one of the top mathematicians?

No, but so what? The guy behind 3Blue1Brown probably isn't one of the top mathematicians of his era. But he's having quite an impact. He turned explaining fairly basic concepts in mathematics into a lucrative job.

And who wrote the textbooks you're referring to? Probably not any of the top 10 living mathematicians. That doesn't make the work less useful.

Is Linus Torvalds one of the top 10 computer scientists? He probably wouldn't describe himself that way, and respected academics mocked his work. The list goes on. I think this is compatible with the premise of the article: it's not about being best, it's about being better than the average bear - and then putting that knowledge to some productive use.


It's about combinations of skills.

0.01%ers in one field tend towards monomaniacal obsession.

Sometimes that's useful. But having mostly depth, and enough breadth to balance it out, is better in most cases than depth only.

People who are all breadth, no depth are worse. Those traits give you MBAs and politicians. That doesn't mean breadth is inherently bad, it's about balance.

The sweet spot is typically to get inside the top 1% at one thing and pair it with 75th- or 90th-percentile people skills or communication skills. Those can take a lot of different forms; good writers / managers / youtubers / teachers are all in that class, but there's not necessarily that much overlap.


All those people you mention have studied mathematics for a long time. Grant Sanderson has probably studied it the "least", but that still means a BSc in his case, which is not something you accomplish in a weekend. The textbook authors are usually professors.


The parent's quip was that you can't rise to the top of the field. My point is that it's irrelevant.

Textbooks used in college coursework are usually written by academics, for obvious reasons. Plenty of independent learning / pop textbooks are written by "normal people" who aren't tenured professors.


It's not about being in the top of the field. Even just getting to the point of being able to halfway competently talk about mathematics takes a grueling amount of work.

> Plenty of independent learning / pop textbooks are written by "normal people" who aren't tenured professors.

Idk, Simon Singh is one very well-known example of "pop maths", and he has a PhD in particle physics; that's quite a bit more than just "do the reading for a year". Other examples like Eugenia Cheng and Ian Stewart are similarly credentialed.

It's the same in other fields, really. Yes, you can learn to play simple tunes on the piano relatively quickly, but being able to play the full Für Elise takes so much more than that. Or you can learn Hiragana relatively quickly, which I guess means you understand more about Japanese than 95% of people, but it still won't be enough for you to engage with Japanese in any meaningful way.

The beginner stage is always the easiest. It's after that that it gets really hard.


Plus, the premise rests on a broken assumption about how distribution and goals differ between social media and in-person activity.

A friend group has 20 people and 3 making an effort. It's easy to stand out. A sports club has 200 members and 5 stars. Social media has 120 million users. And a screen shows 2 posts at a time. Maybe 10-ish if you're using a non-stupid version of a platform.

A lot of people just want the validation of low-stakes engagement. Likes. Upvotes. Views. Shares. Things that happen absentmindedly or even automatically. Just a small indicator that people saw you, and they are generally satisfied even with relatively little of it. So what if a tiny fraction of accounts gets 99 bajillion of the 100 bajillion total likes? The remaining bajillion still means a small-time artist who just wants to show their latest work gets 100 people who like it and 2 or 3 who comment.

And that's all they'll realistically amount to while being VERY active. Falling for the trap of thinking "it just takes participation to get there" flies straight in the face of the fact that it's a competition of scale.

Speaking of which, so is job hunting. Recruiters even get spammed on the phone. I wish job listings had to be on public platforms with a lot more rules honestly.


Why does it matter if it's popular? This is neoliberal bullshit. It's not a competition if you want to know more and then go and acquire that knowledge.


Soccer


>there are a number of places where we collect and share some data with our partners, including our optional ads on New Tab and providing sponsored suggestions in the search bar

Mozilla should commit to stopping anything like that. Then we could have a nice, clear Terms of Use that promises not to sell data. I think that would alleviate community concerns.


Likewise. It is fascinating to me that people seem to assume this.

I suspect it is an intentional result of deceptive marketing. I can easily imagine an alternative universe where different terminology was used instead of "AI", without the sci-fi comparisons, and barely anyone would care about the tech or bother to fund it.


> I suspect it is an intentional result of deceptive marketing

I mean, certainly people like Sam Altman were pushing it hard, so it's easy to understand how an outside observer would be confused.

But it also feels like a lot of VCs and AI companies have staked several hundreds of billions of dollars on that bet, and I still don't see what the inside players, who should (and probably do!) have more knowledge than me, see. Why are they dumping so much money into this bet?

The market for LLMs doesn’t seem to support the investment, so it feels like they must be trying to win a “first to AGI” race.

Dunno, maybe the upside of the pretty unlikely scenario is enough to justify the risk?


> I still don't see what the inside players, who should (and probably do!) have more knowledge than me, see.

Sam Altman is a very good hype man. I don’t think anyone on the inside genuinely thinks LLMs will lead to AGI. Ed Zitron has been looking at the costs vs the revenue in his newsletter and podcast and he’s got me convinced that the whole field is a house of cards financially. I already considered it much overblown, but it’s actually one of the biggest financial cons of our time, like NFTs but with actual utility.


Re: Ed Zitron - here is his recent piece that the parent is referencing: https://www.wheresyoured.at/wheres-the-money/

If you find yourself agreeing, I highly recommend subscribing to his newsletter.


You overestimate the intelligence of venture capital funds. One look at most of these popular VC funds like A16z and Sequoia and you will see how little they really know.


If the bar for AGI is "as smart as a human being," and humans do not-very-smart things like invest obscene amounts of money into developing AGI then maybe it's actually not as high of a bar as we assume it is.


What’s the quote?

“A person is smart. People are dumb, panicky dangerous animals and you know it.”

If AGI wants to hit human level intelligence, I think it’s got a long way to go. But if it’s aiming for our collective intelligence, maybe it’s pretty close after all…


The thing is, it has zero intelligence. It has only knowledge.


It has pattern-matching. It doesn't have knowledge in the way a human has knowledge, through building an internal model of the world where different facts are integrated together. And it doesn't have knowledge in the way a book has knowledge either, as immutable declarative statements.

It is still interesting tech. I wish it were being used more for search and compression.


Venture capital bets on returns. It's not about some objective and eternal value. A successful investment is just something that another person will buy from you for more.

So yep, a lot of time, they bet on trends. Cryptocurrencies, NFTs, several waves of AI. The question is just the acquisition or IPO price.

I don't doubt that some VCs genuinely bought into the AGI argument, but let's be frank, it wasn't hard to make that leap in 2023. It was (and is) some mind-blowing, magical tech, seemingly capable of far more than common sense would dictate. When intuition fails, we revert to beliefs, and the AGI church was handing out brochures...


> it wasn't hard to make that leap in 2023

It... does seem hard to make that leap to me. I mean, again, to a casual and uncritical outside observer who is just listening to and (in my mind naively) trusting someone like Sam Altman, it's easy, sure.

But I think for those thinking critically about it... it was just as unjustified a leap in 2023 as it is today. I guess maybe you're right, and I'm just really overestimating the number of people that were thinking critically vs uncritically about it.


They also learned a long time ago not to evaluate the underlying product. Some products that the big players passed on went on to become huge. So they learned to evaluate the founders instead. They go by social proof, and that's how they were conned into the massive bets made on LLMs.


> But it also feels like a lot of VCs

They only need to last until the exit (potentially next round).

> The market for LLMs doesn’t seem to support the investment

i.e. it doesn't matter as long as they find someone else to dump it to (for profit).


At least part of the reason is strategic economic planning. They are trying to build a 21st century moat between the US and BRICS since everything else is eroding quickly. They were hoping AI would be the thing that places the US far out of reach of other countries, but it's looking like it won't be.


Alternately everyone is just trying to ensure they have a dominant position in the next wave. The history of tech is that you either win the next wave or become effectively irrelevant and worthless.


And you can win the next wave by holding stocks in AI companies which aren't AGI but do have a lot of customers, or an interesting story about AGI in two years to tell IPO bagholders...


> But it also feels like a lot of VCs and AI companies have staked several hundreds of billions of dollars on that bet, and I’m still… I just don’t see why the inside players—that should (and probably do!) have more knowledge than me—see. Why are they dumping so much money into this bet?

I mean, see also AR/VR/Metaverse. My suspicion is that, for the likes of Google and Facebook, they have _so much money_ that the risk of being wrong about LLMs exceeds the risk of wasting a few hundred billion on LLMs. Even if Google et al don't really think there's much to LLMs, it's arguably rational for them to pump the money in, in case they're wrong.

That said, obviously this only works if you’re Google or similar, and you can take this line of reasoning too far (see Softbank).


It's text. Seeing words written down is like a hack for making humans treat something as profound.

People were declaring ELIZA was intelligent after interacting with it and ELIZA is barely a page of code.


I found the article very long and uninteresting, and I also never found the "nightmare" part; maybe I missed it.


Perhaps you think all PoW algorithms are still crackable by ASICs? That used to be the case, but some years ago Monero developers made a breakthrough with RandomX. It is no longer true that a GPU or ASIC can outperform a typical consumer device to the extent you seem to imagine. The Tor project uses a similar algorithm, I think with one of the same developers who contributed to RandomX. It is nothing like Bitcoin's SHA-256 PoW; with that, an ASIC's performance does indeed make a consumer PC completely useless at the algorithm.


Will RandomX work on the old cell phones, via Javascript interface only?

The website says: "Fast mode - requires 2080 MiB of shared memory. Light mode - requires only 256 MiB of shared memory, but runs significantly slower"

If you want your website challenge to work on a cheap phone (slow CPU, little memory, implemented in JavaScript only), you'd have to tune the complexity way down. And when a modern PC with a fast CPU and tons of memory tries to solve it... it will probably take only a few milliseconds, making it basically useless.
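
For illustration, here is a minimal hash-based client puzzle with a tunable difficulty knob. This is a generic sketch of the idea, not RandomX itself; RandomX's memory-hard design is exactly what narrows the hardware gap being discussed here:

```python
import hashlib
import os
import time

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(challenge || nonce) has
    `difficulty_bits` leading zero bits; expected cost ~2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

challenge = os.urandom(16)
for bits in (8, 16, 20):  # the server-side tuning knob
    t0 = time.time()
    solve(challenge, bits)
    print(f"{bits} bits: {time.time() - t0:.3f}s")

# The parent's dilemma in numbers: a difficulty a cheap phone can solve
# in a few seconds is near-instant for a fast desktop. Memory-hard
# designs like RandomX shrink that gap but don't remove it entirely.
```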


I don't know; I don't understand the details, and your reasoning is confusing to me. My understanding is that the effectiveness of particular hardware is complex to predict: it depends on the sizes of the CPU caches and the efficiency of certain instructions, and the algorithm can of course be tuned in all sorts of ways. The Tor project is already using it, so presumably it is working for them to some extent. More info here: https://blog.torproject.org/introducing-proof-of-work-defens...


You might want to try Mullvad Leta; it's what I use for this issue. I would try Kagi if it could be used privately, but I suppose it still requires an account and has no way to pay privately.


Exactly what I thought too.

For at least 10 years now, with targeted advertising, it has been completely normalised and typical to use machine learning to intentionally, subliminally manipulate people. I was taught less than 10 years ago at a top university that machine learning was classified as AI.

It raises many questions. Is it covered by this legislation? Other comments make it sound like they created an exception, so it is not. But then I have to ask, why make such an exception? What is the spirit and intention of the law? How does it make sense to create such an exception? Isn't the truth that the current behaviour of the advertising industry is unacceptable but it's too inconvenient to try to deal with that problem?

Placing the line between acceptable tech and "AI" is going to be completely arbitrary and industry will intentionally make their tech tread on that line.


You seem to have the timing quite different from my memory. I remember when there were fewer than 1000 cases worldwide (I remember clearly because I developed a tracker that kept count even then). That was when I personally was scared, because we knew nothing about it. But during that time, the mainstream attitude was that nobody cared. Infected Chinese people were allowed to fly all over the world, with zero controls, for I think over a month, maybe two months, after it began spreading.

Then in February we got strong data showing it had a low fatality rate and mostly threatened only old people, similar to the flu, for example from the Diamond Princess cruise ship, which provided a clear view since it was a closed environment with a lot of older people where everyone could be tested. Only after this did the lockdowns start, and then continue for years.

So I think the response was catastrophic. There was no response at the time when it was an unknown disease and could have been an existential threat for all we knew. Then there was an irrational over-response that lasted very long, even when we had strong data indicating it wasn't that much worse than the flu. So now people won't even take it seriously or trust any messaging next time there is a potential pandemic. It is difficult to imagine a worse response or outcome.


You are misremembering. The “irrational over-response” came once we realized just how incredibly contagious the disease was, since it ran the risk of completely overwhelming hospitals and causing a collapse of health care systems. (And came very close in some places such as Italy.)


I disagree. I looked up dates to verify. I suppose if you got your information from mainstream media it would seem like it went as you say. But it is a fact that the spread started in December 2019 [1], and the Diamond Princess data was available in February [2], and lockdowns started in March [3]

1: https://en.wikipedia.org/wiki/COVID-19_pandemic

2: https://en.wikipedia.org/wiki/COVID-19_pandemic_on_Diamond_P...

3: https://en.wikipedia.org/wiki/COVID-19_lockdowns


I don't see how this contradicts what I said. Serious lockdowns started in response to hospitals getting swamped, too late to actually stop the virus but just in time to barely keep health care afloat. As such, I fail to see how you could call such a response "irrational" unless you've memory-holed the corpse trucks, treatment tents, and so on.


Often. It is becoming a big problem. In the last couple of days YouTube has become almost unusable. Reddit is usually blocked. Twitch shadowbans chats. A lot of random, unexpected sites block it, like wiki pages. Captchas make Google too annoying to use (Mullvad has its own search proxy called Leta, but it lacks features like suggested corrections to the search). I still always use it and persist despite this. Often hopping servers will work, but I might have to try 5+ different servers.


I'm a native English speaker, but I have no idea what you mean by that, since the words are almost synonyms.


I meant that it generates curiosity, but it does not satisfy it.

