
What country are you referring to? The American middle class is objectively very wealthy: https://www.noahpinion.blog/p/no-the-us-is-not-a-poor-societ...


That article is based on 1) disposable income, which is the wrong metric because it doesn't account for the much higher health care and education expenses in the US than in places like the EU or Japan; 2) housing size, which is driven by population density (Western Europe has nearly 5 times the population density of the USA) and by the fact that US cities are built around cars; and 3) vehicle ownership, which is both a cultural hallmark of the US and a necessity - you can't really live without a car (two if you're married) given the way US cities are built.

In short, laziest article ever.


> Previous generations of neural nets were kind of useless. Spotify ended up replacing their machine learning recommender with a simple system that would just recommend tracks that power listeners had already discovered.

“Previous generations of cars were useless because one guy rode a bike to work.” Pre-transformer neural nets were obviously useful. CNNs and RNNs were SOTA in most vision and audio processing tasks.


Language translation, object detection and segmentation for autonomous driving, surveillance, medical imaging... there are indeed plenty of fields where NNs are indispensable.


Yeah, give 'em small constrained jobs where the lack of coherent internal representation is not a problem.

I was involved in ANN and equivalent based face recognition (not on the computational side, on the psychophysics side) briefly. Face recognition is one of those bigger, more difficult jobs, but still more constrained than the things ANNs are useful for.

As far as I understand, none of the face recognition algorithms in use these days are ANN-based; instead they are computationally efficient versions of brute-force mathematical approaches.


> Current AI systems have no internal structure that relates meaningfully to their functionality

In what sense is the relationship between neurons and human function more “meaningful” than the relationship between matrices and LLM function?

You’re correct that LLMs are probably a dead end with respect to AGI, but this is completely the wrong reason.


Yeah, the internal representations of organic neural networks are also weird - check out the signal processing that occurs between the retina and the various parts of the visual cortex before any decent information can emerge from the signal; David Marr's 1982 book Vision is a mathematically chewy treatise on this. This leads me to think that human intuition may well be caused by different neural network subsystems feeding processed data into other subsystems, from which consciousness, and thus intuition and explanation, emerge.

Organic neural networks are pretty energy efficient by comparison - although still fairly inefficient compared to other body systems - so there is the capacity to build things out to the scale required, assuming my read on what's going on there is correct. So it's not clear to me that the energy inefficiency of ANNs can be resolved well enough to allow these multiple quasi-independent subsystems to be built at the scale required - not even if those interesting-looking ternary neural nets, whose matrix operations reduce to addition rather than multiplication, come to dominate the ANN scene.
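A minimal sketch of why ternary weights cut out multiplications: if each weight is constrained to {-1, 0, +1}, a matrix-vector product reduces to adding inputs where the weight is +1 and subtracting where it is -1. The matrix and vector below are illustrative, not from any specific model.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.choice([-1, 0, 1], size=(4, 8))   # ternary weight matrix (illustrative)
x = rng.standard_normal(8)                 # input activations

y_mul = W @ x                              # conventional multiply-accumulate

# Addition-only equivalent: add inputs at +1 weights, subtract at -1,
# skip at 0 - no multiplications needed.
y_add = np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

assert np.allclose(y_mul, y_add)
```

The energy argument is that on hardware, additions are substantially cheaper than floating-point multiplications, which is what makes this reformulation attractive.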

While I was thinking this comment through, I realised there's a possible interpretation wherein human-activity-induced climate change is an emergent property of the relative energy inefficiency of neural architectures.


Human intelligence has a track record of being useful for thousands of years.


The neurons are always learning whereas the matrices don't change.


I mean, the matrices obviously change during training. I take it your point is that LLMs are trained once and then frozen, whereas humans continuously learn and adapt to their environment. I agree that this is a critical distinction. But it has nothing to do with “meaningful internal structure.”


Now do drug-development.


If you take a vote of 10 random people, then as long as their errors are not perfectly correlated, you’ll do better than asking one person.

https://en.m.wikipedia.org/wiki/Ensemble_learning
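The majority-vote claim above can be checked analytically: if each of n independent voters is correct with probability p, the chance that a strict majority is correct follows a binomial sum. The per-voter accuracy of 0.7 is just an illustrative assumption.

```python
from math import comb

def majority_accuracy(n: int, p: float) -> float:
    """Probability a strict majority of n independent voters is correct,
    when each is right with probability p (errors uncorrelated).
    For even n, a tie counts as wrong."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

single = 0.7                               # assumed per-voter accuracy
group = majority_accuracy(10, single)      # ~0.850 for 10 voters
print(f"one voter: {single:.2f}, majority of 10: {group:.3f}")
```

With uncorrelated errors the group beats any individual; as errors become perfectly correlated, the advantage vanishes, which is exactly the caveat in the comment above.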


If we did this fifty years ago, we’d be having this discussion by snail mail.


Email was invented gratis by government and university employees who were largely paid by the public. Email worked fantastically for decades before the private sector monetized it with spam and CTAs to drive more interactions.


No, we might be having this discussion on Mars instead of on iPhones.

None of our elites care about: a) 99% of us living a meaningful life b) technologically moving humanity forward (supports bullet a)

Install elites that care about those and then you can use any measure you want, any system you want, etc. Instead we’ve replaced god with money. Only an upgrade for the 1%.


And so what if we did, if it meant Americans were better off economically?

I can tell you I am no happier now that I have email than I was back in the snail mail days. In fact, if anything it's a negative.


And so what if we did, if it meant Americans were better off economically?


There is no epistemological collapse. Access to accurate information has never been so fast nor so easy. To be sure, lies are spread on the internet - but people believed all sorts of bullshit before the internet. Those who want to claim there is a crisis don’t have a principled argument as to how things are worse.


I frequently hear that there are more lies and that they spread faster in 2024 than in 2014, and certainly faster than in 1994.


But surely there are also more truths, and they spread faster than ever before? The amount of lies has increased, but so has the amount of information in general; any question you have can be answered within 10 seconds.


I implore everyone reading this to google the Sokal hoax before deciding whether these guys are worthwhile.


Michel Desmurget is a well-respected neuroscientist working in research in France, so the Sokal thing is totally irrelevant to him, presumably? https://en.wikipedia.org/wiki/Michel_Desmurget

https://en.wikipedia.org/wiki/Sokal_affair - for the curious.

The Sokal affair is a funny thing yes, which I'd seen before (and I presume many people here are familiar with). I don't see how it's relevant here?

I mean - it was one journal, in 1996, that had no peer review process, that published a fake article someone sent in to prove a point that the journal publishes at least some crap...

What should we reject based on that, in your opinion? All cultural and media studies, presumably, at the very least - that's what you seem to be suggesting. And every philosopher too? The logicians? The linguists? All of social science? Economics too? Is it just STEM-type stuff that's acceptable then?

Seems preposterous to me. The soft sciences are looser, and definitely have a higher proportion of hand-wavy nonsense, but rejecting them all to avoid stuff you don't like is just silly. Learning to avoid the crap and find the good stuff is pretty much the same skill as in other fields.

And often, anecdotally, it seems to me that the more interesting figures in the experimental sciences tend to be very intrigued by the softer, arguably sometimes "trickier" questions that the non-STEM sciences can explore.


Concrete Island by JG Ballard

Libra by Don DeLillo

Deep Water by Patricia Highsmith


Assuming you value 10 days at the cost of 1-2 AC units.


If it saves you a single $8k "we have to replace the entire thing" grift, those 10 days can be valued at about a $200k salary ($4k a week).

