That article's comparison rests on three flawed metrics:

1) Disposable income - the wrong metric to use, because it doesn't account for the fact that the US has much higher health care and education expenses than places like the EU, Japan, etc.

2) Housing size - which is down to population density (Western Europe has nearly 5 times the population density of the USA) and the fact that US cities are built around cars.

3) Vehicle ownership - which is both a cultural hallmark of the US and a necessity, since you can't really live without a car (2 if you're married) in the US due to the way its cities are built.
> Previous generations of neural nets were kind of useless. Spotify ended up replacing their machine learning recommender with a simple system that would just recommend tracks that power listeners had already discovered.
“Previous generations of cars were useless because one guy rode a bike to work.” Pre-transformer neural nets were obviously useful. CNNs and RNNs were SOTA in most vision and audio processing tasks.
Language translation, object detection and segmentation for autonomous driving, surveillance, medical imaging... there are indeed plenty of fields where NNs are indispensable.
Yeah, give 'em small constrained jobs where the lack of coherent internal representation is not a problem.
I was briefly involved in ANN-based (and equivalent) face recognition - not on the computational side, on the psychophysics side. Face recognition is one of these bigger, more difficult jobs, but it's still more constrained than the things ANNs are being asked to do now.
As far as I understand, none of the face recognition algorithms in use these days are ANN-based; instead they are computationally efficient versions of brute-force-the-maths implementations.
Yeah, the internal representations of organic neural networks are also weird - check out the signal processing that occurs between the retina and the various parts of the visual cortex before any decent information can emerge from the signal. David Marr's 1980s book Vision is a mathematically chewy treatise on this. It leads me to think that human intuition may well be caused by different neural network subsystems feeding processed data into other subsystems, where consciousness - and thus intuition and explanation - emerges.
Organic neural networks are pretty energy efficient in comparison - although still decently inefficient compared to other body systems - so there is the capacity to build things out to the scale required, assuming my read on what's going on there is correct, that is. So it's not clear to me that the energy inefficiency of ANNs can be sufficiently resolved to enable these multiple quasi-independent subsystems to be built at the scale required. Not even if these interesting-looking ternary neural nets, which are based on matrix addition rather than multiplication, come to dominate the ANN scene.
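To make the addition-vs-multiplication point concrete, here is a minimal sketch (my own illustration, not from any particular paper): if weights are restricted to {-1, 0, +1}, as in ternary networks, a matrix-vector product reduces to sums and differences of inputs, with no multiplications at all. The function name is hypothetical.

```python
import numpy as np

def ternary_matvec(W, x):
    """Compute W @ x using only additions, assuming every entry of W is -1, 0, or +1."""
    out = np.zeros(W.shape[0], dtype=x.dtype)
    for i in range(W.shape[0]):
        # Add the inputs where the weight is +1, subtract where it is -1,
        # and simply skip the zeros - no multiplies needed.
        out[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
    return out

W = np.array([[1, 0, -1],
              [0, 1, 1]])
x = np.array([2.0, 3.0, 5.0])
print(ternary_matvec(W, x))  # matches W @ x: [-3.  8.]
```

That's where the hoped-for energy savings come from: adders are much cheaper in silicon than multipliers.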
While I was thinking this comment through, I realised there's a possible interpretation wherein human-activity-induced climate change is an emergent property of the relative energy inefficiency of neural architecture.
I mean, the matrices obviously change during training. I take it your point is that LLMs are trained once and then frozen, whereas humans continuously learn and adapt to their environment. I agree that this is a critical distinction. But it has nothing to do with “meaningful internal structure.”
Email was invented gratis by government and university employees who were largely paid by the public. Email worked fantastically for decades before the private sector monetized it with spam and CTAs to drive more interactions.
No; we might be having this discussion on Mars instead of on iPhones.
None of our elites care about:
a) 99% of us living a meaningful life
b) technologically moving humanity forward (supports bullet a)
Install elites that care about those and then you can use any measure you want, any system you want, etc. Instead we’ve replaced god with money. Only an upgrade for the 1%.
There is no epistemological collapse. Access to accurate information has never been so fast nor so easy. To be sure, lies are spread on the internet - but people believed all sorts of bullshit before the internet. Those who want to claim there is a crisis don’t have a principled argument as to how things are worse.
But surely there are also more truths, and they spread faster than ever before? The amount of lies has increased, but so has the amount of information in general; any question you have can be answered within 10 seconds.
Michel Desmurget is a well-respected neuroscientist working in research in France, so the Sokal thing is totally irrelevant to him, presumably? https://en.wikipedia.org/wiki/Michel_Desmurget
The Sokal affair is a funny thing yes, which I'd seen before (and I presume many people here are familiar with). I don't see how it's relevant here?
I mean - it was one journal, in 1996, with no peer review process, that published a fake article someone sent in to prove the point that the journal would publish at least some crap...
What should we reject based on that, in your opinion? All cultural and media studies, presumably, at the very least - you seem to be clearly suggesting that. And every philosopher too? The logicians? The linguists? All of social science? Economics too? Is it just STEM-type stuff that's acceptable, then?
Seems preposterous to me. The soft sciences are looser, and definitely have a higher proportion of hand-wavy nonsense, but rejecting them all to avoid stuff you don't like is just silly. Learning to avoid the crap and find the good stuff works much as it does in other fields.
And often, anecdotally, it seems to me that the more interesting figures in the experimental sciences tend to be very intrigued by the softer, arguably sometimes "trickier" questions that the non-STEM sciences can explore.