>Zelensky implied Russia would invade the U.S. Do you think that’s likely?
He did not. What he did say is that, should Russia start to expand the war to other countries (which is very likely if Ukraine falls), the effects would be felt by people in the US.
That statement is obviously true. The effects of the Ukraine war itself can already be felt, as it has affected the economy across the globe.
Be concrete. Which countries and what effects? Berlin is less than 1,000 miles from Moscow. Why aren’t they spending 5% of GDP on defense if this is a real threat?
The rules aren't that hard, but actually applying them to code and honing them to consistently pull exactly what you want is, in my experience, the hardest part.
I quite liked ReadEra. It scans your phone for all epubs, allows grouping and organizing them, but most importantly, it has a good-looking, customizable reading interface.
I think they are best at information extraction/classification tasks, especially complex ones with little to no training data, and at data synthesis tasks. However, you should always test whether simpler models can already perform the task reasonably well, to save money.
They underperform at anything that requires reasoning.
Just talking about the software side of things:
I think they actually have been adding very neat features over the past few updates. Automatic image OCR so you can copy/paste text, inserting text via the camera, the transformer-based keyboard, and such are small but very useful features.
I much prefer that over Microsoft's way of adding a new UI that can do less than previous iterations while also being less performant, plus tons of half-baked features like the newly announced Copilot, the still-unfinished Android subsystem, worse search, etc.
Of course, neither approach is perfect, but I actually prefer a system that changes little on the surface and mainly adds small but well-integrated features over time. Not saying Apple is perfect (there's a lot that could be done regarding user freedom on iOS, backwards compatibility, and gaming), but neither are the competitors.
Even you are still personifying the model. There is no motivation. An input sentence is multiplied by a series of weights and the resulting output vector is transformed back into text.
At the end of the day, it's a program like any other. There are no emotions and no consciousness.
That's why I put "motivation" in quotes. Also, I was talking about training, where you compute a cost function and backpropagate the error. This is somewhat analogous to "motivation".
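To make that analogy concrete, here's a minimal sketch (numpy; the one-layer "model" and all numbers are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))          # weights: 3 inputs -> 4 outputs
    x = rng.normal(size=3)               # an "input sentence" as a vector
    target = rng.normal(size=4)          # the desired output vector

    for step in range(100):
        y = W @ x                        # forward pass: input times weights
        error = y - target
        cost = 0.5 * (error ** 2).sum()  # the cost function
        grad_W = np.outer(error, x)      # backpropagated error, d(cost)/dW
        W -= 0.1 * grad_W                # gradient step: the only "drive" here

Nothing in there wants anything; the "motivation" is just the sign of the gradient.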
Most of the time I try to use it, the website won't allow the '+' in an email address or will never send the verification mail to activate my account.
Well, Chomsky already dismissed corpus-based linguistics in the 90s and 2000s, because a corpus (a large collection of text documents, e.g., newspapers, blog posts, literature, or everything mixed together) is never a good enough approximation of the true underlying distribution of all words/constructs in a language.
For example, a newspaper-based corpus might have frequent occurrences of city names or names of politicians, whereas these might not occur that often in real everyday speech, because many people don't actually talk about those politicians all day long. Or, alternatively, names of small cities might have a frequency of 0.
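To make the zero-frequency point concrete, here's a toy sketch (the "corpus" is made up):

    from collections import Counter

    corpus = "the chancellor visited berlin the chancellor spoke in berlin".split()
    counts = Counter(corpus)
    total = len(corpus)

    def p(word):
        return counts[word] / total

    print(p("berlin"))      # ~0.22: overrepresented vs. everyday speech
    print(p("dishwasher"))  # 0.0: unseen, so the model calls it impossible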
Naturally, he will, and does, also dismiss anything that has occurred in the ML field in the past decade.
But I agree with the article. Dealing with language only in a theoretical/mathematical way, without even trying to evaluate your theories against real data, is just not very efficient and ignores that language models do seem to work to some degree.
This is a bit lateral, but there is a parallel in that Marvin Minsky will most likely be best remembered for dismissing neural networks (a one-layer perceptron can't even handle XOR!). We are now sufficiently removed from his heyday that I can't really recall anything he did besides the book Perceptrons with Seymour Papert (who went on to do some very interesting work in education).
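The XOR point is easy to check mechanically; a brute-force toy sketch (illustrative only, not an efficient proof):

    import itertools
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

    def separable(y, grid=np.linspace(-2, 2, 9)):
        # Is there a single linear threshold unit w1*a + w2*b + b0 > 0
        # that reproduces the labels y on all four inputs?
        for w1, w2, b0 in itertools.product(grid, repeat=3):
            if all(int(w1 * a + w2 * b + b0 > 0) == t
                   for (a, b), t in zip(X, y)):
                return True
        return False

    print(separable([0, 0, 0, 1]))  # AND: True, one layer suffices
    print(separable([0, 1, 1, 0]))  # XOR: False, needs a hidden layer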
There is a chart out there about ML progress that conjectures how small the gap is between what we would consider the smartest and dumbest levels of human intelligence (in the grand scheme of information-processing systems). It is a purely qualitative, vibes sort of chart, but it is not unreasonable that even the smartest tenured professors at MIT might not be that far beyond the rest of us.
This dismissal of Minsky misses that Minsky actually had extensive experience with neural nets (starting in the 1950s, with neural nets in hardware) and was, around 1960, probably the most experienced person in the field. Also, in January 1961, he published “Steps Toward Artificial Intelligence” [0], where we not only find a description of gradient descent (then "hill climbing"; compare sect. B in “Steps”, as this was still measured towards a success parameter and not against an error function), but also a summary of experiences with it. (Also, the eventual reversal of success into a quantifiable error function may provide some answer to the question of success in statistical models.)
Gradient descent was invented before Minsky. Imo, Minsky produced some vague writings with no significant practical impact, but this is enough for some people to claim a founder's role for him in the field.
Minsky was actually a pioneer in the field when it came to working with real networks. Compare:
[0] “A Neural-Analogue Calculator Based upon a Probability Model of Reinforcement”, Harvard University Psychological Laboratories, Cambridge, MA, January 8, 1952
[1] “Neural Nets and the Brain Model Problem”, Princeton Ph.D. dissertation, 1954
In comparison, Frank Rosenblatt's Perceptron at Cornell was only built in 1958. Notably, Minsky's SNARC (1951) was the first learning neural network.
> when it came to working with real networks. Compare
My understanding is that no one knows what that SNARC thing was; he built something on the grant, abandoned it shortly after that, and only many years later did he and his fanboys start using it as the foundation of bold claims about his role in the field.
> “Multiple simultaneous optimizers” search for a (local) maximum value of some function E(λ1, …, λn) of several parameters. Each unit Ui independently “jitters” its parameter λi, perhaps randomly, by adding a variation δi(t) to a current mean value μi. The changes in the quantities λi and E are correlated, and the result is used to slowly change μi. The filters are to remove DC components. This technique, a form of coherent detection, usually has an advantage over methods dealing separately and sequentially with each parameter.
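My reading of that scheme, as a rough sketch (the function E and all constants here are assumptions of mine, not Minsky's circuit):

    import numpy as np

    rng = np.random.default_rng(1)

    def E(lam):                    # some function to maximize (made up)
        return -np.sum((lam - 3.0) ** 2)

    mu = np.zeros(5)               # current mean values mu_i
    for t in range(2000):
        delta = rng.normal(scale=0.1, size=5)  # jitter delta_i(t)
        dE = E(mu + delta) - E(mu)             # change in E caused by the jitter
        mu += 0.5 * dE * delta                 # correlate and slowly shift mu_i

    print(mu.round(2))             # drifts toward the maximum at lam = 3

Correlating the jitter with the resulting change in E recovers the gradient direction without ever computing it explicitly, which is why the passage reads like a hardware-friendly ancestor of stochastic gradient methods.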
The link has already been provided above (op. cit.); it's directly connected to the very question of gradients, providing a specific implementation (it even comes with a circuit diagram). As you were claiming a lack of detail (but apparently not honoring the provided citation)…
(The further back you go in the papers, the more specifics you will find.)
That claim was never made, except by you. The claim was that Minsky had practical experience and wrote about experiences with gradient descent (aka "hill climbing") and problems of locality in a paper published in January 1961.
On the other hand: who invented "hill climbing"? You've contributed nothing to the question you've posed (which was never mine, nor even an implicit part of any claims made).
Well, who wrote before 1952 about learning networks? I'm not aware that this was already mainstream then. (Rosenblatt's first publication on the Perceptron is from 1957.)
It would be nice if you contributed anything to the questions you are posing, like: who invented gradient descent / hill climbing, and who can be credited with it? What substantial work precedes Minsky's writings on the respective subject matter? Why was this already mainstream, or how and where were these experiments already being conducted elsewhere (as in "not pioneering")? Where is the prior art to SNARC?
This is ridiculous. Please reread the threads; you'll find the answers.
(I really don't care about whatever substantial corpus of research on reinforcement-learning networks in the 1940s, which of course does not exist, you seem to be alluding to without caring to share any of your thoughts. This is really just trolling at this point.)
I think you understand perfectly well that we are in disagreement about this; my point of view is that your "answers" are just fantasies about your idol without grounding in actual evidence.
Minsky is not my idol. It's just that it's part of reality that Minsky's writings exist, that they contain certain things and were published on certain dates, and that, by the way, Minsky happens to have built the earliest known learning network.
Take the amount of language a blind six-year-old has been exposed to. It is nothing like the scale of these corpora, yet they can develop a rich use of language.
With current models, if you increased the parameters but gave them a similar amount of data, they would overfit.
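A toy version of that overfitting point, with polynomial degree standing in for parameter count (purely illustrative, not a claim about transformers):

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(0, 1, 8)                 # only 8 training samples
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=8)

    x_test = np.linspace(0, 1, 200)
    y_test = np.sin(2 * np.pi * x_test)      # the "true" function

    for degree in (3, 7):                    # 4 vs. 8 parameters, same 8 points
        coeffs = np.polyfit(x, y, degree)
        train = np.mean((np.polyval(coeffs, x) - y) ** 2)
        test = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(degree, round(train, 4), round(test, 4))
    # degree 7 drives the training error to ~0 by memorizing the noise,
    # and typically does worse on the held-out curve than degree 3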
It could be because kids are gradually and structurally trained through trials, errors, and manual corrections, which we somehow don't do with NNs. A kid wouldn't be able to learn language if the only exercise he did was guessing the next word in a sentence.
For me this is a prototypical example of compounded cognitive error colliding with Dunning-Kruger.
We (all of us) are very bad at non-linear reasoning and at reasoning with orders of magnitude, and (by extension) have no valid intuition about emergent behaviors/properties in complex systems.
In the case of scaled ML this is quite obvious in hindsight. There are many now-classic anecdotes about even those devising contemporary-scale LLMs being surprised and unsettled by what even their first versions were capable of.
As we work away at optimizations, architectural features, and expediencies that render certain classes of complex problem solving tractable by our ML, we would do well to intentionally filter for further emergent behavior.
Whatever specific claims or notions any member has that may be right or wrong, the LessWrong folks are at least taking this seriously...
My own hobby horse of late is that, independent of its tethering to information about reality available through sensorium and testing, LLMs are already doing more than building models of language qua language. A write-up someone pointed me at: https://thegradient.pub/othello/
How is that an understatement? And what do people mean by language models working well? From what I can tell, these language models are able to form correct grammar surprisingly well. However, the content is quite poor and often devoid of any understanding.