On average, yes. But if you look at a voting map[0], you'll notice that the majority of those votes come from the former DDR. In many other areas, far fewer than 1 in 5 voted for the brownshirts.
If you look even deeper, at people's actual attitudes[1], you'll see that far-right positions are distributed approximately equally across Eastern and Western Germany - the kind of people who vote AfD in Eastern Germany (currently) just vote CDU in Western Germany.
No, the point I am making is that whether this "1 in 5" average is even remotely true depends very much on location. And looking at overall population statistics and the concentration of jobs, it's comparatively likely that somebody migrating to Germany is _not_ in those areas.
>> This is comforting, as an Argentinian living in Germany. 1 out of 5 people here voted for the AFD...
My fundamental aim here was to show the poster that it's likely not as bad where they are - and since they are a migrant, I assumed they were likely not in those really bad areas. It was intended as a positive message. Sadly, I apparently butchered the delivery, as several people interpreted it completely differently.
That's already expected. Right now, preparations are being made for the nuclear (re-)armament of Europe. Germany in particular is working towards that at this moment [EDIT: politically, not in the "actively preparing weapons" phase.].
Also, France has a first-strike policy.
So even without US troops, Europe will be fine.
A sizable part of the population has wanted the US out of Germany in particular for a long time now, so those movements have become pretty popular again. I hadn't heard "Ami go home" in a long time, but right now it's common.
> Right now, preparations are being made for the nuclear (re-)armament of Europe. Germany in particular is working towards that at this moment.
Can you provide a source for this? It would be big if true, in particular the Germany claim. I've not heard it and couldn't find anything with a quick Google search.
Germany currently already does nuclear weapons sharing with the US, i.e. they have access to US nuclear weapons, though of course with some restrictions.
What you wrote sounds like Germany is working towards having new nuclear weapons produced in Europe without US involvement and shared without US restrictions.
The political work for it is being done right now. However, I reread my comment and realized it could be interpreted as "actively working on the weapons" instead of the political framework. My apologies in that case.
Thanks, that's helpful! To break it down for others: Germany is considering making a sharing deal with France similar to the one it currently has with the US. So it wouldn't really be a (re)armament, more an increase in resilience by not relying solely on the US for sharing but adding France.
I think that's quite an important difference from your original wording.
I don't know, I think lately there's a massive flaw in first-strike and MAD policies. Does it ever really make sense to destroy the world and yourself because an ally (the Baltic states, for example) or even your own state is being invaded? It's certainly not going to improve your situation, even if the alternative is invasion by a foreign power.
A powerful conventional army capable of proportional response is the only realistic deterrent.
The purpose of MAD is to never have to launch a nuke.
The entire point of the doctrine is that any sane person in charge of a country('s military) would never risk getting their own country nuked just to attack another country, so having nukes on your side means that no one else will attack you.
This...does somewhat break down when we end up with people in charge of nuclear countries who are not sane.
France (and the UK) having an independent nuclear deterrent is because they (/we) didn't want to risk someone like Trump when the USSR was still around:
Those valuations are built on an imaginary future the founders made investors believe in.
The idea is: if we reach true AGI first, we are going to own ALL THE MONEY!
Which erroneously assumes that models can't be siphoned off/recreated, as DeepSeek proved possible and even reasonably doable. Which in turn fundamentally shows that both OpenAI and Anthropic very likely have basically no moat.
I can almost smell another AI winter arriving, once all those valuations meet reality.
I can't see a future where AGI exists and money in general isn't worthless within 6 months of it existing.
Either it kills us all, or makes the creator so much money that it's essentially worthless because they're the only one with money, or creates a utopia where money isn't needed.
The laws of economics apply just as much to AI as they do to humans; if anything, AI is an even better (more rational) homo economicus. Even if AI wiped out all humans, the AIs would still need a monetary system for trading among themselves.
Resources still need to be allocated, even by the AI itself. At some point a group of neurons will “argue” for more resources, and then within that group a subgroup will, and so forth. Hence the paperclip maximizer…
AGI means smart like a human; it doesn't mean it invents new things only found in science fiction. It is unlikely the AI will generate infinite free energy.
DeepSeek showed us very well that OpenAI, at the very least, does not have a significant moat. Also, that the ridiculous valuations dreamt up for AI companies are make-believe at best.
If - and that's a big if - LLM tech turns out to be the path to true AGI (not the OpenAI definition), everybody will be able to get there in time; the tech is well known and the research teams are notoriously leaky. If not, another AI winter is going to follow. In either case, the only ones who are going to make a major profit are the ones selling shovels during the gold rush: Nvidia. Well, and the owners who promised investors all kinds of ridiculous nonsense.
Anyway, the most important point, in my opinion, is that it's a bad idea to believe people financially incentivized to hype their AIs to unrealistic heights.
It seems increasingly likely that LLM development will follow the path of self-driving cars. Early on in the self-driving car race, there were many competitors building similar solutions and leaders frequently hyped full self-driving as just around the corner.
However, it turned out to be a very difficult and time-consuming process to move from a mostly-working MVP to a system that was safe across the vast majority of edge cases in the real world. Many competitors gave up because the production system took much longer than expected to build. Yet today, a decade or more into the race, self-driving cars are here.
Yet even for the winners, we can see some major concessions from the original vision: Waymo/Tesla/etc. have each strictly limited the contexts in which you can use self-driving, so it's not a 100% replacement for a human driver in all cases, and the service itself is still very expensive to run and maintain commercially, so a self-driving car isn't necessarily cheaper than a human driver. Both limitations seem likely to be reduced in the years ahead: the restrictions on where you can use self-driving will gradually relax, and the costs will go down. So it's plausible that fleets of self-driving cars will be an everyday part of life for many people in the next decade or two.
If AI development follows this path, we can expect that many vendors will run out of cash before they can actually return their massive capital investment, and a few dedicated players will eventually produce AIs that can handle useful subsets of human thoughtwork in a decade or so, for a substantial fee. Perhaps in two decades we will actually have cost-effective AI employees in the world at large.
There are plenty of real applications that Nvidia is fueling. Things that make money. There will be a reckoning for the hype men, but there is a good amount of value still there.
(As always, the task of the hype man isn’t to maintain the bubble indefinitely, but just long enough for him to get his money out.)
There are more fundamental issues at play, where I see stock price fairly divorced from actual, tangible value, but the line still goes up because people are going to keep buying tulips forever, right?
It sucks because I think investing in the stock market takes away from dynamic investment in innovative startups and real R&D, and shifts capital towards shiny things.
You implicitly assume that LLMs are actually important enough to make a difference on the geopolitical level.
So far, I haven't seen any indication that this is the case. And I'd say hyped-up speculation by people financially incentivized to hype AI should be taken with an entire mine full of salt.
First, it's not just about LLMs. It's not an LLM that replaced human drivers in Waymo cars.
Second, how could AI not be the deciding geopolitical factor of the future? You expect progress to stop and AI not to achieve and surpass human intelligence?
>First, it's not just about LLMs. It's not an LLM that replaced human drivers in Waymo cars.
As far as I know, Waymo is still not even remotely able to operate in any kind of difficult environment, even though insane amounts of money have been poured into it. You are vastly overstating its capabilities.
Is it cool tech? Sure. Is it safely going to replace all drivers? I doubt it, very much so.
Secondly, this only works if progress in AI does not stagnate. And, again, you have no grounds to actually make that claim. It's all built on the fanciful imagination that we're close to AGI. I disagree heavily and think it's much further away than people profiting financially from the hype tend to claim.
Vastly overstating its capabilities? SF is ~crawling~ with them 24/7 and I've yet to meet someone who's had a bad experience in one of them. They operate more than well enough to replace rideshare drivers, and they have been.
How stupid of Google. Instead of getting their self-driving car technology to work in a blizzard first, and then working on getting it to work in a city, they chose to get it working in a city first, before getting it to work in inclement weather.
What idiots!
I hope you are being sarcastic! Because it is quite expected that they would test where it is easy first. The stupid ones are those who parrot the incorrect assumption that self-driving cars are comparable to humans at general driving, when statistics on general driving include lots of driving in suboptimal conditions.
I get that LLMs are just doing probabilistic prediction, etc. It's all Hutter Prize stuff.
But how are animals with nerve-centres or brains different? What do we think us humans do differently so we are not just very big probabilistic prediction systems?
A completely different tack: if we develop the technology to engineer animal-style nerves and form them into big lumps called 'brains', in what way is that not both artificial and intelligent? And if we can do that, what is to stop that manufactured brain from being twice or ten times larger than a human's?
I don't think the probabilistic prediction is a problem. The problem with current LLMs is that they are limited to "System 1" thinking, only giving you a fast, instinctive response to a question. While that works great for a lot of small problems, it completely falls apart on any larger task that requires multiple steps or backtracking. "System 2" thinking is completely missing, as is the ability to self-iterate on their own output.
Reasoning models are trying to address that now, but monologuing in token-space still feels more like a hack than a real solution; it does improve their performance a good bit nonetheless.
In practical terms, all this means is that current LLMs still need a hell of a lot of hand-holding and fail at anything more complex, even if their "System 1" thinking is good enough for the task (e.g. they can write Tetris in 30 seconds no problem, but they can't write Super Mario Bros at all, since that has numerous levels that would blow past the context window size).
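To make the "monologuing in token-space" point concrete, here's a rough sketch of what such a self-iteration loop could look like. It's purely illustrative: "generate" stands in for any text-completion call, and the prompts are made up, not any vendor's actual API.

    # Hypothetical sketch of token-space self-iteration ("System 2" as a loop).
    # `generate` is a placeholder for a text-completion call; prompts are illustrative.
    def solve_with_self_iteration(generate, task, max_rounds=3):
        draft = generate(f"Task: {task}\nGive a first attempt.")
        for _ in range(max_rounds):
            critique = generate(
                f"Task: {task}\nAttempt:\n{draft}\n"
                "List concrete problems with this attempt, or say 'no problems'."
            )
            if "no problems" in critique.lower():
                break  # the model considers its own draft good enough
            draft = generate(
                f"Task: {task}\nAttempt:\n{draft}\nProblems:\n{critique}\n"
                "Produce an improved attempt."
            )
        return draft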
> But how are animals with nerve-centres or brains different?
In current LLM neural networks, the signal proceeds in one direction, from input, through the layers, to output. To the extent that LLMs have memory and feedback loops, it's that they write the output of the process to text, and then read that text and process it again through their unidirectional calculations.
Animal brains have circular signals and feedback loops.
There are Recurrent Neural Network (RNN) architectures, but current LLMs are not these.
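For illustration, a minimal sketch of that structural difference; the layer and step functions here are hypothetical placeholders, not any real model:

    # Feedforward (transformer-style LLM): the signal moves strictly forward
    # through the layer stack, input to output.
    def feedforward_pass(layers, x):
        for layer in layers:
            x = layer(x)
        return x

    # Recurrent (RNN-style): state from each step is fed back into the next,
    # i.e. the internal feedback loop that current LLMs only emulate by
    # re-reading their own text output.
    def recurrent_pass(step, inputs, state):
        outputs = []
        for x in inputs:
            state, y = step(x, state)
            outputs.append(y)
        return outputs, state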
Human (and other animal) brains probably are probabilistic, but we don't understand their structure or mechanism in fine enough detail to replicate them, or simulate them.
People think LLMs are intelligent because intelligence is latent within the text they digest, process and regurgitate. Their performance reflects this trick.
> But how are animals with nerve-centres or brains different? What do we think us humans do differently so we are not just very big probabilistic prediction systems?
I see this statement thrown around a lot and I don't understand why. We don't process information like computers do. We don't learn like they do, either. We have huge portions of our brains dedicated to communication and problem solving. Clearly we're not stochastic parrots.
> if we develop the technology to engineer animal-style nerves and form them into big lumps called 'brains'
I think y'all vastly underestimate how complex and difficult a task this is.
It's not even "draw a circle, draw the rest of the owl", it's "draw a circle, build the rest of the Dyson sphere".
It's easy to _say_ it, it's easy to picture it, but actually doing it? We're basically at zero.
In Internet comment sections, that's not clear to me. Memes are incredibly infectious, and we can see this by looking at, say, a thread about Nvidia: it's inevitable that someone is going to ask about a moat. In a thread about LLMs, the likelihood of stochastic parrots getting a mention approaches one as the thread gets longer. What does it all mean?
You seem to be confusing brain design with uniqueness.
If every single human on earth was an identical clone with the same cultural upbringing and similar language, conversational choices, opinions, and feelings, they still wouldn't work like an LLM and still wouldn't be stochastic parrots.
It's an economic benefit. It's not a panacea but it does make some tasks much cheaper.
On the other hand if the economic benefit isn't shared across the whole of society it will become a destabilising factor and hence reduce the overall economic benefit it might have otherwise borne.
They seem popular enough that they could be leveraged to influence opinion and twist perception, as has been done with social media.
Or, as is already being done, they can be used to influence opinion and twist perception within tools and services that people already use, such as social media.
It really does seem like a lot of the insane valuations are turning out to be exactly that.
There was a lot of trust within the markets that AI companies would be able to leverage closed models for permanent rent-seeking. DeepSeek proved that this is - at least for now - wishful thinking not based on reality. Similar for the "new dawn" of nuclear power to support increasingly power-hungry LLM server farms.
I'm certainly excited for the future open models. Interesting times indeed.
[0]: https://www.tagesschau.de/inland/bundestagswahl/wahlkreiserg...