> In particular, the long battery runtimes – usually one of the strong arguments for ARM devices – were not achieved under Linux.
> A viable approach for BIOS updates under Linux is also missing at this stage, as is fan control.
> Virtualization with KVM is not foreseeable on our model, nor are the high USB4 transfer rates.
> Video hardware decoding is technically possible, but most applications lack the necessary support.
There is nothing in this press release to suggest they've changed.
The other reason is that not every web server, web application, database, etc. is optimized for a large number of connections. Sometimes they take up too much CPU and memory. Sometimes each page request triggers a connection to a database, so the database connection limit is hit. Sometimes the queries are inefficient and slow the database to a crawl. Sometimes the sheer number of packets causes so many interrupts that it sucks CPU away from the application/database. And sometimes the network connection of the web server was just slow.
Most sites hit by the /. effect were not just a simple web server serving static content. Often it was a PHP website with a MySQL database, on a server less powerful than a 2015 smartphone.
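To make the connection-limit failure mode concrete, here's a toy simulation (my own sketch, not anyone's real stack): a "database" that allows N concurrent connections, and a burst of page requests that each grab one, PHP/MySQL style.

    import threading, time, random

    MAX_DB_CONNECTIONS = 50          # a typical small cap on an old LAMP box
    db_slots = threading.BoundedSemaphore(MAX_DB_CONNECTIONS)
    stats = {"served": 0, "refused": 0}
    stats_lock = threading.Lock()

    def page_request():
        # Naive pattern: every page view opens its own DB connection.
        if db_slots.acquire(blocking=False):
            try:
                time.sleep(random.uniform(0.05, 0.5))   # slow, unoptimized query
                with stats_lock:
                    stats["served"] += 1
            finally:
                db_slots.release()
        else:
            with stats_lock:
                stats["refused"] += 1                   # "Too many connections"

    # A front-page burst: 500 near-simultaneous visitors.
    threads = [threading.Thread(target=page_request) for _ in range(500)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(stats)  # most of the burst gets refused while slow queries hold slots

The slow queries are what make it fatal: each one holds a connection open for hundreds of milliseconds, so the pool stays saturated and almost everyone else gets turned away.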
A 2015 smartphone could have eight 2 GHz cores with 2-4 GB of memory and 128 GB of flash storage. Far faster and more powerful than anything but the heaviest 2000-era servers, if that.
I just realized that other industries are way larger than AI. Even if AI companies captured the entire US advertising market, that was only $390 billion last year. Compare that to health care, where $4.3 trillion was spent in the US last year, or commercial banking's revenue of $1.5 trillion, commercial real estate's $1.5 trillion, gasoline stations' $1.1 trillion, etc. What's amazing is that despite AI not making much money, taking on considerable debt, and not even being assured to be all that useful, one third of the stock market is now just AI crap. The economy is going to collapse because of a small, brand-new industry. This... shouldn't be possible.
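Rough arithmetic on those figures (mine, just restating the numbers above):

    # US annual figures quoted above, in billions of dollars
    markets = {
        "health care": 4300,
        "commercial banking revenue": 1500,
        "commercial real estate": 1500,
        "gasoline stations": 1100,
    }
    ad_market = 390  # total US ad spend: the ceiling for AI ad revenue
    for name, size in markets.items():
        print(f"{name}: ${size}B = {size / ad_market:.1f}x the entire ad market")

Health care alone is roughly 11x the entire theoretical ad-revenue ceiling.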
1. Quite a lot of companies are not publicly traded, and therefore are not reflected in the stock market. AI companies have an incentive to eventually go public because it's all venture-capital stuff, and the VCs want an exit.
2. Technology in general is always going to be overweight anyway because these are companies that tend towards "growth" (re-investing profits into future expansion, offering to buy back shares at a higher price as a means of compensating investors, etc.) rather than "value" (compensating investors by paying out an explicit cash dividend on shares). This tends to push their P/E multiples higher.
3. Publicly traded companies, and thus stocks, generally are valued based on speculation about future cash flows, not according to current holdings.
4. The companies that you have to add up in order to come to a figure like "one third of the stock market" are doing a lot of things outside of AI. People still play video games, and they still do GPU-accelerated data analysis with conventional techniques. People still want their computer to include an operating system, and still use their social media to talk to accounts that they know are operated by people they know in real life.
5. The term "AI" is now used as if it exclusively referred to LLMs, but other AI systems have existed for a long time and have been actually accomplishing real things in the economy.
> The economy is going to collapse
There are a great many people out there who have predicted a hundred or so out of the last seven recessions. You don't know this, and there are many reasons to doubt it.
Suppose some anti-AI deity snaps its fingers tomorrow and every LLM spontaneously ceases to function. It's not as if we've lost the knowledge of how to do things without LLMs. It's not as if the things we created without LLMs disappear, or anything else. At worst, on a very conservative, scare-mongering estimate, we revert to that level; and things were pretty tolerable at that level. And technologies that are not LLMs have also advanced since the release of ChatGPT.
The stock market isn't the economy. If wages are largely stock comp at high valuations, they get clawed back in a crash. Infrastructure spend is massive, but at Nvidia's 70-80% margins the real cost to the economy is cut by that much, aside from the datacenter and power build-out, which is definitely a big portion.
It could unwind cleanly as long as we don't let it infect the banking system too much, which it has somewhat started to do, with more debt-financed deals instead of equity-financed ones, and probably book values making it onto bank balance sheets.
Truck driver wages are $180-280 billion annually, and trucking seems like something that will get replaced, which should economically justify $1 trillion of the spend or more. I think Tesla, for instance, only spends single-digit billions on R&D most years though, so the spending may not be going where the most immediate, lasting economic impacts will come from. I'm not sure what Waymo's spend is.
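Back-of-envelope on that justification (my arithmetic, assuming full wage replacement and ignoring discounting and operating costs):

    spend = 1000  # $1T of AI capex, in billions of dollars
    for annual_wages in (180, 280):  # US truck driver wages, $B/yr (from above)
        print(f"${annual_wages}B/yr saved -> payback in "
              f"{spend / annual_wages:.1f} years")

So even at the low end of the wage range, $1T pays back in well under a decade, if the replacement actually happens.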
I don't know about those other things, but driverless trucking isn't going to happen within the next 50 years at least. Besides all the logistical challenges and risks to businesses throughout the supply chain, you're talking about 3.6 million American truck-driver jobs. There is little politicians love more than saving jobs, especially blue-collar ones, and that's a helluva lot of jobs. To put it in perspective, the TSA is basically a jobs program that costs $12 billion to employ only 58,000 people who don't do anything useful. Politicians'll do whatever it takes to keep those 3.6 million jobs.

Add on the fact that truckers can and do organize, and it doesn't take many of them to shut down shipping and transportation. The trucking industry is also reluctant to upgrade or spend more money; they won't even invest in electric trucks with drivers. And there's a wide variety of trucks out there with complex routes that won't be mapped, so only a few major highways will be covered, meaning you still need a driver. There will be pilot programs, but that's it.
> they won't even invest in electric trucks with drivers.
Do any of these make sense economically right now? The Tesla one is mainly being piloted hauling potato chips for PepsiCo because they are so lightweight. Few of the Tesla Semi numbers from the presentation seemed to come true (and the part about having autonomous follower convoys working in ~2017...), and you had the guy from Nikola rolling one down a hill and needing a presidential pardon.
People only hear about the tech companies' efforts, but Volvo, Peterbilt, and Freightliner have actually had EV models for a while now. Tesla's truck sucks but the others don't, because they're built on established platforms (though their range is ~250 mi). At the end of the day, the trucks are way too expensive and there are no charging locations for them.
tl;dr: no, it's not economically viable right now, but my point is that it's an extra expense, and self-driving is even more complicated/fraught than an EV truck. Trucking companies either want very easy cost savings or stick to what they know. (To give you an idea: many companies still use paper and pencil to manage driver schedules/routes.)
Yeah, this is ridiculous when you consider all the dorks who wail "Fix the existing problems of the world first!!!" when things like space exploration and other scientific endeavors without an immediate benefit are discussed. Where are they now?
Build a system that can read MRI images, and now AI has taken over the whole MRI analysis sector.
Let AI diagnose the average cold, and now AI has taken over from the family doctor in 90% of cases.
Use AI for support, and now AI has taken over the call-center business.
AI can already code. It might not be perfect, it might not always be good, but NO ONE assumed that in 2025 some matrix multiplication could mimic another human being so well that you can write with it and it will produce working code at all.
That's the hype, that's the market of AI.
And in parallel we get robotics too. Only possible because of this AI thing, because robotics with ML is so much better than whatever we had before. Now you can talk to a robot, the robot can move, the robot can plan actions. All of these robots use some type of ML in the background. Segment Anything is possible because of AI.
Because AI doesn't mean "AI"; it means massive compute, massive amounts of data, and machine learning.
All of that pushes everything forward: LLMs and any alternative architectures to LLMs, GenAI for images, sound, and video, movement for robotics, image feature detection.
Segment Anything 2 was a breakthrough in image segmentation, for example.
The latest Google Weather model is also a breakthrough.
All progress in robotics is ML driven.
I don't think any investor thinks that OpenAI will achieve AGI with an LLM. It's Data + Compute -> Some Architecture -> AI/ML Model.
Whether it will become the golden model capable of everything, or a thousand expert models, or a mixture-of-experts model, we don't know yet.
But they literally called it 'yolo mode'. It's an idiot button. If they added protections by default, someone would just demand an option to disable all the protections, and all the idiots would use that.
I'm not sure you fully understood my suggestion. Just to clarify, it's to add a feature, not remove one. There's nothing inherently idiotic about giving AI access to a CLI; what's idiotic is giving it access to your CLI.
It's also not literally called "YOLO mode" universally. Cursor renamed it to "Auto-Run" a while back, although it does at least run in some sort of sandbox by default (no idea how it works offhand or whether it adds any meaningful security in practice).
Unless literally everything you work on is OSS, I can't understand why anyone would give CLI access to an LLM; my presumption is that any IP I send to an API endpoint is as good as public domain.
I agree that that's a concern, which is why I suggested that a strict firewall around the agent machine/VM would be optimal.
Either way, if the alternative is the code not getting written at all, or having to make other significant compromises, the very edge case risk of AI randomly exfiltrating your code can be an acceptable trade in many cases. Arguably it's a lower risk than it would be with an arbitrarily chosen overseas developer/agency.
But again, I would very much like to see the tools providing this themselves, because the average user probably isn't going to do it on their own.
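To sketch what I have in mind (a rough, hypothetical setup, not any tool's actual implementation; the docker flags are real, but you'd tune them to your toolchain): give the agent a CLI inside a disposable, network-less container instead of your CLI.

    import subprocess

    def run_agent_command(cmd: str, workdir: str) -> str:
        """Run one agent-issued shell command in a throwaway, network-less
        container: the repo is mounted, nothing else on the machine is
        visible, and there is no egress path for exfiltration."""
        result = subprocess.run(
            ["docker", "run", "--rm",
             "--network=none",        # the "strict firewall": no egress at all
             "--memory=1g", "--pids-limit=256",
             "-v", f"{workdir}:/workspace", "-w", "/workspace",
             "python:3.12-slim",      # swap in whatever toolchain image fits
             "sh", "-c", cmd],
            capture_output=True, text=True, timeout=120,
        )
        return result.stdout + result.stderr

In practice you'd probably want an egress allowlist (package registries and the like) instead of no network at all, but no-network is the simple safe default.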
Funnily enough, even though there are (some) regulations that impose penalties if a financial breach was due to negligence, somebody has to actually investigate and prove negligence first. Government agencies may investigate, but they can just choose not to; it depends on whether they feel like investigating or not.
Meaning that when there is a breach, if you don't personally sue them and take on the costs of investigating and proving the root cause of the breach yourself, then it's likely nothing will happen to them at all. And this is only for the institutions actually covered by a regulation.
And assuming an investigation is done, and proof found of negligence, they'll be given a fine or settle for a small amount of their yearly profit. Nobody goes to jail or is personally fined, and the company has a minor dip in earnings. Problem solved!
Software projects fail because humans fail. Humans are the drivers of everything in our world. All government, business, culture, etc... it's all just humans. You can have a perfect "process" or "tool" to do a thing, but if the human using it sucks, the result will suck. This means that the people involved are what determines if the thing will succeed or fail. So you have to have the best people, with the best motivations, to have a chance for success.
The only thing that seems to change this is consequences. Take a random person and just ask them to do something, and whether they do it or not is just based on what they personally want. But when there's a law that tells them to do it, and enforcement of consequences if they don't, suddenly that random person is doing what they're supposed to. A motivation to do the right thing. It's still not a guarantee, but more often than not they'll work to avoid the consequences.
Therefore, if you want software projects to stop failing, create laws that enforce doing the things that make a project succeed. Create consequences big enough that people will actually do what's necessary. Like a law that says how to build a thing to ensure it works, and how to test it, and then an independent inspection to ensure it was done right. Do that throughout the process, and impose some kind of consequence if those things aren't done. (The more responsibility, the bigger the consequence, so there's motivation commensurate with impact.)
That's how we manage other large-scale physical projects. Of course those aren't guaranteed to work either; large-scale public works projects often go over budget and over time. But I think those have the same flaw, in that there isn't enough of a consequence at each part of the process to encourage humans to do the right thing.
> Software projects fail because humans fail. Humans are the drivers of everything in our world.
Ah finally - I've had to scroll halfway down to find a key reason big software projects fail.
<rant>
I started programming in 1990 with PL/1 on IBM mainframes and for 35 years have dipped in and out of the software world. Every project I've seen fail was mainly down to people - egos, clashes, laziness, disinterest, inability to interact with end users, rudeness, lack of motivation, toxic team culture etc etc. It was rarely (never?) a major technical hurdle that scuppered a project. It was people and personalities, clashes and confusion.
</rant>
Of course the converse is also true - big software projects I've seen succeed were down to a few inspired leaders and/or engineers who set the tone. People with emotional intelligence, tact, clear vision, ability to really gather requirements and work with the end users. Leaders who treated their staff with dignity and respect. Of course, most of these projects were bland corporate business data ones... so not technically very challenging. But still big enough software projects.
Geez... don't know why I'm getting so emotional (!) But the hard-core software engineering world is all about people at the end of the day.
> big software projects I've seen succeed were down to a few inspired leaders and/or engineers who set the tone. People with emotional intelligence, tact, clear vision, ability to really gather requirements and work with the end users. Leaders who treated their staff with dignity and respect.
I completely agree. I would just like to add that this only works where the inspired leaders are properly incentivized!
> But I think those have the same flaw, in that there isn't enough of a consequence for each part of the process
If there was sufficient consequence for this stuff, no one would ever take on any risk. No large works would ever even be started because it would be either impossible or incredibly difficult to be completely sure everything will go to plan.
So instead we take a medium amount of caution and take on projects knowing it's possible for them to not work out or to go over budget.
If software engineers want to be referred to as "engineers" then they should actually learn about engineering failures. The industry and educational pipeline (formal and informal) as a whole is far more invested in butterfly chasing. It's immature in the sense that many people with decades of experience are unwilling to adopt many proven practices in large scale engineering projects because they "get in the way" and because they hold them accountable.
Surely you mean managers, right? Most developers I interact with would love to do things the right way, but there's just no time, we have to chase this week's priority!
Graphene shouldn't have to reckon with this abuse of government power alone; we should step in and speak up for them. If having a secure device becomes criminal, only the criminals will have secure devices.
Law enforcement is being lazy by trying to rely on mass surveillance rather than espionage tactics to catch criminals. Criminals learned long ago how to work around surveillance, so this doesn't really work on them. But it does subject ordinary citizens to undue scrutiny and violations of privacy, which history has shown are then used against the innocent. We don't need any more reminders of how popular authoritarianism has become. And it's often used to pin a crime on an innocent person (a common police controversy), or to intimidate and harass them (see the FBI).
> I don't think population will stand at their side when they find that they've been helping CSAM traffickers hide their loot.
This is just one of many examples of the false rhetoric politicians use to manipulate the public into kowtowing to mass surveillance. We cannot stand for this and must fight it at every turn. "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."
Beware, though, the key words in that quote are not "liberty" and "safety" but rather "temporary" and "essential". You can replace "liberty" and "safety" with any other nouns (including "safety" and "liberty") and it's still true.
Which is not to excuse the fascist actions of the French government. I just don't like that quote.
I don't think quoting American politicians who failed to set up a government capable of preventing Trumpism is going to be very persuasive to European governments... or European people.
Normal practice in deploying post-quantum cryptography is to deploy ECC+PQ. IETF's TLS working group is standardizing ECC+PQ. But IETF management is also non-consensually ramming a particular NSA-driven document through the IETF process, a "non-hybrid" document that adds just PQ as another TLS option.
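The hybrid idea itself is simple; here's a minimal sketch, assuming a hypothetical ml_kem module for the PQ side (not a real package; the actual TLS design, e.g. the X25519MLKEM768 group, feeds the concatenated secrets into the TLS key schedule rather than a bare HKDF like this):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    import ml_kem  # hypothetical ML-KEM-768 binding, NOT a real package name

    # Classical half: X25519 Diffie-Hellman.
    client_ecc = X25519PrivateKey.generate()
    server_ecc = X25519PrivateKey.generate()
    ecc_secret = client_ecc.exchange(server_ecc.public_key())

    # Post-quantum half: KEM encapsulation against the server's ML-KEM key.
    kem_pub, kem_priv = ml_kem.generate_keypair()
    kem_ct, kem_secret = ml_kem.encapsulate(kem_pub)
    assert ml_kem.decapsulate(kem_priv, kem_ct) == kem_secret

    # Hybrid: an attacker must break BOTH primitives to recover the key.
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None,
        info=b"hybrid ecc+pq demo",
    ).derive(ecc_secret + kem_secret)

The point of deriving from both secrets is that the combined key stays secure as long as either primitive holds, which is exactly the safety margin the "non-hybrid" PQ-only option throws away.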
Centralization has nothing to do with the problems of society and technology. And if you think the internet is all controlled by just a couple companies, you don't actually understand how it works. The internet is wildly decentralized. Even Cloudflare is. It offers tons of services, all of which are completely optional and can be used individually. You can also stop using them at any time, and use any of their competitors (of which there are many).
If, on the off chance, people just get "addicted" to Cloudflare, and Cloudflare's now-obviously-terrible engineering causes society to become less reliable, then people will respond to that. Either competitors will pop up, or people will depend on them less, or governments will (finally!) impose some regulations around the operation of technical infrastructure.
We actually have too much freedom on the Internet. Companies are free to build internet systems any way they want - including in very unreliable ways - because we impose no regulations or standards requirements on them. They are then free to sell products to real people based on this shoddy design, with no penalty for the products falling apart. So far we haven't had any gigantic disasters (Great Chicago Fire, Triangle Shirtwaist Factory Fire, MGM Grand Hotel Fire), but we have had major disruptions.
We already dealt with this problem in the rest of society. Buildings have building codes, fire codes, electrical codes. They prescribe and require testing procedures, provide standard building methods to ensure strength in extreme weather, resist a spreading fire long enough to allow people to escape, etc. All measures to ensure the safety and reliability of the things we interact with and depend on. You can build anything you want - say, a preschool? - but you aren't allowed to build it in a shoddy manner. We have that for physical infrastructure; now we need it for virtual infrastructure. A software building code.
Centralization means having a single point of failure for everything. If your government, mobile phone or car stops working, it doesn't mean all governments, all cars and all mobile phones stop working.
Centralization makes mass surveillance easier and makes selectively denying service easier. Centralization also means that once someone hacks into the system, they gain access to all the data, not just a part of it.
I recently found out something interesting about MC4 connectors for solar panels. It turns out the only legitimate MC4 connectors are manufactured by Stäubli. Every other MC4 connector you see is a clone of the original, which is dangerous when they are mixed (a male from one vendor, a female from another): differences in tolerance/fit can create unstable connections, which can cause overheating, arcing, even fire. So Stäubli posts pictures to help you identify the clones (the easiest giveaway is that Stäubli's O-rings are always black). It's such an issue that standards now require you to acquire connectors from one origin and use the same brand and type. But I think a lot of consumers are buying random solar gear now and just plugging in whatever connectors look like they fit together.
Yet 2 days ago, Tuxedo Computers announced they were abandoning Qualcomm due to crap support. (https://www.theregister.com/2025/11/26/tuxedo_axes_arm_lapto...).
There is nothing in this press release to suggest they've changed.