Hacker News

I would hope that it would not be removed if there is a performance difference. Not all systems are multi-user; think of things like IncludeOS, or systems that can control their interaction such that the risk is very minimal and not worth the cost.


If it were removed, the new processors would simply be worth less than the ones that immediately preceded them. The systems you're thinking about would then cost less. What's the problem?

The problem, of course, is that vendors couldn't pretend to sell systems that are worth the prices they would have quoted before all of this awfulness was exposed. That would be a problem for the vendors only. The rest of us would be better off.


But that penalizes single-process systems, and they do exist. One already pays an overhead cost for multiprocess/multiuser systems, and this is part of it. Pay for what you use, but have sane defaults (protect the common case).

Then again, I cannot see this costing much in future chips.
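For what it's worth, Linux already lets an operator make this tradeoff per-machine rather than in silicon. A sketch of the opt-out, assuming a GRUB-based setup (the `mitigations=` kernel parameter exists since around Linux 5.2):

```shell
# /etc/default/grub -- opt out of CPU vulnerability mitigations at boot.
# "mitigations=off" disables the toggleable Spectre/Meltdown-class
# mitigations; only sensible on the kind of single-purpose,
# tightly-controlled systems discussed above.
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

# After editing, regenerate the GRUB config and reboot:
#   sudo update-grub && sudo reboot
#
# Current mitigation status can be inspected under:
#   /sys/devices/system/cpu/vulnerabilities/
```

Of course, that only removes the software cost; the silicon-level question in this thread is whether the hardware itself should keep paying for the common multi-user case.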


This debate is about desktop and server processors. You are imagining there is a market segment, worth caring about for non-niche companies like Intel, of non-hackable systems with no network access that use high-end microprocessors, but it isn't so.

Most relatively complex systems that use deluxe microprocessors and could be air-gapped are accessible for convenience instead, and more extreme actually inaccessible systems are likely to use different, specialized processors.


Why do you exclude all single-purpose networked servers? E.g., why wouldn't it be a reasonable tradeoff for a private compute cluster with a limited set of applications (or even just one), for systems running really dumb services, and so on?


Reasonable for hackers, horribly optimistic for the defense team.



