hurfdurf's comments | Hacker News

  Windows 11 is for video games
Not even that anymore: https://www.youtube.com/watch?v=LJEo6Kb6Rhg

Coverage is still partial because some VERY popular games do not run on Linux due to anti-cheat

I'd say that coverage is very, very substantial, but incomplete because some games use anti-cheat that is either extremely invasive and heavily relies on Windows internals, or is anti-cheat that the devs have configured to reject running in Proton.

Yes, it's very good. However, basically-every-current-multiplayer-shooter is a big missing category.

BTW as someone increasingly fed up with W11 and thus feeling homeless: how well does VR work?

> basically-every-current-multiplayer-shooter is a big missing category.

Weird. I've been playing many multiplayer shooters from Proton with my Windows-using friends. I suppose this is one of those "am I friends with people who pretty much only play CoD or Fortnite?" things.


Anything you'd want to play runs on Linux.

One that people keep commenting on is Battlefield 6, which has an absolutely perfect user experience in Linux.

If you play it on Windows, it's a slow, buggy, crashy, half-finished mess that barely works and is just no fun at all.

On Linux, it's perfect. It doesn't work at all, so you don't waste any time trying to enjoy the slow, buggy, crashy, half-finished mess.



really appreciate you dropping links thank you!

Remember when Google added Car Crash Detection to Pixel in early 2020? Nobody does.

But when Apple added it in iPhone 14 (2022)...


In French we talk about "le savoir-faire" vs "le faire-savoir" ("know-how" vs "making it known") and the importance of good communication. Apple are the bestest at it. Remember the iPod Shuffle and the lack of a screen marketed as a feature to spice up your life.


Can't be that bad when you are required to have an account with them for your digital ID/wallet, right!?

   German implementation of eIDAS will require an Apple/Google account to function (opencode.de)
https://news.ycombinator.com/item?id=47644406


Yeah, coffee was spilled here. What a twist.


Neither, just legacy considerations. Native NVMe support was only officially added to Server 2025 back in December.


I'm glad at least Yellow Alert isn't what I assumed it was after reading all the other ones.


Definitely for the EU budget. Now it will receive at least 2.25€ per package (75%[1] of 3€) where they got 0€ before. For the individual countries and their customs systems and personnel on the other hand...

[1] https://www.consilium.europa.eu/en/policies/the-eu-customs-u...



Why? Intel made and kept it workstation/Xeon-exclusive at a premium for too long. And AMD is still playing along, not forcing the issue, with their weird "yeah, Zen supports it, but your mainboard may or may not, no idea, don't care, do your own research" stance. These days it's a chicken-and-egg problem re: price, availability, and demand. See also https://news.ycombinator.com/item?id=29838403


Maybe it's high time for some regulation?

E.g. the EU enforced mandatory USB-C charging from 2025, and is pushing to end production of combustion-engine cars by 2035. Why not make ECC RAM mandatory in new computers starting, say, from 2030?

AMD is already one step away from being compliant, so it's not an outlandish requirement. And regulation would also force Intel to cut their BS, or risk losing the market.


OMG no. Politicians have no business making technological decisions. They make it harder to innovate, i.e. to invent the next generation of ECC under a different name.


I would argue that in the present conditions, regulation can actually foster and guide real innovation.

With no regulations in place, companies would rather innovate in profit extraction than in improving technology. And if they have enough market capture, they may actually prefer not to innovate, if innovating would hurt profits.


ECC is like Ethernet. The name doesn’t have to change for the technology to update.


If companies are allowed to change the meaning of terms in legislation we are in even more trouble.


Ethernet was once carried over thick coax at 2, then 3, megabits per second. By the time it was standardized as IEEE 802.3 it was at 10 megabits over thick coax; 802.3a added thin coax. 802.3e took a step back in speed to 1 megabit, but over phone-type wire. 10BASE-T, Ethernet over twisted pair at 10 megabits per second, didn't arrive until 802.3i in 1990. Then 10BASE-F (fiber) in 1992.

Then there are the various speeds: 100 M, 1000 M / 1 G, 2.5 G, 5 G, 10 G, 25 G, 40 G, 50 G, 100 G, 200 G, and 400 G. The media have included twisted pair, single-mode fiber, multimode fiber, twinax cable, Ethernet over backplanes, passive fiber connections (EPON), and DWDM systems.

There have also been multiple versions of power over Ethernet using twisted-pair cable. Some run over one pair, some over two, and some over the data pairs, while others use dedicated pairs for power.

There are also standards for negotiation among multiple of these speeds. There have been improvements to timestamping. There have been standards to bring newer speeds to fewer pairs or current speeds over longer distances.

There’s currently work on 1.6 Tbps links up to 30 or possibly 50 meters. There has been work in the past to use plastic optical fibers instead of glass ones. Oh, and there are standards specific to automotive Ethernet.

Ethernet itself, the name and the first implementation of a network with that name, were from 1972 and 1973. It was on the market in 1980 and first standardized in 1983 as ECMA-82.

Ethernet supports in its different configurations direct host-to-host connections, daisy chains, hubbed networks, switched networks, tunnels over routed protocols like TCP or UDP, bridges over technologies like MOCA or WiFi, and even being tunneled across the open Internet.

All of these are Ethernet. They have a common lineage. They are all derived from the same origin. Token Ring, FDDI, ATM, and SONET have all been more than one thing over time too. So has WiFi. 802.11a is very little like 802.11be, but those are also similar enough to carry the same family name.

The IEEE 802.3 series has a lot of history buried in those documents.


Politicians don’t have to be dumb.


Reading this again, did you forget your trailing /s?


Cost. You are about to make computers 10-20% more expensive.

Computers also aren't used much these days, and phones and tablets don't have ECC.


ECC adds only 10-15% to the transistor count, so you're only making one component of the computer ~15% more expensive. This should have been a no-brainer, at least before the recent DRAM price hikes.

Also, while computers may not be used enough for cosmic rays to be a major risk factor, they're still susceptible to rowhammer-style attacks, which ECC memory makes much harder.

Finally, if you account for the current performance loss due to rowhammer counter-measures, the extra cost of ECC memory is partially offset.
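For a rough sense of where that 10-15% figure comes from: a standard ECC DIMM stores 72 bits per 64-bit word, because a SECDED (single-error-correct, double-error-detect) Hamming code over k data bits needs r check bits with 2^r >= k + r + 1, plus one extra parity bit. A small sketch of that bound (illustrative only, not how DRAM controllers are actually built):

```python
# Sketch: check-bit overhead of a SECDED Hamming code over k data bits.
# A Hamming code needs r check bits satisfying 2**r >= k + r + 1;
# SECDED adds one more overall parity bit for double-error detection.
def secded_check_bits(k: int) -> int:
    r = 0
    while 2 ** r < k + r + 1:
        r += 1
    return r + 1  # +1 overall parity bit for SECDED

for k in (8, 16, 32, 64, 128):
    r = secded_check_bits(k)
    print(f"{k:3d} data bits -> {r} check bits ({100 * r / k:.1f}% overhead)")
```

For a 64-bit word this gives 8 check bits, i.e. 12.5% extra storage, which is exactly the 72-bit organization of ECC DIMMs; wider words would amortize the overhead even further.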


Thanks for the details. I agree and had the same experience trying to figure out whether an AMD motherboard supports ECC or not. It is almost impossible to know without trying it. At least we have ZFS now for parity checks on cold storage.

