Denvercoder9's comments

It's not.

I can't reproduce the exact datapoints from the site using `git grep`, but most of it seems to come down to a single commit that removed repeated usage of "fuck" from one file: https://github.com/torvalds/linux/commit/a44d924c81d43ddffc9...


I'm getting the feeling someone has strong feelings about IOC3. From a quick bit of googling, that appears to be a Silicon Graphics ASIC.


why did you have to ruin my hopes and dreams with facts and research? /s

(that was actually interesting to scan through, dunno why but i found the `[it] is fucking fucked` -> `[it] is broken` lines hilarious)


It's actually because someone with "crap" as a substring of their e-mail address made a bunch of contributions with their e-mail address in it (e.g. in maintainer records and copyright notices) around that time. Nothing to do with COVID-19.

See the graph with the entire domain for comparison: https://www.vidarholen.net/contents/wordcount/#crapouillou


incredible!

I guess the lesson here is to never take a chart at face value. :)


> they're making a final movie.

They're not making a movie; the entire series is a prequel to the 2016 movie Rogue One.


Well, in almost 40 years, the entire timeline has been shaken up several times, and films and series have been inserted again and again.

In any case, I would be careful with such absolute statements.


Perhaps your cautionary admonishments are unwarranted in this specific scenario.

It is logical enough to conclude that a story ending in two intelligence agents flying off for a time-sensitive meeting with a confidential informant is an immediate prequel to the story that begins with the same two intelligence agents landing and meeting that confidential informant.

This is not quite the same situation as the end of Rogue One and A New Hope, where some people make the argument that Rogue One ends just a few minutes before ANH begins; I am not convinced by that argument, although the cinematography certainly seems to be leading us there.


>>> This is not quite the same situation as the end of Rogue One and A New Hope, where some people make the argument that Rogue One ends just a few minutes before ANH begins; I am not convinced by that argument, although the cinematography certainly seems to be leading us there.

The ending scene of RO is the data handoff and narrow escape of the Tantive IV with Leia, R2-D2, and C-3PO on it.

How is that not a direct continuity into the opening scene of A New Hope?


Unless there was some sort of tractor beam, the Tantive IV did, in fact, escape, and may have been able to jump to light speed. In such an event, any eventual recapture by the Star Destroyer and battle with Vader's boarding team would have looked exactly the same as the escape sequence. There's nothing definitively saying "and they were recaptured within a few minutes of their initial escape."


The C++ standard doesn't forbid introducing side channels, so the answer to the question is yes.


With all the UB, I wonder how we managed to write any secure or safety-critical code at all.


this isn't UB, and any other language can do this optimization as well

even the one you cult over


In C++? We pretty much did not.


It depends entirely on the context. For routes where total travel time is mostly governed by moving time, and the stationary time in stops is negligible, the capacity boost from double-deckers easily outweighs the longer (un)loading times. The alternatives to increase capacity can also be problematic: with longer trains you start running out of platform length (and long platforms add walking time); while running more trains closer together requires more personnel and rolling stock, and is limited by signaling block size and braking distance.
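As a back-of-the-envelope illustration (all numbers are hypothetical, chosen only to show the proportions): when dwell time is a small fraction of total travel time, a longer dwell barely moves the trip time, while the capacity gain is large.

```python
def trip_time(moving_s, stops, dwell_s):
    """Total route time: time in motion plus stationary time at stops."""
    return moving_s + stops * dwell_s

# Hypothetical route: 30 min in motion, 5 intermediate stops.
single_deck = trip_time(moving_s=1800, stops=5, dwell_s=40)  # quicker (un)loading
double_deck = trip_time(moving_s=1800, stops=5, dwell_s=60)  # slower (un)loading

print(single_deck, double_deck)  # 2000 2100 -> the trip is only ~5% longer
print(840 / 600)                 # 1.4 -> but capacity is ~40% higher
```

The ratio flips on routes with many closely spaced stops, where dwell time dominates and the longer (un)loading hurts more than the extra capacity helps.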


Trains can run fully automated today, and if you are running into capacity issues, they should be. You may still need more personnel, but it is a different type of personnel, and full automation gives enough other advantages to be worth it.

If the size of your blocks is an issue, then that is a problem worth solving. If you can't fit in all those trains, then you need to build more track, not try to compromise. Yes, track is expensive, but if you can't fit all the trains, then the passenger volume is high enough to support it. This likely requires better operations, though, and some people see the loss of their direct train and don't see how a fast (fast is critical!) transfer is overall better for them.


> Trains can run fully automated today

That might be the case in very controlled environments such as a subway network, but in other, more heterogeneous environments GoA 4 is not there yet.


> why would they have their system language set to X if they speak Y? If they want Y, they should just set their system language to Y!

If only they respected my system language. All my language settings are set to English, yet I routinely get autotranslated crap to my native language.


It was more of an example of how they pick up on _some_ signal about a user's language preference and then arrogantly assume they're correct in their decision, and that it's the user's fault if they assumed wrong.


> Proxmox adds very little overhead.

It's still running a second kernel and entire userspace stack. In my world that's not "very little overhead".


Using Proxmox with LXC containers, there is no second kernel. It uses the host kernel’s native cgroups and namespaces for process isolation. You can actually achieve the same with just systemd and namespaces.

Having said that, I think if you prefer traditional distro packaging, you should absolutely stick to that.


Proxmox supports both VMs and LXC containers. You would use LXC containers for low overhead. No second kernel.


Proxmox supports it, but it's not what the linked script does, nor is it officially supported by HAOS.


Yep!

I'm aware of the tradeoffs here. For Home Assistant specifically, there are two options if you want to stay on the path of first-class support: run it bare metal or in a VM.

Going a different path isn't a bad choice, or even a big downgrade.

I had fun with all the different ways of running home assistant 6+ years ago, and then decided to embrace a solution that required the least fuss and would hold up long term. I'm happy with my choice, and it gave me exactly what I was expecting.


> What I wanna know is if I could have a plug-in box that detects the frequency is drifting

Yes, it's not that hard. There are smart meters and plugs that have frequency measurement built in.

You can even do it with an audio cable: https://halcy.de/blog/2025/02/09/measuring-power-network-fre...
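The audio-cable trick boils down to counting zero crossings of the mains hum. A minimal sketch of the idea (a synthetic 50.02 Hz sine stands in for the sound-card capture; NumPy assumed):

```python
import numpy as np

def estimate_frequency(samples, sample_rate):
    """Estimate the dominant frequency by averaging the spacing
    of rising zero crossings (negative -> non-negative)."""
    negative = np.signbit(samples)
    crossings = np.flatnonzero(negative[:-1] & ~negative[1:])
    if len(crossings) < 2:
        return 0.0
    avg_period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return sample_rate / avg_period

# Synthetic "mains hum": 50.02 Hz sine, 2 seconds at 48 kHz.
rate = 48_000
t = np.arange(2 * rate) / rate
hum = np.sin(2 * np.pi * 50.02 * t)
print(round(estimate_frequency(hum, rate), 2))  # ~50.02
```

A real capture would need a band-pass filter around 50 Hz first; the hum picked up over an audio cable is far from a clean sine.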


Old school electric clocks used to (still?) keep time with the grid frequency. Guaranteed to always have N cycles per day. So if things are gradually failing, you might be able to see your clock keeping worse time.


That's not going to work, because the clock will mainly be showing phase drift.

Having the grid operate at 49.99Hz instead of a perfect 50.00Hz for a day will make your clock lose 17 seconds, but it's completely harmless. That's normal regulation, not a gradual failure. The grid chooses to compensate for that by running at 50.01Hz for a day, but that's solely for the benefit of those people with old-school clocks - the grid itself couldn't care less.

A failure means the frequency drops from 50.23Hz to 48.03Hz, probably within a single second. You'd notice as your clock stops ticking due to the resulting power outage.
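The 17-seconds figure is just proportional arithmetic; a quick sketch (function name is mine):

```python
def clock_drift_seconds(actual_hz, nominal_hz=50.0, duration_s=86_400):
    """Seconds gained (+) or lost (-) by a mains-synchronous clock that
    counts cycles, after duration_s of real time at actual_hz."""
    return (actual_hz - nominal_hz) / nominal_hz * duration_s

print(round(clock_drift_seconds(49.99), 1))  # -17.3: a day at 49.99 Hz
print(round(clock_drift_seconds(50.01), 1))  # +17.3: a day of catch-up
```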


> They could live with an older version of GCC for a year.

That's just not what Fedora is, though. Being on the bleeding edge is foundational to Fedora, even if it's sometimes inconvenient. If you want battle-tested and stable, don't run Fedora, but use Debian or something.


Bleeding-edge is fine, but shipping a beta C compiler seems a bridge too far. Even Arch does not ship GCC 15 yet.


> If you have a home machine doing network routing it would absolutely benefit from this.

It most likely won't. This patch set only affects applications that enable epoll busy poll using the EPIOCSPARAMS ioctl. It's a very specialized option that's not commonly used by applications. Furthermore, network routing in Linux happens in the kernel, not in user space, so this patch set doesn't apply to it at all.


NAPI is not busy poll, though, and the way the article is worded suggests it's about NAPI.

Now, NAPI was already supposed to have some adaptiveness involved, so I guess it's possibly a matter of optimizing it.

But my system is compiling for now so will look at article more in depth later :V


The article is terrible; this doesn't affect the default NAPI behaviour. See the LWN link posted elsewhere for a more detailed, technical discussion. From the patch set itself:

> If this [new] parameter is set to a non-zero value and a user application has enabled preferred busy poll on a busy poll context (via the EPIOCSPARAMS ioctl introduced in commit 18e2bf0edf4d ("eventpoll: Add epoll ioctl for epoll_params")), then application calls to epoll_wait for that context will cause device IRQs and softirq processing to be suspended as long as epoll_wait successfully retrieves data from the NAPI. Each time data is retrieved, the irq_suspend_timeout is deferred.

