
> Well, the configuration changes during takeoff mitigate the issue if it happens during takeoff. If it happens at any other time then they don’t do anything to help.

There are no birds at higher altitudes


Fewer perhaps, but not none. :)


Right? I remember there was a bird strike incident at 37,000 feet, a vulture iirc. Hard to imagine how they can get enough oxygen to fly up there.


Waiting for Sabine's comment: "Told you so, physics is dead"


"

And the 2024 Nobel Prize in Physics does not go to physics...

"

https://x.com/skdh/status/1843592351736050053



Could you copy and paste her post here? I'm Brazilian, and our STF overlords have decided that we shall not access Twitter anymore.


The other comment (sibling of the one you replied to) already quoted the entire tweet. (Yes it's short and snarky.)


It's just 6P + 8E + 2IO (ultra-efficient) cores, or fewer. Looks like it's primarily targeting laptops.


Sounds more like the targeting is "Apple Silicon is kicking our asses, and this is the best we could do"


Again, Intel's target market is very different.

They are using off-the-shelf cores that have to be good at everything from netbooks and industrial boxes to server workloads. Apple, meanwhile, is laser-targeting high-volume, premium, media-heavy, laptop-ish TDPs and workloads. And they can afford to burn a ton of money on die area and a bleeding-edge low-power process, and to target modest clock speeds, like no one else can.


this is such a weak argument. just because it's not in a laptop does not mean that a CPU should be accepted as being a horrible waste of electricity. making datacenters as efficient as laptops would not be a bad thing. i'm sure people operating at the scale of AWS and other cloud providers would be beyond happy to see their power bills drop for no loss in performance. i'm guessing their stockholders would be pleased as well.


Datacenters are actually exactly as efficient as laptops.

They consume more only because they do not sit idle the way laptops do.

The CPU cores in the biggest server CPUs consume only 2.5 W to 3 W per core at maximum load, which is similar to or less than what an Apple core consumes.

The big Apple cores are able to do more work per clock cycle, while having similar clock frequencies and power consumption to the server cores, but that is due almost entirely to using a newer manufacturing process (otherwise they would do more work while consuming proportionally more power).

The ability of the Apple CPU cores to do more work per clock cycle than anything else is very useful in laptops and smartphones, but it would be undesirable in server CPUs.

Server CPUs can do more total work per clock cycle by simply adding more cores. Increasing the work done per clock cycle in a single core, beyond a certain threshold, increases the area more than the performance, which reduces the number of cores that fit in a server CPU and thus the total performance per socket.

It is likely that the big Apple cores are too big for a server CPU, even if they are optimal for their intended purpose, so without the advantage of a superior manufacturing process they might be less appropriate there than cores like Neoverse N2 or Neoverse V2.

Obviously, Apple could have designed a core optimized for servers, but they have no reason to do such a thing, which is why the Nuvia team split off from them. Nuvia was not able to pursue that dream, though, and ended up designing mobile CPUs at Qualcomm again.


> i'm sure people operating at the scale of AWS and other cloud providers would be beyond happy to see their power bills drop for no loss in performance

- The datacenter CPUs are not as bad as you'd think, as they operate at a fairly low clock compared to the obscenely clocked desktop/laptop CPUs. Tons of their power is burnt on IO and stuff other than the cores.

- Hence operating more Apple-like "lower power" nodes instead of fewer higher clocked nodes comes with more overhead from each node, negating much of the power saving.

- But also, beyond that point... they do not care. They are maximizing TCO and node density, not power efficiency, in spite of what they may publicly say. This goes double for the datacenter GPUs, which operate in hilariously inefficient 600W power bands.


It's all tradeoffs. Desktop users are happy to take 20% more performance at 2x the power draw - and they get the fastest processors in existence (at single thread) as a result.

Data centres want whatever gets them the most compute per dollar spent - if a GPU costs 20k you bet they want it running at max power, but if it's a 1k CPU then suddenly efficiency is more important.

It's all tradeoffs to get what you want.


Data center CPUs are already optimized for power, and they have huge die areas and cost a lot, just like Apple Silicon.


For interop between runtimes, they need to add `std::async` IO traits that could be implemented by each runtime.
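
For illustration, here's a rough sketch of what such runtime-agnostic IO traits might look like. Nothing like this exists in `std` today; the trait names and shapes below are assumptions modelled on the poll-based traits in `futures-io` / `tokio::io`:

```rust
// Hypothetical sketch only: these traits are NOT in std; the shape mirrors
// the existing poll-based futures-io / tokio::io traits.
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};

pub trait AsyncRead {
    /// Try to read into `buf`; if no data is ready, register the waker
    /// from `cx` and return `Poll::Pending`.
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<io::Result<usize>>;
}

pub trait AsyncWrite {
    /// Try to write `buf`, returning how many bytes were accepted.
    fn poll_write(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &[u8],
    ) -> Poll<io::Result<usize>>;

    /// Flush any buffered data to the underlying IO object.
    fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>>;
}
```

Each runtime (Tokio, smol, etc.) would implement these for its own socket and file types, so libraries could be written against the traits rather than against a specific runtime.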


And APIs for timers!
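
Same idea for timers - a minimal sketch of a hypothetical runtime-agnostic timer trait (the name and shape are made up, not an existing API):

```rust
// Hypothetical sketch only: no such trait exists in std today.
use std::future::Future;
use std::time::Duration;

pub trait AsyncTimer {
    type Sleep: Future<Output = ()> + Send;

    /// Return a future that completes after `dur`; each runtime would
    /// back this with its own timer implementation.
    fn sleep(&self, dur: Duration) -> Self::Sleep;
}
```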


Why not switch to a bird that was not affected at all, rather than one that merely acknowledged and fixed it? Isn't "safe by default" better than "fixed after the flaw was pointed out"?


These are emulated `r1000` devices, not pass-through


I was using the same technique back in the day when I was maintaining my Pentax MXs, but with a fluorescent lamp as the source - they flicker with the AC power cycle, 50 or 60 Hz depending on your country.


The cost of the CPU is a small part of a server's TCO. Graviton instances could be cheaper because the platform is cheaper, uses less power, and needs less cooling - I think we know from Apple Silicon that ARM chips can have these advantages.

Disclaimer: I work for AWS but I don't have any internal knowledge about Graviton pricing and non-public performance data.


I am a Linux user, and I gave up on Thunderbird due to terrible Exchange support - now I use Evolution. It has its own problems, but my calendar is synced.

P.S. Not my choice to use Exchange on the server side.


Have you tried Davmail? There's even a container image for it which worked pretty well for me a few years ago.


It actually does matter. If each of your tasks uses 10 KiB of stack for execution and you have 1,000 tasks, with threads that will consume 12 MiB just for the stacks (10 KiB + 2 KiB per task due to page alignment). With an async runtime like Tokio, the stack is unwound at each suspend point, so with 16 worker threads the stacks consume only 192 KiB.
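
For concreteness, a minimal sketch of the two approaches, assuming the `tokio` crate with the `rt-multi-thread` and `time` features; the 12 KiB stack size and the 1,000/16 counts just mirror the figures above:

```rust
// Minimal sketch, not a benchmark: 1,000 OS threads with small stacks
// versus 1,000 Tokio tasks multiplexed over 16 worker threads.
use std::thread;
use std::time::Duration;

fn spawn_threads(n: usize) {
    // Each OS thread reserves its own stack (~12 KiB here, so ~12 MiB for n = 1000).
    let handles: Vec<_> = (0..n)
        .map(|_| {
            thread::Builder::new()
                .stack_size(12 * 1024)
                .spawn(|| thread::sleep(Duration::from_secs(1)))
                .unwrap()
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}

async fn spawn_tasks(n: usize) {
    // Suspended Tokio tasks keep their state in heap-allocated futures;
    // only the worker threads own real stacks.
    let handles: Vec<_> = (0..n)
        .map(|_| tokio::spawn(tokio::time::sleep(Duration::from_secs(1))))
        .collect();
    for h in handles {
        h.await.unwrap();
    }
}

fn main() {
    spawn_threads(1000);
    tokio::runtime::Builder::new_multi_thread()
        .worker_threads(16)
        .enable_time()
        .build()
        .unwrap()
        .block_on(spawn_tasks(1000));
}
```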


This is true. After I wrote my initial reply, I did some measurements. After spawning 4096 threads on my M1 Mac, I saw 80 MB of reported memory usage. The equivalent Tokio tasks, after spawning 100,000 of them, reached only 100 MB of memory - roughly 20x less memory per task.

One common claim I see a lot, though, is that each thread uses 1 MB (or more) in stack space alone. That just isn't true with modern memory paging: the stack is only reserved as virtual address space, and physical pages are committed as they are touched.



