
One thing I don't understand from watching the video is what happens in the (very rare) case that you get collisions all the way down the funnel. I assume this is related to the "One special final level to catch a few keys" (around 14:41 in the video), but given that it has to be fixed size, this can also get full. What do you do in that case?


The dereference table allows allocations to fail:

https://arxiv.org/pdf/2501.02305#:~:text=If%20both%20buckets...

(the text fragment doesn't seem to work in a PDF; it's the 12th page, first paragraph)


Thanks! So I guess the best recourse then is to resize the table? Seems like it should be part of the analysis, even if it's low probability of it happening. I haven't read the paper, though, so no strong opinion here...

(By the way, the text fragment does work somewhat in Firefox. Not on the first load, but if you load it, then focus the URL field and press enter)


Yeah, I presume so. At least that's what Swiss Tables do. The paper is focused more on the asymptotics than on real-world hardware performance, so I can see why they chose not to handle such edge cases.


This bothered me too. Reading it, the sample implementations I've found so far just bail out. I thought one of the benefits of hash tables was that they don't have a predefined size?


The hash tables a programmer interacts with generally do have a fixed size, but resize on demand. The idea of a fixed size is very much part of the open addressing style of hash table -- how else could they even talk about how full a hash table is?
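A minimal sketch of that pattern (the names are my own, not from the paper): a fixed-size open-addressing table with linear probing that, when a load-factor threshold is crossed, allocates a larger fixed-size array and re-inserts every key.

```python
class OpenAddressingSet:
    """Fixed-size open-addressing hash set that resizes on demand."""

    def __init__(self, capacity: int = 8):
        self.slots = [None] * capacity
        self.count = 0

    def _probe(self, key, slots):
        # Linear probing: scan from the hash position until we find
        # an empty slot or the key itself.
        i = hash(key) % len(slots)
        while slots[i] is not None and slots[i] != key:
            i = (i + 1) % len(slots)
        return i

    def add(self, key):
        # "How full is the table?" only makes sense because the backing
        # array has a fixed size at any given moment.
        if self.count + 1 > 0.7 * len(self.slots):
            bigger = [None] * (2 * len(self.slots))
            for k in self.slots:
                if k is not None:
                    bigger[self._probe(k, bigger)] = k
            self.slots = bigger
        i = self._probe(key, self.slots)
        if self.slots[i] is None:
            self.slots[i] = key
            self.count += 1

    def __contains__(self, key):
        return self.slots[self._probe(key, self.slots)] == key
```

The alternative the sample implementations apparently chose is simply to raise an error at the load-factor check instead of reallocating.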


The UX on macOS is so bad here. First, a notification prompts you to enable Apple Intelligence. When you dismiss the notification by clicking the "x" in the corner, it instead opens the system settings and proceeds to download something (?) before showing you a checkbox where you can enable/disable it. It feels quite forced.


Desperate product owners resort to desperate measures to juice metrics.


My company is comparatively tiny (~150ish employees), and even I've heard various PMs unironically say things like "Let's enable it by default for everyone, because we have those KPIs to hit!"

I've fought back against so much BS like this, but it's just endless and I can't win'em all. Who cares about good UX, not nagging our (paying) customers incessantly about stupid features nobody has ever asked for while the core product languishes and 80% of our customer feedback is "Please make the platform more stable"? All that matters is AI, and that EVERYONE is forced into using AI so our CEO can say in a slidedeck that we've gained X% usage of our shiny new AI thing (that everyone subsequently disables as soon as they can).

It's a fucking joke honestly, this whole industry is a complete farce.


I don't know -- I don't think that there's a particular social contract (much less a legal one) between companies and users that the offering they provide today will be unchanged forever.

I don't mean to defend the dark pattern in this particular case, I'm responding to you saying "this whole industry is a complete farce". If a company decides that The Way to use their product needs to be nudged in a different direction, they can. (Almost) nobody complained when macs started shipping with Rosetta [0] installed.

I'm nowhere near as confident that Apple Intelligence is worth betting the goodwill of users on as I was about apple silicon + rosetta for intel binaries, but it's Apple's bet to make.

[0] okay, a stub launcher for intel binaries that made it super quick and easy to get Rosetta installed


It's really pathetic. Reminds me of when phone manufacturers make a hardware button stop doing what people are used to and make it do Feature-Of-The-Month. Sorry, your power button now turns on FeatureX instead of toggling power. All so a fraction of a percent of users accidentally and unintentionally invoke unwanted features.


Screams in Bixby

Looks at the TV remote with its "Netflix" and "Apple TV" buttons

Screams even louder


My Roku remote still has an Rdio button. Makes me sad each time I notice it.


Streaming device makers make a significant share of their gross margins from sales of those buttons.

(That and an Apple-like mandatory revenue share on the channels that users choose / install.)


Before the 2010s, software didn't feel like it was different features competing for attention. Is "product owner" a new invention, or what else happened?


It's not just "product owners." When you're one of 100 teams in BigCorp, your team might own Feature X, and another team owns Feature Y. If teams with more "successful" features grow faster, get more funding, get more compute time, get bigger, fatter org charts, then your whole team is incentivized to fight to make Feature X more prominent and elbow out Feature Y.

As an end user, when you start your device or application or web page, know that the features that are exposed in the first screen, and "above the fold" as they say, that premium placement was likely fought bitterly over, through epic corporate political battles and backstabbing. They're not there because research showed that users want them conveniently located.


Raymond Chen, 2006:

https://devblogs.microsoft.com/oldnewthing/20061101-03/?p=29...

> I often find myself saying, “I bet somebody got a really nice bonus for that feature.” “That feature” is something aggressively user-hostile, like forcing a shortcut into the Quick Launch bar or the Favorites menu, like automatically turning on a taskbar toolbar, like adding an icon to the notification area that conveys no useful information but merely adds to the clutter, or (my favorite) like adding an extra item to the desktop context menu that takes several seconds to initialize and gives the user the ability to change some obscure feature of their video card.

> The thing is, all of these bad features were probably justified by some manager somewhere because it’s the only way their feature would get noticed. They have to justify their salary by pushing all these stupid ideas in the user’s faces. “Hey, look at me! I’m so cool!” After all, when the boss asks, “So, what did you accomplish in the past six months,” a manager can’t say, “Um, a bunch of stuff you can’t see. It just works better.” They have to say, “Oh, check out this feature, and that icon, and this dialog box.” Even if it’s a stupid feature.

This bullshit has been with us since there have been desktop computers with notification areas.


Have you forgotten that Clippy used to knock on the glass if you ignored it?


The actual feature set is rather disappointing, too. I don’t want magical summaries of texts or notifications. I don’t want a poor implementation of an email categorization feature that’s years late to market. I do want a better Siri, but that means more actual capabilities to control things, especially when triggered from a watch. I don’t want slow, unreliable language models that still can’t get “call so-and-so on Bluetooth” right [0].

What I do want is privacy-preserving AI-assisted search, over my own data, when (and only when) I ask for it. And maybe other AI features, again when and only when I ask for it. Give me hints that I can ask for such assistance, but don’t shove the assistance in my face.

[0] Somewhere along the line this improved from complete fail to calling, with Bluetooth selected, but audio still routed to the watch until I touch the phone.


I agree with OP that it's unnecessarily confusing. A "method" is a procedure. The floating point number is the result of that procedure, not the procedure itself.

"Decimal" implies a base-ten system, even though it's perfectly fine to say "binary decimal".

Using your own replacement words, it would be clearer to write "A floating point number is a representation of a number with a fractional part".


Maybe it should've said "is a method of storing" instead of "for". It would make it clear it's not talking about a procedure, but a way or manner of doing something.


How much of luggage handling is the airline vs. the airport crew? I had assumed it was mostly the latter.


And also the whole airport luggage handling system. These are not simple anymore, but massive automated systems, with all the usual issues of dealing with real physical objects in addition to the identifiers associated with them.


> admittedly in spurts

I haven't watched his videos, but this seems like a sign of quality: making a video when you have a great idea vs. making a video every week or day just to feed the algorithm.


There's also an MPU in even simpler/cheaper MCUs. For instance, the ARM Cortex-M0+ sports an MPU, and this architecture is used in the STM32C0 ($0.24 in bulk) and the RP2040.

I have no idea how the landscape looks in general, though.


The vast majority of modern MCUs have enough memory protection for Tock. Anything Cortex-M0+ or "better" has an MPU; RISC-V's PMP or ePMP works as well. Most 16-bit "legacy" (though still popular) MCUs don't.

Virtually anything with a radio these days has one (the MSPs were holdouts, but mostly those are Cortex-M these days as well).


This is because of relativity and the fact that the gravitational potential is lower in the valley than on the mountain.

https://en.wikipedia.org/wiki/Time_dilation


What about the fact that you are moving faster when on top of a mountain than in a valley due to the rotation of the earth and being farther from the center of rotation?

Does this also have an effect on your relative time?


Yes, it makes the equipotential surfaces of Earth's gravitational field (the surfaces on which time ticks "at the same rate") ellipsoids instead of spheres. The "geoid", which is the standard such surface that defines UTC on Earth, is the equipotential surface that averages to the Earth's sea level, and is 13 miles further from Earth's center at the equator vs. the poles.
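For a rough sense of scale, here's a back-of-the-envelope sketch (weak-field approximations; the heights and radii are illustrative, not geodesy-grade). A clock h metres higher runs faster by roughly g·h/c², while moving at speed v slows it by roughly v²/(2c²):

```python
import math

c = 299_792_458.0  # speed of light, m/s
g = 9.81           # surface gravity, m/s^2

# Gravitational term for a 1 km mountain: faster by ~ g*h / c^2
h = 1000.0
grav_shift = g * h / c**2

# Velocity term from being 1 km further from Earth's rotation axis
omega = 2 * math.pi / 86164.0    # sidereal rotation rate, rad/s
r = 6_378_000.0                  # rough equatorial radius, m
v_low = omega * r
v_high = omega * (r + 1000.0)
vel_shift = (v_high**2 - v_low**2) / (2 * c**2)

print(f"gravitational speed-up: {grav_shift:.3e}")  # ~1.1e-13
print(f"velocity slow-down:     {vel_shift:.3e}")   # orders of magnitude smaller
```

So yes, the rotation does have an effect on relative time, but for realistic mountain heights the gravitational term dominates by a wide margin.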


I also have mixed results with using the iPad as an external monitor. It works every time, but it's orders of magnitude more fiddly than an actual monitor.

Exposing it as a monitor sounds like a much nicer interface, although I don't think Apple wants that, since it wouldn't work wirelessly.


And just to spell it out: fewer payouts mean fewer resources to spend on further operations. So I would absolutely think that the criminals care if there is an actual ban.


Except this assumes that the cost of running an operation is more than marginal.

The actual cost of launching an attack like this is basically nothing -- initial access, etc., is largely automated and performed at scale.

The "costly" part is the hands-on-keyboard part, but even that can be largely automated, and even done manually it doesn't take long.


Of course the cost of an operation is beyond marginal. The cost of maintaining a team capable of executing sophisticated ransomware attacks is far from trivial, especially since the operation is illegal: money needs to be laundered, and interpersonal tensions in cybercrime happen. Fewer payouts mean less money for the criminals, and that is absolutely a problem for them.

This is not a company where you automate people out of a job and the CEO gets all the profit. Organised crime groups share profits among themselves, and profit is by far the main motivator for all of them.


You're not competing against the hackers doing nothing, you're competing against them targeting some other country or just changing jobs. You don't have to get the payouts to $0, just low enough that it's not worth doing.

This would basically remove the prospect of million dollar payouts; it probably removes the prospect of payouts in the hundreds of thousands. Any company with the money to make those kinds of payouts is likely to have reporting requirements that make it very hard or impossible to hide.

Payments in the tens of thousands could maybe be hidden or targeted at small enough businesses that they don't have to report what happened to their money, but is it even worth it at that point? We're talking people with at least some level of technical ability; do they really want to piss off the FBI/NSA/European equivalents for tens of thousands of dollars? I sure wouldn't.


The up-and-coming GateMate seems interesting to me. They are leaning heavily on open-source tooling.

chip: https://colognechip.com/programmable-logic/gatemate/

board: https://www.olimex.com/Products/FPGA/GateMate/GateMateA1-EVB...

