Hacker News

> Not sure why you would want to choose one particular implementation and elevate it to a higher status.

I don't know, maybe for interoperable, type-safe types to use across an ecosystem of libraries without wasting CPU cycles converting among them?



To make it "safe" as in "protect against out-of-bounds accesses", slices would be enough. My strong opinion is that "data shape" concerns should be separate from "storage allocation" concerns as far as possible.

This is especially true for the "use across libraries without wasting time on conversions" part. I've said it many times: plain C interfaces (pointer + length, or slices if you insist, though I don't like them because they are a less normalized representation) are the best way to design interfaces for interoperability. No need for any pointless conversion; just tell the API where your data is located. The physical fact needed for communication is the memory (address + length): it is the necessary and sufficient information to carry out the task.

Yes, nowadays "safety" is not just about out-of-bounds accesses: people expect the system to also protect against resource leaks, double-frees, use-after-free, and race conditions. But even when the goal is to machine-check this by introducing a system that requires thinking at the small scale, in isolated mini-units ("classes"/"types"), is there a point in locking in on a specific implementation of dynamic arrays? (Not a rhetorical question.)


To loop back to the original point: Neat arrays are pointer + length + base. The base is necessary for refcounting, but it also allows managing capacity, i.e. appending to slices. D gets away with pointer + length because it can ask the GC for the capacity.


So Oracle, Apple, ARM, Google, and Microsoft (Intel botched their design) are investing piles of money into moving the industry to hardware memory tagging for nothing?

Maybe we should tell them to stop if they are so good.


"Oracle, Apple, ARM, Google, and Microsoft" are actually a LOT of programmers and non-programmers with a huge variety of opininons, and I'm sure opinions similar to mine can be found there as well.

Also, they have loads and loads of money and their jobs come with prestige, so they have no problem attracting developers to jobs that some programmers (such as me) perceive as boring boilerplate work that makes them miserable.

That answer was more related to the dynamic arrays discussion. If you want to move to hardware memory tagging, is that even a big thing? In any case my understanding is that it would work with pointer + length just as well, because the hardware tags are created at buffer allocation time, not based on arguments passed to a function.


Of course it works with pointer + length. The whole point of hardware memory tagging is that 50 years of examples have proven that leaving C developers to manually verify that a pointer + length pair is valid just doesn't work, regardless of the story being sold.

So lots of money is being burned to ensure that C code is caged and does no harm, in scenarios where C is to still be used.


Lots of money is being burned to keep the platforms alive that still power the entire internet for strange reasons? Sure...


Nice way to avoid the whole pointer + length issue.



