
Knowing the history of Apple and open standards, and given their success with their implementation of the ARM64 ISA, it is unfortunately highly probable that they will follow the proprietary route once again.

Indeed, they are already doing it; we're lucky they weren't in a dominant position when TCP/IP or HTML were invented.




There isn't any standard matrix multiplication instruction set, so there's nothing to standardize over. Machine-learning-driven instruction sets like this (matrix multiply instructions are largely motivated by ML, though not exclusive to it) have generally been bespoke because the field moves quickly relative to hardware. Every vendor follows some basic principles, but the specifics depend on the workloads and models they expect, e.g. which quantization formats they support (a pattern sketched in code after the footnote) or how they expect to split models across accelerators.

And ARM does not allow public proprietary instruction set extensions to ARM cores; one of their defining architectural features is that licensees literally are not allowed to do this.[1] The only reason Apple was allowed to do so in this case is likely that 1) they negotiated it as part of their AAL (probably for a lot of money) and 2) they do not publicly document or commit to this feature in any way. It could get deleted or disabled in silicon tomorrow and Apple would be able to handle that easily, and in every other visible way they have a normal ARM64-compliant CPU core (there is the custom GIC and performance counters and some other stuff, but none of those violate the architectural license; they're just IP Apple chose to work on themselves).

So actually the thing you're complaining about is prevented by ARM themselves; Apple cannot publicly commit to features that would fragment the architecture. They don't have to do everything identically, though.

[1] They have publicly said they will allow some future Cortex cores to contain custom instructions, but it is quite clearly something they're still very much in control of; you won't get a blank check, especially considering almost all ARM licensees use pre-canned CPU cores and IP. You'll probably have to pay them for the extra design work. As far as I'm aware, there are no desktop/server-class CPUs that fit this profile on the current ARM roadmap, nor in any taped-out processor.
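
To make the quantization point above concrete, here's a minimal sketch (my own illustration in plain C, nothing vendor-specific) of the int8-multiply-with-int32-accumulate pattern that ML-oriented instructions such as Arm's SDOT/UDOT, and the matrix extensions built on the same idea, hard-wire. The choice of input and accumulator widths is exactly the kind of workload-dependent decision each vendor makes differently.

    // Dot product of two int8 (quantized) vectors, accumulating into int32 so
    // the partial sums can't overflow -- the core operation behind instructions
    // like Arm's SDOT/UDOT and most vendor matrix/ML extensions.
    #include <stdint.h>
    #include <stdio.h>

    static int32_t dot_s8(const int8_t *a, const int8_t *b, int n) {
        int32_t acc = 0;
        for (int i = 0; i < n; i++) {
            acc += (int32_t)a[i] * (int32_t)b[i];  // widen before multiplying
        }
        return acc;
    }

    int main(void) {
        int8_t a[4] = {127, -128, 3, 4};
        int8_t b[4] = {2, 2, 2, 2};
        printf("%d\n", dot_s8(a, b, 4));  // 254 - 256 + 6 + 8 = 12
        return 0;
    }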


> There isn't any standard matrix multiplication instruction set

The Scalable Matrix Extension supplement was released last year. Though obviously AMX predates it, having shipped in actual silicon 3 years ago.


In addition to being too new, the Scalable Matrix Extension is for Armv9, while the M1 and M2 implement Armv8.


> The only reason Apple was allowed to do so in this case is likely 1) They negotiated it as part of their AAL (probably for a lot of money)

Apple fronted the cash that created ARM Holdings in the first place, so yes, they invested quite a lot of money (well, relative to the other senior partners Acorn and VLSI and later investors), and ARM was hardly in a position to tell them "no".


History will tell, but I have a bad feeling about "Apple Silicon".

They would not use that naming if they intended to support the official ARM ISA in the long run.

The only thing that would prevent them from going the proprietary route is if they can't.


> They would not use that naming if they intended to support the official ARM ISA in the long run.

Given Apple's marketing priorities, my guess is that the intent you speak of had zero weight in their naming decisions either way. They have no interest in raising the profile of ARM chips in general, and every interest in promoting their specific chips as amazing.


Apple Silicon is no different from Qualcomm Snapdragon or Samsung Exynos.


Does Apple license theirs to other platforms?


I was referring to branding. To clarify my point, I believe having branding separate from Arm’s does not substantially indicate a desire to move away from Arm.


No, and given ARM's hostility to Qualcomm's acquisition of Nuvia, they probably would pitch a fit if Apple started selling silicon to third parties.


I suspect, given Apple's pivotal role in founding ARM Holdings, that they have as close to carte blanche with respect to the ARM IP as one could imagine.


Huh, I hadn't realised Apple had been one of the investors when ARM Holdings was spun out of Acorn Computers. It seems their interest was in the Newton using ARM chips.


> "we're lucky they weren't in a dominant position when TCP/IP or HTML were invented"

TCP/IP and HTML became dominant because they were open standards. Should they have been proprietary, they would have floundered and something else would have emerged instead.


I guess people have forgotten that WebKit came out of KHTML from the KDE team, and that Apple was a nightmare when it came to contributing code back. They just released a huge dump of the whole thing.


I remember this... it was just a fork. Projects get forked. It's unfortunate from some perspectives, but from other perspectives you can understand why forks happen.

When you have a long-running fork, especially one that is so active, merging it naturally becomes a nightmare. This is expected and ordinary.

The Linux kernel gets forked by Android vendors and others all the time. A lot of the changes never make it upstream, for various reasons. At least the story ends a bit better for KHTML / WebKit.


Every single Apple patch to GitHub projects is done by the same single indistinguishable user account. This isn't just "some long-running fork". It is Apple culture to actively prohibit contributions to open source projects unless 5 managers sign off on it.


They really don't like people "poaching" their employees/wildlife. Remember they illegally collided to stop other large tech firms with Apple board members from cross recruiting.


No, it’s usually just overzealous lawyers. Remember that Apple is still a pre-dot-com company and — like Microsoft — retains vestiges of those attitudes.


I think that's collusion not collision :) But probably a case of "Damn You AutoCorrect" :)


I don’t get this criticism; this is a case where open source _worked_. People complained about it. Apple cleaned up their act on it a bit, but still maintained WebKit as a fork. And Google forked WebKit when they decided they didn’t want to play in Apple’s sandbox anymore. This is how it’s actually supposed to work. It gets messy sometimes because humans are involved.


This isn't quite the same thing. Apple are really terrible at cultivating open source - more obvious than KHTML is the fiasco that resulted from Apple's half-hearted efforts to kindle a community around Darwin - but my impression is that they have been decent enough with the kind of openness that standards processes need.


Were they ever really interested in Darwin being a thing? I've been following the Mac closely since OS X and I've never seen it get any limelight.


At Apple's executive level, it doesn't look like anyone ever really cared. But people were hired, like Jordan Hubbard, who were supposed to liaise with the community, and it's clear both that a nontrivial number of Apple developers were optimistic about the prospects for a healthy Darwin community and that many Apple users found Apple's choices in the early years, when Darwin was open sourced and then partly closed again, very disappointing.


I was disappointed in how the Dylan project ended up.


They don't even remember how vertically integrated they were before OS X came to be; how would they remember that?


> TCP/IP and HTML became dominant because they were open standards. Should they have been proprietary, they would have floundered and something else would have emerged instead.

Anyone remember AppleTalk?

https://en.wikipedia.org/wiki/AppleTalk


ActiveX would like to have a word with you.


ActiveX was never really 'dominant'. It was always a duopoly with Java (and in many use cases also with Flash!), and a pretty niche one at that (corporate software and crappy webcams).



DECnet, Token Ring, Novell, X.25, et cetera, would like to have a word with you.


Yes, that attempt to EEE the web was thwarted thankfully.


Right. If the government hadn't stepped in to encourage Microsoft to play nice, we might live in a world where "the web" simply means Internet Explorer.

I'm beyond the point of negotiating with the people on this website. Apple is due for exactly the same treatment; it's only a matter of time before the US eats their favorite crow.


> If the government hadn't stepped in to encourage Microsoft to play nice, we might live in a world where "the web" simply means Internet Explorer.

I kinda doubt that. As soon as Microsoft had a virtual monopoly on the browser market, they let IE go stale for years. Hardly any feature development, hardly any bug squashing. Terrible security. By the time the browser choice thing in the EU and the antitrust thing in the US happened, the rot had already set in and everyone was fed up and yearning for a browser that didn't suck. Google drove their Chrome truck right into that gap.

If IE had actually been a decent browser, no amount of "choose your browser" screens would have been enough to sway people from it. Just like they cling to Chrome now because Google is too smart to make that mistake.

PS FWIW I don't like and hardly use Chrome, but technically as a browser it's great; I just don't like Google's attitude to privacy.


Instead we live in a world where the web means Chrome. How wonderful!


They usurped the minds of a generation with free email and YouTube.


Look, as someone who was rooting for MS to lose big time back then, I would have been happy for that to be true. But MS lost through hubris, not the antitrust settlement.


If it wasn't clear from my reply, I agree with you.

But as an aside, don't let this site get to you too much. There's a lot of arguing for sport that goes on here.


Pyrrhic victory; the modern Web simply means ChromeOS rebranded.


WebAssembly....


WebAssembly is an open standard and I don't really see how it's any bigger a problem than asm.js or obfuscated JS already was.


Agreed! Would be nice to reverse the tide on that stuff though. C'est la vie.


Apple AMX is a private extension. It is not documented and not exposed to the developer. You have to use system-provided HPC and ML libraries to take advantage of the AMX units. This gives Apple the freedom to iterate and change implementation details at any time without breaking backwards compatibility.
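
To make the "use the system libraries" point concrete, here is a minimal sketch (my own illustration, not anything from Apple's documentation): calling the standard BLAS sgemm through the Accelerate framework, which is the supported path on Apple platforms. Whether a given call is actually routed to the AMX units is an undocumented implementation detail that Apple is free to change.

    // Build on macOS with: clang sgemm_demo.c -framework Accelerate -o sgemm_demo
    // Computes C = A * B for small row-major float matrices via Accelerate's
    // CBLAS interface; Apple may dispatch this to the AMX units internally.
    #include <Accelerate/Accelerate.h>
    #include <stdio.h>

    int main(void) {
        float A[2 * 3] = {1, 2, 3,
                          4, 5, 6};          // 2x3
        float B[3 * 2] = {7, 8,
                          9, 10,
                          11, 12};           // 3x2
        float C[2 * 2] = {0};                // 2x2 result

        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 3,                 // M, N, K
                    1.0f, A, 3,              // alpha, A, lda
                    B, 2,                    // B, ldb
                    0.0f, C, 2);             // beta, C, ldc

        printf("%.0f %.0f\n%.0f %.0f\n", C[0], C[1], C[2], C[3]);  // 58 64 / 139 154
        return 0;
    }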

I am sure they will support open standards in time as they mature, but for now there is little advantage in doing so. Not to mention that open standards are far from being a universal panacea. Remember Apple's last serious involvement in open standards - OpenCL - which was promptly killed by Nvidia. Apple has since learned their lesson and focuses on their own needs and technology stack first.


https://en.wikipedia.org/wiki/Thunderbolt_(interface)

What I liked about this is that Apple realized it made more sense for someone other than them to develop and supply the tech for something like Thunderbolt, and co-engineered it with Intel so that Intel would be the one to do it (and eventually open it, royalty free).


Well, they did create OpenCL.

But yeah it's ages ago and they've kept everything proprietary since then :(


I think OpenCL was a pivotal moment. Apple created the draft and donated it to Khronos, where it subsequently stagnated before Nvidia effectively sabotaged the effort in order to push its own proprietary CUDA. Since then Apple has been focusing on their own hardware and ecosystem needs. I think the advantage of this is well illustrated by Metal, which grew from being an awkward, limited DX9 copy into a fully featured and very flexible yet still consistent and compact GPU API. Sometimes it pays not having to cater to the lowest common denominator.

In recent years I have become more and more convinced that open standards are not a panacea. It depends on the domain. For cutting-edge specialized compute, open standards may even be detrimental. I'd rather have vendor-specific low-level libraries that fully expose the hardware capabilities, with open-standard APIs implemented on top of them.


AMD and Intel were the ones that sabotaged the effort by never providing the same level of tooling and libraries.

Don't blame Nvidia for AMD's and Intel's incompetence.

Ah, and in the mobile space, Google never supported OpenCL, instead coming up with their C99-based RenderScript dialect.


Re: the "lucky" remark: they certainly tried! Remember AppleTalk and HyperCard?


This is seriously ahistorical revisionism. There was no universal networking standard in the 1980s, and there certainly wasn't anything universally suited for networking on microcomputers, especially not with the zero-configuration usability that Apple wanted. Remember that TCP/IP was largely a plaything for academics until the early 1990s, and Apple had to create zero-configuration networking and multicast DNS before they could consider deprecating AppleTalk for use on small LANs.


I think that if Apple had been as powerful as they are now back then, they would have pushed "their" tech more aggressively and refused to support "inferior" protocols.

Really, we've been lucky that it was Microsoft and not Apple that was the dominant player in the 90s.

And I am far from a Microsoft fanboy, but I think that Apple's hubris has always been there, and their contributions have to be mitigated in some way to stay on the positive side.


All vendors of home computers were vertically integrated; the PC was the exception only because IBM messed up and Compaq was able to get away with their reverse engineering of the PC BIOS, while the OS was developed by a third party (Microsoft).

Ironically, what we see nowadays with phones, tablets and laptops is a return to those days of vertically integrated software.



