
Apple ported many pro apps to ARM, especially their own Logic Pro, and they were showcasing Photoshop and Maya on ARM. That is about as pro as it gets for the Mac.

That reads to me like Apple isn't going to keep Intel around for some high-end Pro machine. They intend to go all in with ARM, i.e. there will be a Mac Pro with a high-TDP ARM chip. I wonder how the owners of Mac Pros feel now, having just spent $5K+ on an Intel Mac Pro.

The questions are:

1. Are they going to design their own CPUs for the whole range of Macs? Up to 10W for the MacBook, up to 45W for the MacBook Pro, ~150W for the iMac, ~250W for the Mac Pro? How is that financially feasible considering the volume of Mac Pros sold? Or do they intend to use those high-TDP chips in their server farms / iCloud?

2. What happens to the GPU? Will they have their own GPUs for the iMac and Mac Pro as well? Dual-GPU options, with the Apple GPU used for power efficiency? This feels like additional complexity.

3. Will it be like the PowerPC era, where you get a new iMac once you're finished with the development kit?

Finally, while I am excited for ARM Macs, at the same time I am also feeling a little sad. Goodbye, x86.




> What happens to the GPU? Will they have their own GPUs for the iMac and Mac Pro as well? Dual-GPU options, with the Apple GPU used for power efficiency? This feels like additional complexity.

I think GPU scaling will be much harder than CPU, so whereas Apple can surpass Intel CPUs for all but the highest segments, putting together a standalone GPU will be hard and very interesting to see. For an entry-level GPU? No issues. But what about a midrange one (AMD RX 5700 XT or Nvidia 2070 Super)? Not to mention the top-tier Nvidia 2080 Ti.

The other unspoken risk is that while Apple may be vertically integrating its SoC, it still relies on a fab like TSMC. Intel's recent problem is rooted in its inability to move off the legacy 14nm fabrication process. TSMC may have done great with the 7nm and now 5nm transitions, but what happens if/when they stumble? Would Apple also want to acquire them or build its own fabs to mitigate this risk?


> The other unspoken risk is that while Apple may be vertically integrating its SoC, it still relies on a fab like TSMC. Intel's recent problem is rooted in its inability to move off the legacy 14nm fabrication process. TSMC may have done great with the 7nm and now 5nm transitions, but what happens if/when they stumble? Would Apple also want to acquire them or build its own fabs to mitigate this risk?

Surely this is an advantage of being fabless? If TSMC stumbles, they can evaluate other options. Same for AMD: where would they be now if they were still tied to GlobalFoundries?


Valid point, as a GF-tied AMD would not be in the position it is in today.

That said, what are the other options if not TSMC? Besides Intel, Samsung is the only other cutting-edge option. Intel's 7nm would be technically on par with TSMC's 5nm (marketing names aside). https://en.wikichip.org/wiki/7_nm_lithography_process

There is a chance, however unlikely, that TSMC's 3nm push will run into issues and be delayed. That would create an interesting scenario where Apple would pay Intel to fab their SoCs.


> Apple would pay Intel to fab their SoCs.

Why would Intel agree to that? Have they ever fabbed any other company's designs?


Yes: https://www.intel.com/content/www/us/en/foundry/overview.htm...

There were rumours going around about its demise a few years ago, but a fair bit of that was simply their failure to ship 10nm parts on schedule, AFAIK. They're still doing some degree of third-party manufacturing, and I don't doubt that once they reach the point of having capacity for their first-party products on 10nm, we might see them expand.

However, the inevitable flip side of this is that, unlike TSMC/Samsung, where Apple can bid the highest for the early production of a new node, Intel is highly likely to keep a new node for themselves to start with.


Apple said they’ve produced 2 billion A-series SoCs in ten years. Would Intel turn down a slice of that very large pie if offered?


Intel currently doesn't have enough capacity to make their own chips and is rumored to be outsourcing to GlobalFoundries as a result ( https://wccftech.com/rumor-intel-moving-select-cpus-to-globa... - huge grain of salt on this one ofc, but the supply constraints on Intel's fabs are well known; they even mentioned it on their earnings call) - why would they stop making their own products to make Apple's instead? Apple would have to pay an absurd amount for that to make sense.


Apple has an absurd amount of money.


Even if TSMC slipped a year, it would still be on par with Intel's 5nm, so Apple would still be better off.


I will tell you one thing for sure: it's impossible for Apple to acquire TSMC. TSMC has a lot of customers other than Apple. I think it's logical for Apple to come up with its own fab, but honestly that is incredibly hard. Maybe in 10 years, I would say.


>I think GPU scaling will be much harder than CPU

We only have mobile SOCs as a reference point so far, but Apple is doing very well on that metric.

>On the GPU side of things, Apple has also been hitting it out of the park; the last two GPU generations have brought tremendous efficiency upgrades which also allow for larger performance gains. I really had not expected Apple to make as large strides with the A13’s GPU this year, and the efficiency improvements really surprised me. The differences to Qualcomm’s Adreno architecture are now so big that even the newest Snapdragon 865 peak performance isn’t able to match Apple’s sustained performance figures. It’s no longer that Apple just leads in CPU, they are now also massively leading in GPU.

https://www.anandtech.com/show/15246/anandtech-year-in-revie...


And what happens if China marches into Taiwan? This is a plausible consideration that must be one of Apple's worst nightmares.


China can bomb TSMC, no doubt.

But landing troops (they can't march, it's an island, and the difference between amphibious and land based operations matters a lot) would be extremely difficult, and not obviously in the PLA's favor. See, for example:

https://foreignpolicy.com/2018/09/25/taiwan-can-win-a-war-wi...


China's sheer scale means there aren't too many military shortcomings it can't address on a roughly 10-year timeframe.


The larger point is that it'd be way too messy for them to even try. Basically on the scale of US/SK invading NK.

Besides the significant casualties and economic damage to both sides (of course they'd eventually win), it'd be a geopolitical disaster that wouldn't end with the quelling of the armed forces.

China has far more to gain not going this route.


I'd imagine China would more likely begin blockading Taiwan, at least until the US and the West responded. Given the strength of US naval power, it'll be a while before China considers even that.


If China merely wanted to destroy Taiwan's civilian society, they could do it. But taking out the defenses and making an amphibious landing across a large and dangerous sea is extremely hard. Perhaps it's possible only with cyber-warfare to disable the defenses.


This is why the US just struck a deal to have TSMC build a fab in Arizona. It's absolutely critical to have a backup plan.


Isn't its planned output tiny compared to the production volumes in Taiwan?


Having the foundations of a fab makes it a bit easier to expand production in the future compared to setting up the whole shop at once if needed.


Yes. Everything points to it being done for a DoD order.


My understanding is that the actual machinery and materials necessary to make and run fabs are made in the US, Japan, and Europe, not in China or Taiwan. For example, photolithography machines are made by the likes of ASML (Netherlands), Nikon (Japan), and Canon (Japan).

Although it would assuredly take some time to ramp up, TSMC should be able to spawn fabs outside of Taiwan, out of the CCP's reach. They are already building one in the US, albeit with a small output.


Their whole supply chain is going to grind to a halt in that scenario even if TSMC's fabs were somewhere else. China would at minimum get sanctioned and there'd be component and raw material shortages for a while.


The United Nations would issue a strongly-worded letter. Markets would fluctuate for a week. Then everyone outside of Taiwan will pretend that nothing happened.


Could this lead to a large chip fab being built in the US? Or do they just hop to Japan / South Korea?


If we're in a scenario where Taiwan is getting bombed or invaded, South Korea and even Japan aren't exactly safe options either.


TSMC has fabs in mainland China as well. China is a friend of capitalism.


According to Wikipedia, TSMC's 9th largest fab is based in China. Their 1st - 8th largest fabs are based in Taiwan.


I heard from random sources that their GPUs are actually (relatively?) very powerful. Better sources/experience appreciated.

"Apple claims the GPU in the iPad Pro is equivalent to an Xbox One S, although how they came to thise conclusion is difficult to say since we know so little about the underpinnings of the GPU." [1]

[1] https://www.anandtech.com/show/13661/the-2018-apple-ipad-pro...


Game console GPUs are mid-range at best. They compensate with a huge amount of hardware-specific optimization, since game developers only have to target a few models of hardware during a five-year lifecycle, and devkits become available to game engine developers almost two years in advance.

Apple's Mac sales are much smaller numbers compared to consoles, Macs change generations more quickly, and the amount of optimization in GPU-intensive apps is nowhere close to consoles'.

So they might be able to compete with Intel's iGPUs, but that's nowhere close to AMD's or Nvidia's offerings.


That's less true than it once was. The console APIs abstract way more of the system than they once did, so that console manufacturers can do perf refreshes halfway through the cycle; and the whole point of Vulkan et al. is to bring console-like programming techniques to full computers, since the GPU's MMU means all you'll do is crash your own process anyway. That all adds up to consoles needing pretty nice hardware to keep up.


I don't see how this helps Apple compete with AMD or Nvidia GPUs. It's much easier for Apple to just use AMD GPUs for their Pro devices. The Mac market is just too small to justify custom high-end GPUs.

Apple might have huge leverage on mobile with the iPhone, because Android graphics drivers are a horrible mess, but on Windows GPU drivers are decent, and building a custom driver stack sounds like too much.


I mean, it's not like the consoles use very custom GPUs. They were pretty standard GCN last gen, and pretty standard RDNA this new gen.

The custom GPUs Apple uses in its SoCs are arguably more effort on their part.


Why wouldn't Apple just reuse everything, including graphics hardware and drivers from iOS?


I guess because one can't just reuse hardware. They can't just take the iPhone GPU, make it 5 times bigger, and get 5x more performance with 5x the power budget. So more likely they're just going to sell laptops with GPU performance closer to the iPhone's, which is not at all impressive.


GPUs are actually relatively easy to scale by adding more cores/ALUs. The difficulty lies in figuring out how to handle the peak current requirements of the whole IP.


> The console APIs abstract way more of the system than they once did

And PC graphics APIs abstract way less of the system than they once did


Still, a current-gen console-grade GPU in a form factor like the iPad's is pretty impressive. But then again, maybe it wouldn't be able to sustain console-like performance due to thermal constraints.


Agreed, plus benchmarks and measures like TFLOPS are always highly subjective.

I'd be curious whether Apple's non-mobile SoC roadmap emphasizes CPU development while still allowing for eGPU setups, or even some built-in integration (e.g. a Mac Pro with an Apple CPU and AMD GPU) initially. Maybe in 5-10 years they'll shift focus to the GPU front and bring out their own dedicated GPU.


I don't see how an integrated GPU can compete with the top-of-the-line chips from Nvidia and AMD. The discrete GPUs (Radeon Vega) in the Mac Pros have 13+ billion transistors and 1 TB/s of memory bandwidth with specialized memory.


The GPU in the Xbox One S was far from top of the line; it's pretty mediocre. A middle-of-the-road GPU from that time frame was easily 200% faster.

https://www.techpowerup.com/gpu-specs/?architecture=GCN+1.0&...

That being said, frankly I'm amazed at what most modern mobile GPUs are capable of, and for most people, who are casual gamers, that level of performance will be more than enough. What Apple will bring to the table will certainly be better than that; it's already better than Intel's GPUs, and they can still support third-party GPUs if necessary.

The demo they showed of Shadow of the Tomb Raider running at 1080p looked great for a game, AND it also looked awful if you compared it to the PC version on Ultra settings.


Apple kept mentioning pro apps like Maya, where you need high-end GPUs. Gaming isn't a big use case on a Mac. Our video production team uses high-end Macs with discrete GPUs. If they don't plan to offer solutions comparable to the current line of desktop machines, we'll be forced to switch platforms. I don't see how an integrated GPU can compete with a discrete GPU with specialized dedicated RAM and a memory bus that's 10x that of a CPU using DDR4X.


I suspect they aren't going to rule out external GPUs; they were just demonstrating the platform's capability without them.

The whole presentation struck me as a big middle finger to Intel and just Intel.


I mean, it looked awful compared to what I run on my PC, but considering that it was running using Rosetta, I thought it was genuinely quite impressive.


Yeah, but that demo was running under emulation through Rosetta 2...


The game natively supports Metal, I suspect that's why they chose it.


Yeah, did you notice they also ran it at 1080p and it really didn't look all that great?

I was actually a bit surprised they would break that kind of a demo out at WWDC -- I've seen that game not running in emulation and it was beautiful. That demo wasn't.


I agree the game definitely did not look like it was intended to and from that perspective it looked bad.

Looking at it without context, however, it demonstrated the capability of the platform to run an AAA game without developer optimization/consideration.

And it demonstrates someone could make a stylized game similar to Fortnite.


Yeah, I would agree with that. It definitely demonstrated that gaming is possible even under Rosetta, which is an accomplishment. Watching some other stuff from them this morning, it seems like they're passing the Metal calls directly through to the GPU (which makes sense), even while translating CPU calls from x86 to ARM.


An A12X has 10 billion transistors, and being an SoC, the CPU, Neural Engine, ISP, and GPU are all integrated.

And that was released in 2018. By the time Apple puts out desktop-class silicon at the end of 2020, surely it will be significantly scaled up?


So it can compete with an old, lower-end APU using unreleased metrics. Great. Meanwhile, AMD is set to release much more powerful integrated graphics.


Considering the Xbox is 1/4 the price, that's not saying much. On the contrary.


TSMC is too intertwined with Taiwan and Taiwanese independence to make that feasible.

Building your own fab would be a gargantuan task.

A future JV with Intel seems more likely (despite it being very unlikely in absolute terms).


There's no reason why Apple can't use Nvidia or AMD GPUs with an ARM CPU.

Some examples here https://linustechtips.com/main/topic/917482-arm-and-pcie-lan...


I wonder if Apple had anything to do with TSMC's announcement of a fab in the US?


It seems (it wasn't really that clear?) that Maya was running under emulation (as in, an x64 binary). I don't think Maya's viewport on macOS actually runs with Metal (it's still OpenGL), so I doubt it's a native port.

Did it do any CPU-intensive stuff (skinning, deformation), or was it just GPU-intensive viewing?

High-end VFX will be interesting for this with Apple (Maya, Houdini, Nuke) - already there was quite a lot of anger at OpenGL being deprecated and Vulkan not being officially supported. Another instruction set in the mix for highly-optimised apps (lots of SIMD code) is going to be quite annoying, especially for the CPU renderers (Arnold, RenderMan, etc.)...


As Apple's own GPUs do not run full OpenGL, does this in turn mean they not only created an x86-to-ARM translation layer but also a full OpenGL implementation running on top of Metal? Similar to other projects implementing OpenGL on top of Vulkan? Or did they actually invest the time to implement OpenGL directly in their graphics drivers?

That seems a bit weird considering OpenGL has been deprecated in macOS already. I would have expected a full removal once the first ARM Macs ship.


iOS's OpenGL ES support is implemented on top of Metal; I don't see why this would be any different.


> Logic Pro

Logic isn't worth much without plugins, and I expect many smaller developers not to port to ARM, and there's nobody to fill the gap in the first years. If that is indeed the case, Apple will begin losing market share where it currently reigns. When there's no pro software, the Mac will be just an iPad with a keyboard. It's quite a gamble.

But I've been too pessimistic before.


> Logic isn't worth much without plugins…

Rosetta 2 cross-compiles Intel binaries to ARM. Why would VST/AU plug-in binaries be an exception?


(Audio/DSP) plugins are a tricky thing; it is not uncommon for some of them to contain assembly or processor-specific instructions to squeeze out as much performance as possible. Your 'budget' in this domain is limited to only a few milliseconds...
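
For illustration, here's a minimal sketch (mine, not from any actual plugin) of the kind of hand-tuned x86 inner loop meant here: a simple gain stage using AVX intrinsics. The function name and buffer layout are assumptions; it needs to be compiled with -mavx.

    #include <immintrin.h>  // AVX intrinsics (x86 only)
    #include <stddef.h>

    // Hypothetical gain stage: scale n samples in place by `gain`.
    // Processes 8 floats per iteration; scalar tail handles the rest.
    void apply_gain_avx(float *buf, size_t n, float gain) {
        __m256 g = _mm256_set1_ps(gain);           // broadcast gain to 8 lanes
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 v = _mm256_loadu_ps(buf + i);   // unaligned load of 8 samples
            _mm256_storeu_ps(buf + i, _mm256_mul_ps(v, g));
        }
        for (; i < n; ++i)                         // remainder, one sample at a time
            buf[i] *= gain;
    }

Code like this is exactly what doesn't carry over to ARM for free: it either has to go through Rosetta 2's translation or be rewritten against NEON.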


But it does an AOT recompile from x86 to ARM for extensions and plugins, according to the second presentation today.


Even if it's precompiled and not at runtime, we don't really know what the performance looks like, especially for hand-rolled assembly, where something that isn't a one-to-one cycle match could have obvious effects.


Also, Rosetta 2 doesn't support AVX. So it would rely on there being an alternate code path which would be AOT transpiled.
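
The usual shape of such an alternate code path is runtime dispatch on CPU features. A minimal sketch, assuming GCC/Clang on x86_64 and reusing the AVX routine from the sketch above; under Rosetta 2 the AVX check reports false, so the scalar path runs:

    #include <stddef.h>

    // AVX routine from the earlier sketch, built in a TU compiled with -mavx.
    void apply_gain_avx(float *buf, size_t n, float gain);

    // Plain scalar fallback, safe on any x86_64 implementation,
    // including Rosetta 2's translated environment (which hides AVX).
    static void apply_gain_scalar(float *buf, size_t n, float gain) {
        for (size_t i = 0; i < n; ++i)
            buf[i] *= gain;
    }

    void apply_gain(float *buf, size_t n, float gain) {
        // GCC/Clang builtin; consults CPUID at runtime.
        if (__builtin_cpu_supports("avx"))
            apply_gain_avx(buf, n, gain);
        else
            apply_gain_scalar(buf, n, gain);
    }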


Will an ARM version of Logic support loading x86 plugins?

Last time, Rosetta was on a process-by-process basis, so you might have to make some unfortunate choices.

Edit: https://developer.apple.com/documentation/apple_silicon/abou...

> The system prevents you from mixing arm64 code and x86_64 code in the same process. Rosetta translation applies to an entire process, including all code modules that the process loads dynamically.


Yes - they called out "plugins" being transpiled in the keynote.


I linked to the docs where it says the plugins and the process need to be the same architecture; you won't be able to mix and match in a process.

Also, 'transpile' means source-to-source compilation and isn't the right term here.


Oh, I see. Extensions have a different model than plugins. Will be interesting to see how support develops and if it expands or not.


Apple has pushed the XPC architecture for plug-ins for several years and has announced that these plug-ins will work in Rosetta for a native host app. Audio Units will work as well.


VST is way older than Apple's push for XPC, and is dependent on being linked into the same process as the main DAW application.


No AVX is fairly brutal


What does that mean and what are you basing that on?


The question is whether those plugins are modules (shared objects/dynamic libraries), or whether they are used via some sort of IPC to an external process (or even an XPC service).


They are shared libraries, yes. And rely on lower latency than you would get from an IPC channel to another process.
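
A minimal sketch of why that matters (the plugin path and entry-point name are made-up; real VST/AU plugins export standard factory symbols): the host dlopen()s the plugin into its own address space, so the dylib's architecture has to match the host process, and calls into it are plain function calls with no IPC round trip.

    #include <dlfcn.h>
    #include <stdio.h>

    typedef void (*process_fn)(float *buf, unsigned n);

    int main(void) {
        // dlopen only succeeds if plugin.dylib contains a slice matching the
        // host process's architecture (arm64 native, or x86_64 under Rosetta).
        void *h = dlopen("plugin.dylib", RTLD_NOW);
        if (!h) { fprintf(stderr, "load failed: %s\n", dlerror()); return 1; }

        // Hypothetical entry point for this sketch.
        process_fn process = (process_fn)dlsym(h, "process");
        if (process) {
            float buf[512] = {0};
            process(buf, 512);  // plugin runs in-process: no IPC latency
        }
        dlclose(h);
        return 0;
    }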


That would make sense. On the other hand, would that go for Audio Units as well? I thought those were XPC based.


They could have a separate bridging process to host the x86 plugins, so we can’t know whether it does or does not support them.
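
For what it's worth, macOS does expose a primitive such a bridge could build on: posix_spawn attributes can request an architecture for the child. A rough sketch, assuming a hypothetical x86_64 helper binary that would host the old plugins behind IPC:

    #include <spawn.h>
    #include <mach/machine.h>
    #include <stdio.h>

    extern char **environ;

    int main(void) {
        posix_spawnattr_t attr;
        posix_spawnattr_init(&attr);

        // Prefer the x86_64 slice; on Apple Silicon it runs under Rosetta 2.
        cpu_type_t pref = CPU_TYPE_X86_64;
        size_t ocount = 0;
        posix_spawnattr_setbinpref_np(&attr, 1, &pref, &ocount);

        char *argv[] = {"./plugin-host-helper", NULL};  // hypothetical helper
        pid_t pid;
        int err = posix_spawn(&pid, argv[0], NULL, &attr, argv, environ);
        if (err) fprintf(stderr, "spawn failed: %d\n", err);

        posix_spawnattr_destroy(&attr);
        return err;
    }

Whether the plug-in APIs and GUI plumbing could actually tolerate that split is the open question raised below.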


The plug-in arch doesn't really support that. They're reliant on being able to do brain surgery on the main app, and on being in the same GUI context.


Performance, but that remains to be seen. VST performance can be absolutely crucial.


For sure! Hopefully we'll learn more specifics in another session.


They specifically mentioned Rosetta 2 would work for all binaries, including plugins.


cross compiles? Or emulates x86-64?


> Logic isn't worth much without plugins

A lot of people do say that, including a lot of professionals, but in my opinion Logic with its stock plugins is already absolutely great. Some people buy a lot of plugins because they don't know how to use the built-in ones, and some because they enjoy playing with new stuff more than making music (and yes, some who know what they're doing too).


Some of the built-in stuff is quite good, but there's a whole lot missing. For me, it would be the sample players (Kontakt, Play, SINE, ARIA, and whatever Spitfire's sample player is called), Pianoteq, spatialization, reverbs, some of the outlandish plugins, iZotope's stuff, a PSP equalizer, and a few synths. I know other people use a wider range of plugins, and they depend on them in their workflow. They are quite likely to bail to Cubase on Windows if their plugins don't get ported.


Professional users of Logic (but also Pro Tools) tend to be very conservative with their upgrades. People who use Logic for a living won’t be using this for years to come.


This is absolutely true. Many are still running on old-old Mac Pros with Snow Leopard. Upgrades are done when hardware can't keep up anymore, but they also tend to use big DSP rigs from Avid and UA that remove the CPU bottleneck for major tasks.


> Logic isn't worth much without plugins, and I expect many smaller developers not to port to ARM

I don't think they can afford not to - Mac users make up a significant share of their target audience.

Furthermore, there's always the emulation option, although it remains to be seen how performant that would be in such a demanding, time-critical application as AU/VST instruments and effects.


> When there's no pro software, the Mac will be just an iPad with a keyboard.

Maybe in your segment, but I can't imagine using Windows as a web developer, even without any of the "Pro" apps from Apple.


> I wonder how the owners of Mac Pros feel now, having just spent $5K+ on an Intel Mac Pro.

Not much. If you bought that machine you didn't care much, as you bought older hardware at a premium price point. You bought it to use right now without fussing, and have accepted its obsolescence in 2-4 years, as is quite normal for studios.

If you bought it as an IT enthusiast, well... why would you even do that?

I don't see Apple coming out with an ARM Mac Pro within 3 years anyway. Why would they? There's no upside for them; that market has to be won back first. Slowly start with laptops and iMacs, focusing on consumers, and get the OS in shape and third-party vendors accustomed to the platform first.

iPad-like performance is already fast enough for almost all consumers, and I'm sure Apple doesn't want to break with Intel on everything right away.


The Mac Pro was seen as an assurance that Apple was going to support their Pro customers for the foreseeable future. It's not just the Mac Pro itself; it's the software and supporting hardware.

Also crucially, due to Apple's spat with NVIDIA, the Mac Pro doesn't support CUDA. This means software has to be modified to use Metal Compute to support the Mac Pro.

If you make pro software, Apple just sent a huge signal that the future of the Mac Pro is at best uncertain. So maybe hold off on that Mac support for the next two years.

Apple is not going to make an ARM Xeon. The resulting computer would be so expensive, after you amortise all the R&D to create a single workstation-class CPU for it, that nobody would be able to afford it. All the pros who bought the Mac Pro got played hard.


The Mac Pro serves such a tiny, tiny sliver of the pro market, I don't know if they got played. I mean, the big audio/editing/GFX studios that bought a Mac Pro will keep using them for years or swap to Windows if no powerful ARM Mac comes out. All the important pro software supports Metal by now.

Anything that isn't a Hollywood studio will have to use an ARM iMac.

Enthusiasts and semi-pros that want a powerful, affordable, and extensible Mac with state-of-the-art discrete GPUs can probably get lost, as is the case right now.

Then again, Apple is now only bound by their own operations, so who knows what they have planned.


How did they get played hard? They got the latest model, which is still supported by lots of software. The only way you get shafted is by buying the ARM Mac Pro, which is something the owners of an x86 Mac Pro luckily avoided.


It is only supported by software because Apple convinced big software developers to port everything to Metal. Apple no doubt made assurances to these developers that this was Apple's big re-entry into the Pro market and that they were in it for the long term.

Now, it turns out that Apple wants to complete a transition in 2 years... they are not keeping x86 around for the Mac Pro longer than that. It's not credible that Apple can just scale up their CPU into a workstation part. So if you are making pro software then you have to come to the inescapable conclusion that Apple is, in fact, not serious about the Pro market, and it's probably best to avoid expending further development resources on the Mac.


AFAIK, they're going to make their own GPU. A recruiter from Apple reached out to me a few weeks ago trying to poach me from my job at a large GPU manufacturer.


They already have their own custom GPU on their A-series SoCs, so them hiring people working on GPUs is hardly surprising? I'd expect them to replace the places where they use Intel's integrated parts with their own, which might push them a bit higher-end than where they have been previously, but I doubt they're chasing after the dedicated side of things?


In theory they could slap on HBM [0] for a decent performance boost. It would eliminate the need for a dedicated GPU in MacBooks and low-end iMacs.

[0] https://www.amd.com/system/files/49010-gddr5-hbm-1260x709.pn...


Re: the PPC era and whether it will be like that in the future: this is from a lowly web backend developer's perspective, so I might be naive, but hopefully we've learned to build generic solutions without sacrificing performance when it comes to things like this, and it doesn't devolve into a "code that only runs on X hardware" type of thing.

I think it's a more common problem to see things like the T2 security chip not being present on older hardware or hackintosh (unsupported) hardware, so if you're not running the right hardware you can't take advantage of a feature (AR on certain iPhones), or you won't have the performance advantage (FileVault encryption with the T2 chip vs. encryption by the CPU).


1. They explicitly said yes, there will be a whole range of CPUs that they're making. This isn't just "throw the phone CPU on a desktop"; they'll have laptop-class and desktop-class CPUs that are different.

2. They're saying integrated. Whether they also throw Radeons into high-end desktops is probably not yet determined. (They have a two-year roadmap for this rollout, and I think it's pretty obvious they'll start with the portables/iMacs, where the benefits of ARM are more likely to outweigh the pain of change for more people.)


Inferring from the video announcement, plus guessing:

Will they have a wide TDP range of designs? Yes. On feasibility, there are two major components to the cost - the first is the engineering cost of design. You can bet these costs are not massive; Apple has roughly $200 billion cash on hand right now. Most of the work will be scaled up as part of their overall design process.

The second cost component is the non-recurring engineering (NRE) to get a mask set made for the chips. A 7nm start right now costs maybe in the range of $20 million? If the Mac Pro is required to stand on its own feet financially, perhaps that will push out timing / push up cost. On the other hand, if one considers the cost of the top-end chip to be a marketing cost, it's literally a rounding error.

GPU - Apple is going to have their own GPUs, and no longer worry about AMD and Nvidia. They’ll have complete control of the hardware and software stack and control their own destiny. I would be very surprised to see dual GPU options.


> wide TDP range of designs

Keep in mind that this doesn't imply a different chip design. TDP is primarily a function of intended application and cooling capacity, not chip tech.

That said, they've already demonstrated the ability to do this with the A12/A12X/A12Z, which are the same design at different CPU/GPU core counts. Clock rates are not impacted except where thermal limitations (TDP) impose a limit.


Don't forget the S line of SoCs for the Apple Watch as well, which wiped the floor with the Qualcomm watch SoCs.


$200 billion may seem huge, but at the same time they have $118 billion of debt, as they use debt to pay dividends without moving money.


Without triggering tax obligations.


Every time stock is sold (whether Apple is the buyer or not), a "tax obligation" is triggered.


Which has nothing to do with why Apple took on debt to pay dividends.

If Apple took funds from a foreign subsidiary to pay shareholder dividends it would have to pay

1) federal corporate income tax of 20%, and 2) CA state corporate income tax of 9%.

Then out of the 73% left over, the shareholder would pay state income tax + federal dividend tax. That means the shareholder ends up keeping between 50-60% of the funds Apple paid.

If Apple borrows the funds instead, it owes no federal or state corporate tax. The shareholder just needs to pay state income tax and federal dividend taxes, so after tax they end up with 70-85% of the funds Apple paid.
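
Rough numbers, using the rates above on $100 of foreign profits and assuming (my assumption, not the parent's) a combined shareholder tax of 15-30% (federal dividend + state income):

    % Repatriate-and-pay route: two corporate layers, then shareholder taxes.
    \[ 100 \times (1 - 0.20) \times (1 - 0.09) = 72.8 \]
    \[ 72.8 \times (1 - 0.30) \approx 51 \qquad 72.8 \times (1 - 0.15) \approx 62 \]
    % Borrow-and-pay route: no corporate layer, shareholder taxes only.
    \[ 100 \times (1 - 0.30) = 70 \qquad 100 \times (1 - 0.15) = 85 \]

Which lands in the ballpark of the 50-60% and 70-85% figures above.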


They said "we expect the transition to take two years [...] we've still got some Intel Macs to show you soon [sic] [roughly what they said]" during the keynote.

This isn't just going to replace low-end machines; it's every machine they sell, within two years. Probably starting with low-end, but moving up.


I'd assume that they'll still have AMD/Nvidia GPUs as options if they truly plan to bring these CPUs to the Mac Pro market. There's no way that, e.g., animation studios will accept anything less than absolutely top-end performance. And upgradability too. I think Apple must know this.


Here's some mad lad adding PCIe slots to a Raspberry Pi 4:

https://twitter.com/domipheus/status/1167566293861588992

It's definitely possible; you just need to expose the PCIe lanes in a sensible way (this has been rare to see on ARM-based machines so far) and have PCIe device manufacturers distribute drivers for ARM macOS.


And here’s the link without the twitter wrapping: http://labs.domipheus.com/blog/raspberry-pi-4-pci-express-it...


PCIe is surprisingly robust for a high-performance interface. fail0verflow mentioned doing PCIe over serial so they could MITM some of the communication when hacking the PS4.


Apple could go down the Samsung route and license RDNA tech for their ARM GPUs: https://www.tomshardware.com/news/amd-rdna-exynos-samsung-so...


> Apple ported many pro apps to ARM, especially their own Logic Pro, and they were showcasing Photoshop and Maya on ARM. That is about as pro as it gets for the Mac.

> That reads to me like Apple isn't going to keep Intel around for some high-end Pro machine.

That conclusion seems a bit far-fetched from my point of view. There are many users that use Logic on an MBP, and with Photoshop it's even more common to use it on a laptop.

Sooner or later an ARM Mac Pro is coming, but if I had to guess, I'd say it will take a while. There were six years between the trashcan and the current cheese grater. So maybe 2025 then ;)


> 2. What happens to the GPU? Will they have their own GPUs for the iMac and Mac Pro as well? Dual-GPU options, with the Apple GPU used for power efficiency? This feels like additional complexity.

The Nintendo Switch uses an Nvidia GPU with a very slow and outdated ARM processor. Yet this allows it to run Rocket League, The Witcher 3, etc.

Apple might go with a discrete GPU from Nvidia/AMD to pair with their ARM processors.


Re 1: I believe iCloud is hosted on Google Cloud, isn't it?


Apple does use Google Cloud and AWS for some things, but is investing billions into building their own data centers.

https://www.datacenterknowledge.com/apple/apple-spend-10b-us...


Some services are in Google Cloud, some in AWS, some in Azure, plus Apple has eleven data centers of its own. I believe that Apple is working on bringing the pieces that have been offloaded to AWS/Azure/Google back in house.



Maybe they'll eventually use some ARM server chip at the high end.


Maya was running via Rosetta 2 translation rather than being ported.



