It really startled me that the comparison was the Pro vs the Ara, and not the Pro and the Ara. The Pro is very much in the spirit of modularity. True, everything written demonstrates the difference in granularity of modularity between the Ara and the Pro. But to the consumer, the forward-facing arguments are exactly the same: "Buy what you need, each piece separately, and link them all as needed, rather than a huge one-size-fits-all AIO."
The Mac Pro requires you to bring your own HDs, your own expansion cards, your own network gear, which puts the Pro itself at the center of a modular computing setup.
This seems to be a common misunderstanding about the new Mac Pro. Apple has actually improved modularity by liberating expansion from a fixed physical space. Additionally, users aren't required to install scary-looking computer hardware into PCI slots. They can just use the familiar UX pattern of plugging a cable into a port.
You already had cables everywhere if you were a professional. I don't know any professional photographer, video editor or audio professional that doesn't use tons of external storage.
It's mainly non-pros who argue that a few internal 2TB bays would be enough, which is laughable for pro use.
Seems like you missed the part where Apple did it, therefore it finally became good. Just like Intel was a dirty word until Apple switched, then it was good.
Definitely not from straw man land. I remember many people, both ones I knew personally and people on the internet, who would extol the virtues of the G4 and G5 processors. Especially as Intel started to widen the gap in terms of clock speed, they would explain how clock speed doesn't matter and how the Apple processors do so much more work per clock cycle. Then, after the switch to Intel, the same people would talk about how great the Intel processors are and point to how fast the processors could be clocked. It's no longer relevant to anything, but they were a very real group of people.
Coming in 10 hours late, but from the little I know of computer architecture...
Intel won that fight by being at least a generation ahead in semiconductor technology. Intel has efficiency problems compared to the ARM processors now, because they can't just throw many more transistors at their design when the metric is processing per watt.
Could you please explain your comment a little more? I'm honestly curious to know why you think the concept of abstracting technology into easy to use components is in conflict with the "hacker news" ethos.
I recently bought a new Macbook Air. My old Macbook Pro was 4 years old. Not that the old machine stopped working - it still works perfectly fine. The Core 2 Duo CPU was up to all tasks I threw at it (compiling C++ code). I just wanted longer battery life.
That's why I see no problem with the new Mac Pro being so "unmodular". Unless something revolutionary happens with CPUs that machine will be good enough for more than 5 years.
What tasks do you do? I have a 2010 MBPro running ML and it's slow as hell for doing any kind of iOS dev work. It used to be pretty fast, but Apple's software updates (OS + Xcode) have slowed it down considerably.
How much RAM do you have? OS X is quite memory hungry and is terrible when it has to page. I'd recommend maxing out the RAM if you haven't already. I'm guessing you have a regular hard drive too. An SSD will make it feel like a new machine.
Also, Mavericks is much snappier than ML from what I've seen so far and makes much better use of RAM.
I have 4GB and yeah, my HDD is not an SSD. But it wasn't this bad back when I was working with Snow Leopard. Unfortunately, Apple forces you to get their new OS to use the new Xcode, and that has screwed my performance.
I've disabled a bunch of things: Spotlight indexing, Dashboard, their notification stuff, etc., and also moved swap to a new partition. It's helped, but it's still not as snappy as when it was new.
At the moment, I'm weighing my choices... do I upgrade the RAM and SSD, or do I get a new one? But then I have to buy Apple's expensive RAM and can't easily change the SSD in the future.
If you max the RAM to 8GB and add an SSD it will be a completely new machine. It's also not that hard to do, and well worth it. I have the 2011 model and upgraded first from 4 to 8 and then to 16GB of RAM, and from a standard HDD to a Crucial M4 500GB, and the machine is a beast, much faster in 10.9 than when I originally bought it.
I upgraded my late-2008 iMac to Mavericks to see if it performed better than Mountain Lion, given that my machine is maxed out at 4GB of RAM. My experience thus far has been that the system is more responsive overall, but there's a lot more latency in things like restoring backgrounded applications that haven't been quit but are still running. It may be that my machine is too old, but in this case Mavericks has just shifted the latency around.
I had a 2009 MBP. Lion was slow as hell, but swapping the HDD for an SSD did wonders. The RAM upgrade I tried before that didn't help as much as the SSD.
And to follow up, the E5 is an absolute beast of a processor. It's hard to imagine Intel having that kind of leap in the next 5-7 years (but here's hoping ;).
Pretty much - even the base version of the Mac Pro will take more than half a decade to become obsolete (at the rate we're seeing now). And I believe the Mac Pro's processor is still on a socket (correct me if I'm wrong), so it's also upgradeable.
Even if it's a socket, Intel's been churning out new sockets every 2 years or so. On desktops it's gone LGA 1156 (2009), LGA 1155 (2011), LGA 1150 (2013). For servers it's LGA 771 (2006), LGA 1366 (2008), and LGA 1567 (2010). You have a pretty limited upgrade window. One tick/tock, and then your socket is outdated.
The new Mac Pro, provided it's socketed, would be on LGA 2011.
That's beneficial for those who will want to upgrade: once the new sockets are out, prices for previous-generation chips will go down, so you can potentially upgrade from the base quad-core to a six-core processor, giving it a good boost in performance.
That has almost always been a problem. Even in the past, when you could keep a socket for 5 years, the chipset on the motherboard limited your upgradability to processors of the same generation.
Interestingly, you don't touch on one of the arguments: physical connectors limit the connections (in terms of bandwidth and future upgradability), but the same is true for the internals of CPUs and their connections to RAM, and when evolution isn't enough, disruption comes into play: optical connections.
Regarding the screen size and the usability factor, the same could have been said about the PC (the keyboard has a certain size for a reason), but the goal is not to carry a PC but to extend our capabilities through it, and eyes, sir, are a weak link in a bright future of tech that I'm not sure I'll see, or that may never happen (<dream>and this is totally offtopic and even daydreaming, but... sci-fi like the Extremis armor, where mechanisms and tech are fused to the actual human, may be the path... with the appropriate tech; you never know when one of those "damn, nature, you screwed up this time with this human" moments, where the errors curiously make him an uber genius, will create it</dream>).
Remember, we are machines of ideas, and our body is the bare metal on which our VM runs.
Intel hasn't approved an external graphics card that can connect via thunderbolt, and probably won't for the foreseeable future. It's one of the big issues I have with thunderbolt -- it's micromanaged to hell and back.
The last time the sonnet enclosure came up it was determined that the bandwidth of thunderbolt is simply not yet enough for a high end graphics card (especially if we're talking about upgrading years from now) and the enclosure itself did not provide enough power (only 100-150w). Not to mention there are no reports of this actually working with any graphics card, let alone a high end one worth upgrading to.
Just to demonstrate the bandwidth of PCIe3 x16: PCIe1.1 x16, PCIe2 x8 and PCIe3 x4 all provide around 4GBps. That's right, not 4 Gbps. So that's 32Gbps times 4 for a grand total of 128Gbps over PCIe3 x16. In comparison, Thunderbolt 2 at 20Gbps is very constricted.
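Just to sanity-check those numbers, here's a rough back-of-envelope script (Python, purely illustrative; it assumes the standard per-lane signalling rates and encoding overheads: 8b/10b for PCIe 1.x/2.0, 128b/130b for PCIe 3.0):

    # Rough PCIe vs Thunderbolt 2 bandwidth comparison.
    def pcie_gbps(gen, lanes):
        raw_gtps = {1: 2.5, 2: 5.0, 3: 8.0}[gen]            # GT/s per lane
        efficiency = {1: 8/10, 2: 8/10, 3: 128/130}[gen]    # encoding overhead
        return raw_gtps * efficiency * lanes                # usable Gbit/s

    for gen, lanes in [(1, 16), (2, 8), (3, 4), (3, 16)]:
        gbps = pcie_gbps(gen, lanes)
        print("PCIe %d.x x%-2d: ~%3.0f Gbps (~%4.1f GB/s)" % (gen, lanes, gbps, gbps / 8))

    print("Thunderbolt 2:  20 Gbps (~2.5 GB/s)")

The first three configurations all land at roughly 32 Gbps (~4 GB/s each), and PCIe 3.0 x16 comes out around 126 Gbps, the same ballpark as the ~128 Gbps figure above; Thunderbolt 2's 20 Gbps really is tiny by comparison.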
I'm not sure the external GPU route will ever be useful for professionals, but I'm hopeful that it will become a more viable option for gamers who want a laptop that is light and thin, with the decently portable option of plugging in a midrange graphics card that would work far harder than any integrated graphics could.
OK, I'm wrong about the RAM, but the GPU extension is not the same thing as being able to buy the video card you want. So if you want to upgrade to the latest version of CUDA, you can't.
Clayton Christensen, the researcher behind "The Innovator's Dilemma", says there are natural cycles in industries between integration and modularity. Basically, when the performance of a product isn't good enough, there's a need for tight integration, so the winners usually have tightly integrated products.
At some point people don't need more performance; the products are "good enough". Then a modular architecture becomes very useful, because it increases the rate of innovation, increases competition and reduces prices. In that phase the winners are usually the companies who control modular products.
Now if we look at the mobile market we see signs of a possible shift: for many consumers, mid-level phones are good enough. And Jolla unveiled a modular phone[1] in May 2013 that got good reviews.
Incidentally, it seems that a few months after that announcement, one year ago, Motorola began working on a modular phone.
So assuming we can extract plenty of value from a modular phone (and it seems that way to me), I believe we will see a large shift towards modularity.
Back when PCs weren't very good and you had to make real choices about what processor, hard drive and so on to get, DELL's customization and modularity business flourished.
Now that performance is "good enough" for practically any computer you can buy today, the market is moving to highly integrated tablet solutions, the only choice left being disk space, and it's getting less important every day. Meanwhile, DELL is going private.
Agreed, it's bunk. The reason you have modular designs is precisely because some parts aren't good enough so you need to be able to swap in better ones. When all the parts are good enough, and are going to stay good enough, modularity offers insufficient advantages.
The thing with software is that you can have your cake and eat it. App platforms are intrinsically modular - every app is a little module of functionality. Looked at that way, the Apple ecosystem is staggeringly modular, with hundreds of thousands of modules available from many thousands of developers.
It's possible that what Christensen means by modularity is really commoditisation: the idea that eventually base operating systems will be so mature and stable that it won't matter which one you're running, so there will be no reason to pay a premium for a proprietary system. I don't think that will ever be true. Human society, fashion, behaviour and needs are too dynamic and advancing too fast. OS and platform development has been going at breakneck speed for the entire history of computing and shows no signs of slowing down. Apple apparently 'lost' the PC wars because commoditised, modular Windows ate its lunch. Yet now we live in a world where innovative mobile OSes have totally redefined the computing experience and, in the form of the iPad, are chewing a great big hole back into the established computing market. The truth is that OSes were never commoditised and just good enough. It just looked that way because the stagnation of the dominant platform was misread as stability.
>> Back when PCs weren't very good and you had to make real choices about what processor, hard drive and so on to get, DELL's customization and modularity business flourished.
The CPU/memory/chipset remained integrated, for performance reasons. But external bus performance was good enough to separate some functionality into external cards.
CPU, memory and chipset have always been very much modular parts. The chipset is determined by which motherboard you buy, memory comes as sticks you put into the motherboard, and even the CPU sockets in to keep it modular.
I mean, just imagine the trouble of keeping the CPU separate from the motherboard, as is still done today! It is very, very difficult to build a socket (and matching CPU) for the 1000+ pins of a modern CPU where each pin can possibly carry signals at multiple GHz frequencies.
The change Christensen refers to is the shift from companies that built whole computers, like IBM, to the IBM PC, which was an assembly of parts from different companies.
My guess is that at that time, the PC/XT came with memory chips that weren't soldered, but in sockets. In that context that didn't mean much loss of performance. That habit of pluggable memory stuck, and I'm not sure you lose much performance because of it.
And if we're talking about integrated memory, Intel does have caches; those are probably the best way to deeply integrate memory.
And regarding the CPU and board: it would be quite hard to integrate chips and board. There was one attempt I know of, but as far as I know it failed. It's complex and not economical, but it does offer great performance.
I think what really happens is that the complexity of each modular component increases over time. It's kind of like Sutherland's Wheel of Reincarnation. When microprocessors were first created they didn't have any internal RAM, then SRAM became part of the microprocessor die. A machine with four cores used to require four sockets. Now you can do it with one, and that one will have a GPU on it.
But we still have external RAM, we still have discrete GPUs, and multi-socket motherboards, etc. The modularity doesn't go away, it's just that some applications require only one of the increasingly powerful modular components.
But when the increase in power doesn't come with an increase in price, Jevons paradox kicks in and we start coming up with new applications for the increased power. So now Seagate has announced a hard drive which uses Ethernet as the primary interface -- they've effectively integrated a (simple) database server into the disk drive and made that into the new modular component.
So the real question with phones is whether they're yet in a position to become the new modular component. For that you have to answer the question of whether they can integrate with the other components to do what the user wants. How do I add storage to it? How do I make it faster? How do I make it take better pictures, etc.? What you want is to be able to do these things somehow without throwing away your entire existing investment, and until the common user can do it easily (e.g. by syncing the phone to a NAS device that provides bulk storage) there will be a market for modular phones that let people walk into the shop and have the tech insert another 8GB of memory or storage or upgrade the camera etc.
Interestingly the trend toward integrated devices has been held together in part by The Cloud allowing network storage and processing to take place off of the device and therefore reduce the demand for on-device capacity. If the trend away from trusting third parties with your data continues to gain steam then we could see changes in the market demand for modular devices.
From that perspective, you are right. However I think Christensen's perspective still holds if you go back further.
When the first semi-mainstream GUI-based PC (the 128k Mac) came out, the limitations of hardware at the time meant the thing was a mass of clever (perhaps genius) hacks to get the whole OS to run within such limited RAM and CPU constraints.
Apple simply needed to be making both the hardware and the software, so it could tweak them in tandem enough to work together.
As CPUs and memory improved, you had enough leeway so everything didn't have to be custom-engineered down to every 1 and 0 and bit of silicon to work well enough, and so the PC market entered the "modularized" phase.
It's interesting to me that the auto industry has been around for a hundred years, and while some components are standard (wheels, battery), 95% of the components in a car are not interchangeable. Until recently, the performance of cars has been relatively stable. So I'm not sure this observation is universal to all industries.
I'm not sure what car you are driving. Yes, you can't take any part from an Audi and use it in a BMW, but you have choices. You can buy parts for your car from several different manufacturers: you can buy original parts if you are/feel very rich, or you can buy parts from lesser-known companies. You can also take two non-working old cars of the same model and have a quite good chance of making one working car (a "donor car"). Meanwhile, if you ruin your phone you basically can't fix it, especially if the parts are tightly packed.
Another possibility is that you have assigned a very special meaning to the word "interchangeable".
Obviously Ara and iPhone fill different niches and take different sets of compromises. Neither of these is better than the other by all parameters, but each offers unique advantages.
Possibly the niche of Ara will be narrower, because the connectors are going to add noticeable cost. OTOH the ability to add an entirely different functional block right into the phone can be very valuable in some circumstances.
Of course, CPU and RAM need tight integration; I suppose these will come as one module. But the periphery, like radios, storage, cameras, etc., needs much less tight coupling and can use physically narrow high-frequency serial connections. If a connector is only 4 contacts wide (like USB) or even 9 contacts (like USB 3), it can be reasonably cheap and compact.
Most probably a real Ara phone will have far fewer detachable blocks than currently pictured: a CPU/GPU/RAM block, a radio block, a camera block, probably an extra extension block for new devices (finger scanner? second camera for a stereo pair? projector?), and, of course, traditional detachable flash storage and battery.
The bigger problem with Ara-style modularity is that as computing power shrinks (in both size and power draw) it becomes pointless to think of modules of "a" computer.
There's simply no point to try to separate processing power out of any component that has other physical constraints. e.g. displays, lenses, antennae, etc.
So you won't want to slap a 'better' camera onto your smartphone to leave your DSLR at home. But not because the interconnect between lens and mobile "base" will become too wasteful or inefficient. Simply because that future lens will be a stand-alone camera and it will operate within a network of things, in which your mobile phone won't be a necessary component.
Similarly with any other useful components. You won't have to make any trade-off of space other than "what can you physically carry".
Need more storage? Put a storage pod in your pocket. Want a rangefinder? Grab a 'network of things' capable device. Need more battery? Grab a power brick that can charge anything, with a capacity limited only by your willingness to carry it, rather than being locked into a form factor unrelated to (and possibly in conflict with) your power needs.
People want cameras in their phones not because their cameras lack connectivity but so that they don't have to carry a separate camera.
Of course, there will always be a niche for cameras with lots of manual controls, possibly with big lenses, and they'll work as you say. DSLRs with wireless networking are already on the market, such as the Nikon D5300.
I own a mirrorless camera with WiFi. I really would love to have a camera component with an interchangeable lens on my smartphone, just to save some clicks and own a single device instead of two.
I assume these things would then be connected by some sort of hyperlocal area network (personal area network? PAN?) around you, with one cellular hub to be connecting them to the cloud?
It's my understanding that most major appliances (washer, fridge, etc) are less repairable than they were 50 years ago. I think it's because of several contributing factors:
* Cheaper manufacturing with simultaneous increase in quality control and reliability.
* More complicated objects lead to more costly repairs, mechanics have to be more skilled, so wages go up.
Because it's simultaneously cheaper to make something and more expensive to fix it, it makes much more sense to replace than to repair. These effects feed off of each other, too. To make something more reliable and cheaper, manufacturers seal off more and more parts, making it more and more expensive to repair, and so on.
The price of labor has gone up also, and that likely plays a role.
To wit: I lived in India for 2 years and my apartment had a window air conditioner that I'm pretty sure has been on this earth for longer than I have. During those 2 years, my air conditioner broke several times (including one time where it caught on fire). I would have thought at some point the landlord would have replaced it, but he kept sending people out to come to my apartment and repair it. In India, half a dozen house calls to repair something (most definitely not under warranty) were less than the cost of purchasing a new unit. If that happened in America, a single repair visit (parts, labor etc) would have been within striking distance of "oh I'll just go buy a new one"
<generalization>India offered a glimpse into a parallel world where capital is expensive and labor is abundant.</generalization>
> We're already seeing workarounds - graphics cards are using two, or even three PCI-E slots to get that precious bandwidth they need. With 4k displays on the horizon, and textures in games being updated accordingly, we're going to need more bandwidth to graphics cards. At some point, we're either going to have to switch to two dimensional connectors like processors are using for GPUs (which will still only delay our issues), or we're going to have to move away from the 1mm build fabrication for our interconnects. If the later happens, we simply can't rely on consumers to properly line up lanes in components.
They aren't actually using the slot, just potentially covering a slot, right? One of the fastest desktop cards out, the R9 290X, only uses one PCI-E slot.
Yes, every graphics card that I have seen merely occupies additional slots to make space for the cooling devices. However, I do believe that Nvidia's SLI and AMD's CrossfireX multi-GPU interconnects are a proprietary variation on PCI-E, so in some respects that could be considered an additional slot, but it only provides bandwidth between GPUs.
oakwhiz is correct, I was wrong in my statements about cards taking up more than one slot. The blog has been updated accordingly. Sorry for the misinformation all, I could have sworn there were cards that did, but googling turned up nothing.
The bandwidth out of one PCI-E slot is pretty ridiculous on newer motherboards; even high-end cards don't use it all. This is why things like those Thunderbolt PCI-E enclosures can even conceivably run a video card, since Thunderbolt 2 is only equivalent to just over one lane of PCI-E 4.
I sincerely doubt interconnects are going to be the problem the OP thinks they will. First of all, individual connectors have not changed much in size, but the serial data-rate we can push through copper connections continues to rise. Fiber optical connections are already used to implement PCIe in many applications (Thunderbolt is, in part, a fiber optical PCIe link).
One huge advantage of fiber optics over copper is that you can send many data streams down a single fiber using different wavelengths and/or optical modes. This is why backbone fiber bandwidth keeps growing even over fiber links that haven't been upgraded. It's a function of what you hook up to the ends of the fiber, not the fiber itself. There is tremendous room for bandwidth growth in optical fiber.
Apple definitely doesn't need to use non-standard interconnects for the new Mac Pros, just like they didn't need to use custom connectors for SSDs in their laptops. They just wanted to.
I'm not sure fiber optics are particularly viable in the next 5-10 years as interconnects between hardware components. Adding photonics to a hardware component increases size and cost fairly significantly, which is why you don't see many fiber optic interconnects yet, even for applications where cable size is important. Moreover, to get the kind of miniaturization you would need for a cell phone, you're talking on-chip photonics (diodes and photodetectors integrated into the IC itself), which still looks like it's in the early R&D phase.
Of course, most of this is probably because copper is still doing just fine in terms of bandwidth. Though some big issues with forcing huge bandwidth over few traces are latency and the additional circuitry needed to translate that signal into the actual signals needed to drive RAM chips or a CPU.
Modularity is going away, not only in Mac Pros, but even in desktop space.
AMD APUs are integrating the CPU and GPU together. Intel is releasing mainstream chips that cannot be physically separated from the motherboard (See the i7-4770R: it needs to be soldered on).
GDDR5 RAM, the mainstay super-fast graphics RAM, is assumed to be soldered onto a board. DDR4 will only support one DIMM per channel.
Modularity is almost the opposite of market forces right now... as unfortunate as it sounds.
AMD's "APU" is simply a way to join 2 different products and jack up the price on the one joined product. As such these are still a modular part. As for Intel this is the same idea they can force you to buy not only their CPU but also the entire motherboard and force you to pay for the entire board that is made by them. Modularity is not the opposite of the market forces right now. The big companies are just too lazy to spend the money to, 1. go smaller, 2. get more complex.
You can build an AMD APU system extremely cheaply. A ZBox Nano A4-5000 (using the "Kabini" APU) is only $300 or so for a complete computer (RAM, Hard drive, etc. etc. included).
On the contrary, the AMD APUs are extreme value buys. Intel has superior CPUs, but you save money with AMD APUs because you don't need to buy a graphics card anymore. AMD is beginning to do crazy stuff, like cache-to-cache direct transfers between CPU and GPU, because both are on the same die.
Intel has been improving its integrated graphics in response of course... but they aren't at the level of AMD's APU integration yet.
But to call this "price jacking" means that you completely don't understand the marketplace right now. AMD's Kabini and Temash chips are among the cheapest in the entire marketplace right now... and AMD's higher-end APUs (Codename: Richland) are sold at significant discounts in comparison to Intel chips.
I'm afraid you may have missed the point that Apple makes by creating "walled gardens" with their products, and also what it means that Motorola is releasing a phone with modularity. Apple provides its customers focus. Rather than worrying about how they can modify their machine to improve performance and functionality, Apple's customers can be fairly confident that their machine will be reasonably fast, feature-rich, relatively safe, and aesthetically pleasing, at the cost of a higher price and of having to take it on faith that the previously mentioned qualities will be delivered. You are very right that modularity on a mobile phone is going to involve sacrifices due to space constraints, and this is what we will see with the Ara. Interestingly though, while almost all mobile phones I've been exposed to in the past have been fixed configurations, this one will have swappable options. Basically I think this is saying that mobile phone technology is at the point where we can make some of these sacrifices and still end up with a useful device with options. It will be really cool to see how this plays out and whether 3rd-party add-ons will cause Motorola to pull back on this offering.
I'd like to point out that you highlight the benefits of modularity as improvements to speed, power consumption, size, and cost, but the use case you give is that you are writing an article on this custom machine that you've built. What else does your custom-made computer let you do that you couldn't do otherwise? And why do you think that the majority of consumers don't go down the custom route?
On a side note, I feel that I have to refer to this article from the Harvard Law Review. It is a bit of a philosophical discussion on the "generative" aspect of computers compared to other devices/appliances. It's titled "The Generative Internet", by Jonathan L. Zittrain.
The article hits the nail on the head. Engineering for modularity will squeeze out the benefits at some point. It's not some evil conspiracy, but ever-shrinking electronics and tolerances. It's going to be the domain of robots and high res 3d printers because we just won't be able to see or touch things at this scale.
"I'd rather make improvements to speed, power consumption, size, and cost"
I think those are optimizations that you care about - but not the average consumer. We're getting to the point where phones of the same price pretty much universally operate within a standard of performance, give or take a margin of error, and can run a day or longer without recharge. At a certain point users stop caring about further optimizations and start caring more about features. And that's where modularity gives a huge pay-off.
With the whole debate about "who wants more power/speed? They are already fast enough", I feel we are missing one crucial point:
Our perception of hardware performance is driven by the software running on it
The only time you start feeling your tech isn't fast enough is when new software comes out that is even more resource-hungry. The PCs running Tomb Raider 1 seemed fast enough at the time, but can you imagine them running even today's apps? Even with smartphones themselves, the iPhone 3 was great! Until people started making apps that needed iPhone 4 levels of resources, and you started feeling your iPhone 3 was too weak...
So I think we're always gonna want more power. What's the point of a modular phone if it can't run any of the latest apps?
PS: I still think a modular phone would be insanely awesome! But just giving my 2 cents
It is always funny to see how the arguments for modularity age. Consider the idea that a car is this somewhat perfect modular thing. While there is certainly something to it having some obvious discrete parts, typically along the lines of what wears at a different rate from other items, the idea that it is modular for the 99% consumer is laughable. And I say this as someone who changes my own brake pads. Speaking of which, the "components" of most computers are more universal than the average car part. Unless you are simply talking about things which plug into what used to be the cigarette lighter.
> A screen has to put out enough photons that a pupil two millimeters wide can capture enough light from it half a meter away. That fundamental principle isn't going to change anytime soon, and since that's the largest piece of energy consumption in phones these days, power requirements aren't going to drop significantly in the foreseeable future
The technology isn't there yet, but I would love to see the day when we can make laptop screens (and the like) out of e-ink-like displays.
I don't like staring at backlit screens all day. A computer screen has about the same luminous intensity as a 40W bulb[0], which is not a pleasant thing to stare at all day.
If only we could figure out a way to make e-ink remotely usable with high refresh rates, etc.[1], we could rely on external lighting (which is usually available) and get dramatically better battery life out of our laptops and/or phones.
It could even fall back on "frontlighting" (what the Kindle Paperwhite does) for nighttime usage.
[0] Preempting any physicists' objections: yes, I know that watts do not measure luminous intensity (that would be the candela), but a 40W bulb is a familiar reference point, and this comparison is roughly in the right ballpark.
[1] Which is a big hurdle, and why I admit that the technology isn't there yet.
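For what it's worth, here's a rough check of the ballpark claim in [0] (all the numbers are assumptions picked for illustration: ~450 lumens for a 40W incandescent bulb, ~300 cd/m^2 for a typical laptop panel at full brightness, ~0.05 m^2 of panel area):

    from math import pi

    # Treat the bulb as roughly isotropic: intensity = luminous flux / full solid angle.
    bulb_candela = 450.0 / (4 * pi)    # ~36 cd

    # On-axis intensity of a flat panel: luminance (cd/m^2) times emitting area (m^2).
    screen_candela = 300.0 * 0.05      # ~15 cd

    print("40W bulb:      ~%.0f cd" % bulb_candela)
    print("laptop screen: ~%.0f cd" % screen_candela)

Same order of magnitude, so the comparison really is in the right ballpark, even if the bulb comes out a couple of times brighter.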
I'm not sure I buy the narrative about 1mm connectors. You don't currently line up the contacts on your graphics card - the card only fits into the slot one way. What is preventing us from taking the current design and doubling the number of contacts? Manufacturing tolerances get smaller, but they're certainly going to get smaller if we start printing the same number of connections.
It's a little hard to make out, but if you look at the photo of the ARA, those aren't 100-pin connectors -- they're more like USB connectors. Power + Data = 4 pins. So not all that hard to line up, especially if you have guides in the screen portion to slide in your modules. And unlike USB, there'd only be one way to orient the pluggable module. :)
My chief concern would be pocket lint getting in there.
Modularity is going to remain very very much alive, at least for mobile phones.
Here's why.
A mobile phone needs to be a certain size to hold it. There's always going to be a minimum size, below which it won't be sensible or usable.
When have you last seen one of those being used? People want their 4 or 5 inch screen, it's as simple as that. The most frequently sold Android devices used to be fairly dinky 320 by 240 affairs, but with the price difference becoming fairly tiny, they've now given way to devices with bigger screens, higher resolutions and more processing power.
The Google Nexus 4, for instance, features a base frame that's a fairly thick chunk of aluminium. Its bezel features large black areas at the top and the bottom of more than a centimetre each. It features triumphs of miniaturisation within, sure, but the product itself isn't one. It's designed to be used by human hands.
The only things that really require the space are the battery and the antenna. The antenna can be built into the base frame. It could even be a sheet that sits underneath the components and could also be replaced. Or it could be a bezel, similar to the one that started on the original iPhone 4, that might also double as a bumper frame for the thing. Have a bumper frame with connectors and you're there. Or maybe at some point those extra 3 or 6 dB don't matter much any longer for the majority of the market, especially if a lot of the heavy lifting will be done through WiFi.
When it comes to the speed a phone can accomplish, as it matters to the vast majority of the market, the size of the components needed to make that happen is not that much of a constraining factor.
At some point, the diminishing returns of a modular phone vs. a non-modular phone will bring it to a point where the performance difference, as it matters to the average consumer, will be about, let's say, 20% or maybe even 30%.
But when you can target the one aspect of performance that matters most to you and swap that out, if you can prioritise there, then you have the ability to make up for that.
If you then also have the components, on average, each last twice as long for your needs than would otherwise have been the case, it's just become twice as valuable an investment.
I like the size of my Nexus 4, and any performance penalties imposed by modularisation are outweighed by being able to swap out components and it being a better investment. There are diminishing returns to making mobile phones smaller. Desktop computers have reached a certain size and then they didn't really get any smaller because they had no reason to and modularisation outweighed that. Why not the same with mobile phones?
You're making the assumption that battery life is supremely important. If it's that important then you can make the phone slightly thicker. Or significantly thicker. I'm sure China can make you a phone that lasts for two weeks if you don't mind that it weighs fourteen ounces. But battery life isn't that important, which is why nobody buys that phone.
You're also making the assumption that modularity inherently has to add weight and require space, but half of its purpose is to allow the opposite: You can have one slot that will take any of a dozen different cards. A phone with one or two such slots can be smaller than the phone that tries to provide all twelve features in order to reach the same broad customer base, and most customers won't want every option (or every option at the same time) anyway.
I don't care. Make the processor more efficient, make the screen more efficient, and it really becomes a non-issue. LG could have put more battery in the Nexus 4 had they really wanted to, probably 20% more. Remember Nokias with four or five week battery lives? Well, with re-thinking on the other sides of this, like ubiquitous MicroUSB, battery life beyond two days became a bit of a non-issue. Completely. Sure, some phones might be constrained because of battery life, but they're not in the majority. It's just a standard component of a rectangular shape and thickness they put in these days, and it's good enough.
What am I really going to do with, what, 20% more battery life, if it already lasts 2 days? And how about the possibility of each of these components adding more than one piece of functionality? Who is to say that a GPU component can't also add a battery? I wouldn't mind having a phone with two batteries. And no one is saying that everything in the phone should be modularised out. How about a module just called 'mainboard' that contains everything but the battery and the antenna, and of course the screen? You could then override or enhance functions on this mainboard with additional modules. And it would be easy to replace. The idea of these connectors and all wasting space becomes a bit of a weird thought if you actually hold it in your hand and realise that it's a good size, it has good performance, it has good battery life, it has a good screen, and there might only be two or three modules in the eight module slots available. And that's doable. Saying that there's wasted space in a phone like that becomes, instead of a criticism, a justification for buying new modules.
All in all, the wasted space in this is not going to outweigh the "wasted space" (if you can call it that) in the majority of smart phones out there by a substantial amount.
In fact, it gives manufacturers an incentive to cram every single cubic millimetre they're given with as much functionality as possible, or at least sensible, because in a freer market their revenue will depend on this utterly. Has that been the case so far? I've repaired my Nexus 4. There are plenty of cubic millimetres of space being used that, strictly speaking, don't really need to be. Most of what's happening here is that these structural cubic millimetres are being shifted to a different and, sure, a bit less efficient shape.
The only outcome that matters coming out of all of this will be that it might cost $15 more. Or $5. Or some small integer dollars to pay for all the connectors. But if I can use at least half of it twice as long, that's an amount that doesn't much matter.
Here's a secret: The Nexus 4 is already full of spring loaded connectors anyway. I counted. It has about 5 going to the back cover. And one for the speaker. Add the SIM and that's another one. Then add the two connectors for antennas... and arrive at about 9. If you count the battery, which uses sprung pins in addition to two screws, that's 10. This thing has eight visible at the back, add a SIM and you arrive at 9, and add maybe another one or two at the front. Ten, maybe eleven. So, really, the only big difference is that these connectors are now more easily user accessible and the aluminium frame is a bit of a different shape. There's already plenty of internal plastic to surround individual components in a Nexus 4 anyway, so just make it a bit thicker on the outside. Not a big leap, people.
"But everything is moving towards integration"
Well, that hardly precludes it from moving in this direction as well. The CPU being on the same silicon as some measure of GPU is hardly going to be fundamentally affected by this, is it now?
The Nokia phones with four or five week battery lives used a single backlight for a 120x120 or so pixel monochromatic LCD screen.
Don't get me wrong, I love the one that I have and still use as an alarm clock, but it'll never replace a high pixel density colour screen in anyone's books.
In my world, almost everyone cares about consuming content (maybe not movies) on their smartphones, heck they use their smartphone more than their PC/TV.
Sorry, but a few things in phones are in my opinion important:
- Weight
- Size
- Battery life
Any modular design impacts all 3 of these things. And there are a lot more downsides to it than just that.
To be really modular, you would need some sort of 'interconnect' bus to tell the 'main CPU' of the phone exactly what modules are installed. This means that EACH module needs a chip that handles communication on this centralized bus. One of the reasons cellphones have become smaller, thinner and longer-lasting is the integration of all those little chips into just a few. Each chip requires a minimum of power to run, and if you have some sort of intelligent on-chip power management you can easily shut down parts that you don't need. Sure, some clever scheme to power down individual modules based on use can be engineered, but this would still be far from efficient.
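To make that per-module overhead concrete, here's a purely hypothetical sketch (the module types, power figures and the 'management bus' are all invented for illustration; a real phone would do this bookkeeping in firmware, not Python):

    # Hypothetical: what the "main CPU" has to track for every attached module.
    class Module:
        def __init__(self, slot, kind, idle_mw, active_mw):
            self.slot, self.kind = slot, kind
            self.idle_mw, self.active_mw = idle_mw, active_mw   # milliwatts

    class ManagementBus:
        def __init__(self):
            self.modules = {}

        def enumerate(self, module):
            # Each module needs its own controller chip just to answer this
            # handshake -- that's the extra silicon and idle power cost.
            self.modules[module.slot] = module

        def idle_overhead(self, in_use):
            # Even a clever power-down scheme still pays every unused module's
            # idle cost, because its bus controller must stay awake to be woken.
            return sum(m.idle_mw for m in self.modules.values()
                       if m.kind not in in_use)

    bus = ManagementBus()
    bus.enumerate(Module(0, "camera", idle_mw=5, active_mw=300))
    bus.enumerate(Module(1, "gpu", idle_mw=8, active_mw=900))
    print("Idle overhead with only the GPU in use: ~%d mW" % bus.idle_overhead({"gpu"}))

Even in this toy, the idle milliwatts never reach zero; in an integrated SoC that overhead mostly disappears into the one chip's own power management.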
Also, this 'central bus' will be limited to today's technology. Will it still be sufficient for next-gen hardware? What if, in a few years, 'integrated' phones have full 4K-resolution recording? Will this bus still have sufficient bandwidth? Current phone development is moving FAST. Why? No unnecessary constraints or reliance on aged technology: they can develop and adapt things as much as they want. Better placement of the microphones? Flash further away from the lens gives better results? Sensors integrated in the display? Front camera upgrade? Those are things that are going to be very tricky in a modular phone.
Modularity will also require more logic to be separated. A good example is cameras: most of the 'heavy lifting' is done somewhere on the main CPU's silicon. To have a 'better' camera, it's not sufficient to just replace the sensor and be done with it. You need an extra chip on the camera module, draining more power and wasting more space, unless you want to make serious trade-offs regarding speed, battery life, ... The trend is 'all in one' for a reason.
Also the 'recycling' argument is bullshit. Normal people will just throw away old 'blocks' in the normal garbage can without much thought.
And then you have the software mess. This is going to be great! Now, instead of Android apps having an already immensely fragmented target platform, you've just increased that exponentially. Minimum requirements for this game? Well, 2GB of RAM, this GPU, this CPU, ... Oh, and you need at least this module for sensor Y. Oh, and if you happen to have module M, you should remove it before running this app; there are compatibility issues with the current driver your system downloaded automagically (I hope).
Every modular concept has trade-offs, and in some cases they are perfectly acceptable (see expansion cards in normal desktop PCs, where size isn't a problem). A modular phone, on the other hand, has to make trade-offs in areas where they are simply not acceptable.
Seeing that computer running from inside the mobo box is like seeing pictures of frightened, malnourished animals in those Humane Society ads. Get a case, you cheap bastard! One coffee spill away from disaster...
It is even sadder that people are seriously believing this person, who is a designer, not an engineer. "I do art and code and sometimes other things. This is my homepage, which as a rule of thumb contains nothing useful. This homepage is where I conduct experiments, which at any given time are usually broken." @ http://jjcm.org/
I am sorry, but the only computers that are non-modular are ones like the Xbox 360 and products of that nature, where the companies do not want the user to change or upgrade the computer at nearly any cost. It is ridiculous to state that it is "inevitable" to become non-modular. Scientists have been pushing back the barriers of nearly every science there is over the past 50 years, and as such you can NOT conclude that this is "inevitable". It is more likely that by the time we run into a space problem at the level you are implying (which, by the way, is ridiculous, as other people stated... what graphics card uses 2+ PCI slots? LOL), we will finally figure out how to make the first bio-matter computers. I.e., a computer that has a CPU of nearly jello-like matter that will process data, at least at first, at a slower but much cooler rate than the CPUs we use at the moment.
I'm actually an engineer at Microsoft. Design is just a hobby of mine.
As for the graphics cards, I was wrong on that account. My memory betrayed me (I could have sworn I'd seen cards that used two slots somewhere along the line, but googling found nothing). The blog has been updated.
There are graphics cards that use two slots, just not two PCIE interfaces. Humongous cooling solutions on contemporary graphics cards usually extend over two slots:
Engineer? Of software? You stated you have a BS in CPS, not in engineering, nor in mechanical/electrical engineering, so how could you possibly make such bogus claims?
I was not objecting to him being an engineer; he can be an engineer of software, which does define him as an "engineer". But would Microsoft really hire someone who does not have a degree in the area of the job?
Of course they would. I know a lot of software engineers who initially studied something different that also involved coding to some extent and who are now full-blown software engineers. After a few years your experience and your skills matter, not the name of your degree (as long as you have one).
"Engineer? of software?" I know he is an engineer of software I was not disputing that. If you read my first comment I knew that I was disputing that he had any right to comment on hardware as he did which would put him into the electrical or mechanical engineering. As for getting hired with a CPS as a software engineer, uh yeah it's not identical but that's the same area as I mentioned in my later posts.