> It wasn't obvious at the time, but I think in retrospect the case can be made that Apple's ability to innovate effectively (both in terms of time and quality) died with Jobs.
Oh stop. Apple's effectiveness was entirely a product of its time. You had designers building the Zune and calling it competitive, but really there was no competition for the products Apple was building, for a market that was screaming for a better balance of form versus function.
This picture of Apple's innovation includes no part of OS X, which was always an OS that was technically capable but supremely messy and extremely behind the times.
Innovate effectively? For much of Jobs' tenure, Cocoa was being supported on BOTH Objective-C and Java, and when they ripped the Java bridge out, they chose Objective-C, not Java, as their platform of choice (in retrospect: wow), only to have to build yet another platform (Swift) just a few years later (Swift, of course, being based on MacRuby, which they initially tried to build Cocoa on top of internally, so I hear).
Innovate, maybe, but effectively, no. There is a whole host of broken, abandoned, and outright bad decision-making in the OS layer at Apple. Apple's success has everything to do with industrial design, UX design & marketing, and just a bit of being in the right place at the right time.
IMO this interpretation fits the timeline much more cleanly: Apple was churning out the same iPhones and MacBooks long before Jobs left, just as they had been doing with the iPod before that. The butterfly switches, the thin-above-all-else ethos, that's all part of Apple's MO dating back years. The lack of FM radio on the iPods, the lack of IR sensors on their early phones, the removal of removable batteries, headphone jacks, USB-A, etc., is all part of the DNA. A $5000 screen is not surprising to anyone who saw the Power Macs of the last generation. Butterfly switches look an awful lot like bendy iPhones, which look an awful lot like DOA PowerBooks.
I'm really not sure what part of this is new post-Jobs, but I have a strong feeling this is a great case study in confirmation bias. If you believe Jobs was a one-of-a-kind, irreplaceable visionary who was single-handedly responsible for Apple's success, then you have no choice but to interpret any action Apple takes post-visionary as a failure; otherwise you were wrong about the one-of-a-kind, irreplaceable visionary.
And it would be a pretty big blow to most to be wrong about Steve Jobs.
The other interpretation, of course, is that Jobs was not a magical visionary, just a smart guy who made a couple of right decisions, got lucky on lots of others, and still got it wrong on plenty of occasions (Macintosh TV cough Apple TV cough Apple TV 2), just like most fallible humans.
Apple only kept Java around because, coming from a Pascal and C++ background, they were unsure how the Apple developer community would welcome Objective-C.
When they saw the community had no issue embracing Objective-C, that is when they dropped the Java bridge, QuickTime for Java, and eventually their own JVM.
Chris Lattner never mentions MacRuby in his interviews; rather, he describes how, like Clang before it, Swift started as a side project before being shown to upper management.
According to his interviews, many of the Objective-C 2.0 and later improvements were already part of a slow roadmap toward Swift.
Yeah, but Objective-C 3.0 would have been a much better path than Swift. It would have allowed easier upgrading of existing code instead of mass rewrites, which always bring their own bugs (and that's ignoring the bugs in Swift itself).
This isn't an economic issue, it's an efficiency issue.
There are 3 competing ideas: fast (at runtime), cheap, and quick (to ship). You get to pick 2. We're picking cheap and quick, because "fast" comes for free on a longer-term timeline. You can argue about whether or not this is the right choice, but this is the choice society is making, and IMO it works better than taking significantly more time to build something that is only 10% faster. It's more efficient to let hardware manufacturers solve the "fast" problem when the differences there are on the order of 5-10% a year, while the time-to-release problem is on the order of 50-100% differences in development time.
This is extremely hyperbolic, and I'm sure you know this. Here's how you set a variable in React:
const foo = 1;
What you're doing in your complicated sample is not _just_ setting a variable, you're also exposing it to a KVO subscription system that you could never represent succinctly in ASM.
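For contrast, the stateful, subscription-backed version looks roughly like this (a minimal sketch assuming a modern hooks-style React component; the component and variable names are just illustrative):

  import { useState } from 'react';

  function Counter() {
    // useState registers the value with React's update machinery --
    // the "subscription" part that a bare `const foo = 1` doesn't have.
    const [foo, setFoo] = useState(1);

    // setFoo doesn't just assign a value; it schedules a re-render of
    // every component that reads this piece of state.
    return <button onClick={() => setFoo(foo + 1)}>{foo}</button>;
  }

That is the apples-to-apples thing to compare against a plain assignment, and it is doing quite a bit more than setting a variable.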
What's sad is how programmers communicate programming concepts right now, with quick digs and hot takes and zero actual critical thought about what is being compared. I'm disappointed that you're spreading FUD in your internal talks.
Yes, still, it's a way to set a value. And don't be so sure you can't make an entry in a KVO subscription system in asm; of course, at bottom, that's exactly what's happening.
I'm disappointed at the zero-attempt-to-understand-the-point digs made on Hacker News (like the one I'm responding to). The point is: how is multi-layered abstraction an unalloyed good? It's heavy, slow, complicated to author and explain, and not doing all that much of value.
Yes, it's an apples-to-oranges comparison, but the point is more: why are we using an orange when an apple will suffice?
The real question is, is all that really needed?
We keep building abstraction layers on top of abstraction layers when, oftentimes, there is already a tried-and-proven solution that works and is much simpler.
I've been doing some experimenting creating a single-page application (SPA) without a web framework, and it turns out you can get 90% of what React offers with a tiny amount of code.
I just use plain JavaScript objects to store data and wrap updates in a function that triggers a re-render of the VDOM / a repaint.
And even if we do need that additional layer of abstraction we can always make it appear simpler and provide a cleaner interface to the programmer. A Proxy object setter could be used to eliminate the boilerplate of State.update.
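A minimal sketch of that idea, assuming a browser environment (the reactive helper and the render callback are hypothetical names, not from any library):

  // Plain object holding the data, wrapped in a Proxy so that any
  // property write triggers a repaint -- no explicit State.update() call.
  function reactive(data, render) {
    return new Proxy(data, {
      set(target, key, value) {
        target[key] = value;   // do the ordinary write
        render(target);        // then re-render from the updated data
        return true;           // a Proxy setter must return true on success
      }
    });
  }

  const state = reactive({ count: 0 }, (s) => {
    document.body.textContent = `count: ${s.count}`;
  });

  state.count += 1;  // a plain assignment, but it triggers a re-render

In a real app you'd want to coalesce multiple writes into one render (e.g. via requestAnimationFrame), but the core of the abstraction really is this small.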
Good abstractions simplify the problem by creating a mental paradigm that is closer to the problem domain. Poor abstractions do the opposite, they take away what is needed to solve the problem and create additional steps that are not really germane to the problem.
Many abstractions are well intentioned, but after we toss layer on top of layer, we often get so far removed from the problem domain, we have to ask ourselves if it is not simpler to just build up another set of abstractions that allow us to get closer to the problem domain (i.e., "First Principles").
Article tl;dr: "I believe that apples are more efficient than oranges after all."
I just want to point out that the idea that "modern buildings use just enough material to fulfill their function and stay safe under the given conditions" is fundamentally at odds with the author's subsequent thesis.
Modern buildings don't use "just enough material", because "just enough material" would be _just_ concrete everywhere; it would have been "just wood" 200 years ago, but that's not good enough now. This is exactly the problem: it's not that software is unnecessarily bloated, it's that software has evolved to solve higher-order problems, ones that are not simply based on how fast a computer can count to 10. Similarly, the definition of "fulfill their function" in the context of buildings has changed too. That definition changes all the time, even in building codes.
In the modern-building hierarchy of needs, we are way past the "stay safe" level. We still optimize for safety, sure, but that hardly accounts for your spray-foam insulation, HVAC, built-in wireless units, complex built-in cabinetry, complex appliances, and more. Simple things like "electricity" are now part of the definition of fulfilling a building's function. Go find a 200-year-old building and you will find a building that simply does not fulfill today's functions. Even safety standards have changed and continue to change all the time. You can look at historical building codes and see evolving fire safety (asbestos? NO ASBESTOS!), seismic safety, and more.
This is the point. To say that Windows 95 is 30MB discounts the years of improved process space isolation, memory protection, and Spectre mitigations that, if missing, would cause an enormous public backlash about why Microsoft doesn't care about security. Windows 10 is 100x larger because WE asked for it to be. WE wanted WiFi, VPN, and IPv6 switchover and tunneling all added to our network stack. WE wanted GPU-enhanced UI threads. WE want haptic feedback and touchscreens and predictive text and predictive multi-touch pixel accuracy for our touchscreen laptops. This extra complexity exists because our standards changed. A text editor isn't just something that renders ASCII anymore-- heck, it's not even just for rendering characters. My "text editor" is a full web browser because I _need_ that for development these days.
Extra complexity is a feature, not a bug. We built computers specifically to do this stuff, not in spite of it. The abstractions and complexities aren't getting in our way; they are literally the things we are building. Does performance suffer? Maybe, but that's because we are explicitly paying for functionality. If you want a fast text editor, obviously a black-and-white screen that only renders 256 characters will be faster than VS Code; go ahead and use that software, but you're not getting the other things you probably want. Your very next complaint will undoubtedly be "how do I easily diff my Git branches?"-- and this is how software becomes more complex.
You don't need a web browser to edit text. As evidenced by dozens of excellent text editors that are not built on web browsers.
It's just easier to take a web browser that already has something approximating a text editor, and pile things on top of that.
And why is that? Well, because you want to support all the different platforms, and we as an industry have absolutely screwed up the portability story, so we build hacks instead. There's absolutely no reason why developing a GUI app for macOS should be radically different from developing a GUI app for Windows or Linux - they all ultimately do the same thing. There's no sensible reason for them to be different. But they are. And so now the easiest way to solve that problem is to pile the browser on top and forget that the differences exist. Of course, it doesn't actually solve the problem in general, because the differences are still there, and the industry as a whole still pays the overhead tax, both in the man-hours someone has to spend maintaining that flimsy stack of abstractions and in the runtime performance tax those abstractions impose.
But the only way to fix it is to burn the whole thing to the ground. And that's not happening, because the short-term cost is too large to even contemplate any long-term gains.
> My "text editor" is a full web browser because I _need_ that for development these days.
I mean, you definitely don't need a full web browser for software development. That's maybe the least worst option you have at the moment (which is sad) but it could be hella smaller.
Development on the MS side was just fine in the 90s for many people. If you were an open source developer in the 90s, maybe not, but then again, OSS in the 90s was pretty broken for _everyone_.
They mean that the signing bonus is divided into two yearly payouts instead of one. It's a single bonus.
I personally don't see what's wrong with an N/2N/3N... vesting schedule for post-IPO companies. You're buying in long term; that's the deal. Is it one-sided in favor of employee retention for the employer? Sure. The overall comp at AMZN is pretty competitive, though. That's the deal.
It makes more sense for a fiscally stable company like Amazon, which actually sees reliable stock growth and has a real business model, as compared to Snap, with neither of those things-- but that's not a criticism of the vesting schedule, it's a criticism of Snap's true value. That's the thing that would keep me away from a company like that, not the length of my RSU vesting periods.
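To make the mechanics concrete, here's a toy back-loaded schedule with entirely hypothetical numbers (not anyone's actual offer), using a 5/15/40/40 split as one commonly cited shape for this kind of schedule. The cash signing bonus is paid across the first two years precisely to bridge the small early vests:

  // Hypothetical 4-year offer: $160k salary, a $100k signing bonus paid
  // in two yearly chunks, and a $200k RSU grant vesting 5/15/40/40.
  const salary = 160000;
  const signingBonus = [50000, 50000, 0, 0];     // one bonus, two payouts
  const rsuVestPct   = [0.05, 0.15, 0.40, 0.40]; // back-loaded vesting
  const rsuGrantValue = 200000;

  const totalByYear = rsuVestPct.map((pct, year) =>
    salary + signingBonus[year] + pct * rsuGrantValue);

  console.log(totalByYear); // [220000, 240000, 240000, 240000]

The point of the structure is that total comp stays roughly level each year, while the equity portion ramps up the longer you stay.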
> I stop counting the 5+ years old laptops that have to be upgraded,
5+ years ago is that LOOOOOONG time that the OP was talking about. It's also unfair to compare the technical capabilities of old hardware, for many reasons. I think the point was that new hardware, _while it's new_, is becoming more and more capable. Any new laptop today, even a budget one, can handle YouTube videos in HD. The problem is that HD today won't be the same HD that exists in 5 years (i.e., 4K), and it's sensible that a budget laptop today will struggle with the 8K technology that comes out 5 years from now. This is an old problem (pun intended) and should not be surprising.
> Because personally, I keep having performance problems on all laptops I have.
Selection bias. Programmers who compile code, run VMs or containers, and process tons of data are not the average consumer laptop use case and have much stricter requirements. Many people are sitting in Facebook, YouTube, Gmail, or Google Docs for most of their day-- and likely inside of Chrome.
Where are the "Chrome is Flash for the desktop" posts?
The idea that Electron is a meaningfully different user experience for the vast majority of users seems, to me, skewed by developer usage.
I don't know, 5+ years isn't that old anymore for a computer. Like, 5 years ago I was running... a Core i7 with 4GB of RAM. And now I'm running... a Core i7 with 16GB of RAM. The only things in computers that have really gotten significantly faster are SSDs and GPUs.
It takes a surprising amount of power to decode video. The cheap CPUs in netbooks have been struggling for a decade, especially in battery power-saving mode.
Lately, they get hardware acceleration just for that: special CPU instructions and drivers just to achieve it decently.
For YouTube in particular, they're sending VP9+Opus where the browser supports it, without considering hardware acceleration. The rather anaemic Atom chips might have H.264 decoding on-chip, but only Kaby Lake has VP9.
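For reference, the browser-side capability check that drives this decision is roughly the following (a sketch using the standard MediaSource.isTypeSupported API; note that it only reports whether the browser can decode at all, not whether the decode will be hardware-accelerated, which is exactly the gap being described):

  // Ask the browser whether it can play VP9 video + Opus audio in WebM,
  // versus baseline H.264 + AAC in MP4.
  const vp9  = MediaSource.isTypeSupported('video/webm; codecs="vp9, opus"');
  const h264 = MediaSource.isTypeSupported('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');

  // A site can pick a codec based on these flags, but neither flag reveals
  // whether playback will hit a hardware block or burn CPU in software.
  console.log({ vp9, h264 });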
It's not as bad as the previous poster states, but it's not quite as simple as you make it seem either. A green card holder forfeits their residency if they leave the US for "more than 6 months", or if the border patrol people feel like they've abandoned their residency for any reason. This doesn't affect most employees, but if you're a consultant working on-site in another country for extended periods, or simply travel often, you have to do way more work to get everything cleared. And even then, there's no guarantee you won't run into problems.
Git repos change way more often than domain or company names do. Moving from code.google.com to github.com, moving a project from a user to an org, transferring across orgs, renaming an org: all of these things have happened to me in a single Golang project. Ironically, the code.google.com -> GitHub move was actually the Golang stdlib code itself.