mojuba's comments | Hacker News

There's a shorter form of it:

  guard let self else { return }
which annoyingly the AIs don't know about.
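
For reference, a minimal Swift sketch of the old and the new spelling side by side (the Loader type and its methods are made up for illustration):

  import Foundation

  final class Loader {
      var items: [String] = []

      // Pre-Swift 5.7 spelling: rebind `self` explicitly.
      func reloadOld(using fetch: @escaping () -> [String]) {
          DispatchQueue.global().async { [weak self] in
              guard let self = self else { return }
              self.items = fetch()
          }
      }

      // Swift 5.7+ shorthand: the `= self` can be dropped.
      func reloadNew(using fetch: @escaping () -> [String]) {
          DispatchQueue.global().async { [weak self] in
              guard let self else { return }
              self.items = fetch()
          }
      }
  }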


Not much better, to be honest.


The early internet was like some settlers' shacks, built uncontrollably and unsystematically, whereas the modern web is all skyscrapers, residential and business. Uniform apartments, uniform offices, all looking very similar, differing only in subtle interior design details here and there.

Should we go back to the shack era? Of course not. But maybe we should start a new era of land exploration and start over. It wouldn't necessarily be Internet 3.0; it might be something else entirely. AR/VR? Possibly, although that has already failed once.


The only thing missing from your analogy is the fact that the shacks were filled with personal diaries and curios, while the skyscrapers are mostly chock-full of homogeneous sewage slurry.


Also, the shacks weren't really particularly shabby or anything; they were just more like well-enough-constructed single-family homes.

Old websites, before scripting became popular, were pretty much solid in that boring-tech way. Hardware and networks were not as reliable, but the sites themselves could stay fine through sheer simplicity.

Modern overdesigned sites are sort of like modern apartment buildings: shitty build quality under fake plastic marble and wood.


Well, they did literally often have "this page is under construction" banners on them. And <blink> tags, lol.

I'd take it all back over the Squarespace hellscape the web has become.


If you've visited old mining operations / shacks, that's pretty common! There are always some weird choices and cool things to see.


> Should we go back to the shack era? Of course not.

This isn't obvious; at the least, we can't write the idea off with an "of course not."


Keep in mind the early websites were mostly built by an enthusiast minority, technical or not, but willing to learn HTML and Netscape Composer. You can't expect the whole of humanity to be as enthusiastic. The skyscraper era, no matter how much we all hate it, makes the web more democratic: it gives everyone some standardized space (Facebook, YouTube, etc.) with algorithmized discovery, which is the parking and the elevators if you want to continue the analogy.


Hard to live through what social media has done to society over the past decade without at least entertaining the idea that the higher barrier to entry of being online was maybe not a bad thing.


I wouldn't agree that the higher barrier to entry was a good thing, but I would also say that the barrier to entry was actually pretty low, with Angelfire, Geocities, etc., Dreamweaver and other WYSIWYG editors, and no need for a giant JS framework with bundling and tree-shaking.

The problem is that the barrier to entry got too low, so it was necessary for large companies to interpose themselves between producers and audiences, starting with Google (becoming something other than a grep for the web, and instead becoming the editor and main income source for the web) and expanding outwards into Facebook.

Remember that we started with walled gardens like AOL and CompuServe, and the web (and the end of those companies) was people desperate to break out of them. Now people have been herded in again since the indexers bought the ad companies.


I don't disagree, but notice how it's about the second decade of Web 2.0, not the first one. Profit-driven algorithms are a separate era in their own right. I.e., you can't blame the skyscrapers themselves for your shitty life; you just need to demand more regulation.


If the skyscraper is designed with elevators that try to keep me in and away from the first floor so I don't leave, I can definitely complain.


Yes, for sure! It was a different time. Early website authors were pioneers. They had something worth sharing and they thought it worthwhile enough to learn some coding. Nobody was trying to push ads and monetize, and there was no ubiquitous tracking or cookies.


If we're talking about the internet before Eternal September, maybe, but putting up a site on Geocities or Tripod or using Dreamweaver certainly was not a high barrier to entry.


Facebook and YouTube are top-down managed systems, and I think it is a real disservice to the idea of democracy to call this sort of thing “more democratic.” They are democratic like a mall is, which is to say, not.


I like to compare today's web to radio in the late 1800s and early 1900s.

Back then, if you could piece together a transmitter and throw an antenna up, you were a broadcaster and many broadcast whatever they felt like. Just like today's internet.

Social media is the CB radio of the 1970s and 80s when anyone could buy a small rig and do all kinds of weird and wild things for cheap.

But, eventually, something had to rein in all that, and the FCC along with international laws and standards came up to calm all that down. In the same way, I think the internet will eventually become licensed and regulated.


The FCC licenses radio broadcasters because the spectrum is finite. Which finite aspects of the internet do you see driving such eventual practice?


The rationale behind the FCC is that it's regulating a limited resource (spectrum space). The web is not a limited resource (although bandwidth is, but that's a different debate). The web is also international, and we're already seeing conflicts where one country tries to force its regulations onto another. That metaphor just doesn't work where the web is concerned.

I agree that the web in the US, and specifically large social media platforms, will probably be regulated because that seems to be one of the few things both parties agree on for their own reasons. But more so because the government wants to control information and surveil citizens. I think the balkanization of the web as a whole into smaller, closed networks is probably inevitable.

But what's most depressing of all is how many people in tech and on HN would be thrilled if one needed a license to publish on the internet just because that would implicitly push most people off of the web and leave it for a privileged elite.

As bad as social media can be (and I think its harm is often oversold for political ends), having a space where anyone can publish and communicate and create freely, where different platforms can exist and cater to different needs, where media isn't entirely controlled and gatekept by corporations, is critically important. More important than any other communications paradigm before it, including the printing press.

It's really going to be sad when we burn it all down, because it seems unlikely anyone is going to make something as free and open as the web ever again.


> But, eventually, something had to rein in all that, and the FCC along with international laws and standards came up to calm all that down.

No, it actually stayed pretty lively until the 90s, when the government decided that there could be huge monopolies in media, all the stations were bought up by like 6 guys, and were automated to play Disney music 24 hours a day.

Not such a neat story, right?


> Should we go back to the shack era? Of course not.

I am not sure. Different people want different things. I run a Hetzner cloud instance where I toss up a simple webpage with locally hosted travel photos for friends and family, and a Jupyter server (with a very weak password) on the same instance for myself and any friend when we want something more powerful than a calculator.

And this messy, improperly organized, breaks-all-design-patterns way of doing things works just fine for me. So I'm fine with a shack for personal communication and as a personal space. My 2c.


I think a better analogy is large corp built & owned vs small artisan businesses & single family homes. Why not have both?


>AR/VR? Possibly although that has already failed once.

I'm pretty sure it's already failed 3 times.


AI in its present form is probably the strangest and the most paradoxical tech ever invented.

These things are clearly useful once you know where they excel and where they will likely complicate things for you. And even then, there's a lot of trial and error involved and that's due to the non-deterministic nature of these systems.

On the one hand it's impressive that I can spawn a task in Claude's app "what are my options for a flight from X to Y [+ a bunch of additional requirements]" while doing groceries, then receive a pretty good answer.

Isn't it magic? (If you forget about the necessity of adding "keep it short" all the time.) Pretty much a personal assistant, minus the ability to perform actions on my behalf, like booking tickets; it's a bit too early for that.

Then there's coding. My Copilot has helped me dive into a gigantic pre-existing project in an unfamiliar programming language pretty fast, and yet I have to correct and babysit it all the time by intuition. Did it save me time? Probably, but I'm not 100% sure!

The paradox is that there's probably no going back from AI where it already kind of works for us, individually or at the org level, but most of us don't seem to be fully satisfied with it.

The article here pretty much confirms the paradox of AI: yes, orgs implement it, can't go back from it and yet can't reduce the headcount either.

My prediction at the moment is that AI is indeed a bubble but we will probably go through a series of micro-bursts instead of one gigantic burst. AI is here to stay almost like a drug that we will be willing to pay for without seeing clear quantifiable benefits.


It's a result of the lack of rigor in how it's being used. Machine learning has been useful for years despite less than 100% accuracy, and the way you trust it is through measurement. Most people using or developing with AI today have punted on that because it's hard or time-consuming. Even people who hold the title of machine learning engineer seem to have forgotten.

We will eventually reach a point where people are teaching each other how to perform evaluation. And then we'll probably realize that it was being avoided because it's expensive to even get to the point where you can take a measurement, and perhaps you didn't want to know the answer.


I feel like one benefit of humans is you can find someone you can truly trust under almost all circumstances and delegate to them.

With AI you have a thing you can't quite trust under any circumstance even if it's pretty good at everything.


Like the proverbial broken clock that shows the correct time twice a day, AI may "show the correct time" for 99% of prompts, but it doesn't deserve any more trust.


A hammer doesn't always work as desired; it depends on your skills, plus some random failures. When it works, however, you can see the result and are satisfied with it. Congratulations, you saved some time by not using a rock for the same task.


I can trust a hammer will be a hammer, though.


> Is the software easy to take in at a glance and to onboard new engineers to?

This is not as easy as it sounds. Who are those "new engineers": juniors? 10 years of experience? 30? What's your requirement?

"Readability" is such a wildcard, with a whole range of acceptable levels from zero to infinity. Readability is a non-concept really. Maxwell's famous equations are readable to some and absolutely impenetrable to the rest of us.

So when someone says "code should be readable", to whom exactly?


Readable code is code that has empathy for the reader and tries to minimize the cognitive load of interpreting it. That's one of the goals of abstraction layers and design patterns.

Yes, it's all subjective, and depends on the reader's expertise and existing familiarity with the codebase. But arguing that code readability isn't a thing because it's subjective is an absurd take. Would you claim that Joyce's Ulysses is as readable as Seuss's The Cat in the Hat?


I see this argument pattern a lot, so I looked into what it's called. Apparently it's the Sorites paradox: https://en.wikipedia.org/wiki/Sorites_paradox or the "continuum fallacy", in which something that's continuous is dismissed as not existing because we can't divide it into clear categories.


Did someone claim readability does not exist?


> Readability is a non-concept really

Yes.


Readability without a clarification is a non-concept. You can't say "X should be readable" without giving some context and without clarifying who you are targeting. "Code should be readable" is a non-statement, yes.


Add "to most developers" for context and you'll probably get exactly what original claim meant.

It's not a non-statement. Rich Hickey explains it well, readability is not about the subjective factors, it's mostly about the objective ones (how many things are intertwined? the code that you can read & consider in isolation is readable. The code that behaves differently depending on global state, makes implicit assumptions about other parts of the system, etc - is unreadable/less readable - with readability decreasing with number of dependencies).
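
As a rough, made-up Swift sketch of that distinction (not Hickey's own example), the second function can be read and tested in isolation while the first can't:

  // Depends on global mutable state: to know what this returns you also
  // have to know who else touches `discountRate`, and when.
  var discountRate = 0.1
  func priceAfterDiscount(_ price: Double) -> Double {
      price * (1 - discountRate)
  }

  // Self-contained: everything that affects the result is in the signature.
  func priceAfterDiscount(_ price: Double, discountRate: Double) -> Double {
      price * (1 - discountRate)
  }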


It can be further refined to

"to most developers who are most likely to interact with this code over its useful lifetime."

This means accounting for the audience. Something unfamiliar to the average random coder might be very familiar to anyone likely to touch a particular piece of code in a particular organization.


>"Code should be readable" is a non-statement, yes.

Oh, I completely disagree here. Take obfuscation, for example, which carries over into things like minified files in JavaScript. If you ever try to debug that crap without the original file (which happens far more often than one would expect), you learn quickly about readability.


>> Readable code is code that has empathy for the reader and tries to minimize the cognitive load of interpreting it. That's one of the goals of abstraction layers and design patterns.

Usually that means something less than "perfect" from the perspective of the writer. Applying too much DRY, SOLID, DI and other "best practices" will make it very hard to understand. Pretend you have about 20 fewer IQ points than you actually have when writing the code; you will thank yourself when you come back to it.


Reminds me of someone's quote:

> Reading -- and understanding -- code requires twice the brainpower of writing code. So if you used every bit of your intelligence to write 'clever' code, you won't be able to maintain it because it requires twice your intelligence to read it again.

Einstein's words are oh so suitable as well:

> Everything should be made as simple as possible, but not simpler.


The trap people fall into is calling The Cat in the Hat unreadable compared to Joyce's Ulysses because The Cat in the Hat they're reading is written in German and all they understand is English.


I didn't say readability is subjective. I'm just asking, when someone says "code should be readable" without any clarifications, what does it really mean?

Big companies may actually have an answer to that: "since we require at least 2 years of experience in the area from new hires, all code should be readable at that level".

However startups may prioritize something else over readability at that level, for example: move fast, produce the most minimalist code that would be easy to maintain for people like you.

My point being, "code should be readable" should always come at least with a footnote that defines the context.


There are two quite widespread classes of unreadable code:

Some code is not readable by _anyone_. That's not readable code.

Some code is readable by its author only (be it an AI or a human). That's also not readable code.

Saying readability is not a concept is really strange.


Readability, to a certain degree, is heavily influenced by the reader's experience and familiarity.

If somebody has spent lots of time with specific patterns, they'll find them natural to read and mentally process.

To others, they'll be unreadable.


I'll have to disagree.

Developers coming from functional programming and developers coming from C programming, for instance, have very different definitions of "readable", and neither is obviously wrong.

Similarly, developers used to channel-based, async-based or mutex-based concurrent programming will all have very different criteria for "readable" code, again none of them obviously wrong.


Those are just paradigms, ways of solving problems. There's a difference between familiarity and readability. Sometimes you have to learn stuff before understanding it. Readability is how easy that is, given familiarity with the base concepts that the code uses.


You are correct.

Yet it's pretty easy to find people who consider that `map()` or `filter()` are simply not readable – or, on the other side of the aisle, that having a loop variable is detrimental to readability.
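
For a concrete, made-up Swift example, here is the same transformation written both ways; which one reads better is exactly where people disagree:

  let scores = [72, 88, 45, 91, 60]

  // Loop style: an explicit loop variable and a mutable accumulator.
  var curvedPassing: [Int] = []
  for score in scores {
      if score >= 60 {
          curvedPassing.append(score + 5)
      }
  }

  // map/filter style: no loop variable, no mutation.
  let curvedPassing2 = scores.filter { $0 >= 60 }.map { $0 + 5 }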

And of course, these criteria change with time, industry and programming language used.


I have a formal proof for you that it is a non-concept. If code can be read and interpreted by a computer, it means it can in principle be read by a human. There are of course some edge cases like obfuscated JavaScript or binary executable that some people are able to read and understand.

The question comes down to being reasonably readable, and we are back to square one: "reasonable" is very relative. In my early days I could read 8086 binary code (in hex) and understand what it did; it was literally at the very edge of readability, but it wasn't unreadable.


You are using a different definition of readable than most people are. Most people are using it to mean "the target audience can read and understand the code, and do so in a way/context that allows them to work with it". Your definition seems to be "can read the symbols on the screen".

I can read Assembly. I can, in some cases, figure out what that Assembly is doing. I can not, however, work productively with it. I can read Assembly but would not consider it readable.


Sure, but we do agree that Hello World is MORE¹ readable in Python compared to the equivalent program in, say, Brainfuck?

  print("Hello World")  
  
vs

  ++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.  
  
¹: more readable means easier/faster to read for most human beings that know the language


I can't reasonably read whether this comment agrees or disagrees with the parent


Speaking of those equations: as he wrote them, they were considered rather impenetrable, and the modern ones are considered much more beautiful and 'readable', but that was the work of Heaviside and others.


> Readability is a non-concept really. Maxwell's famous equations are readable to some and absolutely impenetrable to the rest of us.

When we talk about a language's readability we're typically talking about 'accidental complexity', to use Brooks' term [0], and not the 'essential complexity' of the problem being solved. For a hairy enough algorithm, even pseudocode can be difficult to understand.

Readability applies in mathematics too, as a bad notation may make formulae unnecessarily difficult to comprehend.

> So when someone says "code should be readable", to whom exactly?

I'll have a go: to another competent engineer familiar with the general problem domain but not familiar with your specific work. This includes yourself in 2 years' time.

This seems rather like the question of readability for scientific writing. Research papers should be readable to other researchers in the field, but they aren't generally expected to be readable to a general audience.

[0] https://en.wikipedia.org/wiki/No_Silver_Bullet#Summary


+1. To come back to the author's own narrative, familiarity plays a big role here.

If the new engineer is well versed in mapping and filtering, they'll have an easier time onboarding onto a codebase that's largely devoid of manual loops.


It is hardly worth worrying about how readable "local code" is.

Following the same patterns across large parts of the codebase is what makes the codebase as a whole readable. Those patterns may even be complex; as long as they are used over and over without too much deviation and flag explosion, the codebase will be readable.

In short, local isolated code can be as bad to read as it wants, as long as it doesn't infect the codebase as a whole (e.g. through shared mutable state or a bad API).


Completely agree. Readability is actually in the word itself: read + ability. The ability of both the code and the reader.


Code readability isn't a metric. It is a tradeoff. It basically boils down to: if in doubt, will that programmer go with the more readable version of the code, or will they stick with the slightly terser, clever hack?


Call me naive, but I would presume that even a junior, once they start working at a company, should be familiar enough with a language that they know all the basic syntax, idioms, etc. Still, even if they are, over-using some language features will make your code less readable (to anyone). E.g. some will prefer good ol' if/else to the notorious ternary operator and its many descendants. But that brings us back to your own personal taste...
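
A trivial, made-up Swift illustration of that taste question:

  let age = 20

  // Ternary: compact, but some readers find it harder to scan.
  let label = age >= 18 ? "adult" : "minor"

  // Good ol' if/else: more verbose, arguably easier on the eyes.
  let label2: String
  if age >= 18 {
      label2 = "adult"
  } else {
      label2 = "minor"
  }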


I disagree. The ability of someone to read code doesn't grow exponentially; after a few years of experience everyone hits the same plateau. More years of experience does not mean you can understand more complex code.

That is to say, if you target "readable to the majority of engineers with 3-4 years of experience, without them getting confused" then you've hit the mark.


> after a few years of experience everyone hits the same plateau

I'm sorry, but this is a very naive take, presumably (I could be wrong) coming from someone with just a few years of experience.


> Apple has a max-efficiency design that's excellent for personal computing. Intel/AMD have aging max-performance designs that do beat Apple at absolute peak...

Can you explain then, how come switching from Intel MBP to Apple Silicon MBP feels like literally everything is 3x faster, the laptop barely heats up at peak load, and you never hear the fans? Going back to my Intel MBP is like going back to stone age computing.

In other words if Intel is so good, why is it... so bad? I genuinely don't understand. Keep in mind though, I'm not comparing an Intel gaming computer to a laptop, let's compare oranges to oranges.


If you take a peak-performance-optimized design (the Intel CPU) and throttle it down to low power levels, it will be slower than a design optimized for low power (the Apple CPU).

"let's compare oranges to oranges"

That's impossible because Apple has bought up most of TSMC's 3nm production capacity. You could try to approximate by comparing Apple M4 Max against NVIDIA B300 but that'll be a very one-sided win for NVIDIA.


> That's impossible because Apple has bought up most of TSMC's 3nm production capacity. You could try to approximate by comparing Apple M4 Max against NVIDIA B300 but that'll be a very one-sided win for NVIDIA.

Have you not heard that Intel's Lunar Lake is made on the same TSMC 3nm process as Apple's M3? It's not at all "impossible" to make a fair and relevant comparison here.


> Can you explain then, how come switching from Intel MBP to Apple Silicon MBP feels like literally everything is 3x faster, the laptop barely heats up at peak load, and you never hear the fans? Going back to my Intel MBP is like going back to stone age computing.

My understanding of it is that Apple Silicon's very, very long instruction pipeline plays well with how the software stack in macOS is written and compiled, first and foremost.

Similarly, the same applications often take less RAM in macOS than even in Linux, because at the OS level things like garbage collection are better integrated.


I did not say "Intel is so good". I said "x86 peak single-thread performance is just a hair better than Apple M-series peak".

Pretty much everything else about the M-series parts is better. In particular, Apple's uncore is amazing (partly because it's a lot newer design) and you really notice that in terms of power management.


Is the Intel MacBook very old?

Is it possible that your workloads are bound by something other than single-threaded compute performance? Memory? Drive speed?

Is it possible that Apple did a better job tuning their OS for their hardware, than for Intel’s?


It all comes down to the thermal budget of something as thin as an MBP.


And then there are third-tier historical medieval towns that are 100% walkable and you again don't need a car.

My ideal city of the future is a small walkable town with everything within a 15-20 minute walk, possibly a part of a conglomerate of towns that run trains or buses between them.

I currently live in one such historical town in Southern Europe that's protected by UNESCO. The streets are so narrow that not only is there no public transport, all non-resident and non-delivery traffic is prohibited, and there isn't even Uber. And yet you have everything you need for life and work within a 15-20 minute walk max. More for remote work, obviously.

An ideal city of the future doesn't need to be medieval but maybe we should go back to a city planning concept that is made for humans and not cars. And you know, narrow pedestrian streets are totally fine, they are cute!


> And then there are third-tier historical medieval towns that are 100% walkable and you again don't need a car.

Ah yeah, sure, I'll just find work in a place and then buy a house there. It's not like 3+ decades of mismanagement of migration and internal policies left even places 30+ minutes by car from work unaffordable for mere mortals.


> third-tier historical medieval towns that are 100% walkable

Very many people, including me, want to live in a glorious walkable bijou old-town stone apartment, except they can't afford to, because they stopped building them like that in about 1756, and the only jobs within walking distance of the old town are in hospitality, and those don't pay the salaries needed to buy one of the treasured old-town apartments out from under an Airbnb host.

And if it's a really small, non-tourist town in the middle of nowhere, it may not even have the hospitality sector. So, yes, that bijou property may indeed only cost 50,000 euros, and yes, you can walk to the boulangerie or the confitería or whatever but you're probably going to need a car to get out of your tiny town and go to work or basically anywhere else.


Or you could work remotely or hybrid, or take a 30-60 minute wifi-enabled commuter train to the big city for your big city job, clocking in and handling your emails during your commute and doing the last bits of work on your way home.

There's lots of solutions.


It's interesting to me seeing the different ways that different people respond to our modern urban hellholes. I don't want to live in a city at all, I want to live in at most a village where people all have their own land, and the village 'center' is just the most convenient nexus of property lines, where people could set up the local market.

I always sort of assume people who are into de-urbanization are also de-dev, because I don't see how or why the large-scale industrial base would be needed or could be sustained with only smaller, distributed cities, but it's interesting to hear another perspective.


Peasant life has its charms, I suppose.


It's only peasant if you have a Lord. Ni Dieu, ni maître ("neither God nor master").


You only have everything you need for work in such a city if your "work" is limited to small offices, restaurants, and retail shops. So you're excluding everything related to manufacturing, agriculture, resource extraction, logistics, military, etc. You know, all of that stuff that keeps modern industrial civilization operating and allows quaint medieval towns to continue existing at all. If you like where you live that's great, but it's hardly ideal and certainly not scalable.


> all of that stuff that keeps modern industrial civilization operating and allows quaint medieval towns to continue existing at all

That doesn't make sense to me. Medieval towns existed for centuries before industrial civilization and without it we might see a drastic increase in medieval style living...

In any case the poster is talking about their own ideal future scenario, maybe leaving out the details like the robots working in underground manufacturing facilities or fusion-powered hydroponic vertical farms etc.


> if your "work" is limited to small offices, restaurants, and retail shops.

...or just any kind of remote work. Still limited and not available to everyone, obviously, but it can't be omitted.


That's neat! But please kill the semicolon if you can. It's too 20th century already.


Why? It's a good grammatical equivalent to the full stop for the programmer. It can serve as useful context for the compiler. And it's only one character. Antagonism over semicolons is another strange symptom of conciseness at all costs. If you want APL, just use APL.


I always thought ending programming statements with a period '.' like in Prolog was more elegant.


> If you want APL, just use APL.

Or Python, Go, or Typescript.


Or Swift


or Haskell


A counterargument is that it's meaningless for the developer, and high-level programming is writing for people, not things.


That's why, when we write, we use commas and periods. It tells the reader when a thought ends and the next begins. A semicolon is the traditional period in programming. Not everything fits on one line. Python managed to pull it off, and now everyone thinks it's the right way… it's just "a way", but by no means modern or right. JavaScript made them optional, but that results in ambiguous parsing sometimes, so it's not a good idea there either.

In any case, I doubt a run-on sentence is "meaningless", but it is hard to parse.


In other words, just like with autonomous driving, you need real-world experience, aka general intelligence, to be truly useful. Having a model of the world and knowing your place in it is one of the critical parts of intelligence that both autonomous vehicle systems and LLMs are missing.


Makes it even more impressive considering that the A320 is slightly more expensive.


Pricing on these planes is pretty complex. It's a stretch to say that Airbus is unequivocally more expensive without comparing various options.

Source: my brother worked for Boeing in sales and has been in the industry 30 years.


Indeed, but the practical result in this case is that the A320 is _much_ more expensive, because demand is far higher. Ryanair's big purchase of 737 MAX-10s a while back was at, at most, 50% of list price; that degree of discount isn't really happening for the A32x at the moment, I don't think.


Yeah, also nobody pays list price.


Depends on how you define "common", but the entire lineage 8080 -> 8086 -> 8088 is backwards compatible and therefore very much related.


It goes further back than that, just not as backwards compatible: 4004 -> 8008, 8080 and so on. Just like the 6800, 6809, 68000, etc. progression. All of these are families that have more in common with each other from one generation to the next than with other such families. It's logical: usually those were the same teams designing them with better tools and more money at their disposal, as well as a vastly increased transistor budget. Notable exception: the 6502 is in many ways simply an improved 6800, but by a different manufacturer.


The 8008 is not a 4004 descendant, though: it was a new design originally done for the Datapoint 2200!


Fair point, that's true in the direct lineage sense. But 6809 to 68000 is a similar jump: there is nothing to say the one was based on the other except for general ideas and some addressing modes that turned out to be handy (when writing compilers rather than assembly). Every widening of the data bus caused a redesign from the ground up, even if some of the concepts survived. The 4004 was early enough that there was not much installed base to worry about, so a clean start for a new chip made very good sense.

But in the 65XX family there is the 65816, a chip that tried really hard to maintain as much backward compatibility as possible. It saw some commercial deployment (Apple, Nintendo). At that point in time backwards compatibility began to have real value, and Intel really made some lucky calls: the weird addressing modes resulting from the lack of register width eventually culminated in a setup that worked very well for CPUs running multi-tasking OSes. The 386 was a very nice match for such code, and this model was a major factor in the success of the line (which really was creaking badly by the time of the 80286, vs the 68K, which effectively had a 32-bit flat model built in because of its ability to run position-independent code).

But in 1987, when the 80386 hit GA, it was pretty much game over for the rest, even if it took a while for the other empires to crumble. Only ARM survived, and that is mostly because Acorn had a completely different idea about power consumption and use of silicon than Intel did. The current crop of x86 hardware is insane in terms of power consumption and transistor count; ARM is so much more elegant (in spite of its warts).


And a nearly opposite business model too: IIUC ARM was more or less the first company to behave like it actually wanted customers for its CPU-design licenses.


Yes, that's a very good observation: ARM was always an IP company rather than a one-stop shop, and that in turn served as a very effective avenue for the evangelism of its architecture.


With Datapoint’s own ISA of course!


Another similar exception is the Z80 to the 8080.


That's Zilog, not Intel!


That's the point:

>Notable exception: the 6502 is in many ways simply an improved 6800, but by a different manufacturer.


Ah I see now what you meant.


8080 -> 8086: not binary compatible, although assembly code translation was possible.


Some parts of the lineage are nevertheless very important. When I wrote an 8086 assembler, I came across the idea of writing the instruction encodings in octal instead of hexadecimal purely by accident; it was described as some sort of little-known neat trick hidden from the casual reader of the CPU documentation. It was only by reading the manual for the Datapoint 2200 much later that I found confirmation that this was very much intentional and (in the distant past) documented.
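
To make the octal point concrete, here is a toy Swift sketch (not the author's assembler; the helper names are made up). The 8086 ModRM byte is three fields of 2 + 3 + 3 bits, which line up exactly with octal digits:

  // 16-bit register encodings 0-7: AX, CX, DX, BX, SP, BP, SI, DI.
  let regNames = ["AX", "CX", "DX", "BX", "SP", "BP", "SI", "DI"]

  func describeModRM(_ byte: UInt8) -> String {
      let mod = (byte >> 6) & 0b11     // top octal digit (2 bits)
      let reg = (byte >> 3) & 0b111    // middle octal digit
      let rm  = byte & 0b111           // low octal digit
      return "mod=\(mod) reg=\(regNames[Int(reg)]) r/m=\(regNames[Int(rm)])"
  }

  // 0x89 0xC3 is MOV BX, AX. In octal those bytes read 0o211 0o303:
  // 0o211 is MOV r/m16, r16 (the 0o210-0o213 group is the MOV r/m <-> reg
  // family), and 0o303 splits into 3|0|3: register mode, reg AX, r/m BX.
  print(String(0xC3, radix: 8))   // prints "303"
  print(describeModRM(0xC3))      // prints "mod=3 reg=AX r/m=BX"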


100%! There is a clear lineage back to the Datapoint 2200, which is remarkable given that it wasn't even an Intel design and CTC gave away the rights, IIRC.

