I've seen this a few times on HN, but it immediately came to mind, and I feel it's an appropriate response to your friend.
"I thought using loops was cheating, so I programmed my own using samples. I then thought using samples was cheating, so I recorded real drums. I then thought that programming it was cheating, so I learned to play drums for real. I then thought using bought drums was cheating, so I learned to make my own. I then thought using premade skins was cheating, so I killed a goat and skinned it. I then thought that that was cheating too, so I grew my own goat from a baby goat. I also think that is cheating, but I’m not sure where to go from here. I haven’t made any music lately, what with the goat farming and all."
While clever, I'm not a fan of this argument, as it ignores the fact that each of these steps is a different thing requiring different skills. Putting samples together is more like arranging music, while creating the notes is composing. After that, you have sound design, playing an instrument, making an instrument yourself...
Sure none of these are “cheating”, but someone somewhere has to do each of those things, and the further down the chain you go, the more “control” you get over your sound and composition. The law of diminishing returns of course hits at some point (although someone may argue that their breed of goats has a certain sound they can’t get any other way).
It's the same thing in programming: someone chaining together libraries may eventually hit a point where nothing out there does exactly what they need. That doesn't mean it isn't your work unless you've written the compiler yourself or have your own fab in your garage; it just means you have to be aware of how much control you give up the higher up the stack you go.
It doesn't sound like you're actually in disagreement; the parent story is just a flowery expression of the diminishing returns (in control) the further down you go.
As you say, the reality is that there are thresholds: at some of them the benefit-to-cost ratio is poor enough that few people will break through from higher-level use, and in between there are gentler thresholds where varying proportions of experienced people who want a little more control (in different directions) do break through.
But even with those thresholds (in this case, one IC vs. another IC), it's arbitrary and subjective; you're just choosing to spend your time and effort in a different way.
> Putting samples together is more arranging music, while creating the notes is composing.
It really depends on the length of the samples and how exactly you're working with them. With whole musical phrases (as used in '90s hip-hop and French house, for example), it's really more like arranging. But cut those very samples just a little bit shorter and start playing MPC pads like an instrument, and I'd argue you switch back to composing.
In many ways watching people describe and choose their spot in the "grab package" <-> "herd goats" spectrum is my favourite part of AoC each year¹. The squirming some people choose to do when justifying their place in a table of magic internet points is a lot of fun.
Edit: Should add I'm one of those squirmers too, often when I'm thinking about networkx/numpy/etc.
I have found satisfaction at least for now in going in the other direction - what can I do with a locked down computer that has absolutely no software installation authorized and only has the standard software for any non-technical employee. In other words, everything that can't be done in that environment is now "herding goats" to me. Sometimes I am tempted to try to get developer-type privileges, but I've resisted so far.
Is there an endgame beyond empathy for "normal" users? I'm curious whether you're doing this to learn more, with the intent of spotting opportunities to make things better, or for some other reason entirely.
I can personally see the discussion of how to work with a basic installation being worthwhile, as I know I'm guilty of "why don't you just $bunch_of_experience_option?". However, I don't think I'd want to try to do actual work without the tools I have and the tools I make.
For an auto metaphor, suppose you were really good at building race cars, and then you set out to make an entry for the 24 hours of LeMons.[1] Some people do that. I never have been a blank sheet of paper/greenfield sort of person and I always lose interest in computer games if I have unlimited resources.
I see what you're talking about, but your parable has another side that no one sees - the guy learned a lot of different things in the process :) For people who enjoy learning and exploration, that is the true meaning of the whole thing, not the finished tune.
> your parable has another side that no one sees - the guy learned a lot of different things in the process
No one sees?! What do you mean? That’s literally the punch line of the joke.
Of course ‘learning things’ is a benefit, especially if it's actually one of the goals you specifically set out with. Still, I've personally watched programmers live that joke and over-engineer something that could take a day into a year-long project, to solve a problem they didn't have. I've seen it happen often enough, and cause enough problems, that I try hard to write code with specificity and stick to the problem at hand. So much so that I have an actual problem with not abstracting things soon enough. ;)
The lesson is that when it comes to personal pursuits, there are no bright lines between doing something “properly” or “cheating”. It truly is all in the eye of the beholder.
If your goal is to reduce the problem domain from hardware and software to just software, then yeah, using a chip with most of the hardware work done for you is fine.
Looking at Ben Eater's youtube channel, there seems to be a healthy demand for building more authentic '80s 8-bit systems.
If one's goal is to learn about how to build an 8-bit computer, using a microcontroller is not going to be an edifying experience unless the goal is to do something mostly in the software domain like write a simple OS.
While I think your parable is good advice for getting a business going quickly, it's not appropriate for deep dive learning or hacking.
Yes, it's silly to be gatekeeping like this, with some purist fantasy. If OP's friend likes to role-play as an engineer from 1981, that is cool. But he has to realize that that is not everybody's goal.
>> The lack of an external memory bus feels incredibly in-authentic to me.
Yep. It's still a neat project, but don't claim to have built an '80s computer using "only 5 chips" when it's more modern chips that allow that low count. A single FPGA can do it all too.
On a tangent, I'm curious about upgrades to old machines that could have been done at the time. For example, my Interact has super low-res graphics where each pixel is 3 scan lines high. That machine was designed for 4-8K of RAM but shipped with 16K and had an upgrade to 32K. It seems like increasing the vertical resolution should have been a fairly trivial hardware hack since the RAM is all there. Bumping the horizontal resolution might have been possible but harder. Increasing the available colors should have been fairly easy. It seems to be a case of not enough time for the design to bake. The other 8-bit machines with ASICs might not be so incomplete, but there may still be some things that could have been done.
A huge amount of the limitations were "just" down to cost.
E.g. on the C64, the CPU and graphics chip compete for memory cycles all the time. This is why the screen blanks when loading, for example - to prevent the graphics chip from "stealing" cycles. A more expensive memory subsystem would have allowed working around that and sped up the entire thing. This is a recurring issue with many other architectures as well, for cost-saving reasons (e.g. the Amiga "chip RAM" vs. "fast RAM" distinction).
From 1983, the 8/16-bit 65C816 (used in the Apple IIGS) would have been a more natural choice for the C128 or even a "revised" C64 (at the cost of some compatibility), and reached clock rates up to 14MHz.
A lot of it was also down to market pressure and R&D costs... All of the home computer manufacturers led precarious existences, as evidenced by most of them failing or transitioning out of that market (and often then failing). At its peak, Commodore was notorious for being tight-fisted and spending ridiculously little money on R&D relative to its size, and that is probably what made it survive as long as it did despite serious management failures, while Apple largely survived by going after a less cost-sensitive market segment and much higher margins even then.
The list of R&D efforts in Commodore that were shut down not because they were not technically viable, but because the company wouldn't spend money on them (or couldn't afford to) is miles long. I'm sure the same was true in many of the other companies of the era (but I was a C64 and Amiga user, and so it's mostly Commodore I've read up on..)
We could certainly have had far more capable machines years earlier if a handful of these companies had more money to complete more of these projects and/or if there had been demand for more expensive machines at the time.
But then progress is very often limited by resource availability, not capability.
Many of the Atmel 8-bit chips do have an external memory bus. It's not used in this case, but it's possible - see the sketch below.
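For anyone curious, here's a rough sketch of what enabling that looks like in C. I'm assuming an ATmega1280/2560-class part with the XMEM interface (the register and bit names are the standard avr-libc ones; the start address is just the value for a '2560, and both the part choice and address are my assumptions, not anything from the project above):

    /* Rough sketch: enabling the external memory (XMEM) interface on an
     * AVR that has one, e.g. an ATmega1280/2560-class part (assumption;
     * the small ATmega used in the project doesn't expose this).
     * Build with avr-gcc -mmcu=atmega2560 (or similar). */
    #include <avr/io.h>
    #include <stdint.h>

    #define XMEM_START 0x2200u  /* first address past internal SRAM on a '2560;
                                   adjust for the actual part */

    int main(void)
    {
        /* SRE in XMCRA enables the external SRAM/bus interface; the address,
           data and ALE signals then appear on the dedicated port pins. */
        XMCRA |= (1 << SRE);

        /* From here on, external SRAM is just ordinary data address space
           reachable through a plain pointer. */
        volatile uint8_t *ext = (volatile uint8_t *)XMEM_START;
        ext[0] = 0xA5;              /* write a byte to external memory */
        uint8_t readback = ext[0];  /* read it back */

        (void)readback;
        for (;;) { }                /* nothing else to do */
    }

Once SRE is set, the external RAM behaves like any other data memory; the bus itself is exactly what the "only 5 chips" build skips.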
Sometimes it's pretty blurry what is a microcontroller and what is a microprocessor. The PIC32MZ/DA has an integrated Graphics Controller and 32MB of DRAM.
The AVR can't execute programs from the data bus, which means no execution from RAM or external memory, though you could write a virtual machine that could do so.
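To make that concrete: because the AVR is a Harvard machine that only fetches instructions from flash, the usual workaround is a small interpreter in flash that walks a bytecode buffer sitting in RAM. A toy sketch - the opcodes and the little program are made up for illustration, and it's plain C, so it runs on a desktop just as well:

    /* Toy bytecode VM: the interpreter lives in flash, the "program" is
     * just data in RAM, so the AVR's no-execution-from-RAM rule never
     * comes into play. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    static void run(const uint8_t *code)
    {
        int32_t stack[16];
        int sp = 0;

        for (size_t pc = 0; ; ) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];              break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];      break;
            case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];      break;
            case OP_PRINT: printf("%ld\n", (long)stack[sp - 1]);  break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* (2 + 3) * 4: this buffer could live in RAM or be pulled from
           external storage, which the CPU could never branch into directly. */
        uint8_t program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                              OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
        run(program);
        return 0;
    }

Only data ever moves through RAM; the code doing the fetching and dispatching stays in flash, which is why the restriction doesn't bite.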
Steve Wozniak himself was cheating. Instead of using proper D/A converters that would have cost 100 times more, he made a hack.
And the hack was good enough.
This is one of the ways in which academic training is perverse: it trains you to complicate every subject more and more, and to believe that the hard work is the goal and that shortcuts are not permitted.
The other video standards (Europe used PAL) aren't much different in principle from NTSC, just some different timing and encoding parameters, so the same D-to-A techniques work. PAL is higher bandwidth (and higher quality) than NTSC, and Woz's timings were already on the edge for NTSC, so PAL versions didn't do color until a little later.
A lot of 8-bit designs used custom ASICs that weren't generally available parts, like the Atari 2600 with the TIA, the Commodore 64 with the PLA, etc. Using microcontrollers for those seems fair.
Meh. The spirit of a self-contained unit (display output, keyboard input, etc.) and boot-to-interpreter are there. The Maximite family of boards "cheats" even more with a PIC32. But they are popular, and they have the same feel as the '80s 8-bit PCs.
Yes, but it's not about the feel of the product, which is definitely there; it's about the skills required to design it. Personally, I would never call anybody who designed his own computer, whether with an ATmega or not, a "cheater". It deserves respect, to say the least.
Well, his argument is not about the number of chips but rather about component origins. If you're going to pretend that you're in the '80s developing a computer at that time, you've got to use what was available. I argue that the ZX Spectrum used a custom-made ULA and that using an ATmega kind of emulates that approach to design, and he just laughs :)
The truth is that we live in magnificent times where there's a wide range of choice on how to do things and it is amazing.
That’s not much of a constraint—the https://en.m.wikipedia.org/wiki/Hudson_Soft_HuC6280 was an 8-bit CPU produced in 1987, and yet was powerful enough (in combination with the PC Engine’s 16-bit PPU) to drive very rich experiences.
The display controller chip seems to be a hard to source part if you want to stick to 80s components. Do you know what his recommendation would be for that piece?
Yes, that's been a frustration for me. I've settled on the Motorola 6847, which requires a couple of supporting chips. This means scavenging parts off eBay, though, which seems to be a plentiful source. I'll find out shortly whether any of them are actually good.
Not an unreasonable choice, but it has to be noted that home computers of the era already used LSI/VLSI dedicated ICs for this purpose, condensing the functionality of these 20 chips into a single package.
That would be so cool. While you're walking away from your parking space, you get a satellite view or an AR view where you can drop defenses. I think the satellite view would be harder, since you'd have to pick your parking spot so the UI can place your top-down car picture, and then place obstacles and defenses.
Somehow I still get those chills while reading the source code of programming language interpreters/compilers. I guess it's like getting to the origins of life, or something.
It is. As a kid, I kind of understood programs, but I could not even imagine how an interpreter (or compiler) could work. I partly got a CS degree just so I could learn the answer.
Incredibly similar to my story. I did not even know how to program, but the mere idea of something that took code in and spit executables out was completely magic to me. I absolutely had to know how they worked.
I guess it would depend how compelling the signal was. If it's just rogue radio waves with seemingly intelligent structure that the average person cannot understand, then the reaction would likely be "Yeah, right. Again".
If it's some kind of encoded message containing information that can be decoded and verified by scientific communicators, perhaps not.
If I were in charge of broadcasting a communication signal into space, I'd make it a very obvious square wave, or something else that doesn't usually occur on its own. So it really depends on whether the signal was meant to be seen as a discovery mechanism.
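For what it's worth, the reason a square wave screams "artificial" is its spectrum: a clean comb of odd harmonics falling off as 1/n, which broadband natural noise doesn't produce. A quick back-of-the-envelope sketch in C (the sample count and frequency are arbitrary numbers I picked, not anything from a real search pipeline):

    /* Sketch: a square wave's spectrum is a comb of odd harmonics that
     * fall off as 1/n; the even harmonics come out near zero.
     * Build with: cc squarewave.c -lm */
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define N      1024   /* samples analysed */
    #define CYCLES 8      /* square-wave cycles within those N samples */

    int main(void)
    {
        double x[N];
        for (int i = 0; i < N; i++)   /* generate the square wave */
            x[i] = (sin(2.0 * M_PI * CYCLES * i / N) >= 0.0) ? 1.0 : -1.0;

        /* Naive DFT at the first five harmonics of the fundamental. */
        for (int k = CYCLES; k <= 5 * CYCLES; k += CYCLES) {
            double re = 0.0, im = 0.0;
            for (int i = 0; i < N; i++) {
                re += x[i] * cos(2.0 * M_PI * k * i / N);
                im -= x[i] * sin(2.0 * M_PI * k * i / N);
            }
            printf("harmonic %d  magnitude %6.1f\n",
                   k / CYCLES, sqrt(re * re + im * im));
        }
        return 0;
    }

Harmonics 1, 3 and 5 come out large (roughly in a 1 : 1/3 : 1/5 ratio) while 2 and 4 are essentially zero, which is the kind of fingerprint you'd hope a receiver would flag as deliberate.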
That's the problem, isn't it? How can we decode a theoretical alien transmission without transposing our own sense of meaning onto it?
It's like the X-Files episodes where they find the aliens are using encodings that were designed in modern history by humans. Binary (as if there is one standard binary), bar codes... These scenes are supposed to be huge revelations, but for any sort of programmer or engineer, they seem silly. Which wouldn't be so if they made a point about how the aliens are intentionally using our encoding systems, but that's not how it's presented.
Not hating on the X-Files though, that show is awesome.
Usually they take a quick detour to destroy Big Ben and the Great Wall of China first, but after that it's straight to a plucky American kid's backyard.
My product came out of my own pain providing end-user documentation and tech support, so how can I not recommend it? 50% off for BFCM.
Here's a post that summarizes my work, at least partially: https://www.helpinator.com/blog/2019/09/18/recommended-readi...
It doesn't work that way anymore, since Google started relying on behavioral metrics of content, not just keywords. Useful articles that people read and refer to frequently go up; junk goes down. So creating articles whose only aim is to sell something is a surefire way to anger people and waste your money. Content marketing nowadays is about producing quality content in the HOPE that it will help sell your product.