> "Clean-room design" was an underhanded way to legally reverse engineer and clone a competitor's product. It works like this: engineer A produces a specification after studying the competing product, a lawyer signs off on the spec not including copyrighted material, and engineer B re-implements the product from the spec A created. A and B have the same employer, but since they're not the same person there's technically no copyright infringement. This technique was used during the fiercely-competitive market rush of early personal computing.
What a weird way to position this.
There's nothing "underhanded" about it. There is literally no copyright infringement in this case. It's pure reverse engineering as is done in many other industries and fields.
And the outcome is increased interoperability and improved competition by eliminating artificial barriers of entry in the market. Without it, the computing world would be a very different place, and I doubt anyone would like it.
In fact, the market was far from "fiercely competitive" prior to the IBM BIOS being reverse engineered. Before that work was done, IBM basically had a lock on the PC market. It wasn't until the clone rush of the 80s that prices came down and computing became accessible to everyone. Hell, IBM valued that near-monopoly so much they introduced the MCA bus in the hopes of locking the PC back down. Fortunately their competitors succeeded in establishing open alternatives, including PCI and so forth, and the rest is history.
I wonder how this author feels about the Google v Oracle lawsuit regarding Google's reimplementation of a bunch of core Java APIs...
I seem to remember Windows 95 and the quest for "plug 'n play" being what pushed the PCI standard into widespread use. It was basically the first standard that didn't require all the tedious mucking about with IRQs, DMA, and I/O ports.
IIRC it was considered fashionable back in the day to install a card, wait for it not to work and then make a joke about "plug 'n pray" before finding an older, ISA card and getting it to work by setting the jumpers correctly.
VESA Local Bus supplemented ISA. It mostly saw use as a graphics card expansion slot, but you could get various I/O cards. I saw a couple of boards in my lifetime that had those slots; thankfully I didn't need to support them.
I assume "clean room" BIOS implementations were important to avoid copyright infringement claims since the BIOS source code was actually provided by IBM in the system's technical reference manual (similarly to other systems of the time):
There is no copyright infringement when an engineer studies a competitor's product and pretends that their design is independent?
I don't know if this is a real thing or not (I can see it being real in smaller companies that are incompetent about these things, but what do I know), but that's the case described.
> There is no copyright infringement when an engineer studies a competitors product and pretends that their design is independent?
So long as no actual code was copied, no, there isn't.
An independent re-implementation of an API/interface/whatever for the purpose of interoperability is perfectly legal. This explains the clean-room approach, as it ensures that the developer re-implementing the functionality has absolutely no access to the original code, thus making it impossible for them to engage in copyright infringement (BTW, if you want to watch a fantastic dramatization of this, go watch Halt and Catch Fire. I love the show for a lot of reasons, but their dramatization of the reverse engineering of the PC BIOS was just... so good).
This is why, for example, LibreOffice can go ahead and ship their own implementation of the Word doc format without worrying about Microsoft suing the pants off of them.
It's also why Wine is perfectly legal (though not without controversy when developers have tried to take shortcuts).
Heck, we wouldn't have AMD if this type of thing was illegal! AMD got into the microprocessor market by, yup, reverse engineering the Intel 8080 and cloning it:
> It was originally produced without license as a clone of the Intel 8080, reverse-engineered by Ashawna Hailey, Kim Hailey and Jay Kumar by photographing an early Intel chip and developing a schematic and logic diagrams from the images.
So how come you can independently re-implement an API, but if you try to do that with a hard good and slap a Ford Mustang badge you re-implemented independently onto it, you get sued? Furthermore, this is software. Why can't the team looking at the source code just write up the new code rather than write up a spec? Seems like such a useless performance considering you could just run a diff or something to catch plagiarism.
> do that with a hard good and slap a ford mustang badge you reimplemented independently onto it you get sued?
Ah, now we're getting into completely different forms of IP protection.
The Ford Mustang badge is a protected trademark, so you're not allowed to use it without their permission.
The car itself might also contain patented technical or design elements that you may not be allowed to duplicate without a license.
Broadly, there are four classes of IP protection out there: copyrights, patents, trademarks, and trade secrets. This discussion vis-à-vis reverse engineering is primarily centered around copyright protections.
As for the rest of your comment, check out my other reply where I hopefully clarified things.
Because if you basically just rename variables and some other lazy obfuscations, it's still a copyright violation. This isn't about efficiency. It's about using a process that's relatively resistant to legal challenge.
Copyright is the exclusive right to a particular creative expression, traditionally text. If you recreate the same functionality as another program without copying its text, there is no copyright violation. There may be patent infringement, but that's another matter.
They don't pretend the design is independent, they explicitly have studied the design. The implementation is independent, because they have not studied the code. The code is what is protected by copyright. The design would be in the realm of patents.
On the last point, it's interesting to note that, at the time the IBM BIOS was being reverse engineered by various parties (around 1981/82), software patents were still not particularly well established. It wasn't until the Federal Circuit was established in '82 that there was a consistent treatment of software patent litigation:
One has to wonder if, today, the reverse engineering of the IBM BIOS would be possible, given the likelihood that IBM would've patented their implementation.
Interesting point. Part of the reason we originally had software licensing in the IBM System/360 days was that IBM was skeptical about how useful software patents were as a means of IP protection.
Something I've never understood is the requirement to have separate people doing the design and the implementation. How does that provide greater legal protection than one person doing the design, a lawyer signing off on it, and that same person doing the implementation?
The person doing the design studies the original product and will likely gain incidental knowledge of how the original product is implemented. The person doing the implementation ideally has no exposure to the implementation of the original product, so they can't copy the original, not even subconsciously.
Another way to look at this is: what would a judge believe?
Software developers independently come up with the same solution to common problems all the time! We even have names for it: algorithms, design patterns, etc.
If the same person both reverse engineers an existing implementation, and then writes a new one, then that kind of incidental duplication could look like copyright infringement, and the only defense is the developer saying they didn't do it. That's pretty tough to prove.
A clean-room approach solves this problem. Parallel re-invention of the same solution can easily be proven because the developer genuinely never saw the original implementation in the first place, and a lawyer examined the reverse engineered specification to ensure no copyrighted material was contained therein.
So I'm still struggling to get this. Why can't you just directly design and implement sans copyrighted material? How is that different from designing without copyrighted material and then having someone else (engineer B) do the implementation without the copyrighted material? Does this method somehow let you use copyrighted material? How would engineer B even know about the copyrighted material in that case if they are supposedly 'blind'?
Basically they are documenting non-copyrightable parts of the interface. If you put these numbers in these registers and call this interrupt the result is foo. You need to write a routine to do that but we're not going to tell you what the IBM code did to produce foo.
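To make that kind of spec entry concrete, here's a toy sketch in C. The spec wording, the function name, and the register/screen layout are all my own hypothetical inventions for illustration, not IBM's actual documentation or code:

```c
#include <stddef.h>

/* Hypothetical clean-room spec entry handed to engineer B:
 *   "Service 0Eh: when AH = 0Eh, write the character in AL
 *    to the display and advance the cursor."
 * Everything below is written from that sentence alone; it owes
 * nothing to how the original vendor's code produced the behavior. */

struct regs { unsigned char ah, al; };

static char   screen[80];   /* stand-in for the display */
static size_t cursor = 0;

/* Re-implemented service dispatcher, behavior-only. */
void video_service(struct regs *r) {
    if (r->ah == 0x0E && cursor < sizeof screen)
        screen[cursor++] = (char)r->al;
}
```

The spec pins down inputs and observable outputs; *how* the routine gets there is left entirely to the re-implementer.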
Suppose I've developed an API that has a few function calls:
void foo(int)
int bar(char **)
char *baz(float)
I have then written a bunch of code that defines my specific implementation of 'foo', 'bar', and 'baz'.
That specific implementation--my specific code--is subject to copyright and no one is allowed to copy my code and use it without my express permission. And that includes obvious things like copying the code and obfuscating what you did via renaming of variables and so forth.
But suppose you want to implement your own compatible version of that API so that someone else can use your library instead of mine.
To create your version you decompile the code and you see that the API is composed of those three functions 'foo', 'bar', and 'baz'. You then read the decompiled versions of those functions and you see what they're logically doing.
You absolutely can then go away and write your own version of this API! As long as you don't literally take copies of my code, and instead write 'foo', 'bar', and 'baz' in a way that merely does the same thing semantically, you're safe!
However, suppose 'bar' is a simple in-place sort, and you and I both implement a standard quicksort to do the job.
Sure, you wrote yours totally independently of me, but I could still go to a judge and claim that, no, you copied my version! Your copy of 'bar' is in fact a copyright violation because you just stole my code!
How would you prove otherwise?
So instead, what we do is get a third party. Their job is to read the decompiled versions of 'foo', 'bar', and 'baz', and then write down a totally independent specification that describes how those functions work, but doesn't contain any of the code.
To be extra safe, we even get a lawyer to read the resulting specification and certify that, indeed, no copyrighted code is present.
Then we hand you the specification, and you use that specification to implement 'foo', 'bar', and 'baz'.
Now, the specification might say 'bar is an in-place sorting function', and so you go ahead and implement your own version of quicksort.
But now, when I claim you copied my code, you can go to the judge and say "Au contraire! I worked strictly from this specification, here, that my lawyer has verified contains no copyrighted code. The only person that actually read the code is that guy over there *points dramatically to the audience* and he didn't write any of the code."
This provides a much much stronger defense that your implementation cannot possibly contain illegally duplicated, copyrighted code, and that any similarities are entirely incidental.
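To make the scenario concrete, here's a sketch of what engineer B might write for 'bar', given only the spec line "bar is an in-place sorting function". The NULL-terminated string array and the return value are my assumptions for illustration, not anything from a real spec:

```c
#include <string.h>

static void swap(char **a, char **b) { char *t = *a; *a = *b; *b = t; }

/* Quicksort (Lomuto partition) over arr[lo..hi]. */
static void qs(char **arr, int lo, int hi) {
    if (lo >= hi) return;
    char *pivot = arr[hi];
    int i = lo;
    for (int j = lo; j < hi; j++)
        if (strcmp(arr[j], pivot) < 0)
            swap(&arr[i++], &arr[j]);
    swap(&arr[i], &arr[hi]);
    qs(arr, lo, i - 1);
    qs(arr, i + 1, hi);
}

/* 'bar': sorts a NULL-terminated array of strings in place,
 * returning the number of entries sorted. */
int bar(char **arr) {
    int n = 0;
    while (arr[n] != NULL) n++;
    qs(arr, 0, n - 1);
    return n;
}
```

Any correct sort satisfies the spec; if the original happened to also use quicksort, the similarity is exactly the "incidental duplication" the clean-room paper trail defends against.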
Yes, this whole thing probably seems like a crazy dog and pony show, but when you're a little upstart company like Compaq going up against the behemoth that is IBM, you can be damn sure you're gonna dot all your i's and cross all your t's, because they will send an army of lawyers your way, and it won't be pleasant.
You can't possibly have infringed copyright if you never touched the copyrighted material yourself. So companies have other engineers examine that material and produce documentation that you can use to write code. A simple layer of indirection between developers and any legally toxic material.
This is just a popular method to increase the likelihood of success in the event of a lawsuit. It's not actually standard, required, or anything of the sort. It certainly doesn't prevent competitors from suing you anyway and burning your time and money in court. They can also be awarded an injunction that stops you from making money until the courts decide who's right.
Sony v. Connectix is an example of a company that directly reverse engineered firmware and won in court, but still lost in the market due to an injunction.
> the PlayStation firmware fell under a lowered degree of copyright protection because it contained unprotected parts (functional elements) that could not be examined without copying.
> While Connectix did disassemble and copy the Sony BIOS repeatedly over the course of reverse engineering, the final product of the Virtual Game Station contained no infringing material.
Independent creation is a defense to copyright infringement, but only if the accused did not have access to the original work.
In the BIOS cases, this means that the implementer can produce the exact same assembly code as the original BIOS and it's not copyright infringement, but only if the implementer never saw the original code.
> Independent creation is a defense to copyright infringement,
Yes.
> but only if the accused did not have access to the original work.
No, but the accused having access to the original work makes it less likely that a trier of fact (jury or judge depending on the kind of trial) will conclude that the creation was independent rather than copying.
You’re right. I should be more careful about using “only if”. Practically, however, access to the original will probably defeat the independent creation defense.
The person doing the specification has been "contaminated" by the original's documentation and product; it's possible to argue that they carried over knowledge directly from the original product. The lawyer in the middle establishes a firewall ensuring nothing was taken from the original outside of the specification.
I don't know of any actual cases that hinged on this, but lawyers tend to be belt-and-suspenders types. One remnant is the advice from the GNU folks on building clones of Unix tools: you could even have looked at the Unix source without being tainted, as long as the clone was architecturally very different. (Which led to a lot of better implementations of Unix tools...)
It's an extra precaution: it guarantees that the people who are doing the implementation never see the original copyrighted work, so copying is impossible.
If the same person reviews the product being cloned, writes a spec, has it reviewed, and does the implementation, the other side can argue that knowledge that was not in the spec was used in the development. They might or might not succeed in this argument, but cautious companies firewall off those who have seen the original product and those who develop the clone, especially if the competing company has a litigious reputation.
If you've never read "Lord of the Flies", you'll have an easier time convincing people that your new book about a bunch of shipwrecked boys doesn't infringe its copyright.
I read many computer books and magazines from the 60s, 70s and 80s. They're really interesting from many angles; I find some logic-explanation books from that era give better beginner introductions to programming than modern books. Modern books usually skip rigid introductions to logic and are more about tooling, hello world, and then more examples of practically implementing working solutions with that knowledge. That involves a lot of plumbing alongside the logic, and it gives the beginning programmer the idea that the low-level stuff is somehow mashed in with the high-level stuff. I had many juniors crash on that; they cannot keep the two apart at first, and worry about implementation details while the actual logic/algorithm basically has in no way at all formed in their head.
Even in asm books they would show flowcharts of the logic first and then try to implement those flowcharts while many modern things assume you have that global idea and jump right into the details.
I never sweat the details until I need to which seems to make me a child of those times, which I am of course.
On the flip side, these old books also discuss far more low-level stuff, some down to the gate level.
Somewhat related? "Soap Bubbles" (C. V. Boys) [1] was a book from over 100 years ago that reads like a series of lectures to a science class on the physics of soap bubbles.
I was impressed with the step-wise manner in which the book proceeds from simple observations of soap bubbles and how they behave in order to develop more complex theories as to why they behave that way and the physics behind it.
Similarly, vintage army videos about analog computers, mechanical couplings (hydraulic, differential) or even waves are extremely high quality. Practical yet precise enough.
I would suggest that perhaps the intended audience played a factor as well.
I picked up a series of books on vacuum tube circuits from the 1950s created by and for U.S. military personnel, and it is probably the easiest and most digestible explanation of how vacuum tube circuits work that I have come across. Wonderfully illustrated with diagrams as well.
Would you be able to provide any sort of ISBN or document number for those books? The older references for RF are always of great quality. I haven't been able to pin down whether it's because they were created when all the original research was done, or because generating the diagrams was more labor intensive so greater care was taken.
I believe the difficulty of the past enforced a set of healthy constraints too. Today everything is organic and can be "fixed later", and few really put in the effort to make it right in the first place. Value is in the mass... like architecture.
I think the costs of a roll would be pretty small compared to the rest of the costs of production. Even today these are all considerations. It's costly to have to hire a team out for longer than you have to even if they use digital, or make people redo lines or spend more time editing things together you could have gotten in one take. Distributed product still has to be made with respect for the intended length of the video (e.g. is this a 5 min PSA or a 45 min training video? Do you want to host a 45 min long video?).
Yeah... I have watched so many corporate video addresses which just drone on and on and on, often with minimal production quality too, like a webcam and questionable microphone skills.
These things would not have been filmed at all in the past, or they would have been much shorter. There would be a film crew. Someone doing the microphone setup and telling the subject to enunciate etc.
And we corporates would have watched it in the cafeteria before lunch on the 16mm projector.
The low barrier to entry today is wonderful, but it also means a lot of stuff made is just fluffy and not very necessary at all. I guess it's also easier to skip most of it, so there's that.
Exactly. I found myself fascinated about these vintage army videos. Vintage training videos from the 70s/80s are also very good. Compact, effective, no bullshit, and occasionally a sense of humor.
I also find a common theme in vintage training videos: they don't assume the audience is an idiot. I find that the more dumbed-down the information is, the less trust they placed in the audience, and the more they felt compelled to entertain rather than inform, the worse the result typically is.
I find this to be true and very relevant in modern education. You don't need to gamify mathematics. You need to clearly communicate just how awesome and interesting mathematics itself is.
Agreed! I have a 6502 Assembly Cookbook from 1980. It is mostly about how to do floating point math on a 6502, and it is full of flowcharts which very cleanly introduce and build much more advanced concepts in a simple way. It was a wonderful bridge for me between application and theory when I read it ... 40 years ago as a high school student without internet.
I will check tomorrow as I have these as actual physical books. The best one I have is a Dutch book but it might be a translation; I will try to find it.
What makes it even more amusing is that people on this forum, even if they don't know FORTRAN, would find it ludicrously elementary, considering it was actually the textbook for an MIT class at the time. (It was basically a programming course for non-CS/EE engineers, and the assumption was basically that they had never touched a computer before.)
I had that exact BIOS Interface Technical Reference manual, which (along with one describing the FAT, I think) were the only technical books that came with my PCjr. I spent the 10-hour drive home after purchasing the machine reading them, assuming that was the stuff I needed to know.
I subsequently had a quite granular mental map of how higher-level software interacts with the hardware through BIOS and DOS interrupts, and knew a lot more than I generally needed to about where the MBR is stored, disk cluster size, etc.
Those were difficult times to learn about computers - no friends used them, there was no internet - so any knowledge acquired was hard-earned, and tended to really stick.
Tangentially (?) I would not underestimate the impact the manuals of various early computers had on young programmers.
I wore my VIC-20 manual out copying out programs from it while learning not only BASIC but some pretty fundamental stuff about computer architecture. It was so significant to me I had to buy a copy from eBay decades later to keep on my shelf.
I assert that the very practice of typing listings from books and magazines back in the day was exactly the sort of slow-paced repetition that was required for really deep memorization. I can still remember PEEK and POKE from BBC BASIC 40 years later.
Adam Osborne from Osborne & Assoc. wrote a number of highly readable perfect bound books in the mid 70s in his Introduction to Microcomputers series. Volume I "Basic Concepts" covers everything from gates and truth tables to common assembly language instructions. Later volumes targeted assembly language for specific processors of the day such as the 8080 and 6502. I learned to program from these books (and K&R).
Osborne & Associates went on to sell a portable CP/M machine that was very bulky and heavy but earned the right to be called portable.
Can anyone help me find the book? I remember it vividly from my childhood in the 90s; it was translated into Russian, so not new. It had very distinct illustrations with lots of pointing fingers rolling on wheels (pointers?) and samples of the same program in different programming languages, like Lisp or Fortran.
I'm trying to find a book that sounds similar, one I read in the mid-1980s, in English. Can you recall any more details? I don't particularly recall distinct illustrations from mine, but they could have been there. I am sure the one I read had Fortran, Basic, Assembler, Forth, APL, and PL/1. I'm not sure mine had the same program for each language, but the theme seemed to be giving a taste of each language without getting too deep into any of them. I think the one I am trying to find had a mostly orange cover and was a hardcover-sized paperback.
I don't really remember much about the contents, it was too advanced and foreign for me back then. It was hard cover, pages of about A4 size, the cover was gray with thin white horizontal lines. One of the pages contained the famous photo of "first actual case of a bug". The illustrations were in the 80s corporate style with subtle gradients.
I love vintage programming books. Interlibrary loan has been my best friend in getting books you can’t find easily, or if you do they might be over $100 to buy.
I remember being in high school in the 80s and discovering that there was source code for a C compiler on the IBM mainframe at UIC. Then when I started digging into it, I found that parts of it were written in SNOBOL. Fortunately, the local public library's collection of computer books was hopelessly outdated and they had a book that covered SNOBOL on their shelves.
I actually started a small collection of old children's programming books [1, 2] a few years back. I've done some research regarding "typing practice" for programmers and how older generations did not have the luxury of copy/paste. I think one of the aspects that helped students overcome the technical barrier was the fact they had to reimplement the code themselves, rather than cloning a git repo.
One of the problems I faced when my family got our first computer in '97 was there was no included programming environment on Win 95, not even a Basic interpreter. I remember my local library was full of those kids programming books with little games you could type in, and I had no way of putting them in and running them.
Worse, the only programming books I could get running back then were those "Learn Java in 24 Hours" style books that came with a cd that included a compiler, but no IDE. I remember giving up after a lesson or two because all I had was notepad to type in the code, and the errors thrown were too complex to understand. (Probably a missed semicolon.)
I believe win95 still had qbasic (with its built-in IDE and help/examples) in the Windows folder, but it wasn't in the Start menu anywhere. May have been on the CD instead.
I seem to remember it being there, but it didn't include any of the built in examples or games. Maybe I tried programming things from the books and either couldn't get them to work, or they would run too fast to be playable.
Either way, it wasn't prominently featured, and without knowing anyone with programming experience, I didn't have anyone to ask.
QBASIC was my first programming language, which I learned on my parents' PC running Windows 95. It wasn't installed with Windows 95 by default; I had to copy it from the Windows 95 CD.
That is kinda how I remember it. Our computer was from Packard Bell, it didn't come with a proper windows disc and used some proprietary image installer for reinstalls. We did get the Win95 Extras disc as I clearly remember playing Hover!
Not only is it an interesting pastime, it can be quite useful for older languages. I am currently working my way through some of the recommendations in "The Definitive C Book Guide and List" from StackOverflow and a lot of the selections are from decades past.
I love vintage game programming books. Just finished "Gardens of Imagination" by Christopher Lampton, and "The Black Art of 3D Game Programming" by Andre Lamothe. Lots of good info in both for my raycaster pet project.
I sometimes wonder if in the near future we'd have programming historians to dig up old brilliant but forgotten ideas, which were not feasible back then, but might add value now with our new computing capabilities.
Artificial neural networks, which underlie most of deep learning, are an example of that. Maybe not totally forgotten, but set aside multiple times in history due to issues around practicality. There are probably a few more of these out there, but it's one of the best known ones.
There is surely a distinction between vintage and timeless, where the first is representative of the time it was written (and may offer relevant insight for today), and the second offers some universal truth, regardless of when it was written.
I'm gonna have to go against the grain here and completely disagree with this. While TAOCP is lauded and celebrated as one of the best (and at the same time unread) CS books of all time, the fact that it uses an assembly language is a strong indicator that it's 'vintage'. It still has that 'universal truth' but at the same time, as you said 'is representative of the time it was written', considering MIX is picked for teaching people how to program.
Another way of counting is that the book "The MMIX supplement", which contains the MMIX equivalents of every single page/section/program in Volumes 1–3 that is affected by the details of MIX (the book is not by Knuth but the preface indicates Knuth reviewed it very thoroughly pre-publication), is 224 pages long.
Volumes 1, 2, 3 are together 672 + 784 + 800 = 2256 pages long (and Volume 4A is 912 pages). So roughly, less than 10% of TAOCP deals with either MIX specifically, or assembly language in general.
The newer volumes use a newer version of MIX, MMIX¹, based on a RISC architecture, which will also appear in the "ultimate" revisions of Vols I–III. And while you might think that assembly language is ipso facto an indicator of vintage, I think having at least one assembly language² in one's experience is helpful for having some idea of what the computer is actually doing.
2. For people targeting a platform like JVM or CLR (dot net), understanding how those machines are implemented might also be useful, although I've found my ancient memories of 6502 and 370 assembler are sufficient for having a mental model of how the code works.
I also should provide DEK's own reasoning about why assembly and not a higher-level language:
>Many readers are no doubt thinking, ``Why does Knuth replace MIX by another machine instead of just sticking to a high-level programming language? Hardly anybody uses assemblers these days.''
>Such people are entitled to their opinions, and they need not bother reading the machine-language parts of my books. But the reasons for machine language that I gave in the preface to Volume 1, written in the early 1960s, remain valid today:
>• One of the principal goals of my books is to show how high-level constructions are actually implemented in machines, not simply to show how they are applied. I explain coroutine linkage, tree structures, random number generation, high-precision arithmetic, radix conversion, packing of data, combinatorial searching, recursion, etc., from the ground up.
>• The programs needed in my books are generally so short that their main points can be grasped easily.
>• People who are more than casually interested in computers should have at least some idea of what the underlying hardware is like. Otherwise the programs they write will be pretty weird.
>• Machine language is necessary in any case, as output of many of the software programs I describe.
>• Expressing basic methods like algorithms for sorting and searching in machine language makes it possible to carry out meaningful studies of the effects of cache and RAM size and other hardware characteristics (memory speed, pipelining, multiple issue, lookaside buffers, the size of cache blocks, etc.) when comparing different schemes.
At the same time, his programs do things like modify themselves. CPUs and operating systems haven't let you do stuff like that for a while now, at least not without disabling system protections.
Also, the underlying hardware he's describing isn't universal. Look at the assembly language for a vector machine (like a Cray). It's nothing like MIX.
> And while you might think that assembly language is ipso facto an indicator of vintage, I think having at least one assembly language² in one's experience is helpful for having some idea of what the computer is actually doing.
These two statements are not contradictory. We stopped teaching CS using assembly language a long time ago, but we still do teach assembly in OS or computer architecture contexts, which is totally fine and should continue.
It's precisely because it uses a "fake" assembly language that it can continue to be timeless; the books written using FORTRAN are now historical artifacts, and perhaps only C would really survive that long; and C is basically glorified assembly language anyway ...
I'd say the reasons you say it's vintage are the exact reasons I'd say it's timeless! It's a fictional assembly language which is at the level of abstraction he wants and something that won't age because it never existed to begin with.
No, it's not timeless. Only the fact that it was updated to MMIX [1] proves that it's not.
And the fact that it's fictional doesn't mean it won't age. He didn't pull the language out of thin air and he wasn't working in a vacuum; he was inspired by the trends at the time of its invention [2]. I bet if it were written today, it would look quite different, as shown by his language update for the recent editions [3]. So it's anything but timeless, as said by even the author.
> won't age because it never existed to begin with.
Same as Da Vinci's helicopter? I think that one also aged quite poorly.
Sometimes original codepaths (such as BIOS) are still present in modern machines, but basically forgotten. In those cases you may be able to execute code that hasn't been tested/expected in quite a while.
However, I think the raw BIOS functions don't work in extended mode, but I may be wrong.
Thanks, I know nothing about the BIOS and other related hardware topics. Does "extended mode" mean UEFI? I guess an introductory OS lecture is going to be helpful.
Thanks for the explanation! I Googled a bit and found the topic fascinating (especially the parts regarding malware that is particularly difficult to remove)