My favorite Knuth story, attributed to Alan Kay (if you're around, would love confirmation):
When I was at Stanford with the AI project [in the late 1960s] one of the things we used to do every Thanksgiving is have a computer programming contest with people on research projects in the Bay area. The prize I think was a turkey.
[John] McCarthy used to make up the problems. The one year that Knuth entered this, he won both the fastest time getting the program running and he also won the fastest execution of the algorithm. He did it on the worst system with remote batch called the Wilbur system. And he basically beat the shit out of everyone.
And they asked him, "How could you possibly do this?" And he answered, "When I learned to program, you were lucky if you got five minutes with the machine a day. If you wanted to get the program going, it just had to be written right. So people just learned to program like it was carving stone. You sort of have to sidle up to it. That's how I learned to program."
I’ve posted this comment on HN before, but this is my best Knuth story:
In the 70's I had a co-worker, perhaps the best programmer in the department, who had gone to school with Knuth. He told me that one day while in college Knuth was using one of the available key-punch machines to punch his program on cards. My friend was ready to punch his program, so he stood nearby waiting for Knuth to finish. Knuth, working on a big program, offered to keypunch my friend's program before finishing his own, because my friend's program was shorter and Knuth could keypunch quite fast.
While watching over Knuth's shoulder, my friend noticed Knuth speeding up and slowing down at irregular intervals. Later he asked him about that and Knuth replied that he was fixing the bugs in my friend's Fortran as he punched it out.
> He did it on the worst system with remote batch called the Wilbur system.
I think you mean WYLBUR.
I had the "opportunity" to work with WYLBUR once in 1993, and I remember it to this day. "the worst system with remote batch" dramatically understates how bad it was. Hearing this raises Knuth even higher in my estimation.
> He did it on the worst system with remote batch called the Wilbur system.
How funny, because Wylbur was a large improvement on the default at the time, MVS's TSO (Time-Sharing Option). Wylbur was miles easier and faster than TSO.
So, not actually 'the worst system with remote batch' because that trophy would go to TSO.
The tl;dr is that Knuth wrote an elaborate implementation of a program to solve a particular problem, and Doug McIlroy replaced it entirely with a six-step shell pipeline. (Knuth's program was written using his literate programming tools, could be typeset in TeX, and involved some precise work with data structures and algorithms.)
I love this story as an example both of Knuth's genius and perspective, but also as a way to show what his level of dedication can achieve. It's an amazing intellectual accomplishment.
I also love this story as a demonstration of what those of us without that skill and dedication can achieve using the advancements built on the work of Knuth and others.
That was quite an unfair criticism, and even Doug McIlroy knew it (as he admitted later). The background is this:
- Bentley, the author of the column, invited Knuth to demonstrate literate programming using a program of his choice.
- Knuth insisted that to be fair, Bentley ought to specify the program to be written; else someone might object that Knuth chose a program that would be good for literate programming.
- Bentley chose (what we'd now call) the term frequency problem (list the top k most frequent words in a text file), and accordingly Knuth wrote a system program for solving just this one particular task. (Did everything from opening the input file to formatting the output, etc.)
- Doug McIlroy was asked to “review” this program, the way works of literature are reviewed. He happens to be the inventor of Unix pipes. Towards the end of his review, along with many other points (e.g. Knuth didn't include diagrams in cases where most of us would appreciate them, something I still struggle with when reading TeX and other programs), he used the opportunity to demonstrate his own invention, a shell pipeline using now-standard Unix tools (tr, sort, uniq, sed).
There are a few things wrong with this criticism:
- The main thing is that DEK wrote the program he was asked to write, so pointing out that he shouldn't have written that program is a criticism of the one who chose the program (Bentley mentioned this when printing McIlroy's review).
- At the time, Unix wasn't even widely available outside Bell Labs and a few places; it definitely wasn't available to Knuth or most of the column's readers.
- Knuth's program, fine-tuned for the task, is more efficient than the shell pipeline.
- Even if you use a shell pipeline, someone has to write the “standard” programs that go into it (the "tr", "sort", "uniq" and "sed" above), and literate programming can be used there. In fact, Knuth did exactly that a few years later, rewriting Unix's “wc” (IIRC) and comparing the resulting program with Sun Unix's wc; his LP version had, among other things, better error handling. (He explained it by saying that in conventional programming, if you have a small function and 90% of it is error-checking, it looks like the function is “about” error-checking, so there's a psychological resistance to doing too much of that, while with LP you move the error handling to a separate section entirely about error-checking, and then you tend to do a better job. BTW, TeX's error handling is phenomenally good IMO; the opposite of the situation with LaTeX.)
All that said, there is some valid criticism that Knuth prefers to write monolithic programs, but that works for him. He seems not to consider it a problem that to change something you have to understand more-or-less the entire program; he seems to prefer doing that anyway (he reads other people's code a lot: in Peter Seibel's Coders at Work, he was the only person interviewed who read others' programs regularly).
re: At the time, Unix wasn't even widely available outside Bell Labs and a few places; it definitely wasn't available to Knuth or most of the column's readers.
At that time, 1986, Unix was widely available - there were more than 100k Unix installations around the world by 1984. AT&T Unix Sys V and UCB 4.3bsd were available. Knuth was friendly with McIlroy, who was head of the Bell Labs computer science research group that begat Unix. Sun Microsystems was formed in 1982 - their boxes ran Unix, and Sun was a startup spun off from Stanford.
Hmm interesting; I remember checking this for the time when TeX was written (1977, because of questions about building on top of troff) -- what I remember finding is that Unix wasn't widely available at colleges then. Perhaps things changed in the next 9 years. As far as I know, Knuth's access to computers at the time was still through Stanford's AI Lab (that's why the first version of TeX in 1977-1978 was written in SAIL; see also the comment for 14 Mar 1978 in http://texdoc.net/texmf-dist/doc/generic/knuth/errata/errorl...). Do you know if Unix was installed on Stanford lab computers by 1986? What was the distribution of these 100k Unix installations (academia/industry)?
OK, Unix was probably available to Knuth, but the task given to Knuth was not to promote already-written programs! Had he done so, it would have been claimed that he failed to do what was asked of him.
Even today, if you got exactly the same task, with the goal of making the most efficient solution while caring about the limitations of the hardware available to you, and of producing a self-contained program (e.g. because your algorithm should run over hundreds of billions of words of input), you'd still probably end up producing something closer to what Knuth did than to what McIlroy did.
Which doesn't mean that it's not brilliant. But it's also not obvious, i.e. not something a "normal user" would "know":
- even if you knew that "tr command translates the characters to characters", did you know that you could (and must) write
tr -cs A-Za-z '
'
to perform the first operation of the six? What the -c does? What the -s does? That you could, and even had to, form the command line to contain a literal newline? I bet a lot of Unix users today would still not know that one.
- did you know what the fifth line, "sort -rn", was supposed to do? Would you know that you're sorting "numerically" (-n) and in reverse (-r), and that it would "work"?
- "sed ${1}q": how many people even today would know that one?
And after all that, the first of the two sorts needs to sort a file that is as big as the original input! If you have hundreds of gigabytes of input, you'd need at least that much space again just to sort it. McIlroy's approach is a good one for a one-off program, or for processing inputs that aren't too big, and only if you know you can use these commands the way he used them. But it's still not "a program" in the same sense that Knuth's program is.
Knuth's algorithm would, unsurprisingly, handle huge inputs orders of magnitude more efficiently. McIlroy was aware of that and intentionally hand-waved it away in his "critique." Read the original text.
But the major point is still: Knuth's task was not "use the existing programs" (or libraries) but "write a program" that does what's said to be done. The fair comparison would then include the source of all the sort, uniq, tr, etc. programs which McIlroy used.
And once that is done, McIlroy's code would still be less readable, less efficient, and worse overall.
Which, on the other hand, doesn't mean that for some purposes "worse" isn't "better": https://yosefk.com/blog/what-worse-is-better-vs-the-right-th... But for some purposes only "better" works and "worse" simply doesn't, e.g. when the scale of the problem is big enough. And Knuth teaches us how to solve such harder problems, and presents the complete solution, vs. doing tricks ("just call that library/executable, whose implementation and real limitations I'm going to avoid explaining to you").
And given the misunderstanding of the difference between showing how something is implemented (most efficiently) and the "just use pre-written tool X" approach, I understand even more why Knuth uses assembly in his "The Art of Computer Programming" books.
> But it's also not obvious, i.e. not something a "normal user" would "know":
> - even if you knew that "tr command translates... "sed ${1}q": how many people even today would know that one?
Are you suggesting it's ever been more likely for people to understand how to manage a trie structure in Pascal than use Unix command line tools? Or look flags up in the manpages?
Personally speaking, I'm comfortable doing both, but can't imagine many scenarios where I'd rather have ten pages of a custom data structure than six lines of shell. (And they all involve either high volumes of data or situations where I can't easily get to the process-level tools.)
> The fair comparison would then include the source of all the sort, uniq, tr etc. programs which McIlroy used.
If you're including the code that provides the surface abstractions, where do you draw that line? If the code for sort, uniq, etc. is fair game, why not the code for the shell that provides the pipelining, the underlying OS, the file system, the firmware on the disk? After all, who's to say that the programs in the pipeline don't just run one after another with temporary files written to disk, rather than in parallel? (Which I've seen happen in reality.)
The same is true for the other side, of course. The 'fair comparison' could easily require Knuth's solution to include the source for weave/tangle, TeX/Metafont/CMR, the OS, etc.
> And once that is done, McIlroy's code would still be less readable, less efficient, and worse overall.
What definition of 'worse' are you using?
* I expect sort/uniq/tr/sed to be better tested and understood than a bespoke program.
* If there are issues with the program, it'll be easier to find the skills/time to maintain a shell pipeline than custom trie-balancing code written in Pascal. (Even sitting alongside a prose description of the same.)
* The shell pipeline runs on a machine that can be found in a retail store today, rather than requiring an elaborate download/build process.
* It's possible that the custom solution runs faster, but not obvious without testing. (None of which is worthwhile outside of demonstrated need.)
Point being: it's very easy to find a definition of 'worse' that applies more to the custom solution than to the pipeline.
> The shell pipeline runs on a machine that can be found in a retail store today, rather than requiring an elaborate download/build process.
That argument points to the fact that your “view” of the whole topic changes the assumed definition of the problem that Knuth was given to solve. Read the original text once again: he was supposed to illustrate how “literate programming” could be used while writing a program that solves a given problem. It was definitely not “write an example of calling existing pre-written programs”.
And, of course, it was all in 1986, definitely not “to target the machine which can be found in the retail store in 2018.”
McIlroy behaved as if the goal had been different than it was.
> McIlroy behaved as if the goal had been different than it was.
How would you feel about McIlroy's solution if it was semantically exactly the same, but written in a literate approach? (Essentially a version of 'weave/tangle', but for shell scripts.)
How would you feel if somebody presented “six steps of clicking in Excel and SQL Server” that eventually produce the same result? The starting goal was simply not “show how to use and combine external programs,” even if you need some kind of skill to combine them. It's exactly the same kind of failure to fulfill the given task.
The reason I asked about a literate programming version of the shell script is that it speaks directly to Knuth's original stated goal: "I’ll try to prove the merits of literate programming by finding the best possible solution to whatever problem you pose"
In the context of that requirement, it's the use of literate programming that's more of a concern than the specific implementation. (Which is why I asked about a literate version of the shell pipeline.)
Earlier in the thread, you also mention this concern around data volumes:
> If you have hundreds of gigabytes of input, you'd have to have at least that much more just to sort it. McIlroy's approach is a good one for one-off program or not too big input processing,
There, your concern is not justified by the stated requirements of the problem: "I did impose an efficiency constraint: a user should be able to find the 100 most frequent words in a twenty-page technical paper"
I do think McIlroy failed to solve the problem of demonstrating the value of literate programming, but I'm not sympathetic to arguments that he should've used more complex algorithms or relied on less library code. This is particularly the case when the additional complexity is only relevant in cases that don't fall into the given requirements.
(A literate program that uses SQL server or Excel might be an interesting read....)
> The reason I asked about a literate programming version of the shell script is that it speaks directly to Knuth's original stated goal: "I’ll try to prove the merits of literate programming by finding the best possible solution to whatever problem you pose" In the context of that requirement, it's the use of literate programming that's more of a concern than the specific implementation.
And McIlroy's "solution" is provably not the "best possible solution" if you are interested in the algorithms, algorithmic complexity, the resources used, you know, all the topics studied by people doing computer science. All these topics are still relevant today.
That is the domain that was of interest to both Bentley and Knuth, and McIlroy "sabotaged" the whole thing by presenting effectively only a list of calls to stand-alone programs which he hadn't developed himself and which he declined to present. Even without looking at them, just by analyzing the best possible implementations of these, every student of computer science can prove that McIlroy's solution is worse.
If you carefully read the original text (and if you understand the topics of computer science), you can recognize that McIlroy was aware of the algorithmic superiority of Knuth's solution.
While I have been watching this subthread with increasing dread, I feel I should point out it was not a competition or contest to be "won" -- Knuth wrote an interesting program, and McIlroy wrote an interesting review of it.
Sure, it wasn't a competition. The fact remains: McIlroy criticized Knuth's presentation of complete algorithms which effectively solved the specified problem, by presenting just a sequence of calls to implementations that are provably algorithmically worse. In an ACM column whose topic was... algorithms, edited by Jon Bentley.
So if you consider that two sides presented their arguments regarding the algorithms related to the specific problem, we can still say that Knuth "won" that "dispute."
> presenting just a sequence of calls ... that must implement provably worse algorithms.
You've never really established why this matters given that the goal of the challenge was to present the value of literate programming.
The goal wasn't an optimal algorithm or minimal resource consumption - the goal was to demonstrate the value of literate programming on a small data processing problem.
This is a very different problem than writing an optimal or highly scalable algorithm.
> The goal wasn't an optimal algorithm or minimal resource consumption - the goal was to demonstrate the value of literate programming on a small data processing problem.
No. You obviously still haven't read the column and the article that explained what the goal actually was. The actual goal was to demonstrate Knuth's WEB system, which in 1986 was explicitly made only for Pascal. That was what Bentley asked Knuth to do (quoting from the article "Programming pearls: literate programming", Communications of the ACM, Volume 29 Issue 5, May 1986, Pages 364-369):
"for
the first time, somebody was proud enough of a substantial
piece of code to publish it for public viewing,
in a way that is inviting to read. I was so fascinated
that I wrote Knuth a letter, asking whether he
had any spare programs handy that I might publish
as a “Programming Pearl.” But that was too easy for Knuth. He responded,
“Why should you let me choose the program? My
claim is that programming is an artistic endeavor
and that the WEB system gives me the best way to
write beautiful programs. Therefore I should be able
to meet a stiffer test: I should be able to write a
superliterate program that will be noticeably better
than an ordinary one, whatever the topic. So how
about this: You tell me what sort of program you
want me to write, Crin d I’ll try to prove the merits of
literate programming by finding the best possible solution
to whatever problem you pose’--at least the
best by current standards.”"
So, in the context of demonstrating Knuth's WEB on a substantial piece of code, the only modification of the request was that Knuth wasn't allowed to use a program he had already written: he had to write a new one! (So the starting goal was, in effectively all its premises, exactly the opposite of what McIlroy then showed!)
So the goal was to write and present a wholly new program in Knuth's WEB which, under the standards of evaluation of the quality of a solution widely accepted by computer scientists, would be "the best solution." Which is exactly about the optimality of the algorithms, resource use, etc.
If you still don't fully appreciate the context of Knuth's program, do search for all the other columns and computer science books written by both Bentley and Knuth -- the topics of both were never "how to use existing programs" but how to develop and use the best algorithms.
> You obviously still haven't read the column and the article that explained what the goal actually was.
...
> do search for all the other columns and computer science books written by both Bentley and Knuth
Since this is public, I'll conclude by noting here that I had indeed read both of the articles, and a bunch of other text by both Bentley and Knuth besides. (In fact, the Programming Pearls books are particularly high on my list of recommendations...)
Then what I can conclude is that you claim to have read the articles but still ignored their content (if you did read them before writing here); otherwise you would not present here a false interpretation of what Knuth's specific task in these specific articles explicitly was.
I think it's possible and desirable to simultaneously respect both approaches for their respective merits. Maybe ironically for this profession, the choice isn't either/or.
To his great credit, Knuth included McIlroy's critique, in full and without response, when he published Literate Programming. Knuth also conceded in an earlier chapter that literate programming was probably a crazy idea and that promoting it was perhaps an act of madness! It's delightful that we live in a world where there's room both for industrial-strength Fabergé eggs and hack-it-together pipelines. :)
McIlroy's critique is astute on its own, yet perhaps not relevant in the context of what Knuth had been asked to do, namely present an example of literate programming. That's not to say that McIlroy's script can't have a literate form, only that it wouldn't serve as a good exemplar.
It all serves as a good reminder of how Knuth's work should be used & viewed: it shouldn't be taken as insight into how to manage and develop software projects. It's more useful as a teaching tool for how to think about solving problems. In my mind it's the difference between "pure" science and practical engineering, though I'm not sure that's a perfect analogy.
To try another analogy, it's like Knuth was asked to make a custom set of clothes. And McIlroy's critique seems like saying, "Not everyone can afford custom-made clothes, and the Gap produces perfectly serviceable clothes that are far more practical in most situations." It's correct, sure, but rather beside the point in the context of the original request.
I love that story, although I think the takeaway needs to be slightly updated. It's quite possible that a sorted word count was easiest to do with bash and unix utilities in 1992. Now, it's easier and more comprehensible in your favorite scripting language (which I'm assuming isn't bash).
The real lesson is to program using the most powerful tools at your disposal.
That's a good lesson, but we shouldn't stop there. Knuth is a computer scientist, but also an artist: writer, musician, typographer, and creative programmer. His insights into programming-as-literature have infused an often-soulless industry with something soulful. He began with the popular notion that programming should be a creative, playful act, and tried to elevate mere playfulness into the realm of fine art. An incredible detour in an amazing career, and an enriching lesson of a different kind.
> It's quite possible that a sorted word count was easiest to do with bash and unix utilities in 1992. Now, it's easier and more comprehensible in your favorite scripting language
It'd be interesting to see how true this is in reality. I'm not at all convinced that most scripting languages can do this quite as concisely as bash and the Unix userland. (But, being honest, this problem seems very well aligned with that tooling's strengths.)
It is easy to do in your favorite scripting language. But it is unlikely to be as short. Comprehensibility is in the eye of the beholder.
Here is a Perl solution for comparison.
use strict;
use warnings;

# first command-line argument: how many words to print
my $limit = shift @ARGV;

# count case-folded words from stdin (or files named on the command line)
my %count;
while (my $line = <>) {
    while ($line =~ /(\w+)/g) {
        $count{lc $1}++;
    }
}

# sort by descending count, breaking ties alphabetically
my @words = sort { $count{$b} <=> $count{$a} or $a cmp $b } keys %count;
$limit = @words if @words < $limit;
print "$_\t$count{$_}\n" for @words[0 .. $limit - 1];
Here is a Perl 6 (Raku) version:

sub MAIN($limit = Inf) {
    my %bag is Bag = words.map(&lc);
    say "{.key}\t{.value}" for %bag.sort( {
        $^b.value cmp $^a.value || $^a.key cmp $^b.key
    } )[^$limit]
}
And for what it's worth, the `words` function is lazy, so it won't read all of the words into memory first.
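A matching usage sketch (the filenames are mine, for illustration; this assumes 6.d semantics, where $*ARGFILES inside MAIN reads standard input):

raku topwords.raku 10 < input.txt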
Python is relatively short, batteries included and all that:
import re
import sys
import collections

# count all lowercase-folded words from stdin
words = re.findall('[a-z]+', sys.stdin.read().lower())
c = collections.Counter(words)

# print the top N words, N taken from the first command-line argument
for word, count in c.most_common(int(sys.argv[1])):
    print(f'{count:>7} {word}')
Three times the character count of the Unix pipe version, but IMHO a lot more readable (and generalisable).
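A usage sketch, assuming the script above is saved as wordfreq.py (both filenames are mine, for illustration):

python3 wordfreq.py 10 < input.txt

As with the ${1} in the shell version, the first argument says how many words to print.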
A variant that streams line by line instead of reading all of stdin at once:

import re
import sys
import collections

c = collections.Counter()
for line in sys.stdin:
    words = re.findall('[a-z]+', line.lower())
    c.update(words)

for word, count in c.most_common(int(sys.argv[1])):
    print(f'{count:>7} {word}')
The real benefit is that each step is a single command that is easy to test in isolation, and it's multi-process. That's not possible in most scripting languages.
> McIlroy literally calls it a script in his review (notice the ${1}).
Ok, there's a single shell substitution; if it were fixed, would you still call it a script? Technically the result of that is itself a sed script ("3q"), but if you count either of those then there isn't a lot of wiggle room between script and command; the arguments to tr are by far the most complex "script" involved.
> Nothing prevents you from unit testing in scripting languages.
That is a world away from what I'm talking about. Each stage of that pipeline can be executed on the CLI in isolation; you'd be replicating a lot more in nearly any scripting language, except maybe perl and awk.
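To illustrate (my throwaway example, not from the thread): the word-splitting stage can be checked by itself straight from the prompt:

echo 'Hello, hello world!' | tr -cs A-Za-z '
'

which prints Hello, hello, and world on separate lines; each later stage can be probed the same way.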
> Multi-process, sure, but most people aren't looking for that.
Neither am I generally, still quite nice when you get it for free though.
You're really splitting hairs. You can execute a Python command in a REPL. There's little material difference between scripts and commands for our purposes. And scripting languages provide facilities to test functions in isolation.
> He has fashioned a sort of industrial-strength Fabergé egg—intricate, wonderfully worked, refined beyond all ordinary desires, a museum piece from the start.
Curiously, I independently came up with this metaphor in my review of The MMIX Supplement to The Art of Computer Programming [1]:
"You can marvel at its intricacies like one marvels at a Fabergé egg."
That's a good one. It's one of the classic examples of the Unix philosophy and how pipelines can be powerful, although of course not applicable in all cases, and they have their issues too - which often becomes an argument on HN (about text streams vs. object pipes a la PowerShell, etc.).
I had seen it a while ago and blogged about it, with solutions (approx.) in Python and shell.
There is also the book "Exercises in Programming Style" that uses the term frequency problem to illustrate 33 different programming paradigms (including a shell solution).
The main reason I like the story is that it's good fodder for discussion about priorities in software development. After all, counting words isn't exactly an interesting problem in 2018, but deciding the effective use of libraries and programmer time is still very interesting.
Knuth has said that the story as told above is apocryphal. I quote from here (http://www.informit.com/articles/article.aspx?p=1193856):
> Donald: The story you heard is typical of legends that are based on only a small kernel of truth. Here’s what actually happened: John McCarthy decided in 1971 to have a Memorial Day Programming Race. All of the contestants except me worked at his AI Lab up in the hills above Stanford, using the WAITS time-sharing system; I was down on the main campus, where the only computer available to me was a mainframe for which I had to punch cards and submit them for processing in batch mode. I used Wirth’s ALGOL W system (the predecessor of Pascal). My program didn’t work the first time, but fortunately I could use Ed Satterthwaite’s excellent offline debugging system for ALGOL W, so I needed only two runs. Meanwhile, the folks using WAITS couldn’t get enough machine cycles because their machine was so overloaded. (I think that the second-place finisher, using that "modern" approach, came in about an hour after I had submitted the winning entry with old-fangled methods.) It wasn’t a fair contest.
> As to your real question, the idea of immediate compilation and "unit tests" appeals to me only rarely, when I’m feeling my way in a totally unknown environment and need feedback about what works and what doesn’t. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up."
Most people who learned to program on batch-oriented systems developed those habits. When you had to wait hours or even overnight for your compile to run, you spent a lot of time at your desk checking the code for syntax errors, and running the logic in your head.
I once had to write a program for a class and didn't have access to a compiler. I wrote it all in Notepad and double-checked everything more thoroughly than I ever would have done otherwise. When I got to the computer lab, the compiler found one or two typos and the program ran perfectly the first time. The experience changed how I write code. It's worth trying out at least one time.
When I was a kid, I learned Pascal from a book. We didn't even have a PC at home (it was the 80s) but my mother was a college professor and had access to the mainframe there. I wrote out a whole program to run Conway's Game of Life on a bunch of notebook paper, and one day when I was off from school I came in and typed the whole thing in at a terminal. I ran the command to compile it, and it spit out hundreds of syntax errors. Then it was time to go home.
It was not an auspicious start to my career in computer science.
That's how I learned C. I bought a copy of K&R, but didn't have access to a Unix system with a C compiler for about a month (when my new job started). I read the book and wrote out all the exercises in a notebook.
When it came time to try my code, I spent a while correcting some basic misconceptions that I had developed but that hadn't been corrected by an actual compiler.
> It was not an auspicious start to my career in computer science.
I'd argue the opposite!
Learning how to "run" code in your head, cover edge cases and invariants, is a great skill to practice. When I was at a hospital after a difficult operation, I used to write down pages and pages of C64 programs in my notebook, too :-)
Even if I never typed them all out later, I like to think this made me a better programmer. It trains your brain in a way that a REPL doesn't.
I used to sit in my grade 12 English class working out polygon fill-and-shade routines on graph paper instead of whatever nonsense Shakespeare we were going through at the time.
Worked out really well for me - I'm employed well as a programmer, and I got to come to Shakespeare basically fresh when I was old enough to actually appreciate the work.
How old is that? I'm over 50 and I still can't get into Shakespeare. Watching a performance is OK, though not something I'd choose to do on my own, but reading the plays? Can't get into it at all.
Have you tried an annotated one? Shakespeare is full of, essentially, in-jokes and memes from the 1500s. There is a ton of depth in his writing that you'll totally miss if you don't know all of that context.
Also, lots of bawdy humor that gets glossed over by staid and respectable modern productions. For example: the "bite your thumb" gesture that gets used in a few plays is pretty much the equivalent of somebody from today giving a double middle finger while sticking their tongue out.
I originally learned to program in 6502 machine code. Typically I would write code in spiral notebooks while watching the late late late show and hexkeypad it in afterwards. Made it a little easier to edit things that way...
To me, this is one way to show evidence of mastery of an engineering discipline: The ability to do it once and have it come out right. If you asked a modern software engineer who's used to fast modern tools, they'd tell you this was impossible. Everyone's pretty much settled into this kind of workflow:
Yes. A real feedback loop is better than a mental one in your head. “Turn off your targeting computer and use the Force, Luke” only works if you put a lot of mental power into it (like Knuth), but it would have also discouraged a lot more people without that ability from programming at all.
In the past you had to be Hawkeye to program; now most of us just put on our Iron Man suit and get on with it.
Nevertheless, the more complete and accurate your picture of what you are attempting to achieve, and where you are, is (at every level of abstraction), the fewer iterations it will take to get done - or the fewer bugs remaining after a given elapsed time.
The number one thing I try to build up for my teams is a mental picture of the entire system. It is amazing to me how much resistance I get to the idea.
Yes, if you're writing your -own- code, learning to envision what the system must do in your head first is very, very valuable.
When you're on a team, however, you -have- to compartmentalize. You have to create abstractions, black boxes, functionalities that you -could- dive into, but which you will accept only understanding at a contract (interface) level. The skill of envisioning the system is still useful, of course, but there will be black boxes.
The problem that causes, of course, is that every abstraction is leaky. You didn't know that calling this method also fired off this event, and so you -also- triggered that event. Or whatever. Hence, bugs and iteration. You also have to deal with -bad- interfaces, a bajillion parameters, or a parameter object that isn't intuitively obvious how to initialize, and you start having to iterate.
The problem is not just, or mainly, leaky abstractions - you can't make a system out of a collection of opaque abstractions. An important part of Knuth's genius is seeing deeply and clearly how they have to interact. Leaky abstractions are just one sort of problem that arises when this doesn't happen.
If we want to go there: modern tooling allows you to forget some of the details you used to care about and focus more on other things, but the tooling is rarely comprehensive, and there will always be more to consider until the entire programming process is automated.
As modern tooling chips away at the accidental difficulties of software development, systems-level situational awareness becomes relatively more central to the process.
Sort of. I've seen these productivity-enhancing tools, used by people who have absolutely no clue what the tools are doing on their behalf, produce some nightmares that sort of work, sometimes. To get geekier with the analogy, a lot of people are like Spider-Man (the Tom Holland Spider-Man), who need to learn to use the powers of their suit before they're given access to the more powerful, more dangerous stuff.
It is much more impressive that Knuth came up with and wrote TeX than that it was bug-free or written all in his head.
Computer time is no longer at a premium and bugs can be fixed.
Useful, creative, and clear solutions usually trump perfectly correct ones, except in special cases such as when life is on the line or you're writing programs to fly to the moon, where getting it right the first time is of utmost importance.
"Useful, creative, and clear solutions usually trump perfectly correct ones"
Actually I've found that the most useful solutions are those that are creative and clear.
But yeah, it totally depends what one is doing. Usually if the algorithm is mission-critical and the component is mission-critical, you can weasel out enough time to make it right (so you never have to return to it again, and you get a reputation as a guy whose code just works).
In some domains it is; for example, you won't create a fun arcade game without iterating 1000s of times on the basics (control scheme, physics, etc.). Often it isn't.
I learnt programming by writing small games, so quick iteration is my first instinct, and I can see myself sometimes jumping to write code too quickly.
During big refactorings, I admit I sometimes find myself relying on compiler errors as a crutch to find (for example) which API layer I haven’t added the parameter to yet.
I’ll also rely heavily on my IDE’s function signature completion, to remind me whether some method I’m calling takes an int or an unsigned int, rather than have it memorized like I used to.
This might be why a lot of people (including me) hate whiteboard-coding interviews: we’ve gotten so spoiled by our tools that we can’t code without them!
> During big refactorings, I admit I sometimes find myself relying on compiler errors as a crutch to find (for example) which API layer I haven’t added the parameter to yet.
That's not a 'crutch' - it's literally what those compiler errors are for! The alternative would be for the compiler to do something nonsensical, which would error out at runtime.
And when it comes to white-board coding, you should arguably be using pseudo-code anyway - your goal is then not to come up with something that will run, but to convince your interviewer that the code is 'morally' correct and that any subsequent fixes are well within your skill level.
My biggest problem with whiteboard coding is writing text in straight lines. You never realize how much of a liability being left handed is until someone asks you to do whiteboard coding (ya, many lefties train for this, but some don’t).
Computers have been a godsend for my penmanship. On the other hand, I guess I relied on them too much as a kid.
No, we definitely do it forwards; fountain pens and cursive used to be a serious problem, as is having your hand over what you've just written, so I write "overhand".
(To see this, do a 45 degree '/' with your right-hand pen, then leave the tip in the middle and make your left hand into a mirror image in the plane of the '/')
My dad was left-handed. He'd use ball-point pens that didn't smear, and when he was taking notes he'd flip the notebook so that the binding was on the right.
Pens are the worst; they conk out so quickly! iPad Pros are really useful here. I wish my hands were transparent: occlusion is also a real problem (it is hard to write a straight line of text if you can't see what has already been written).
This might be why a lot of people (including me) hate whiteboard-coding interviews: we’ve gotten so spoiled by our tools that we can’t code without them!
Isn't that like someone saying they've gotten so spoiled by training wheels, they can't ride a bike without them? It's not like I'm one to talk 100% of the time. I think I couldn't get my company's project to build, without a few weeks of reading and understanding a lot more of the build system. However, I've also coded on an 8 bit machine by flipping 8 switches and pressing a commit button for each byte.
It's a worthwhile exercise to do some coding with nothing but a text editor and a debugger once in a while. That isn't going 100% to the bare metal, but it's a level that's very worthwhile for working on basic skills. Entire programming education books have been based on this idea.
The difference is that training wheels are intended to be a temporary assistance until you've learned to operate without them, while development tools are meant to be a productivity boost. If you're working with a toolchain and aren't leaning on it, you're not working to your full potential.
Whether being able to work without the chain is also important is an independent issue. (I happen to prefer vim+scripting languages over IDEs+compiled. But I recognize it as a personal preference, and not a question of moral superiority.)
The difference is that training wheels are intended to be a temporary assistance until you've learned to operate without them, while development tools are meant to be a productivity boost.
Right, but pro cyclists don't say they couldn't do it without a piece of equipment X. X is just a performance boost. Some coders say they couldn't practically do it at all without X. There would be a big difference in fitness between a commuter being unable to make a certain trip without a motor assist, and a rider who could do the same trip without the motor assist. I think most would look askance at a "pro" in the first category.
If you're working with a toolchain and aren't leaning on it, you're not working to your full potential.
Yes, but you need to be wary that you're using the toolchain for its intended purpose. The toolchain is supposed to be saving you typing and lookup time. It's not supposed to be substituting for your actual understanding of the code. The former is a good thing, and you should be good at using the tool for that. The latter is a bad thing, and you shouldn't be doing that. By working out with nothing but an editor sometimes, you can work out in a way that guards against that.
No football player plays actual games running through tires, but the exercise is apparently helpful.
But I recognize it as a personal preference, and not a question of moral superiority.
It's not moral superiority. It's using tools as intended and not substituting for understanding.
A lot of what compilers catch are careless errors like mistyped variable names and missing semi-colons. This is not fundamental understanding of the code. This is a level of care that it is perfectly fine to let your toolchain handle for you.
A more interesting case is type checking. One of the reasons to use static type checking is that you are free to change the type of a variable or argument to a function, and then let your compiler tell you what needs to be fixed. Each individual fix is straightforward and easy - you understand it. But thanks to the toolchain you don't have to find them.
Choosing to work with dynamic typing and exercise vigilance is a reasonable choice. (In fact this is what I choose.)
Choosing to work with static typing and let the toolchain help you in ways that it was designed to help is a reasonable choice. But choosing to work with static typing and refusing to take advantage of how it lets a toolchain help you is not a choice that makes sense to me.
That you should know how to operate without the tools is one thing - I strongly support it. But maintaining your practice in doing so is quite another.
Choosing to work with dynamic typing and exercise vigilance is a reasonable choice. (In fact this is what I choose.)
The biggest chunk of my professional work is still in Smalltalk.
But choosing to work with static typing and refusing to take advantage of how it lets a toolchain help you is not a choice that makes sense to me.
Yet despite being a Smalltalker for years, I'm still an advocate of static type annotation. It enables you to do more refactoring than not having it. It enables you to know sooner about incorrectly written code.
That you should know how to operate without the tools is one thing - I strongly support it. But maintaining your practice in doing so is quite another.
The "maintaining your practice" I'm advocating in this thread is merely: "Code without an IDE once in awhile to make sure you know how to operate without the tools." You don't have to fire drill every day. There is some benefit to doing it once in awhile, however.
I'm not advocating not using an IDE. I'm merely advocating for knowing exactly what it is the IDE is doing for you and doing to you. It's just wise practice for any professional power tool.
Yes, but you need to be wary that you're using the toolchain for its intended purpose. The toolchain is supposed to be saving you typing and lookup time. It's not supposed to be substituting for your actual understanding of the code.
The problem is, I don't think anyone completely understands the code when they write it. 90% of the time, you're writing code with a library written by someone else and you have an abstract understanding of what it does. The whole point of code is to abstract away as much detail as possible, so you don't have to know what the machine-code equivalent of CPU X's add instruction is when coding a website, for example. You just type + and the compiler/interpreter does everything for you.
The problem is, I don't think anyone completely understands the code when they write it. 90% of the time, you're writing code with a library written by someone else and you have an abstract understanding of what it does.
The trick is this: Do you actually have that good abstract working understanding, or have you only convinced yourself? This is the difference between sloppily convincing yourself you understand a word salad, or being able to coherently teach a concept. It's even a further step to be able to understand the specification of something in enough detail to be able to implement it and to see potential pitfalls. (This is the difference between real science and cargo cult science: Predictive Power.)
You just type + and the compiler/interpreter does everything for you.
There's a world of difference between just typing "+" or "/" because you've seen it and just going on token frequency/pattern matching, and really understanding the concept.
X = Y + Z
Is often going to be quite different in control flow consequences from
X = Y / Z
If Z happens to be zero.
If one is a careful programmer who has done substantive work, one should know there's a world of difference between a specification of a program that sounds good on the surface and a really good specification. After many years, one will have encountered many specifications which had to be re-thought one or more times to be practically implemented.
No, you shouldn't have to always rewire your program in hardware from NAND gates you make yourself from silicon in the bucket of sand under your desk. (I've actually quit a job because the manager was going overboard with that attitude.) But you should be able to peek under the next level of abstraction, and have enough working knowledge to become wary and know when you should be peeking. Either you can do this, or you do not have that level of skill/knowledge. Simple as that.
(Addendum: If you think to yourself that you can't do that, there's two common reactions to that. Either you tell yourself excuses and denigrate and don't bother, or you roll up your sleeves and learn it for yourself. You choose.)
There's a world of difference between just typing "+" or "/" because you've seen it and just going on token frequency/pattern matching, and really understanding the concept.
X = Y + Z
Is often going to be quite different in control flow consequences from
X = Y / Z
If Z happens to be zero.
Obviously, but that is more of a concept in understanding mathematics than writing and understanding code. For example, do you need to know that Y / 0 returns a custom exception inherited from several parent classes under the parent Exception? Are you really thinking about all that when you code? Or is it mostly irrelevant, and you just need to know that an error occurs and that you need to be mindful of it (regardless of whether the error comes in the form of an exception, error code, hardware interrupt, etc.)?
APIs are written specifically to avoid needing to peek under the code, and you should only really need to if they are poorly documented. Even then, you seem to think only the next level of abstraction is warranted, and not deeper levels (i.e. IL/x86 assembly or machine code). That next level of abstraction can yield important lessons in code optimization, because what you write in a higher-level language can be implemented in multiple ways at a lower one (sometimes to the detriment of performance).
Either way, most coding is a black box exercise. While looking under the hood is useful and informative at times, nobody has the brain capacity or time to absorb it all and apply it. Which is why the smart people built upon other smart people to put a model in place that can be applied without knowing what machine instructions your computer spits out after compilation, or without knowing the implementation details of how you get a list of a specific type or how it adds/removes/copies/etc. Same as science. You don't need to do an experiment every time to know the motion of the planets, measuring their positions in the sky each time to calculate the orbits and deriving the equations through calculus. You simply skip to the step of Newtonian mechanics and kinematic equations.
Imagine if everyone had to learn how an engine worked in order to operate a vehicle. Hardly anyone would be able to drive.
Obviously, but that is more of a concept in understanding mathematics than writing and understanding code. For example, do you need to know that Y / 0 returns a custom exception inherited from several parent classes under the parent Exception? Are you really thinking about all that when you code?
Indeed. In a conversation like this, the mention of divide by zero is just supposed to evoke all of that for an audience of programmers.
APIs are written specifically to avoid needing to peek under the code
Any product is designed to be simply used. The difference between an end user and a pro is that the pro can sometimes go a little further and sometimes needs to because they can push the product harder.
Either way, most coding is a black box exercise.
As is most professional activity of any kind. Most of any job is kind of routine. That's why they call it "routine." What distinguishes the consummate pro is the ability to go beyond when needed.
While looking under the hood is useful and informative at times, nobody has the brain capacity or time to absorb it all and apply it. Which is why the smart people built upon other smart people
Being a smart person means taking into account context and getting the best cost/benefit. Nobody who is "smart" would advocate knowing absolutely everything about everything, all the time. That's clearly a straw man. (Perhaps you are pressing things in a certain direction?) It's also clearly not the position I'm advocating for. Likewise, nobody who's smart would simply advocate for ignorance. Not even the smartest people are infallible. Smart people are simply prepared for when things go wrong.
Imagine if everyone had to learn how an engine worked in order to operate a vehicle. Hardly anyone would be able to drive.
Funny you should mention this, but I was about to bring up that analogy, then decided to leave it off. I guess you're indicating it should be brought up. A typical driver doesn't need to know much about their engine. However, a professional driver of one of several different types is very well served by some knowledge of engines. Such knowledge isn't needed all the time, but when it is needed, the potential costs of not knowing can be quite high. You could lose a race, lose money, or lose a life.
The problem is, what you define as a pro is completely vague. There's no adequate definition of one; it's simply presumed that a pro "knows what to do in situation X in domain X," which is almost no different from "knowing what to do in every situation in domain X." There's literally no difference; it's that vague.
Professional drivers really don't need to know much outside of the behavior of what they experience while driving. This is why there are things like pit crews and staff that support the driver: it's so the driver has to think about driving, not about what fuel-to-air mixture is adequate to prevent piston knock.
This is the problem: nobody adequately defines what a professional is. It can only be seen when "someone knows what to do," which implies having broad, wide-scoping knowledge about a topic, which again is supposed to be something that isn't required and is the point of having models in the first place.
It's not that having in-depth knowledge is bad; it's just that in-depth knowledge is often not required, and it doesn't make you any less of a professional for not "knowing it at the right time".
And with regards to Y / 0, a programmer is mostly thinking about how to catch the exception properly for a given task. He does not care that the exception is nested X levels deep in the class hierarchy. He does not even need the mathematical understanding of Y / 0, just that it's an error state. He does not care how the error code is generated, just that it exists. He does not care how the list adds/removes items and what the worst case runtime is; he just cares about its use. Because the point is, you're not supposed to care about implementation details. Abstraction is king and you can go a long way without knowing a lot.
The problem is, what you define as a pro is completely vague.
I think it's pretty clear from this thread. In the context of programming, it's someone who can do what a "mere user" can do, but who can push the thing to extreme limits, modify it to do something completely new, or fix it if it's somehow broken or has holes in its design.
Professional drivers really don't need to know much outside of the behavior of what they experience while driving.
In the case of racecar drivers: They need an intuitive understanding of what their engine is capable of, and how far they can push it based on what it's been doing. They need an understanding of, and a good feel for, driving physics, and they need to understand how the wear they've put into their tires could affect performance. There's a lot to know about competition rules and regulations. They need to know enough about the physics of car aerodynamics and how it affects their grip in different situations. Not knowing such things has literally gotten professional drivers killed. And "professional" encompasses more than just racecar drivers. Truck drivers working in extreme conditions have to know quite a bit, it turns out. Truck drivers who are pointedly ignorant about their engines can end up costing someone a lot of money in repairs. "Professional driver" actually encompasses a number of careers. The point is, professionals have to know a lot that goes much, much deeper than just being an "end user."
And with regards to Y / 0, a programmer is mostly thinking about how to catch the exception properly for a given task.
Not necessarily. This would vary a lot depending on the language and the particular task.
He does not care how the list adds/removes items and what the worst case runtime is; he just cares about its use.
Not necessarily. This would vary a lot depending on the language and the particular task.
Because the point is, you're not supposed to care about implementation details. Abstraction is king and you can go a long way without knowing a lot.
The king is just another fallible mortal. All abstractions leak. I'll grant that there are professionals who can mostly get away with basically just being an end user. 95+% of the time, everything will be routine and copacetic. It's that last few percent where things can get really dicey and expensive. (Also the basis of a lot of the money for "consultants.") If someone wants to run a shop where someone isn't prepared with the know-how to deal with that, I guess it's their business. That's not what I'd consider a very high level of "professional."
I think it's pretty clear from this thread. In the context of programming, it's someone who can do what a "mere user" can do, but who can push the thing to extreme limits, modify it to do something completely new, or fix it if it's somehow broken or has holes in its design.
It's not clear at all, because you can simply move the goalposts. Someone with 20 years of programming client-side apps struggles with programming a website and can't figure things out in situation X without help. Does that mean they are automatically not a professional?
In the case of racecar drivers: They need an intuitive understanding of what their engine is capable of, and how far they can push it based on what it's been doing. They need an understanding of, and a good feel for, driving physics, and they need to understand how the wear they've put into their tires could affect performance...
Again, another arbitrary definition. I bet you can go to most professional drivers and they won't understand the physics at all; they have an understanding from experience and consultation with the experts, but I would bet that 90% of drivers aren't going to pull out calculus or kinematic equations to analyze a race track. They either hire someone to do that or use a computer (most likely) to simulate the race. And even then, the simulations are inaccurate due to driver emotional state. My guess is they have, at most, a surface level understanding of driving physics.
> Not necessarily. This would vary a lot depending on the language and the particular task.
So does everything, but we know that Y/0 is a programmed failure state based on the library being used. So the solution is either to catch the exception (and by "catch" I mean control in a broader sense, since you can condition the input), to use a different library, or to write your own. I would hope that, in a business setting and in most settings, you would choose to catch the exception or control for it in some way.
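As a minimal sketch of the first two approaches (Python here just for illustration; the function names are made up):

    def divide_catching(y, x):
        # Let the operation fail, then catch the exception.
        try:
            return y / x
        except ZeroDivisionError:
            return None  # or log it, substitute a default, re-raise, etc.

    def divide_guarded(y, x):
        # "Condition the input": guard before dividing, so the
        # failure state is never reached at all.
        if x == 0:
            return None
        return y / x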
> It's that last few percent where things can get really dicey and expensive. (Also the basis of a lot of the money for "consultants.") If someone wants to run a shop where no one is prepared with the know-how to deal with that, I guess it's their business. That's not what I'd consider a very high level of "professional."
So you base your decision on the few percent of people that can do X vs. the 95% of people that can't, but can solve all other problems without knowing the details? Seems like a very irrational outlook considering the following:
a) Some of these top 5% of people may not even exist
b) If they do exist, they are most likely consulting
c) They are being paid more than most businesses can afford
d) They work for companies you probably don't work at
e) They are doing research work and publishing papers you probably never read
f) They probably can't solve X problem outside their expertise without assistance
But the point is that even though abstractions can leak, they rarely do. And that's the whole point of engineering and technological advancement in general: so that you don't need to know the details.
> It's not clear at all, because you can simply move the goalposts. Someone with 20 years of programming client-side apps struggles with programming a website and can't figure things out in situation X without help. Does that mean they are automatically not professional?
People are experts in different things, but general skills can be applied. My wife notes that there's an effectiveness and mindfulness constant to be applied to "years of experience." There are people in her field who have 20 years of experience, who know less about the regulations and subtle aspects than she has learned in 2.
> I bet you can go to most professional drivers and they won't understand the physics at all
Note that I wrote intuitive understanding. It would be highly inaccurate to say they "don't understand it at all." I would question the general understanding of someone who would say that.
> My guess is they have, at most, a surface-level understanding of driving physics.
Something that someone has practiced over many years in a competitive environment isn't just "surface." This is why educated people should have at least two areas in which they've delved deeply, so they have a firsthand knowledge of what "deeply" means for knowledge.
> I would bet that 90% of drivers aren't going to pull out calculus or kinematic equations to analyze a race track.
That's a ridiculous suggestion. Projecting that position on someone is either grasping at straws to make a straw man, or some other form of bias. If a driver knows enough to intuit there might be a way he can improve his line, such that he can seek out another expert's help, then I'd say he could well be a "consummate pro." It's the curiosity, awareness, and drive to peek under the surface which is the difference.
> So you base your decision on the few percent of people that can do X vs. the 95% of people that can't, but can solve all other problems without knowing the details?
A more concise way of putting it is: "Are you smart and informed enough to know what you don't know? Is that sufficient to keep you out of trouble?" The Pareto principle often rears its ugly head in reality. That last few percent can really, really cost you.
> If they do exist, they are most likely consulting
I was an example.
> They are being paid more than most businesses can afford
There's an old saying for this: "A fool and his money are soon parted."
> They work for companies you probably don't work at
Again, I was once such a consultant. Also, there are coworkers at my current job who are curious, energetic, and smart enough to have such a position, but who don't want one right at the moment.
> They are doing research work and publishing papers you probably never read
Nah. Just a modest level of basic curiosity is enough to get you there.
> They probably can't solve X problem outside their expertise without assistance
Which is fine, if they're smart enough to know what they don't know, so that they can gracefully navigate their situation.
> But the point is that even though abstractions can leak, they rarely do.
Boats leak. Could be rarely. Could be a lot. Both can be true of the same boat. It depends on how hard you're pushing that equipment. People can and do make money driving a boat no harder than a dilettante hobbyist. People can and do make money using technology at about that level too. In either case, I just hope everyone knows what they don't know, so no one gets in over their head and drowns.
> And that's the whole point of engineering and technological advancement in general: so that you don't need to know the details.
The point is to get stuff done and to save money while making money. Knowledge is power, but ignorance helps someone else's margins. "You pays your money, and you makes your choice."
Never mind that this post is full of contradictions: for example, claiming that years of experience require mindfulness and effectiveness, and then saying that someone who has practiced over many years doesn't just have a surface-level understanding.
You're also straw-manning me by misrepresenting what I said about the driver having a surface-level understanding of vehicle physics, which you effectively concede by saying it's absurd for the driver to use equations to analyze a race track. It's pretty clear, to any physicist, that if you don't know how to model kinematic movement, you have only an intuitive or surface-level understanding.
Intuition is probably the worst thing to champion, as it's not measurable and is often unreliable. For example, it was intuitive that the sun revolved around the Earth and that large objects fell faster than smaller ones.
The whole point of life is to do things you don't know how to do, because otherwise you never grow. It seems like you're saying the opposite: that people should know what they don't know and never approach it.
The only way you get stuff done and save money is if you rein in the details and make things easier. Again, it's the reason people aren't writing their own languages from scratch and are instead using an existing language and framework.
It's like a lumberjack so used to cutting down trees with a chainsaw that they'd have trouble putting down a decent tree with an axe. Powerful tools do certain subtasks for you; if you expect to use them all the time (and it's a reasonable expectation in your domain), then it makes perfect sense that you'd forget those subtasks.
It's like manual memory allocation and proper deallocation - been there, done that, but after 10+ years of working with GC languages, I would definitely have some memory-leak bugs if I suddenly had to do that again. It is a basic skill, but like many other basic skills, it's one you can ignore in most domains.
> It's like a lumberjack so used to cutting down trees with a chainsaw that they'd have trouble putting down a decent tree with an axe.
Bad analogy. It's more like a "lumberjack" who thinks all it takes is pressing the controls on a chainsaw. There are some more, very important things to know, and not knowing them can cost time and money, or even get someone badly hurt. The chainsaw has to be maintained, with possibly severe consequences if it isn't. You need to know how to get the tree to fall where it's supposed to. You have to know how to cut so the weight of the tree doesn't clamp the chain.
It's a bad analogy because the important issue is whether someone is letting the tool substitute for understanding. I guess some rube might think their chainsaw is so powerful they don't have to worry about how they cut down the tree. It's more like sailors who think they can just lean on GPS instead of having skills; those are the guys who collide their ships and get people killed. It's more like pilots who weren't great student pilots, and who make critical mistakes: they program the autopilot into the side of a mountain, or do the wrong thing when the plane is stalling, or depend on the auto-landing system so much that they don't really know how to do a manual landing and wreck the plane. (Those are all things that really happened.)
Just because some big fraction of time being a "professional" is just being an end user doesn't mean there isn't something more beyond that which is very important. I guess it just has to do with what level of "professional" you aspire to be.
> It's like manual memory allocation and proper deallocation - been there, done that, but after 10+ years of working with GC languages, I would definitely have some memory-leak bugs if I suddenly had to do that again. It is a basic skill, but like many other basic skills, it's one you can ignore in most domains.
If you're doing something hard enough, you still have to think about stuff like that in a GC environment. I know, because I worked for a Smalltalk vendor. You can even have memory leaks in a GC environment. There's a lot of stuff you can ignore -- most of the time -- but it can come back and bite you real hard if you're not prepared for it.
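For example, here's a sketch in Python (a GC'd language) of how a "leak" can still happen; expensive_transform is a hypothetical stand-in for real work. The collector only reclaims objects that nothing references, so a cache that is never evicted grows for the life of the process:

    _cache = {}

    def expensive_transform(payload):
        # Hypothetical stand-in for real work.
        return payload * 2

    def handle_request(request_id, payload):
        if request_id not in _cache:
            # The result stays referenced by _cache forever, so the
            # GC can never reclaim it: an unbounded "leak."
            _cache[request_id] = expensive_transform(payload)
        return _cache[request_id]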
>During big refactorings, I admit I sometimes find myself relying on compiler errors as a crutch to find (for example) which API layer I haven’t added the parameter to yet.
Sure, I do the same. I wouldn't call it a 'crutch' though; it's just using the tools you have available to you.
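A sketch of that workflow (Python with type hints here, and the names are hypothetical): add a required parameter to a function, then let a type checker such as mypy point at every call site that hasn't been updated yet.

    def fetch_user(user_id: int, include_deleted: bool) -> dict:
        return {"id": user_id, "include_deleted": include_deleted}

    def api_layer() -> dict:
        # A type checker flags this call as missing the new
        # "include_deleted" argument (and at runtime it would
        # raise a TypeError).
        return fetch_user(42)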
I know you were probably exaggerating, but isn't needing so many feedback loops still the sign of a bad programmer?
I wouldn't trust someone to deliver quality software who produces multiple syntax errors and multiple runtime errors while coding, especially when modern IDEs fix typos on the fly. What about all the runtime errors he didn't test for?
Is that flow really that common?
Sure, I cause errors all the time, but my workflow includes a compile-time error maybe once in two tries and an obviously fixable runtime error maybe once in ten. Sure, there are countless edge-case errors I don't even know about, but those aren't the kind this workflow catches, and I wonder whether someone with that workflow, who misses the obvious, misses even more edge cases.
Also, haven't a lot of tech companies required (mostly) correct whiteboard coding for quite some time, exactly because of this?
The ability to do it once and have it come out right.
One coworker of mine was the son of one of the engineers of the XC-142 tiltwing aircraft. He started a project to make a functional scale model, and since this was before Arduino, we decided to use a Gumstix Linux board because of its generous number of GPIO outputs. I wrote a bit-banging implementation of the flight surface control mixing in C. It "just worked": no errors, and it ran the first time. It was even flown on a simpler aircraft.
This isn't my usual way, however. Usually, I'm quite iterative. If you're going to write a program that works the first time, it helps if the control flow is relatively simple, there isn't a lot of complexity arising from interaction with state, and the program does just one thing.
There are individuals out there with sufficient cognitive capability that they don't require much in the way of cognitive-load-mitigation tooling, even when working on very complex and technical tasks.
I had a similar experience! When I first learned programming, it was through a book I borrowed from a teacher, who would only let me access her computer once a week. So I basically wrote all the programs I wanted to write with pen and paper, and then typed them in and ran them once a week. It definitely gave me a much more thorough understanding of many algorithms, as I basically had to simulate execution on paper.
> When I was at Stanford with the AI project [in the late 1960s] one of the things we used to do every Thanksgiving is have a computer programming contest with people on research projects in the Bay area. The prize I think was a turkey.
> [John] McCarthy used to make up the problems. The one year that Knuth entered this, he won both the fastest time getting the program running and he also won the fastest execution of the algorithm. He did it on the worst system with remote batch called the Wilbur system. And he basically beat the shit out of everyone.
> And they asked him, "How could you possibly do this?" And he answered, "When I learned to program, you were lucky if you got five minutes with the machine a day. If you wanted to get the program going, it just had to be written right. So people just learned to program like it was carving stone. You sort of have to sidle up to it. That's how I learned to program."
[0] http://www.softpanorama.org/People/Knuth/index.shtml