I ran into loop invariants in my CS undergrad, where they were teaching "formal logic" to prove the correctness of programs. At the time it was a bit above my head and I struggled with it. I'm only now beginning to understand what it was we were studying back then. Cool stuff; I wish I could go back and re-learn it. Also, google TLA+ - there's a book on it by Leslie Lamport.
I have an anecdote that you might find interesting.
Many moons ago, when I was an undergrad, there was one class that I hated. I got practically nothing out of it, and the reason was that the professor had prepared all the course material in PowerPoint - every damn proof, every single block of code, everything. He would come in and read the deck, line by line, in class. I can trace my hatred of PowerPoint to that single class. I remember thinking, "I can read, damnit. If you're going to read the PowerPoint back to me, why don't you email it to me so I can read it on my own? I am not a toddler that needs to be read to by an adult."
Prepare ahead of time - by all means. Plan the coursework, the lecture. But whatever you do, don't read your prepared material to the class.
Incidentally, I was also taking a class on public speaking at the same time, and near the end of the semester we each had to give a speech as our final "project". One gent stood out - Sean (I don't remember his last name). He put up a single poster and gave a 20-minute speech that I remember to this day, some 20 years later. He didn't "read down" to us with a wall of prepared material.
Since then I have sat through countless presentations and meetings, and I can't recall any of them as vividly as I remember Sean's.
P.S.: Sean, if you find this, you convinced me that day that BMWs are the ultimate driving machines.
I use it for almost everything. Take ETL-type tasks. It's almost impossible to remember where I downloaded certain data files six months after initially creating a project, so I put that into the Makefile. Everything I do on the command line, I put into a Makefile. That way, six months later, when I need to download new data, do some transformation on that data, and load it into the database, 'make' retraces my steps from six months ago. Broadly, my ETL makefiles look like this:
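A sketch of that download/transform/load shape (the URL, file names, and table name here are hypothetical placeholders, not the ones I actually use):

```makefile
# Hypothetical example - substitute your own URL, files, and table.
DATA_URL = https://example.com/exports/sales.csv.gz

# Step 1: download the raw data (the Makefile remembers the URL for me)
raw/sales.csv.gz:
	mkdir -p raw
	curl -L -o $@ $(DATA_URL)

# Step 2: transform it (here: strip the CSV header)
clean/sales.csv: raw/sales.csv.gz
	mkdir -p clean
	gunzip -c $< | tail -n +2 > $@

# Step 3: load it into the database
load: clean/sales.csv
	psql mydb -c "\copy sales FROM 'clean/sales.csv' CSV"

.PHONY: load
```

Running `make load` re-runs only the steps whose outputs are missing or older than their inputs, which is exactly the "retrace my steps" behavior.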
My first computer, in 1977, was one that booted up into BASIC - the Commodore PET (Personal Electronic Transactor).
It had 8k of memory, and I paid a whopping $400 for the 32k memory expansion module. The original PET cost me $800 used.
My first game was a horse racing / betting game, reflecting my Mom and Dad's penchant for the ponies (OTB, you owe me big time ;)
IIRC, I could only use the graphics characters if all the letters were uppercase. My game had 3 horses per race, running from left to right; each turn, a random number determined which horse moved and how many spaces.
I generated a random number from 0 to 3 for each of the horses with something like:
100 FOR R = 1 TO 3
110 X(R) = INT(4 * RND(1))
120 PRINT SPC(X(R));
130 NEXT R
You could bet based on fixed odds, and the payoff would show at the end of the race (bet * odds).
My most memorable joy was watching my Mom and Dad rooting at the 9-inch monochrome green screen! I was hooked on coding, but only at home; I rarely coded for a living.
It booted up into PET BASIC, and aside from some PEEK/POKE limitations, you could access all of it. People hooked up joysticks later to the user port, and hacked speakers or buzzers for sound.
I loved the Datasette (cassette tape drive) for storage! You had to put the tape in the drive, instruct BASIC to LOAD "PROG", and then it would prompt you to hit 'PLAY' on the Datasette. I think you then typed RUN "PROG" when it finished loading.
I would go to the store in NYC where I bought it, and they had maybe 4 or 6 plastic bags with cassette tapes in them and a single sheet or a few sheets of instructions. I wanted FORTRAN or APL, but APL was not available on my PET.
I would love a real LISP Machine, even an historical one for the pleasure of it really being 'turtles all the way down'! I love Lisp more than BASIC, but PET BASIC will always have a special place in my heart, and in the cobwebs of my mind.
I saw a Commodore PET at that time in a local store. In school I then got access to a Commodore CBM 3032, which was a more robust model of the PET. It had a better keyboard, for example. Booted into BASIC, too.
They were beasts. I left mine at my parents' house and was hoping to reclaim it, but my cousin John had borrowed it and sold it years later. I should have kept tabs on it!
When I look at the video of Kalman Reti running a Symbolics Lisp Machine in an emulator [1], I am still blown away by how much more sophisticated and aesthetically pleasing it was compared to my PET or the Apple II. The difference seems almost anachronistic, like time-traveler tech.
How you could just drop into any system library, or even the kernel, and use the same language, Lisp, to modify anything live is astounding. A lot of people dismiss any talk of how great Lisp Machines were as a bunch of nostalgia, but I don't think any of them have watched even 15 minutes of somebody operating in that environment. You can't look at it and keep a straight face while talking about how great the Apple II was - or, later on, the Lisa for that matter.
The demo by Kalman is pretty cool. Over a period of a decade there were many high-end applications for these machines. The base system could cost from a few tens of thousands of dollars up to $250,000 for the full setup used in the TV and broadcast industry: machine, console, software, color screen, huge amounts of memory, large disks, video tape recorder, graphics co-processor, graphics tablet, ... That's a far cry from the home computers from Commodore.
If you've been able to keep up with others in software development solely on the know-how you've picked up on your own, then great. My advice: take as many non-CS classes as possible. After you graduate, you'll spend all of your time in CS, constantly learning, but you'll rarely get a chance to learn anything else. College is where you do it.
Really, I see college as a way to learn how to learn. You've done that with CS, it seems. Now learn how to learn in other areas. You'll probably never get another chance.
During college I hated having to take any non CS class. In hindsight, I'm really glad I had to.
> To begin with you have to have a single target for each file produced.
Try this next time (only the pertinent lines are included):
SOURCES=$(wildcard $(SRCDIR)/*.erl)
OBJECTS=$(addprefix $(OBJDIR)/, $(notdir $(SOURCES:.erl=.beam)))
DEPS = $(addprefix $(DEPDIR)/, $(notdir $(SOURCES:.erl=.Pbeam))) $(addprefix $(DEPDIR)/, $(notdir $(TEMPLATES:.dtl=.Pbeam)))
-include $(DEPS)
# define a pattern rule for .erl -> .beam
$(OBJDIR)/%.beam: $(SRCDIR)/%.erl | $(OBJDIR)
	$(ERLC) $(ERLCFLAGS) -o $(OBJDIR) $<
# see this: http://www.gnu.org/software/make/manual/html_node/Pattern-Match.html
$(DEPDIR)/%.Pbeam: $(SRCDIR)/%.erl | $(DEPDIR)
	$(ERLC) -MF $@ -MT $(OBJDIR)/$*.beam $(ERLCFLAGS) $<
# the | pipe operator defines an order-only prerequisite, meaning
# that the $(OBJDIR) target only has to exist (rather than be more
# recent) for the current target to build
$(OBJECTS): | $(OBJDIR)
$(OBJDIR):
	test -d $(OBJDIR) || mkdir $(OBJDIR)
$(DEPDIR):
	test -d $(DEPDIR) || mkdir $(DEPDIR)
I've been using a makefile about 40 lines long, and I've never needed to update it as I've added source files. The same makefile (with minor tweaks) works across Erlang, C++, ErlyDTL and other compile-time templates, and what have you. It also does automagic dependencies very nicely.
> Generating all the targets to get around this is a nightmare that results in unreadable debug messages and horribly unpredictable call paths.
If you think of Makefiles as a series of call paths, you're going to have a bad time. It's a dependency graph. You define rules for going from one node to the next and let Make figure out how to walk the graph.
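To illustrate, here's a toy graph (the file names are hypothetical): you declare the edges, and Make does the walking.

```makefile
# app is built from foo.o and bar.o; each .o is built from its .c.
# Each rule declares an edge in the dependency graph. Make walks the
# graph from the target you ask for and rebuilds only stale nodes.
app: foo.o bar.o
	cc -o $@ $^

%.o: %.c
	cc -c -o $@ $<
```

Ask for `app` and Make visits foo.o and bar.o first; if only foo.c changed, only foo.o and app get rebuilt. There's no "call path" to trace - just nodes that are up to date or not.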