Hacker News | new | past | comments | ask | show | jobs | submit | arakyd's comments

Yeah, he says that Lisp is too hard for the common man, and then sort of implies that this is because it doesn't use ALGOL-derived syntax. I don't think that's the issue. It's not Mort who's fallen in love with ALGOL syntax, it's the CS-graduate Elvises who cling to their expensively acquired "skills" (priesthood memberships) with religious fervor (he says, as a CS student).

I program in C#, and lambdas and delegates are nice (I can't use expression trees because the company I work for standardizes on .NET 2.0). But the system is too gnarly, still verbose as fuck, and nowhere near as nice as Lisp (as Krishnan freely admits). This just increases the need for sophisticated developer tools and makes things more obscurantist, not less. C# is just another variation on Java, i.e. a way to drag Elvis types 10% closer to Lisp without depriving them of their oh so precious ALGOL syntax.

Fuck the priesthood, seriously.


Something I've never gotten around to is building an algolish-lisp -> lisp translator.

    [a, b, c] -> '(a b c)

    f(x) -> (f x)
These two translations are strictly syntactic modifications to Lisp.

Some syntactic sugar:

    {
        f(x)
        g(x)
    } ->
    (progn
        (f x)
        (g x)
    )
With these three tweaks, we are probably 90% of the way to recruiting the common man (if syntax is really what puts them off).
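A minimal sketch of the first two rewrites (toy Python, with hypothetical names, no error handling, and only enough parsing for examples like the ones above):

```python
import re

# Tokenizer: delimiters, or any run of non-delimiter characters.
TOKEN = re.compile(r"[()\[\],]|[^\s()\[\],]+")

def tokenize(src):
    return TOKEN.findall(src)

def parse(tokens, i=0):
    """Parse one expression starting at tokens[i]; return (lisp_text, next_i)."""
    tok = tokens[i]
    if tok == "[":                                  # [a, b, c] -> '(a b c)
        items, i = parse_seq(tokens, i + 1, "]")
        return "'(" + " ".join(items) + ")", i
    if i + 1 < len(tokens) and tokens[i + 1] == "(":  # f(x, y) -> (f x y)
        args, j = parse_seq(tokens, i + 2, ")")
        return "(" + " ".join([tok] + args) + ")", j
    return tok, i + 1                               # bare atom

def parse_seq(tokens, i, closer):
    """Parse comma-separated expressions until `closer`; return (items, next_i)."""
    items = []
    while tokens[i] != closer:
        if tokens[i] == ",":
            i += 1
            continue
        item, i = parse(tokens, i)
        items.append(item)
    return items, i + 1                             # skip the closer

def algol_to_lisp(src):
    expr, _ = parse(tokenize(src))
    return expr
```

Nesting falls out of the recursion, e.g. `algol_to_lisp("f(g(x), [1, 2])")` gives `(f (g x) '(1 2))`. The `progn` sugar for `{ ... }` blocks would be one more case in `parse`.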


You can do these within PLT scheme quite easily. In fact, PLT Scheme comes with an Algol 60 reader that reads Algol 60 and turns it into scheme forms which are then executed. It's a cute trick.

http://docs.plt-scheme.org/algol60/index.html


This looks great for trivial code, but what about more complex code?

    destructuring-bind([a b &rest c] some-list lambda([x] +(a b x)))
or

    loop(for i from 1 to 10 collect 1+(i))
or

    with-current-buffer(get-buffer-create("foo")
      save-excursion(
        insert(text)
        buffer-substring-no-properties(point-min() point-max())))
This looks horrible. There is an advantage in using the same syntax for both lists and function application.


I've done things like this in Scheme - not robustly, but it's worth pursuing IMHO. I can't stand math formulas in SEXP notation. Array/slice/hash lookup notation would be nice too.

Lisp gets it right by excluding sugar from the language kernel, but a "batteries-included Lisp" should include it in a standard library.


I tried this with Arc (it worked with Anarki, based on arc2):

http://arclanguage.org/item?id=8172

The syntax is inspired by McCarthy's 1958 paper. It works, but I don't find it useful.


I never did understand this aversion to anything not ALGOL-derived... I personally like Lisp's syntax (fuck off, even though it looks different and uses "s-expressions" and code-as-data and all the rest - IT'S STILL SYNTAX) and I like point-free syntax and Forth-like syntax, and Prolog intrigues me, and ML has interesting syntax too, and ...

My point is: IT'S JUST SYNTAX! Syntax doesn't define the language, just the look. I do like a nice, clean, concise syntax though.


Syntax does define the language. It's very difficult to write macros for an ALGOL-style language - possible, but very difficult. Some people are put off by Common Lisp's LOOP macro or the FORMAT function and will try to avoid using their more complex features. Those are small examples of how the syntax encourages, or discourages, the use of language features.

Hell, I even hate doing any complex shell scripting because I can never remember the difference between [ ] and [[ ]] and ( ) when using conditionals.
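For what it's worth, my own cheat sheet for the bash cases (a rough summary, not exhaustive):

```shell
#!/usr/bin/env bash
# [ ] is the POSIX `test` builtin: word splitting applies, so quote variables.
x="hello world"
if [ "$x" = "hello world" ]; then echo "single-bracket ok"; fi

# [[ ]] is a bash keyword: no word splitting inside, and it supports
# glob pattern matching (==) and regex matching (=~).
if [[ $x == hello* ]]; then echo "double-bracket ok"; fi

# ( ) runs commands in a subshell; (( )) is arithmetic evaluation.
if (( 2 + 2 == 4 )); then echo "arithmetic ok"; fi
```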


In this case you're better off just reading Norvig's original presentation.


The thing about OOP is that it is described as a way to make programming easier because objects are so "intuitive," and the examples for this are all toy simulations with naive object "models." What the post describes, and what everyone who actually does a significant amount of OOP programming eventually figures out, is that the only things OOP makes intuitive are stupid approaches. It's possible to program well in OOP languages, but there's nothing intuitive about doing anything non-trivial well, in any language.

I blame Alan Kay for inventing a new paradigm and then not explaining how it was supposed to be used (it's like Lisp meets biology!), thus allowing it to be taken over by others who filled the vacuum with a multitude of their own (usually half-baked) ideas, bastardized implementations, and "software architect" positions.


I think you're right that there's a serious problem with how OOP is described. Why that's so is a full topic in and of itself.

In the case of Alan Kay, I can't help but feel like he was trying to create one thing but actually created another.


"Smalltalk is object-oriented, but it should have been message oriented." -- Alan Kay

http://folklore.org/StoryView.py?project=Macintosh&story...


My favorite book of this type is Thinking Forth, available online at http://thinking-forth.sourceforge.net/. You will get a large dose of the Forth mindset, but it's mostly higher-level design tips and not as specific to Forth as, for example, How to Design Programs (read: Functional Algorithms in Scheme) is to Scheme. It's a fun read, and a great way to get a perspective on design that is different from what you get from the usual object-oriented or functional-language-centric sources.


Is that Charley the Charley Shattuck mentioned in the 7/21 entry at http://www.ultratechnology.com/blog.htm? Looks like he's still working with Chuck. Lucky guy.

The best thing about Forth is that it's the distilled result of decades of iteration on a complete programming environment by a great programmer. I don't know of any other examples of this. The worst thing about Forth is that it will probably die with Chuck and be mostly lost.


Well, there's ColorForth, the environment Chuck uses, which I think is used by Chuck and another person or two. That really might be lost.

Then there's Forth, the huge family of languages, including an ANSI standard. There's no risk that it will be lost within the next few centuries, unless the human race disappears.


Chuck Moore is one of very few people on the planet who can say that he built his whole platform, chips included, from the ground up. There ought to be a statue somewhere in honor of the guy's achievements.

I very much hope that somebody will get to document his legacy before it gets lost. That would be a worthwhile undertaking.

I'd gladly volunteer for the job :)


Well, Jeff Fox has been documenting it for a while.

Can you think of any other people who have done the same thing? Chuck surely isn't the only person ever to have designed his own CPU, but I don't know of any others who have done both the hardware and the software.

I admire what he's done but I'm still not sure whether the work is actually important in some objective sense. It certainly appeals to the Robinson Crusoe in me, but I'm not sure if it's actually useful, just because the level of compatibility is so low.


It's amazingly useful! It teaches you more about computing than anything else ever will.

There was a bit on here about a guy who built his own computer from the ground up using TTL-level logic, but I'm not aware of anybody who did his own chips, including the routing.

Is it important to others? Yes, again I think it is:

- a series of amazingly performant chips came out of that (Novix, ShBoom, and now the a series)

- lots of people got to see that it was possible which is inspiring (it certainly is for me)

- the fact that the compatibility is so low is actually a boon: it means you get to think in ways off the beaten path, which is a good thing. It gets people to think outside the box for a bit. That's always a great way to widen your perspective permanently.

Even today, there isn't a cell phone that doesn't have a Forth interpreter stashed away in its innards! (I can find you a reference for that if you want.)


Well, to me "educational" and "useful" are kind of orthogonal. I've made lots of things that were highly educational to make, often because they forced me to think "off the beaten path", but that doesn't make the things themselves useful. In fact, most of them were kind of crap. So I'm mostly thinking about the "important to others" aspect.

The Novix NC4000 and ShBoom came out at a time when compatibility was a lot less important. I agree that they were important and useful to others! Similarly Forth: it didn't start out compatible with anything else except for some computers, but now it runs on any chip, under any OS, and can control all kinds of peripherals. Which is why every cellphone has some Forth in it. (I don't know specifics but they have enough different processors that I'm sure you must be right.)

I'm not so sure about the c18 line (what's the "a series"?) because it seems like they're competing more with FPGAs than microcontrollers, but they don't have the tool support that FPGAs have.

When the ShBoom came out more than 20 years ago, the Verilog and VHDL toolchains were very limited. Now you can download all kinds of crazy stuff off OpenCores, synthesize it, and put it on your FPGA.

Similarly the available C was very limited, and it might take some work to get it to compile for whatever kind of random no-name manufacturer minicomputer you had, or if you had a microcomputer, it might take you a lot of work to find out that it just wasn't going to fit. And of course the available open-source code was pretty limited. Today you can be pretty sure that there's code out there that solves a big chunk of your problem, you can compile it for your computer pretty easily (unless it's a microcontroller or a C18 or an FPGA) and it will fit.

So I think it's possible that the c18 series will turn out to be "amazingly performant" in practice for a wide range of applications, just as FPGAs have the potential to be "amazingly performant". But it's far from guaranteed. You can't just put a bunch of FPGAs (or 40c18s or GA-4s) on a board and be done. After that you have to build software for it.

And I'm pretty sure that Chuck is the only one who's going to use the 40c18 for ASIC simulation.

It might turn out that it's so much easier to build software for a 40c18, or that the GA-4 is so much cheaper than a Spartan-II, that the tooling and compatibility differences will turn out not to matter. But they are real problems. Saying that it's "actually a boon" that you can't run any existing software on the chip is kind of dumb.


No, he was a German guy living in the Netherlands. Originally he worked for 'NikHef', the nuclear physics lab in Amsterdam; then he founded his own company and I was one of their coders (originally a co-founder, but there were some 'issues' so I declined).


Second the idea that a lot of statisticians don't know what they're doing. The first mistake is to make things easier by glossing over, or mangling, the mathematical details. Many statistics textbooks do this, and the only real protection is to have a good math background. The second mistake is to treat statistics like a bunch of techniques to be learned and applied with little regard for the philosophical problems inherent in every attempt to model the real world.

The Cartoon Guide to Statistics is an excellent way to go from zero to a good overview of the basics with a minimum of hard math. After that, if you're mostly interested in applying basic techniques to your own stuff, you want a good undergrad textbook. I don't have any good recommendations here, unfortunately. If you have a good math background (or are motivated to get it) and you want to keep going, Statistical Models: Theory and Practice by David A. Freedman (http://www.amazon.com/review/R2XUNM92KYU7BB) has the math, the philosophy, the hands-on analysis of studies, and the exercises to put you in a better position to evaluate statistical research than some people who produce it.


There's something a little strange about the idea that a company/product becomes real when it appears in a movie (or in the research/advertising for a movie). Isn't it the other way around? Cool clip though.


You've never read Walker Percy's The Moviegoer. He writes eloquently about how we seem to believe that the only truly real people are those we see on the big screen and if by chance we come into contact with one, we often feel more alive...


If we are living in a computer simulation, that may be true.


An alternative explanation for feeling more alive upon having contact with someone from the big screen is that we perceive such people as demigods.


It's not a fundamental difference, it's a consequence of the fact that hardware people are at the bottom of a very big stack and have a massive financial incentive to be as solid and predictable as possible. Higher up the stack everyone prefers to use relatively cheap programmers and build stuff quickly.

The problem is not having to deal with software APIs, the problem is the sheer size of the stack and the sheer number of accumulated assumptions that are built into it. Moving more pieces into hardware might improve the stack's overall integrity and reduce bugs, but it won't do much to reduce the size.

The real issue, IMHO, is that no one wants to admit that the general reuse problem is hideously, horrifyingly difficult. The biggest problems it causes are diffuse and long term, and in the short term everyone can do things faster by hacking together their old code with someone else's 95% solution library, so that's what everyone does. Putting enough thought into each new application to really do it right tends to be hard to justify on a business level, and most programmers have neither the inclination nor the skill to do it anyway. It's so ingrained that even people who are frustrated with the way things are think that a different operating system or language would solve the problem. It wouldn't - it would only start the process again, with at best a percentage reduction in stack size and corresponding percentage increase in time to frustration. I think it boils down to the fact that code reuse is basically a 2^n problem, and the bigger and more opaque the stack gets the harder it is to cheat that 2^n.

The only potential solution I've seen is what Chuck Moore is doing with Forth chips. He's now at the point where he can design and manufacture chips that are cheap and simple in design but are very good at running Forth very quickly. Of course the tradeoffs are (perhaps necessarily) as horrifying as the reuse problem in that it demands a lot more from programmers in general, and in particular requires them to learn a radically different way of doing things than they are used to while at the same time strongly discouraging reuse at the binary and source levels. In other words, he's spent decades designing a small core language and hardware to run it, and that's really all you should be reusing (along with data transfer protocols). Needless to say, no desktop or web or server programmer (or said programmer's boss or company) is ever going to go for this unless problems with reuse become far worse than they are now. (Even then the supply of cheap programmers might grow fast enough to keep masking the problem for a long time.) Most programmers are not very good, managers like it that way, and most of the smarter programmers are nibbling around the edges or looking for a silver bullet.

In short, there are no easy solutions. If you don't like the direction software is going, think about becoming an embedded systems programmer.


It's not "different strokes," it's a change of perspective causing massive confusion all 'round. Computer science isn't a knock-off approximation of mathematics, it's a subfield of mathematics. Programming, which is what the post is talking about, is a third thing. The derivative is the inverse of the indefinite integral (and has the same arity), but the function he refers to computes a definite integral, which is not the same thing. Etc.
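The distinction is easy to see numerically. A rough sketch (toy code, crude Riemann sum, not a real quadrature routine): differentiating the indefinite integral F(x) = ∫₀ˣ f(t) dt recovers f, while a definite integral is just a number.

```python
def f(t):
    return 3 * t * t            # f(t) = 3t^2, so F(x) = x^3 exactly

def F(x, n=100_000):            # left Riemann sum approximating the integral
    h = x / n                   # of f from 0 to x (the indefinite integral,
    return sum(f(i * h) * h for i in range(n))  # viewed as a function of x)

definite = F(1.0)               # integral of 3t^2 from 0 to 1 = 1: a number

eps = 1e-4                      # central difference: d/dx F at x=2 ~ f(2) = 12
deriv = (F(2.0 + eps) - F(2.0 - eps)) / (2 * eps)
```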

Confusion about this stuff is a sign that you didn't really understand it the first time. Believe me, I've been (am still) there...


There is even more confusion out there. Think of abstract algebraic structures.

Who needs groups? No one. Until one realizes that certain simple equation transformation rules are not based on natural numbers, or real numbers, but on groups. Once something like that clicks in your brain, you can suddenly solve weird equations with sets and symmetric differences, or bitstrings and xor's, or other pretty awkward things which looked really scary earlier.

Who needs rings and semirings? No one, until one realizes that a certain algorithm requires a structure... and this structure is a semiring! Thus, if you can prove that two operations and a set form a semiring, you can apply this algorithm without any effort, because it will just work! :)

Or, even more: who needs the theory of catamorphisms, anamorphisms, and such? No one. Until one realizes how beautiful recursive data structures are and how easy it is to program with them once you understand the idea behind them (check http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1... for a nice paper about this)

Who needs integration (besides those numerical people)? No one, until one realizes that any simulation, and most iterations, are discrete integrations over very, very awkward and convoluted algebraic structures.

Ah, by now I somehow wish I had studied math before studying computer science.
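The xor example above is worth spelling out. Bitstrings under XOR form a group: XOR is associative, 0 is the identity, and every element is its own inverse (x ^ x == 0). So "solve a ^ x == b for x" works exactly like solving a + x == b over the integers: apply a's inverse to both sides. A toy sketch:

```python
# XOR group over bitstrings: identity 0, and each element is its own inverse.
a, b = 0b1011, 0b0110

# a ^ x == b  =>  x == a^{-1} ^ b  ==  a ^ b   (since a is its own inverse)
x = a ^ b

assert a ^ x == b   # the group axioms guarantee this for any a, b
```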


Regarding where CS stands in relation to math: http://news.ycombinator.com/item?id=690798


Yeah, I tend to say "computer science" when I mean "theoretical computer science." To me there is a clear split between the stuff that's math/logic and the rest of it which I'm happy to lump under the catch-all of engineering (and don't feel comfortable labeling as any sort of science).


I'm not comfortable labeling the concepts behind operating systems, networks, computer architecture and programming languages as either math or engineering.


The "nice little tear-down" is in the book review mentioned, but not linked, by the parent (http://scienceblogs.com/pharyngula/2009/07/unscientific_amer...). Apparently these guys aren't very fond of PZ, and he isn't very fond of them either.


Should've linked that one first!

