Pain we forgot (lighttable.com)
364 points by Morgawr on May 17, 2014 | 153 comments



What a wonderful article.

I'm not allowed to forget a lot of this pain. I teach programming, so I see it anew every semester.

And of course I still experience much of it in my own work. So, yes, let's deal with these issues better.

One little disagreement. In the "What?" and "Why?" sections the writer presents some ideas for debuggers. While these are good ideas, I prefer to think of the inadequacies of existing debuggers as motivation for good practices: modularity, loose coupling, unit testing, etc. Certainly, it would be nice (say) to be able to examine the entire state of a running program easily. But I would rather code in such a way that I do not need to.

So to those who would write the World's Greatest Debugger, I say, "Good for you." But even better would be to turn your efforts to producing languages, libraries, frameworks, and programming environments that make such a debugger superfluous.


> I would rather program in such a way that I do not need (say) to examine the entire state of my code.

But if you could examine the entire state, why wouldn't you? People build correct intuitions by comparing their mental model to reality. If you don't ever see reality, or only peek at it through sprinkled print statements, then it becomes harder to build an accurate mental model. This applies whether we are talking about a beginner trying to understand how pointers work or an expert grappling with a complex system.

Programming is always limited by our ability to comprehend complexity. So enabling simpler systems is a win, sure, but so is getting better at understanding things.


> But if you could examine the entire state, why wouldn't you?

Well, I would, of course. As I said, I think his debugger ideas are good ones.

OTOH, there is a limited amount of effort available for improving programming, and I would prefer that such effort be directed toward other ends.

Part of the problem is that, in my experience, debuggers tend to encourage sloppy thinking. And the better the debugger is, the sloppier is the result. Using a debugger, together with thinking hard about a problem, can be a powerful combination. But all too often, I see debuggers used as substitutes for thinking hard; the way debuggers are designed seems to encourage this.

To put it in the terms you used, constructing a mental model and comparing it with what the debugger shows is a great idea. But in practice, I'm faced with a choice: work on my mental model now or fire up the debugger now. The latter is less taxing, and is often chosen; this is likely harmful in the long run.


> But all too often, I see debuggers used as substitutes for thinking hard; the way debuggers are designed seems to encourage this.

I wish there was a device that would help me think less hard...oh wait, there is! It's called a computer!

The lack of a decent debugging experience in languages like Haskell mandates deep thinking upfront along with powerful bondage-style type systems. I can imagine that a language with a more advanced debugger would push things the other way.


Maybe the "lack of a decent debugging experience" for Haskell is a consequence of there not being much need for one[1]? Have you actually used Haskell for any project of a meaningful-enough size to form an opinon? You sling out these accusations and hyperbole, but you don't seem to back it up with any practical examples.

[1] I couldn't entirely parse your sentence as posted, so please forgive me if I misunderstood it.


This is by design of Haskell, and you aren't arguing with me (beyond being offended by my hyperbole). My trouble with using Haskell on small projects means I would probably never go near it on large projects; it prefers people who "think to program" while I'm more of a "program to think" type.


What were your problems using it on small projects? If it wasn't obvious, "large projects" is a proxy for "have you actually tried to use it at any practical level?". FWIW, I had a lot of difficulty with Haskell at the start; "easy to learn" doesn't necessarily translate into "a lot of power", and I found that Haskell gave me incredible power at the cost of a steep learning curve. It's not for nothing that they call it the best imperative programming language.

Oh, and don't worry, I would never pull the I'm offended by X card! I don't get offended easily, and I don't think you do either ;). This is all friendly argumentativeness (is that a word?) as far as I'm concerned, I hope you feel the same.


I worked on lots of small Haskell projects as a student, wrote OCaml at Jane St and spent the past few years working in Clojure full time. I definitely felt the need for a debugger in every one of those languages.

Equational reasoning is useful but it is still reasoning that I have to do. The whole point of having a computer is for it to do some of the thinking for me.


I think there is a certain kind of person naturally attracted to Haskell. I once attended a working group meeting [1] with many of the top Haskell people (e.g. SPJ), and I came to the realization that they were super smart and their brains were just wired very differently from mine. But I also thought a lot of effort was spent on solving problems elegantly in Haskell that were otherwise very easy to solve in other programming environments.

It is not the learning curve that gets me; I know many languages already, some of them with very steep curves. It is the thought processes that the language promotes, the order in which it forces me to write my programs; the flow doesn't suit me well. I want the freedom to work at my own rhythm.

[1] http://www.cs.ox.ac.uk/ralf.hinze/WG2.8/


>bondage-style type systems

For the rest of you, when somebody describes Haskell in this manner you can be assured of a few things.

One, they don't know Haskell better than extremely superficially. Two, they don't understand parametricity and haven't read the papers. Three, they don't know what it's like to do the same kind of work (day-to-day programming) in a dependent type system.


I personally call Haskell's type system "paranoid". Yet I have no difficulty asserting that it's way more flexible than C++'s or even Ocaml's (which makes you route around the value restriction, and the lack of type classes).

I love paranoid type systems: they debug my thinking up front.


> haven't read the papers

I've never needed to read a paper to understand any other language. If I need one to learn Haskell, that's an immediate downside.


You don't need to read any papers to use Haskell, but if you're going to explain the type system, you need to understand how it actually works.


Some fine ad hominems; claiming I don't get ad hoc polymorphism? How dare you :).

Haskell isn't dependently typed, at least not yet.


The deeper point is that you don't yet understand parametricity, something Scala managed to blow right past.


I totally agree. It's true even for us, experienced Web guys (and gals), in this mess ruled by packages and their managers.

For example, I am trying to make my first Rails app outside a tutorial.

Okay, I want to use Bootstrap. Should I use a gem? Guess that's the Ruby way. Okay, seems like there are a few of them. Tried one, another one. Doesn't work. Don't know why, since I am just copying half-cryptic stuff because I am new to Rails.

A friend suggests using Bower. It's easier. Right! I had totally forgotten about Bower, it rocks! Google: bower rails. Okay, there is that thing sprockets which I apparently need to configure. Googled, a few blog posts opened, they each offer slightly different advice, let's try. Doesn't work. Nope. Not really... Google: bower rails bootstrap sprockets. And yay! Works.

Two hours later, I have included Bootstrap. Properly.


Download the Bootstrap CSS, put it into app/assets/bootstrap. Should be all that's needed.


Yeah, it would have worked. But I am too lazy (among real disadvantages) to download all the libraries manually :)

Real disadvantages, such as updating becoming so boring that you will probably never do it. And 'bower search <something>' followed by 'bower install <found it!>' is less distracting than googling and unzipping etc.

Being lazy pays off.


If you're a programmer and you can't even bother to download some files and unzip them, then...

Laziness pays off when you innovate in the hope of automating something, not when you're just actually completely lazy.


And repeat for every update of Bootstrap? Why?


I'll delete this if you don't like it, but I think this is why non-engineers like Steve Jobs, Jony Ive, and a person at HP I won't name were able to make remarkably good consumer products without knowing how they work. It may be better for a non-engineer to design a laptop than for someone who actually knows how it is put together and exactly how it works.

Of course, in a narrow sense, this means such a person isn't really designing it at all: the real engineers are, which may cause resentment. There is a very good chance that a non-programmer can design a programming IDE that is two or three orders of magnitude better (by whatever standard) than the status quo, even though such a person can't actually implement any part of it, or even know exactly what it's doing.

Quite a surprising conclusion.

By the way, I have experienced this myself when designing for a target I didn't know yet: after/while I was learning it, the resulting design iteration process was much worse than when I didn't know the implementation details. It's harder to think from the user's perspective after you have been forced to think from the implementation's perspective.


I'm starting to agree with this. If I was working on this alone I would have arrived at something just as complicated and baroque as what came before it. It was only when we started putting our prototypes in front of real users that I started to realise just how much of what I do every day is completely unrelated to the problem at hand. Having that constant feedback is really vital to challenging existing preconceptions about what programming should look like.


I don't think ignorance of the implementation domain is quite the key here, but rather outside-in vs inside-out thinking. Nearly all engineers solve problems from the inside out, in my experience.


It's not just that though. Something may be impossible to do in the application, except with knowledge of x, but trivial for someone who knows x. If you know x you might totally forget that without this knowledge the task is basically impossible for the user.

So if building the application engenders learning x, then you might totally miss this design problem, unless you step away long enough to forget x again yourself - and that may never happen.

It is a bit too much to ask for one person to learn all of the details and totally own the implementation space, and yet be able to instantly forget all those details and go back to when they didn't know anything, to look at things from the user's perspective.


Personally, I think this is an extremely important topic; we need to change the way we program. It isn't the 70s anymore, so why do we still program like it is?

Why does it take teams of developers to create and manage applications which do really simple tasks in the grand scheme of things? (I realize the amount of complexity in building applications is staggering, but large portions of it could be better automated.) Where are the auto-generated GUIs, where is the ability to ship execution control to arbitrary devices, where is a hypermedia layer with independent view and presentation code?

I'm approaching this from a different angle than the Light Table guys appear to be (I agree with everything they are saying). My angle is an attempt to build a cross-platform module system (where platform includes runtime and programming language as well as operating system and architecture): https://github.com/OffByOneStudios/massive-dangerzone

My argument is: before we can build the next generation of useful tools, we need a framework for managing both generations. massive-dangerzone is an attempt at bringing next generation usage (like that described in the article) to existing tools. It's still a big work in progress though, and is barely useful at the moment.


> I want to just type 'email' and see a list of functions and libraries relating to email.

Do people even remember what life was like before Google?

I type 'email python' and get back a link to an email module. Am I being closed-minded in thinking that can't get much easier?

Yeah, I need to understand a bit about how email works, smtp and imap/pop and what not, and how to send vs receive email, but some level of understanding is just necessary.


I have a similar experience with Go in my spare time projects (very refreshing coming from a FORTRAN/C++ day job) which is a boon, but falls short of what the article describes. The way I interpreted it, even having to do the search in the first place is a problem to be solved. Being able to start typing "email" and have a context menu with functions available from a list of third-party libraries relating to email would be the goal. That way when you select "SendEmail()" in your code, it automatically pulls the library, installs the binary where it needs to be located (or pulls source and builds) without interrupting your workflow.
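To make the gap concrete, today that step looks roughly like the sketch below, using Go's standard net/smtp package (the server address, credentials and recipients are made up for illustration). Everything in it, finding the package, knowing about SMTP auth, formatting the headers, is the plumbing an imagined "SendEmail()" completion would wire up for you:

  package main

  import (
    "log"
    "net/smtp"
  )

  func main() {
    // Hypothetical SMTP server and credentials, purely for illustration.
    auth := smtp.PlainAuth("", "anon@example.com", "password", "smtp.example.com")

    // Headers, a blank line, then the body.
    msg := []byte("To: team@example.com\r\n" +
      "Subject: lunch_app\r\n" +
      "\r\n" +
      "Lunch is at noon.\r\n")

    err := smtp.SendMail("smtp.example.com:587", auth, "anon@example.com",
      []string{"team@example.com"}, msg)
    if err != nil {
      log.Fatal(err)
    }
  }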

With package management systems standardized within a language (such as Go) the problem domain is reduced, but extending the solution across multiple languages sounds more like another third-party tool than a language feature.


I actually don't see much of a problem with doing a web search for the available libraries. Maybe it would save me a few keystrokes to type "email" and be presented with some function signatures from different libraries, but I (and I'm sure most others) have a few heuristics for choosing a library that go beyond the SendEmail function signature.

For example, if I see that a repo hasn't been touched in 3 years, I'm probably not going to pull it into my project. If the setup required to use a library looks clunky, I'm likely to avoid it. If the documentation is dense and difficult to parse, I'd much prefer an alternative.

These are the problems that I don't see an automatic package-fetching IDE solving anytime soon. Not because it's beyond our capacity as developers to solve such problems, but because I doubt that the number of hours it would take to develop such a solution (hundreds, easily) would be justified by the 10 minutes it might save a developer looking for a library. Also, it will be quite a while before the general faith in an IDE's ability to pick a good library for us is high enough that we'd trust it with such a responsibility. A mistake would easily cost more than the 10 minutes it saved, and given that computers can't even pick the right music for me with 100% accuracy, I don't see something like this being a reality anytime soon; and I'm fine with that. I'd rather we focus on solving real problems.


You mean something like Bing Code Search but with NuGet support? That would be cool.

http://techcrunch.com/2014/02/17/microsoft-launches-smart-vi...

http://visualstudiogallery.msdn.microsoft.com/a1166718-a2d9-...


Yes, exactly like that. I can't wait to see how Bing Code Search evolves as it starts to learn from its users.


> With package management systems standardized within a language (such as Go) the problem domain is reduced

I postulate the problem domain changes. I don't have experience with Go's package management system or NuGet, but I do with PHP's Packagist (Composer packages).

Finding packages is easy; evaluating their quality, whether they do what they claim, and how buggy they are... not so much. The problem's shifted to become one of too many libraries for any task, most of which are a liability in any project.


> I type 'email python' and get back a link to an email module. Am I being closed-minded in thinking that can't get much easier?

It certainly could be streamlined. To quote the preceding paragraph:

> For common tasks google will probably find you entire code samples or at the very least some javadocs. The samples will be missing lots of implicit information such as how to install the necessary libraries and how to deal with missing dependencies and version conflicts. Transcribing and modifying the examples may lead to bugs that suck up time. It's not terrible, mostly thanks to sites like stackoverflow, but it's still a lot of unnecessary distractions from the task at hand.


If you need help initialising a concept, then add some 'initialising' words to the search. "Set up email python" or "how to email python", rather than "email python" or somesuch. Don't pollute every single computer concept search with a how-to-set-this-up-from-scratch-as-a-newbie guide. I often throw in "how to" and it works pretty well.


> I want to just type 'email' and see a list of functions and libraries relating to email.

cough smalltalk, emacs, etc. cough :)


Oh man, I've definitely felt this pain many times lol:

"The samples will be missing lots of implicit information such as how to install the necessary libraries and how to deal with missing dependencies and version conflicts. Transcribing and modifying the examples may lead to bugs that suck up time. It's not terrible, mostly thanks to sites like stackoverflow, but it's still a lot of unnecessary distractions from the task at hand."

So many times, the actual programming isn't tough, it's just getting all of the stuff around it set up that is hard.


This is what I have mostly with Rails; when I started working with it I was good at Ruby but hadn't used Rails, and for a beginner it's horrible. There are tons of options for everything, everything is badly documented, and most things scratch an itch, so when you use them, some of the completely logical things you expect are missing. Samples of gem usage randomly skip steps, which can be a hair-pulling exercise for a beginner. Then when you've finished your app and try to update gems a week later, everything breaks completely. And this is not an attempt at trolling; I now use Rails daily and build rather huge projects in it, and this still upsets me. When things work and you know how they work, all is easy and great, installing and using. When either or both of those are not true, you are in absolute hell, especially as a beginner. And none of that has to do with actual programming, and it shouldn't be needed; your environment should solve this for you imho.


I was somewhat surprised, but very happy, to see Verse mentioned in this context (although he did get Eskil's last name wrong, it's "Steenberg").

I co-developed the initial version of Verse with Eskil; I think it was a bit before its time perhaps. It was hard to get real traction for it, but at least the things Eskil has gone on to build on top have gotten some real attention. Great, and very well deserved!


Oh man, I am full of typos today :S

Verse was a huge inspiration for me and it's a shame to see it largely ignored. We're exploring similar ideas for getting our language tooling away from plain text and towards structured, networked representations. I would love to hear your thoughts on how to do that, or any advice you have from building Verse - jamie@scattered-thoughts.net

EDIT: I also think Eskil's ideas on 'what we can do' vs 'what we can get done' apply just as well to the rest of the programming industry. We're so focused on scaling up to millions of lines of code that we end up with environments where lunch_app takes a week to build and deploy.


Articles like this one annoy me, because it's easy to diagnose big problems -- here, watch me do it: "Why should compilers choke if you forget a semicolon? If they can diagnose a syntax error, can't they also fix it? Can't functions be smarter about seeing when they're used improperly and tell you while you're writing the code instead of when you're running it? Why can't the code be understandable to anyone who knows English?"

What's hard -- and often impossible -- is fixing those big problems, because a lot of times they're genuinely intractable; and when they're not, they're often so difficult that they might as well be.

So just sitting around and complaining about them sounds insightful, but it doesn't really get anything done. And yeah, I know that they're allegedly working on "fixes" for these issues, but based on the track record so far (LightTable promised all sorts of revolutionary views of what an IDE could be; it's delivered... not a whole lot of revolution), I don't have any faith that Aurora is going to amount to much either.

And I don't want to be too negative, because sometimes a previously-intractable problem turns out to now be tractable, and it takes someone who was willing to question long-accepted pain to find that out. So I'd be pleasantly surprised if this project delivered something that had even as much effect as the development of, say, git or xunit. But I'm not holding my breath.


I also wish that programming were a lot different today than it was when I started learning it. That being said, a lot of this article's points are things I've heard before. They led to the development of Visual Basic & co., mostly by people who had no contact with the Smalltalk and Lisp environment in the 80s, while people who did were shrugging and throwing tantrums like WHY THE FUCK DIDN'T YOU FUCKING LIKE IT TEN YEARS AGO?

IMHO, all these things went down to the bottom of history because things like these:

> Anon the intern needs to be able to open up Programming™ and click 'New Web Form'

are adequate for people who usually don't program, and extremely inadequate for people who usually do. Generally, and for good reasons, programmers will dislike a tool that hides implementation details for ease of operation. Past a certain level of complexity, the time spent manually doing the right cruft becomes significantly smaller than the time spent manually cleaning up after a smart tool.

I sympathize with Anon the intern, but perhaps he should rethink his expectations about complexity; if discoverability is a problem, perhaps he could switch to something that's better documented?

And at the risk of sounding like an elitist schmuck who rants about how things were back in his day, maybe he ought to start with something other than web programming. The size and complexity of that tech stack is humongous, to the extent that a large proportion of those who use it don't understand it more than two layers of abstraction down. Programs are also hard to package and the environment that runs them is hard to set up, because it involves at least two servers, possibly with several add-ons in order to allow the server-side languages to run, and learning at least three languages (assuming server-side JS is an option), two of which (HTML and CSS) aren't quite being used for their original purpose. This is a beginner's nightmare and it has exactly nothing to do with the development tools.

And then there are things that are far harder to solve than they originally seem:

> I want to just type 'email' and see a list of functions and libraries relating to email.

Related how :-)? Should MIME-related functions, needed to reason about attachments, also come up here? HTML parsing/converting, in case you need to deal with HTML email? Information cluttering does nothing to alleviate the opposite problem of information breadth: if Anon the intern's problem is that he doesn't know how to Google for libraries or how to make efficient use of documentation, an IDE that presents him with a gazillion possibly related things won't help him. Especially when, like all beginning programmers, one of his main difficulties is correctly defining the problem he's working on, which, in turn, makes it likely that the solutions presented by the IDE will be nowhere even close to the one he needs, because the IDE (like Anon himself) thinks Anon is trying to solve another problem.

There is, on the other hand, a lot more truth in this:

> Tightening the feedback loop between writing code and seeing the results reduces the damage caused by wrong assumptions, lightens the cognitive load of tracking what should be happening and helps build accurate mental models of the system.

I do think that the real resolution to this problem is writing simpler programs whose state is easier to track. On the other hand, programming tools today suck considerably at presenting program meaning. Things like evaluating an expression composed entirely of constants, or at least evaluating it based on the default values of the variables involved, are well within reach for today's tools, and yet programmers' calculators are still employed because 99% of the available IDEs couldn't evaluate ADDR_MASK & IO_SEGMENT if the life of every kid in Africa depended on it.

This is wicked cool: http://repository.cmu.edu/cgi/viewcontent.cgi?article=1165&c... . However, I also find myself thinking that the very fact that we need debuggers that are this smart is proof enough that we don't reason about our programs well enough. Except for the fringe case of having to quickly debug a (possibly horrible) codebase I haven't written, I'd much rather be a good enough programmer to avoid the need for a debugger that can tell me why foo is not 8 despite the fact that I fucking said foo = 8 ten lines above, than a programmer with good enough tools to help me when I'm stupid.


It's kind of depressing, but I tend to agree. What works for absolute beginners does not and will not work for experts because it's inherently limiting. A lot of the "let's make programming more accessible for beginners" experiments have already been performed and they didn't cause more beginners to become experts (nor did they empower experts). IMO[0] we really need to move closer to mathematics and precision to actually achieve software that you can rely on. That means foundational work in PL semantics, type theory, PL pedagogy, etc.

[0] And this is just my opinion because I have no actual rigorous research to back this up. At least I can admit that, contrary to most of the people who are agitating for approach A or B to software development.

EDIT: s/studies/rigorous research/. "Study" can mean anything from "I thought about it for 5 minutes" to "rigorous randomised trials", so I shouldn't use that term.


Dijkstra (the one who has a graph algorithm named after him, aside from being a really influential computer scientist in many ways) argued more or less exactly what you're arguing: that formal methods (that is, formally proving, mathematically, that your code will behave as it should) are the appropriate way to teach computer science/programming. His argument is basically that computer science is very hard applied math, and should be treated as such.

You might find https://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/E... an interesting read, if you haven't read it before. There's been some interesting discussion of it on HN in the past, too.


Dijkstra also complained about lazy students who wasted valuable computing time on assemblers instead of just writing out the machine code by hand.

I am strongly in favour of mathematical methods but that doesn't preclude exploratory work or visualisation. The idea that mathematicians spend all their time constructing rigorous theories and proofs is a complete illusion. It is common practice (outside of the Bourbaki school, at least) to alternate between building intuition through examples and solidifying intuition through rigorous proofs. See eg http://terrytao.wordpress.com/career-advice/there%E2%80%99s-... (by Terence Tao, one of the most talented mathematicians alive).

By comparison, we are extraordinarily lucky in programming in that everything we deal with is computable and representable. It is far easier for us to display examples and visualise systems. So while I agree that more understanding of mathematics and proof would aid in programming complex systems and I have certainly found that my degree in maths was a good preparation for programming, I don't see why we shouldn't also take advantage of the power of computers to help build the intuition that is necessary for rigour.


we really need to move closer to mathematics and precision to actually achieve software that you can rely on.

What's meant by reliability?


That's a good question. Speaking for myself... I'd settle for some combined metric of "fewer security bugs" and "fewer unintentional behaviors" (per spec). You could argue about UX and such, but most of the real damage that's caused by defective software is basically about security bugs. The "fewer unintentional behaviors" bit is trickier, but math and precision (about what you want) can help with that. Usually a (mathematical) spec is shorter and more concise than the program implementing it.


The problem is that once you get to defining your specs as programs themselves, they can have bugs, and the closer those specs get to maths, the less the people who actually define the specs in the first place - usually neither programmers nor mathematicians - can understand them, reducing the likelihood that anyone will notice the deviations between the spec and what users actually want.

Additionally, frankly, "reliable software" isn't something that's needed at least 99 times out of 100. Software that more-or-less gets some specific task done is. Those tasks can change from day to day, and generally require a few tries before the person who gave the task is even sure what they want, so RAD is more of a necessity than spec-based development.

Basically, we now have this entire class of information workers, and very few tools to give them that are any better than an Excel spreadsheet for the most part. I don't see the article as trying to give traditional programmers new tools, it's trying to give information workers new tools, and probably traditional programmers will find some of those useful.


> I don't see the article as trying to give traditional programmers new tools, it's trying to give information workers new tools, and probably traditional programmers will find some of those useful.

That puts it much more clearly than I managed. Thanks :)


Eventually though, software has to solve a real problem for a human being, and real problems (and humans) tend to be messy. A spec should model the problem and its solution, and will inevitably be sort of messy as a consequence. I also doubt that any spec survives implementation. Solutions must be adapted as the understanding of a problem and its appropriate solutions grow, and it is almost impossible to gain that understanding without experimentation (partial implementation).


HN couldn't have been designed via spec.


Huh? I'm pretty sure it started out as an (informal) spec in somebody's head, which was then modified to account for the inflow of noobs like me :)

At the very least the modding system required some abstract analysis to account for trolls and such.


I think the distinction is between agile (we'll start with a vague spec, and flesh it out as we go) and waterfall (we'll start with the perfect spec before we start coding). Obviously, there are many points in between.

Nice concise mathematical specs are rare in practice, not because programmers are incapable of expressing such specs (and most probably aren't), but because most programming problems are not expressible rigorously via math; they might not even be amenable to much analysis (so-called "wicked" problems are common in programming).


I'll take a guess that something like pure functions lend themselves to formal, concise specs. Pure functions are, in the end, just input and output, not something complex like a program that outputs some complex bitmap every 30 milliseconds. Haskell has found pure functions to be very practical to have as 'distinguished citizens'; there are enough problems which can to a great degree be expressed with pure code.

Someone might say "but Haskell itself doesn't solve practical programming problems", or whatever. The point isn't Haskell or pure functional programming per se; it's just an example of things that easily lend themselves to formal specs. Whether more complicated, perhaps effectful programs lend themselves to formal specs (I think ATS might be able to express such specifications) is harder to answer.
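To make that concrete with a minimal sketch (in Go rather than Haskell, with made-up names): the spec for a pure function can be written as a property relating inputs to outputs, and a tool can check it mechanically over generated inputs. Nothing similar is as easy once the function does I/O.

  package reverse

  import (
    "testing"
    "testing/quick"
  )

  // Reverse is pure: its output depends only on its input.
  func Reverse(s []int) []int {
    out := make([]int, len(s))
    for i, v := range s {
      out[len(s)-1-i] = v
    }
    return out
  }

  // The "spec": reversing twice is the identity.
  func TestReverseTwice(t *testing.T) {
    prop := func(s []int) bool {
      twice := Reverse(Reverse(s))
      if len(twice) != len(s) {
        return false
      }
      for i := range s {
        if twice[i] != s[i] {
          return false
        }
      }
      return true
    }
    if err := quick.Check(prop, nil); err != nil {
      t.Error(err)
    }
  }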


HN was written in a language where functional style is the norm, and yet HN couldn't have been designed via spec. I say this as someone who has thoroughly studied HN's codebase. If you spend time examining it, it becomes evident that most of the features were extemporaneous. There is little which was thought out in advance. The recipe for HN was: do the most important thing each day; repeat for 2000 days. Such a recipe is fundamentally anti-spec, because by day 5 your spec is outdated.

Spec xor exploratory programming. The codebase is the spec.


  > Anon the intern needs to be able to open up Programming™ and click 'New Web Form'
  
  are adequate for people who usually don't program, and extremely inadequate
  for people who usually do.
Let's change the example to "Start Web Server" instead.

I think the best solution for this is the way Go does it. It takes the best of both worlds.

  http.ListenAndServe(":8080", nil)
That's one line to start a web server on port 8080. It's as easy as pressing a 'Start Web Server' button, yet it scales to dl.google.com.
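For completeness, a minimal runnable sketch (handler path and message are arbitrary) is only a few lines more:

  package main

  import (
    "fmt"
    "log"
    "net/http"
  )

  func main() {
    // One handler plus one call to start serving on port 8080.
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
      fmt.Fprintln(w, "Hello from lunch_app")
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
  }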

The problem is when you press 'Start Web Server' or 'New Web Form' and it generates 300+ lines of boilerplate code that you now have to look at, maintain, and waste your attention on. Advanced users don't want that, and novice users aren't happy about it either.

Instead, take high level things and make them available via high level APIs, that are also extensible and flexible to allow tweaking of what you want [1], if/when you need to.

[1] https://github.com/golang/gddo/pull/143#issuecomment-3194318...


It's remarkable (or depressing) how many of the points in the article are addressed by Smalltalk -- and yet out of reach, because in most environments Smalltalk is just another program, rather than being used as the environment in which we do everything.

It includes things like a source browser that allows looking up ways to deal with e.g. email; if you have the/some source code, you can easily run it; a built-in debugger; and a tight write/test/fix loop, etc.


Most of what was Smalltalk has been adopted into modern IDEs (edit and continue, class browser, graphical debuggers, object-oriented programming). Now we are beginning to move beyond Smalltalk; e.g. Smalltalk never provided very powerful search facilities, real live programming, or speculative execution.


How did Smalltalk not provide "real" live programming?


Change your code in Smalltalk, say from "w color: black" to "w color: red". And even though the code of the program is "changed," nothing happens because the modified statement is not automatically re-executed. In a live programming system, you get useful feedback about your code edits, not just "fix and continue," and so you would immediately see affected widgets go from being black to red.

Chris Hancock's dissertation [1] is really good at making this point:

> And yet, if making a live programming environment were simply a matter of adding "continuous feedback," there would surely be many more live programming languages than we now have. As a thought experiment, imagine taking Logo (or Java, or BASIC, or LISP) as is, and attempting to build a live programming environment around it. Some parts of the problem are hard but solvable, e.g. keeping track of which pieces of code have been successfully parsed and are ready to run, and which ones haven’t. Others just don’t make sense: what does it mean for a line of Java or C, say a = a + 1, to start working as soon as I put it in? Perhaps I could test the line right away—many programming environments allow that. But the line doesn't make much sense in isolation, and in the context of the whole program, it probably isn't time to run it. In a sense, even if the programming environment is live, the code itself is not

i.e. Smalltalk code simply isn't live.

[1] http://llk.media.mit.edu/papers/ch-phd.pdf


Thanks for clarifying -- I'm still not sure which environments are much "more" live than Smalltalk. If you do make changes, usually that code will be executed on the next call?

Or are you talking about some strict old Smalltalk-80?

While you could change a parameter like color in some text-representation of the live code, wouldn't it be more natural (in Smalltalk) to simply change the object [ed: object instance] in question (and see the immediate change)?

I'm considering things like:

http://pharo.gforge.inria.fr/PBE1/PBE1ch7.html#x32-111006r22...

If Smalltalk allows editing values of variables of running code, and hot code loading -- what is missing? Is it the fact that other systems define the source code as "the truth", and consider the byte/compiled code to be a "secondary artefact" that causes (my) confusion?

Because as far as I can tell, Smalltalk does allow you to manipulate the objects that make up your system live?


Direct manipulation of the program's state (as opposed to writing/editing code that changes the state) is always live of course in any language. So ya, Smalltalk objects are "live", but Smalltalk code is not. So when Hancock invented this term called "live programming" back in 2003, he was focusing on the code (keep in mind, this work exists in a world where Smalltalk is well known, the Morphic paper came out in 1995, and Seymour Papert of Mindstorms/Logo fame is Hancock's adviser).

The problem with editing the program's objects directly is that it is not very powerful in its capability: fine if you want w to always be red, but not fine if there are a hundred w's to edit individually, or if w is red because of some kind of active condition. In any case, hot code replacement doesn't really help either; it's useful for sure, but the next step is to change the program's state retroactively [1] to match the new code. Smalltalk never went there, and probably couldn't given the hardware constraints of the time. But now, I think we can go there.

The Pharo folks are claiming that their system supports live programming capabilities, but it is just the same old live object/fix-and-continue features that Smalltalk has had for ages. I suspect that they just really don't know what live programming means, and are latching onto the word because it's "trendy."

[1] http://dynamicaspects.org/blog/2012/08/15/changing-the-past-...


Hm. I still think it's a rather thin distinction, and one more concerned with what programs, data and objects are (and their relation to source code), more than with what is possible in Smalltalk.

If we look at what is possible, namely changing data and instructions in ram -- it seems to me that Smalltalk already provides that.

I suppose there is some value to viewing a program as a kind of differential equation over time (where we assume the future has not yet happened, and all input has been given at certain points in time) -- and so be able to change the state at a given present, by changing the program, and accounting for how that change would translate to a different state in the present, given the same inputs... But we cannot stop time, nor the real world -- so most real systems can only be approximated in this way.

The best we can do (with any real system that isn't isolated from the outside) is to take a snapshot at a given time t, make changes to see how that state could have been at t with our new code -- but we then would have to bring our system back into the present -- either by playing back recorded input or by some other means...

I don't think your points about w manage to put weight behind (or communicate) your point: in Smalltalk you could either programmatically alter all ws, or if you changed the condition on which w depended, all ws would update (in a GUI, or other system with an event loop...). I get that that's not your point though.


The distinction is massive, and why all those Bret Victor demos have nothing to do with Smalltalk. You can record a bit of your program's execution during development.

Anyways, this an active research topic...see:

http://research.microsoft.com/pubs/211297/managedtime.pdf

If you haven't yet.


Interesting, hadn't seen that (have to read it in more detail later). Does sort of remind me of the approach taken to synchronization/managing time for Croquet[1]: Tea time[2] (although the aim there had nothing to do with live programming). Seems like a different need for managing time, but a not entirely dissimilar approach.

[1] http://www.opencobalt.org/about/history

[2] I think the paper I read is this one -- but I can't verify it from here:

"Designing croquet's TeaTime: a real-time, temporal environment for active object cooperation", David P. Reed http://dl.acm.org/citation.cfm?id=1094861

See also: http://www.opencobalt.org/about/synchronization-architecture


Thank you for the references. I've seen some of these before, but connecting the dots can be difficult (meaning, I have to keep going back to them later). A lot of it seems to have gone into Kay's new project @ VPRI.


An earlier paper by Reed also seems relevant:

"Implementing atomic actions on decentralized data" http://dl.acm.org/citation.cfm?id=357355

Again, the goal here is not live programming, but the "boxing" of blocks of code into "atomic actions" feels very similar to the MS paper.

[edit: Some interesting parallels with the earlier story on Soundcloud's Roshi system too https://news.ycombinator.com/item?id=7732696 ]


Thanks, I'll take a look! Looks like I'm going to make many changes for the camera copy.


Somewhat tangential (again thanks for the link, and the comments on "live programming"):

"In the Smalltalk model, for instance, state is persistent and code changes don't affect data. In the Clojure model, code is "mostly functional", with a small amount of carefully-managed state. Either model could be a starting point for a system where continuous code changes can be seen as continuous effects." -- Bret Victor http://worrydream.com/LearnableProgramming/


Right, but no one knows yet which will work (my hunch is a combination). My point was that the experiences he showed weren't Smalltalk experiences; we have finally got a lot of great ideas about what to do post-Smalltalk.


The problem is: live programming does not scale and the very few examples are also not convincing.

i.e. live programming isn't programming. It's just a toy now.


It's a dream and a big risk. And I'm sure the destination will not be as envisioned, but we can finally make some real progress in interactive programming over Smalltalk/lisp.


It's too late for an edit, but looking at the article again, especially the conclusion, I wonder even more about the seeming dismissal of Smalltalk:

> * storing code in a networked database with version control and realtime sync

This sounds a lot like Monticello and/or what Lively Kernel does over Webdav?

> * a structured editor to enable rich ASTs with unique UUIDs

I'd say something similar is achieved by Smalltalk with its view that there is no "source code" -- there is only the compiled code that is also viewable/editable as text (this is not so much about code, but a feature of a truly object oriented system).

> * managing environments declaratively so that evaluating code is always safe

I don't see how "evaluating code" can "always" be safe? Or is this similar to having the ability to deploy a separate dev-image in Smalltalk terms?

> * a uniform (logical) data model where every piece of state is globally addressable

One example of this would be an object graph?

> * a model for change that tracks history and causality

I take it this is the "live programming" bit discussed alongside here. If all input and output is via messages, it would seem feasible to simply log messages (possibly collapsing some messages, i.e. +1, -1, +1, -1, +1 becomes just "+1").

> * a powerful query language that can be used for querying code, runtime state, causal graphs, profiling data etc

I'm not entirely convinced we need a "powerful" query language. But something to specify context (show me all counters in the date widget) would be good -- but it would be fine to express it as "in:someDateModule var:count". Maybe I just read "powerful query language" differently than the author (intended).

> * composable gui tools with transparent guts

We seem to be reinventing these forever.

> * a smooth interface to the old world so we don't end up sharing a grave with smalltalk

I really didn't want to go all "Smalltalk did that" (I'm not even sure it did; I only know new-ish Smalltalks) -- but I think the eco-system is in the process of proving the death-by-different wrong: We use apps on phones and through the web -- and the fact that e.g. Google spreadsheets has a pretty crappy interface with "the old world" doesn't seem to bother anyone. I mean, yeah, we know that we probably need to interface with some form of external file system -- but now we can pretty much just throw http/dav at the problem.

I guess I'm just grumpy -- arguably there's no difference between running a javascript vm on bare metal vs running a Smalltalk system on bare metal (except for the language part, but we can implement the language(s) we need on top of js anyway…). I just can't help but feel we might do better with a system better designed to be a system, rather than the web browser that accidentally became a vm and virtualized display driver. It doesn't strike me as likely that we'll see a proper security model that actually works for javascript in the near future, for example.


What you glossed over was the 'model for change' part and the implications that follow from there.

It's a shame that it's not being said more clearly and explicitly in the post; but that 'model for change' implies immutable/persistent data structures and all that comes along with that.

The foundational assumption that data is never deleted or updated but just garbage collected does really change a lot.

Essentially (I'm hoping!) they're just saying: let's build another Smalltalk but with persistent data structures and reactive programming instead of Smalltalk's MVC model.

But yeah; it does sound like they need to study the Smalltalk ecosystem a bit better to figure out that, apart from those two things, Smalltalk already did pretty much everything they want to be doing here; or find a marketing angle that's less off-putting to those that know Smalltalk well...



> They led to the development of Visual Basic & co

Visual Basic was phenomenally successful as an end-user programming tool. There are entire businesses that still run off VB6. It worked because they nailed the early learning curve. It doesn't matter if your language is faster or more maintainable or less bug-ridden and insane - the majority of people will always gravitate towards the tool that is easy to get started with. This is why VB6 and PHP are so widespread.

The moral of the story is that if you want people to program in a sane and scalable way you must make it the path of least resistance. No amount of advertising or education has ever been able to overcome the fact that most people don't give a shit about programming and are just trying to get stuff done.

> if Anon the intern's problem is he doesn't know how to Google for libraries

Anon the intern's problem is that the information returned by google is incomplete and requires manual effort. I don't want to replace stackoverflow, I want to make it executable.

> I do think that the real resolution to this problem is writing simpler programs whose state is easier to track.

Well, that's where the managed state part comes in. We are trying to separate essential state which is actually part of the program logic from accidental state which is just an implementation detail (an idea expounded in http://shaffner.us/cs/papers/tarpit.pdf). That's a whole other post though.

> I'd much rather prefer being good enough a programmer to avoid the need for a debugger that can tell me why foo is not 8 despite the fact that I fucking said foo = 8 ten lines above, than being a programmer with good enough tools to help me when I'm stupid.

There is an implicit assertion here that not having a debugger makes you a smarter programmer. Would removing other feedback, like print statements, unit tests and the repl, help too? Why even execute code at all, just print it out and think hard.

Thinking hard doesn't work if part of your mental model is wrong. Being human (unfortunately) my mental model is often wrong and a quick glance at a stack trace does more to enable correct thinking than hours of working the problem out by hand with pen and paper.

The whole point of having a computer is that it can do some of the thinking for you.

> maybe he ought to start with something other than web programming

The goal is not for Anon to learn to program, it's to build lunch_app. The web sucks but it is the only good distribution platform we have. Either we use that or we build something easier on top of it.


* The goal is not for Anon to learn to program, it's to build lunch_app.

If anon wants to build software then learning how to should not be optional. The obvious alternative is to buy software, just as we buy most other things (cars, books, bread, etc).


My point is that it's not enough to point to an easier environment if that environment does not allow them to accomplish the original goal. 'Sure, just go away and study a completely different ecosystem for several months' is not an appropriate learning curve for a simple task.


I guess my point is that creating software is not a simple task.

Simplifying the parts you mention does nothing to simplify the important things in software development, which, as VB has demonstrated, leads to two things: crappy software and people who know how to create a barely working program, but not how to structure it to be maintainable or to prevent data loss in somewhat unexpected situations.


> Visual Basic was phenomenally successful as an end-user programming tool.

Was? So what happened, where did it go, why isn't it still filling the role you are aiming at?

Another example, held in somewhat better regard by programmers, of a phenomenally successful tool that let end-users build working programs, is Hypercard. Which definitely isn't around anymore.

Looking at the prior art of tools that have succeeded at letting end-users build programs is probably a good bet. What they did right, what they did wrong, why they don't exist anymore even though they were successful, if they don't exist anymore.


> Was? So what happened, where did it go, why isn't it still filling the role you are aiming at?

The last proper version of Visual Basic was released in 1998 -- Microsoft discontinued it in favor of VB.NET. If they continued to develop VB, I think it would still be a heavily used platform.

Hypercard is not the same thing. People didn't build real applications in Hypercard -- it was a toy. I spent a lot of time playing with Hypercard and it was very VB-like when programming for it but the UI was more like Flash. Its best use was making simple games.

The real problem with simple tools is that they solve simple problems. Once you have to start doing something outside of the tools range, you're stuck. This happens with every tool but more complex tools have a bigger range.


> People didn't build real applications in Hypercard.

Remarkably, they did! Wikipedia:

>A number of commercial software products were created in HyperCard, most notably the original version of the interactive game narrative Myst, the Voyager Company's Expanded Books, and multimedia CD-ROMs of Beethoven's Ninth Symphony CD-ROM, the Beatles' A Hard Day's Night, and the Voyager MacBeth.

It's true that Hypercard was not an ideal platform for development. Apparently, Myst developers had trouble making the high resolution images load quickly on machines of the time.


As I said, Hypercard was like Flash -- so it's not surprising that all these commercial products are sort of point-n-click multimedia projects. But you couldn't, for example, build an email application in Hypercard. You could, however, build one in Visual Basic.


Sure, you could build an email application in HyperCard. I'm pretty sure there actually were several. TCP/IP was available via HyperCard plugins or extensions or whatever they were called. HyperCard's card/stack data model would work fine for email. Making the list view would kind of suck, as lists always did in HyperCard, but you could use a plugin like WindowScript.

I wrote several commercial software products in HyperCard and SuperCard back then. It provided awesome tooling leverage. I worked on an EDI application that was written in SuperCard (with the EDI processing running in a C plugin) which was working, but later the company's president decided we needed to re-write the whole thing in C++ with the MacApp framework (http://en.wikipedia.org/wiki/MacApp) because C++ was a "real" programming language. This was the cfront era -- build times were long, debugging meant manually de-mangling names, etc. Crazy slow. We spent a long time on it but the schedule slipped long past the end of my time there.


Many people/companies built Hypercard applications that talked to backend databases. It wouldn't be difficult to build an email application.


Had no idea that the original version of Myst was Hypercard, but it makes sense in hindsight.


The reason HyperCard couldn't support applications that advanced wasn't due to its concept or scope, but that Apple never really supported it well, and the last real major update to it came in 1990, before the internet exploded. HyperCard actually had decent support for AppleTalk networking (I wrote some network games and a network email client/server system in HyperCard as a kid, all solely over AppleTalk). I'm convinced that if Apple kept supporting HyperCard that it would support TCP/IP and anything else required to be competitive.

The common knowledge about why Apple never supported HyperCard whole-heartedly is that they didn't know what it was - it wasn't a programming environment, it wasn't a database, it wasn't a paint program, it wasn't a multimedia environment. Yet it was still all of the above.


> So what happenened

MS killed it in favour of a 'real' programming language. See eg http://msdn.microsoft.com/en-us/magazine/jj133828.aspx

> Looking at the prior art of tools that have succeeded at letting end-users build programs is probably a good bet

Indeed. We have a 50 odd page google doc full of exactly that :)


Why do you think MS killed and stopped supporting such a successful project? (I am not being rhetorical or sarcastic!)


The article I linked explains it pretty well. A small but vocal minority of users wanted more power. MS gave it to them in VB.NET, and with that power came complexity. The huge majority of people who just wanted to do simple CRUD stuff stuck with VB6 (which is supported to this day).


Link to that google doc? Would love to read it.


> Was? So what happenened, where did it go, why isn't it still filing the role you are aiming at?

VB6 still runs an astounding amount of apps that are used by millions of people every day.

I know you're not implying anything, but there are several hundred billion dollars of business done in the US annually with just VB6, Excel macros, and shitty Access DBs. It's incredible, really.


FWIW, a lot of people were very angry when Microsoft pulled Visual Basic and replaced it with "Visual Fred" (aka, Visual Basic .NET).


I worked for a Fortune 500 company that still relies heavily on VB6 programs for some of its branches.

Another humungous company (think 10,000 employees) has its core system in VB6 (and it works REALLY WELL).

The company I work for has > 80% of its programs in VB6.

It didn't go anywhere. I'm not sure what the spiritual successor is, though.


I hope my comparison with Visual Basic didn't come out as derogatory. Because, at the risk of no sane programmer ever talking to me again, I will admit that I really do admire it! It wasn't built in a period that would have allowed it to live on today (not without significant effort from Microsoft...) but hey, it was nice for its time.

I made this lengthy introduction because I really like LightTable. What you folks are doing is impressive! I'm not particularly fond of the tech stack behind it but it's the first significant, organized kick in the tools' butt I've seen for many years now that isn't confined to academia.

> There is an implicit assertion here that not having a debugger makes you a smarter programmer.

The implicit assumption I'm making is that if I were a smart (and careful!) enough programmer, I wouldn't need a debugger :-).

So yeah, if I had the choice between being so good that I don't need a debugger and having a really, really good debugger, I'd pick the former any time. Sadly, that's not... quite a choice, so I'll grumpily let gdb tag along.

> The whole point of having a computer is that it can do some of the thinking for you.

This is something to be enthusiastic about, certainly! I was glancing over Moseley and Marks' paper (I read it a while ago, but not too carefully) and, now that I think about it, the mantra about writing simple programs is held in highest esteem by communities that traditionally had to struggle with their tools (Unix, because 1970s, and more recently the web folks, because they were building complex web applications long before JavaScript had anything that even resembled a reasonable debugger). Much of this was done to cope with the limitations of the platforms themselves. The PDP-11/20 had 64 KB of addressable memory; the fact that you could write and compile programs on it at all was remarkable. Letting the computer take on some of the thinking load was quite certainly impossible.

I'm still debating with myself whether allowing the computer to do some of the thinking for me would be good for the quality of my programs. I'm a very clingy father :).

> The goal is not for Anon to learn to program, it's to build lunch_app.

I don't wish to sound rude, but this sounds like a lump made by marketing types :-). Is lunch_app a program? If so, Anon has to learn at least the basics of programming, much like someone who builds a desk has to learn at least the basics of carpentry or someone who wants to learn to play a musical instrument has to learn at least the basics of music.

More helpful tools will definitely help Anon, but there are things no tools will, or should, do for him: he will still have to understand how web applications are served, how they are hosted on the same server along with other applications and so on. Not knowing these things will be a lot more detrimental to everyone using his application than spending a couple of weeks learning all this boring stuff will be to Anon.


> I don't wish to sound rude, but this sounds like a lump made by marketing types :-). Is lunch_app a program? If so, Anon has to learn at least the basics of programming, much like someone who builds a desk has to learn at least the basics of carpentry or someone who wants to learn to play a musical instrument has to learn at least the basics of music.

Isn't this the destiny of most software☨, and another step along the craft to utility curve [0]? Software of this type is needed to do a chore, and honestly if the tools are in place, it could be re-created in an afternoon rather than extended when the time comes.

I would use the analogy of printing: we used to need a typesetter and a trip to a print shop; now we select a font, type, and hit 'print', and - if the damned printer on our desk cooperates - we get our work back. Websites are reaching this level of utility; there's no reason simple web apps or desktop software won't too.

[0] This graph http://1.bp.blogspot.com/-lrxB3ntRzKQ/U05ClSIXwGI/AAAAAAAAEy... taken from http://blog.gardeviance.org/2012/06/pioneers-settlers-and-to.... The graph matches what I see in the web industry; his blog is a fun read

☨ Because, thinking of all software ever written, including scripting for automation, the really complicated, large, needs-to-be-well-engineered software is far less common than we sometimes think, and not what someone like Anon is considering building with these tools.


> Software of this type is needed to do a chore, and honestly if the tools are in place, it could be re-created in an afternoon rather than extended when the time comes.

I agree that it's supposed to do a chore, so it should be as easy to churn it out as possible. However, if we were to follow your printer analogy, the proper way to do that with a printer would be to just build a machine that makes it very easy to create a custom printer that types out exactly what you want it to type out! What we have instead is a very general typesetter that can be convinced to print a particular picture (by feeding it an automatically-generated program in e.g. PS!).

The application in question is pretty much a simple CRUD that generates a report in the end so that whoever is ordering knows what to order and from where. IMHO, just like printing doesn't require the user to know PS, this should not involve programming. It's not that programming should be easy, it's just that the CRUD model is so easily understood and so orthogonal that building one should not require anything more than a wizard asking you about the data it should collect.


Or, to use the desk analogy: if you just want a desk, you generally don't learn the basics of carpentry; you just go to IKEA (or a furniture store).

(Which is a great thing for carpenters and furniture store owners. We're those people in the analogy, so why get uppity about people needing our services eventually?)


Actually, the very basics of carpentry are the things you use to put together the IKEA desks!

No, seriously. IKEA didn't have a store open around here when I was a kid; some of the less pretentious furniture in my parents' house was made by my father, and since I was a kid, he had no choice but to accept my assistance. Putting wooden things together with screws based on drawings was literally the first thing I learned about carpentry, just before sawing and all that.

The fact that it's so easy should be no surprise. Patching together some stuff in BASIC is exactly as easy.

Yeah, it's fucking disgusting that today's "personal computers" are such impersonally cluttered, patchy, uselessly complex designs that even experienced programmers can barely wield them.


I think what we are missing is smarter tools. I think that's the argument here. What we really need is the ability to say something like "I want a data structure D holding objects of type Q, with an ordering using this key-accessor(A) called Alpha, and an index with this key-accessor(B) called Beta" and then later in the code say "I want to observe D for the following conditions ..." and let the compiler figure out how to store that data in memory, how to most efficiently build a data structure to trigger those conditions, etc. (Instead of saying List<> or IList<> or writing a custom data structure, etc).

And then, if you really care (and this is where the non-text/denser ASTs come in, as well as a deeper understanding of computer science and programming), you can add annotations to your description of the data structure like "I want reverse traversal of ordering Alpha to be O(n)". An unsatisfiable combination of annotations could even cause a compile error, like "I want to insert via index Beta in O(1) time, with an O(n) in-order traversal time of Beta". And even fine-grained annotations like "For index Beta use a red-black tree" or, more specifically, "To implement index Beta use module 'foo.bar.baz' by 'FancyBeansCorp', using class 'AwesomeEnterpriseIndex'".
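
To make that concrete, here is a toy sketch in Python of what such a declare-then-let-the-tool-choose API could look like. Every name in it is invented for illustration; the "compiler" here is just a dumb library that picks a sorted list and a hash index, and the complexity annotations are omitted entirely:

```python
# Toy, purely illustrative sketch: declare orderings/indexes, let the library
# choose concrete structures. All names here are invented for this example.
import bisect
from dataclasses import dataclass

class DeclaredCollection:
    def __init__(self):
        self._orderings = {}  # name -> (key_fn, sorted list of (key, item))
        self._indexes = {}    # name -> (key_fn, dict key -> [items])

    def ordering(self, name, key):
        self._orderings[name] = (key, [])
        return self

    def index(self, name, key):
        self._indexes[name] = (key, {})
        return self

    def insert(self, item):
        for key_fn, entries in self._orderings.values():
            bisect.insort(entries, (key_fn(item), item))      # keeps order
        for key_fn, table in self._indexes.values():
            table.setdefault(key_fn(item), []).append(item)   # O(1) insert
        return self

    def in_order(self, name):
        return [item for _, item in self._orderings[name][1]]

    def lookup(self, name, key):
        return self._indexes[name][1].get(key, [])

@dataclass(frozen=True, order=True)
class Order:
    created_at: int
    customer_id: int

# "Data structure D holding objects of type Order, ordering Alpha on
# created_at, index Beta on customer_id."
d = (DeclaredCollection()
     .ordering("Alpha", key=lambda o: o.created_at)
     .index("Beta", key=lambda o: o.customer_id))

d.insert(Order(3, 7)).insert(Order(1, 7)).insert(Order(2, 9))
print(d.in_order("Alpha"))   # ordered by created_at
print(d.lookup("Beta", 7))   # all orders for customer 7
```

A real version of this would read the intent annotations and pick (or reject) implementations accordingly; the point of the sketch is only the shape of the declaration.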

Let the tools do the heavy lifting! Let data structure and systems programming experts write data structure implementation strategies. Hell, I bet you could build a marketplace of code just for optimized data structures, if it were easy to swap them in and out. Heck, those sorts of dependencies should be at the company level, without even touching the project in question (a company-level code policy to use a specific library of data structure implementation strategies, regardless of the project being built, unless the project or user overrides it). Not to mention a marketplace of implementations for algorithms, general code patterns (conditionals, whiles), AI, networking, UI, etc.

Compilers should take a long time. A very long time. Sure, we can short-circuit them for debugging and just use the best workable code we got in a few seconds, so we can test (and besides, with constant compilation, the compiler should have already solved most of the problems reasonably well by the time you click run). Our compilers should be smart enough to change their output for different machines, not just at the assembly-optimization level (better intrinsics, etc.), but for different cache sizes and different memory performance characteristics (switching between a memory-intensive and a processor-intensive algorithm) as necessary. And yes, I expect it to be deterministic for every platform.


So, basically you want a uniform way of defining data structures that can use different kinds of indices on different fields. Let me add a few more desiderata to your list:

- It would be cool if these data structures could be transparently saved to disk, because data usually lives longer than code;

- It would be cool if they could be used by multiple applications, with some sort of guarantee that each application sees a consistent view of the data at any particular time;

- It would also be cool to have a declarative language for accessing several of these data structures at once, "joining" them on the values of certain fields and dynamically picking the best indices to use. Since you want a marketplace of implementations, it would make sense to standardize that language, so different data structure libraries can support it. A good name for it would be Structured Query Language, or SQL for short.

Tl;dr: You are completely right that network databases (object graphs and special-purpose data structures) are inferior to relational databases (declarative general-purpose data structures that separate querying and indexing, and allow a host of other general-purpose functionality). That idea has the potential to transform the whole software industry and lead to billion-dollar profits, as it has amply demonstrated since the 1970s. But since today's programmers are a forgetful lot, perhaps you could rebrand the idea and sell it to them as "Big Data 2.0" or something.
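
For anyone who hasn't watched that separation in action lately, here is a tiny sketch using SQLite from Python's standard library (table and column names are made up): the index is declared separately from the query, and the planner decides whether to use it.

```python
# Declarative data + a separate indexing decision; the engine picks the plan.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (created_at INTEGER, customer_id INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)", [(3, 7), (1, 7), (2, 9)])

# Indexing is a separate, declarative choice...
con.execute("CREATE INDEX beta ON orders (customer_id)")

# ...and the query is unchanged either way; the planner chooses the access path.
rows = con.execute(
    "SELECT * FROM orders WHERE customer_id = ? ORDER BY created_at", (7,)
).fetchall()
print(rows)

# Peek at the chosen plan (output format varies between SQLite versions).
for row in con.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7"):
    print(row)
```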


I understand how SQL ties into what I want, but there are a few points you might have overlooked for what I wanted:

* Micro-optimizations. The implementations should be granular, not monolithic like SQL servers (let alone the fact that optimized SQL rarely translates between the implementations, as most optimization commands are not cross platform).

* Embeddable DSLs. This one is honestly a problem with programming languages in general, but I've yet to see a static programming language which can tell me at compile time when my query is malformed (or, hell, even provide decent syntax highlighting). Let alone importable custom extensions to SQL, or embedding lazy functions within the data structure.

* Intent vs Exact. The ability to describe partial intent, rather than exactly how to organize the data (e.g. describing the operations expected to be executed across it, rather than the indexes to build). As well as having information available from the compiler about how the data structure is used (so that without the explicit declarations, the intent can still be optimized for).

SQL is a fine direction to approach the big-picture problem from, but it isn't anywhere near the goal I care about. I'm not interested in dealing with big data. I'm just tired of having to know the difference between arcane implementation names for ordered sequences [1]; I want to be able to describe what I want my ordered sequence to do, and let the compiler figure out the best implementation available.

[1] http://www.programcreek.com/2013/03/arraylist-vs-linkedlist-...


Absolutely. This is actually part of our plan. We're writing mid-level-ish specifications in a logic language and then compiling them into an incremental dataflow network. Our current prototype just uses the same implementation for each rule but later we intend to be able to pick and choose different data structures and join algorithms per index/rule.

I have a lot to say about separating specification from implementation and trying to better capture programmer intent (eg http://www.freelists.org/post/luajit/Ramblings-on-languages-...) but the blog post was already pretty long.


To me it seems the only way to smarter tools is static analysis, but there's only so much one can do in a dynamic language.

There seems to be this dissonance between dynamic languages, which tend to be popular with new programmers for a variety of reasons, and better tools enabled by static analysis of (usually) static languages.

The static languages seem to present a "hump" that a lot of people are averse to. If we really want these smarter tools, we're going to have to find a way to get programmers over that hump and using more static techniques; the smarter tools would ease things in the long run, but they don't offer the same immediate gratification as dynamic languages because of the up-front effort.


There is also only so much one can do with a static language.

Dynamic languages are preferred by those who don't want a conservative, verbose static type system to get in the way of writing code. You can design a static type system that is less verbose (e.g. via more inference), but then it can become more conservative (Hindley-Milner's inability to deal very well with semi-unification in the form of subtyping and assignment). That is the "hump."

Smarter "more magical" compilers are a nice idea in theory, but hard to realize in practice. There are even limits to the kind of analysis we can do dynamically, but they are a bit less constrained then what we can do statically.


You can do a lot of static analysis of dynamic languages. Check what PyLint/PyFlakes do, for example.

It's probably not the dynamic language that is the problem, but its constructs.
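
For example, this is the kind of source PyFlakes will flag without ever running it - an unused import and an undefined name, two of its bread-and-butter checks:

```python
# Both issues below are reported by PyFlakes purely from reading the source.
import os                    # flagged: 'os' imported but unused

def greet(name):
    return "hello " + nmae   # flagged: undefined name 'nmae'
```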


I would argue that tools solve specific categories of problems well. Photoshop allows non-programmers to do all sorts of wonderful things - perhaps not send the first panda to the moon, but hey, you can fake it reasonably well.

The core issue is that programming does not really focus on any one problem, which is why there are no great general solutions. People forget that just because you call something a database does not mean SQL and NoSQL solve interchangeable problems; they're just local maxima. For all Lisp's power and simplicity, people don't actually use it to write a lot of real-world code.

PS: It might be helpful to note that, in the general case, compression does not actually work. But 7-Zip exists because we don't live in a generic world; we live in this one. So don't hand people a steam shovel and ask them to bake you a cake.


The analogy to me seems similar to that of programmers and compilers. I want to write programs to do xyz, but I don't particularly care to know the lower-level to machine level implementation. Instead, I know a roughly-english set of instructions to tell the compiler what I want to do, and it takes care of the rest for me.

Could it not be possible to take this even further, to the point where I say "I want to do ..." and all of the "code," be it high- or low-level, is generated for me?

I realize making something capable of this level of abstraction would be incredibly difficult, but it would certainly be fun to try.


There are companies trying.

In Uruguay, a tool called Genexus enjoys a high level of success promising something like that :) .

For simple CRUD apps, it's pretty much magical - and it looks awesome in demos. Heck, it can generate code for every mobile phone out there just by throwing around some controls, specifying your data fields and writing a few rules. But if you want to make it do something that isn't explicitly considered, it's very annoying (and it has some UI and UX problems). And don't ever try to touch the autogenerated code :P .

That said, it's probably the easiest way to get from 0 to a complete CRUD app, once you learn how it works.

http://www.genexus.com/products/genexus?en


It's not often I read an essay that leaves me at once so excited and so disappointed.

Excited because I found myself agreeing with every point: isn't it obvious to everyone that the programming experience could be so much better? We created wonderful tools for graphical expression and number crunching, that keep getting better [1] [2], while our own day-to-day tools remain basically in a rut.

Disappointed because it seems LightTable is forgoing an incremental path to the goal, choosing the boil-the-ocean approach. We do need more long-term, start-from-scratch, rethink-everything kinds of projects, like what VPRI's STEPS project aims to achieve [3]. But there is a lot that can be done to improve the programming experience today, and I don't see enough work in this area. IMO, the latest meaningful improvement in software development tooling was IntelliJ IDEA around 2001 (arguably the functional programming renaissance represents another meaningful improvement, but the real breakthroughs there happened in the 70s). LightTable moving to the blue plane leaves the pink plane unattended.

[1] http://www.adobe.com/technology/projects/content-aware-fill.... [2] https://www.youtube.com/watch?v=UccfqwwOCoY [3] http://www.viewpointsresearch.org/html/writings.php


We aim to have something demoable in the next few months and something actually usable this year. We're purplish, at worst :)


Regarding Anon the intern wanting to build lunch_app: I've been in a similar position, learning programming from the basics and wanting to build some apps. I'll put in a vote for web2py as a good tool that gets around a few of the gripes in the article. It's largely one click to install the server, editor, db, debugger, etc., and it was specifically designed for teaching beginners (it has versioning as well, using Mercurial, but I wasn't able to get that going with one click). It also does most stuff you'd need and is all in Python, so you can use the libraries and rewrite bits if you can't get the framework to do something. Even so, it would take Anon a while to make his app, but I think it's some sort of progress. The creator talks about it here: youtube.com/watch?v=iMUX9NdN8YE


I'll check it out, thanks.


I'm glad to see IntelliJ getting some love there for making much of this easier - few companies have worked so hard at it, and the results are amazing. Looks like Microsoft is making some great steps too, Bing Code Search gets a bad rap for pandering to blub programmers but I'd love something like that in my editor.


Or the pain we remember with fond memories. Like anything good in life, the struggle to do it well, to fully understand it, to master it, is what makes it worthwhile. It's also what makes programmers effective - we revel in solving complex but tractable problems.

There is an inherent tradeoff in creating any system: you can make the system very simple and it becomes too simplistic for the advanced users who want more control and detail. You allow too much control and detail and you alienate users who want simplicity.

A programming language/IDE is a system that affords advanced users the ability to solve an immense set of problems. That unrestrained ability comes at the expense of simplicity. The more constrained and specific that ability becomes, the simpler it can be (e.g. Excel is more constrained but far simpler than a Hadoop cluster).


> We still program like it's 1960 because there are powerful path dependencies that incentivise pretending your space age computing machine is actually an 80 character tty. We are trapped in a local maximum.

Love this (although I don't like extra-long lines). He's talking to you, PEP 8.


While I am watching the development of LightTable with interest, it seems to me that the example used in the OP is missing the forest for the trees. It isn't hard to imagine Anon Intern managing to cobble together a solution for lunch_app that isn't an 'app' at all, just a form created with a builder like WuFoo or Google Forms, and some 'recipes' in a tool like IFTTT that has built in integration points for email, the accounting system, etc.

Arguably this wouldn't be much of an improvement over the VB6 + Access status quo (or Visual FoxPro, or FileMaker Pro, etc.), except that the individual components can be much more robust and scalable, monitored, auditable, and so on, without Anon Intern having to worry about any of that.


I mentioned that near the beginning:

> There are probably dedicated apps that cover this particular example but we are more concerned with how an end-user would solve this kind of problem in general...

VB6, Access and Excel are exactly the kind of tools we seek to emulate on the usability front. Half of the world runs on Excel precisely because it is easy to get started, easy to understand and yet is capable of handling fairly complex simulations. If we can make something equally approachable but backed by a sane, maintainable and powerful language then we can give programming powers to a lot of people who need them.


Except that I am not talking about a dedicated app at all, but general purpose tools/services for parts of the problem that can be combined arbitrarily, just like libraries can. IFTTT is exactly that sort of service.


"The samples will be missing lots of implicit information such as how to install the necessary libraries and how to deal with missing dependencies and version conflicts. "

Docker pretty much solves this problem. Writing samples for your new project? Start with step 1 `docker pull yourproj/your_tutorial_image`.


Edit: I've mismatched deps and imports, indeed that could be useful. Disregard my post.

If I select a function from autocomplete, its dependencies should be automatically added to the project without any fuss.

As for the point that post makes: this is at least available if you use IntelliJ with Java or Scala.


You know, I've recently started thinking - why do we even need explicit imports? Isn't the compiler/etc. smart enough to figure out the right import 99% of the time?

Every file in a large project starts with dozens of redundant import lines. Why don't we have imports go the way of the inferred type?


What if foo() is present in two different libraries? (is this the 1% you're talking about?)

What if tomorrow I add another library which implements its own foo()? (This is trickier.)

When I read the code, what is foo()? (here the IDE may be smart and show me the right documentation).

All that said, I feel that the Pythonic "explicit is better than implicit" applies here.

It's also true that when I see a file whose first 30 lines are spent including dependencies, something doesn't feel right.

The problem lies in the fact that we use text to store structured data. If we were to save source code in a binary format, we could have the source code in one "file|object|item" and the dependencies in another item, linked to the source code (I'm not advocating this; source code in text files has many advantages, but also its drawbacks).
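
A concrete Python version of the foo()-in-two-libraries problem: the standard library itself ships two different functions named `join` (the `shlex` one assumes Python 3.8+), so an inferred import would have to guess which one you meant:

```python
# Same bare name, very different behaviour - explicit imports disambiguate.
from os.path import join as path_join
from shlex import join as shlex_join   # available since Python 3.8

parts = ["usr", "local", "bin"]
print(path_join(*parts))   # usr/local/bin  (a filesystem path, on POSIX)
print(shlex_join(parts))   # usr local bin  (a shell-quoted command line)
```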


We give each name a unique id under the hood. At edit time the user chooses which one they want from a list. The actual AST only ever refers to ids and the names are effectively just documentation.

This has the neat side-effect that ids are globally unique so you can figure out what libraries you need to pull down just by looking up ids in your package repository.
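
As an illustration only (not the actual implementation being described above), the idea might look roughly like this in Python: every definition gets a globally unique id, the AST stores only ids, and the set of required packages falls out of an id lookup. All names here are invented for the example:

```python
# Sketch of "names are documentation, the AST refers to ids".
import uuid
from dataclasses import dataclass, field

@dataclass
class Definition:
    id: str
    name: str        # shown in the editor, never used for resolution
    package: str

@dataclass
class Call:
    target_id: str   # the AST stores the id, not the name
    args: list = field(default_factory=list)

registry = {}        # stand-in for a package repository keyed by id

def define(name, package):
    d = Definition(id=str(uuid.uuid4()), name=name, package=package)
    registry[d.id] = d
    return d

parse_html = define("parse", "html-lib")
parse_json = define("parse", "json-lib")   # same name, different id - no clash

program = [Call(target_id=parse_html.id, args=["<p>hi</p>"])]

# Dependencies are computed just by looking up the ids used in the tree.
needed = {registry[call.target_id].package for call in program}
print(needed)        # {'html-lib'}
```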


Depends on what you're looking for. The issue isn't really "importing" the code references as much as it is a scoping issue. That is, imports are largely used to define scope for the file and mostly for the programmers benefit (so you're typing, say, Button.DoSomething instead of System.Windows.Controls.Interactive.Button.DoSomething).

That and the odd namespace clash that a compiler can't really resolve, but that is more due to implementation and system issues, not programming proper.


IntelliJ does it with Python and other languages as well, although you have to explicitly hit Alt+Enter -> "Import this name".


That's true for pulling functions from other installed namespaces, but to my knowledge you can't install a library in the same way (eg search for 'parse html', drag a result into the editor, get Beautiful Soup added to your virtualenv and imported into the repl).
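
A rough sketch of what that gesture might do under the hood - it's really just pip plus importlib; the search-and-drag UX on top is the missing piece (the package and module names below are only an example):

```python
# Install a package into the current environment, then import it for the repl.
import importlib
import subprocess
import sys

def add_library(pip_name, module_name):
    subprocess.run([sys.executable, "-m", "pip", "install", pip_name],
                   check=True)
    return importlib.import_module(module_name)

# e.g. Beautiful Soup: the pip name is beautifulsoup4, the import name is bs4.
# soup = add_library("beautifulsoup4", "bs4")
```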


Ah, I didn't realize this is what the parent was talking about. That's pretty damn cool.


I don't know if the parent was talking about that, but it's what I was trying to get at in the post.


Solving this problem was part of REBOL's dream. I really wish it had been properly open-sourced from day one and had built up a good community and momentum around it; it was a very promising language.


Can't be sure but it seems like the surveying has resulted in most of the right conclusions and the right direction for the project.

HOWEVER, I still think most everyone is missing the REAL issue here. As evidenced by the powerful tools cited in this article that address various aspects of the "programming" problem, there have been numerous efforts to move the state-of-the-art in software development forward. And to a great degree those efforts have _proven_ quite a few superior paradigms.

And yet, we haven't seen those new paradigms become truly mainstream for most programmers. Why? I do NOT believe it is because the approaches haven't integrated the right new concepts, because there are so many existing useful tools combining many different new ideas effectively which have failed to become mainstream among programmers.

I think many of those new approaches could and should have become normal operating mode for programmers.

I think the reason they did not is this: the core definition of "programming" (and by extension "software engineering" etc.) is an antagonistic and largely manual process of creating complex textual source code that can be used to describe system behavior. Period.

For example, if I want to call myself a web "developer" I DARE NOT use an interactive GUI tool to generate and maintain the source code for my web site. Web developers will always say, for example, they avoid this because the source code generated that way is less maintainable by hand. In many cases that may be true. In some cases with advanced code generation it is not. Regardless, I don't believe that is the actual reason. It's just a rationalization.

What do we call someone whose entire job entails creating a web page using a graphical user interface? In other words, this is a hypothetical person who has found a hypothetical GUI tool that can accommodate all of his web site design and implementation needs without any manual edits to source code. If he builds a web site or web application this way, and writes zero lines of code, do we refer to him as a very smart and advanced web "developer"? Or do we call him a web designer or simply a WordPress user (for example)?

We do NOT refer to him as a web "developer". He has no right to refer to himself as a "developer" or "programmer" because he has not wrestled through an antagonistic manual process to create complex textual source code. And that's what people are not understanding. The reason we don't call him a programmer is NOT because he didn't create an effective program or website. It's because the way he did it wasn't hard enough and doesn't match our outdated definition of what "programming" is.

To create a POPULAR system (among "real" "programmers") that makes programming more practical or easier using new paradigms, you must either redefine programming to include the possibility of new paradigms and a non-antagonistic process, or perhaps somehow trick programmers into thinking what they are doing is actually harder than it is. Maybe if there are a few places to type a command to generate source code, that will be sufficiently complex to still be considered "programming".

If you are too successful without doing those things then you will just have another tool that only "users" or "beginners" would ever admit to using.


Programming can get easier, and it has, many times. The introduction of structured programming, the introduction of various forms of polymorphism, the popularization of automatic memory management, and several other developments have made the process of programming easier.

But while the process of programming became easier, the role of the programmer did not. Instead, the programmer was expected to build bigger, more complex systems, and do it faster than before.

I would hesitate to call someone a programmer for just making a web form in HTML and sending it to a database, and GUI tools to do this already exist, most notably Microsoft Access. But suppose a GUI tool was invented that allowed its user to create any website, from Google search to eBay and anything in between, including both the front- and back-end. Anyone well-versed in using that tool would be a programmer for sure.


This is a generational thing. Yes, the developers who go through the pain may fight the tools that remove it for others. The next generation of programmers won't have our baggage, though. They won't have any problems using modern tools, just so long as they can implement their vision quickly in the most accessible way.

Let's keep creating better ways to program. Future generations will be thankful for it, even if today's developers can't appreciate it.


Saved. Can't wait to look back at this in a few years.


Maybe every tool that we invent and use is two-faced: it makes life both easier and harder, so the wish to make programming easy could turn into one of the most difficult journeys so far, comparable with inventing an AI indistinguishable from a human.

If I imagine success 75 years out, I would probably sit in front of a human-like android and talk to it: "Play me some music based on Bach, with the voice of Eddie Murphy, mellow, with a hint of tragedy, but not too sad" - and in the blink of an eye it would play me some music and sing along with it. Chances are I would swap Eddie Murphy for some other singer readily available in the huge archive of human voices this machine has access to. But I am bored with archived, old-fashioned music; I want something new and exciting. So I try to teach the machine a new voice. As it turns out, it is pretty good at mixing the characteristics of multiple voices and synthesizing a new one out of them. I let it run a few hundred random variations, with hints about which parameters I prefer. "That rough whiskey vibe - keep that and mix it with Presley's guttural hiccups"... "No, not like this - listen." I try to sing what I have in mind, but I am no good singer, so the next test comes out worse than before. "Go back... one more." We spend the rest of the day analyzing Presley's archive and filtering out the guttural characteristic I meant.

I hear I am an expert programmer - my grandma called me that - and I sit down with this android for a whole month and explain in full detail the music I want to hear, constantly listening and tweaking my instructions in a flawless instant feedback cycle. Most of the time spent, or what I sometimes feel is wasted, actually goes into searching for the right words in my own head to describe what I mean. After two weeks I find we have broken the character of a singer's voice down into two hundred and eighty relevant parameters. Relevant not for everyone, of course, but for me and the music I am working on. Maybe I am taking this too far. "Delete MelodramaticChorusAccentuationTongueModulationMode two to four." I spend the next week condensing these parameters down to thirty-three. On some days in the following week I am really without any inspiration and just let the android play thousands of random variations on the current version, which I judge on a 1-to-10 basis while doing some garden work. Sometimes I would change only a little note, or the single expression of a syllable. "Uah insiide meeee... you see, the 'siide' must sound more desperate, because this guy is on the verge of losing his lifelong dream at that moment." We spend the rest of the day tweaking that "desperate" thing. This android is amazing.

After a month and three weeks the music is ready. I save it and do some other things, mostly garden work and watching the drones fly in the evening - they have really got some nice new formation techniques that I enjoy greatly. The next day I talk to Andreas (another real human) and tell him how well the android has learned my personal preferences for singing voices, and how fractalising Bach's harmonic structures to the fifth degree is worth it, but no further without structural simplification at the base while keeping the dimensionality's denominator intact (that was a really complicated talk; I can't get into all the details here). We agree to exchange our androids' preference patterns - my bach-presley11786 for Andreas' painting-images9904 - and our androids get automatically updated. The next day I try it out.

This may take a while, Andreas has warned me. But over the next days I can watch the android painting a really amazing photorealistic but nonetheless astonishingly dreamlike image with Andreas' pattern, although the process is, as he said, quite slow by design: one image at a size of four square meters takes about three weeks to finish, since it involves a complicated technique of iterated de- and reconstruction, with additional time for the oil paint to dry in between. But the result is definitely worth it. After the image is finished, I call Andreas and congratulate him on his exceptionally good taste in imagery (after all, he worked two and a half years on it). He also thanks me for the musical pattern and asks if he can use the guttural Elvis thing in one of his next Crazy Donkey Singalong Performances. The asking being just a polite convention, I naturally give him full permission. We agree to hold another meeting in two weeks to talk in detail about his structural approach to synthesizing dreams without crossing over into kitsch. The rest of the day I am back in my garden; the drones fly really low this week... maybe a thunderstorm is coming... but this is great!

Every human is a programmer now, which essentially means that he tells other beings what to do, in more or less detail - with the twist that those being told are not humans anymore, but human-like machines. So if one of these humans does not tell a machine what to do, he usually enjoys not telling other humans what to do (aside from the endless debates about how best to tell a machine what to do - and, not to forget, the debates about whether it would be better to think directly at a machine rather than speak to it, and why that has not been implemented yet).


Great article, but isn't a lot of this more about using different levels of abstraction, i.e. frameworks and DSLs, and tooling specific to those? That could either take the form of being sufficiently focused that a simple text editor is adequate (e.g. high-level commands, no cruft), or a full-featured managed environment (state, docs, etc.) specific to the framework. I'm not seeing how this is a task for a generic IDE, unless I missed something about light table and they are creating a new language.


> unless I missed something about light table and they are creating a new language.

http://www.lighttable.com/2014/03/27/toward-a-better-program...

The short version is yes, we are making a new language. The kind of workflow I discussed in the OP was the original goal for Light Table and it turns out that it can't just be hacked onto existing languages.


Fair enough. I won't say your decision to make a new language is right or wrong - I don't know how it will turn out - but given that you have decided to do so, and given your objectives for it, my one big recommendation would be to make relational databases central to the language, the way Perl makes string processing central and PHP makes web pages central. Some possible ideas are outlined at http://geocities.com/tablizer/top.htm - you don't have to go with those particular ideas of course, but I think it's important to make relational data central one way or the other.


We are heavily inspired by http://shaffner.us/cs/papers/tarpit.pdf


Pretty sweet article. Needs a once-over from the author for some edits, though. "if you don't have something useful to say, don't say nothing at all" made me giggle.


I can't figure out what that one is supposed to mean!


Are these guys ever going to actually finish the IDE, or just postulate on what it means to program ad infinitum? I'm glad they're being considerate in their design, but this whole project is starting to reek of over-aggrandized vaporware.


https://github.com/LightTable/LightTable/releases

There will be another Light Table release in the next few weeks, featuring CodeMirror 4 (better performance, multiple cursors).

We've only been working on Aurora since February. Designing a new language takes time. We may show some early demos in the next few months, depending on how the current prototype works out.


I think that some people (including me) are a bit disappointed that some (most?) of your effort goes to Aurora instead of LightTable. Don't get me wrong, Aurora sounds cool, but for professional programmers LightTable is much more appealing.


I can sympathise with that. The reason for our decision is that there is a low ceiling on how much we can improve matters with Light Table. The problems we are trying to solve turn out to be pretty baked into existing languages and toolchains, and we can't hack around them in the editor. Aurora is a riskier project, but it has much more potential to help people.

Also, the fact that Aurora is intended to be simple and easy does not mean that we are building a toy. Quite the opposite - the intention is to remove some of the incidental complexity in programming to leave you free to focus on the actual problems. We intend to bootstrap the compiler and editor by way of dogfooding. I look forward to doing most of my programming in Aurora in the future.


When Chris and company announced the light table project, they had very high goals. I knew then that this would take a long time, and some of the goals would even be unachievable without programming model changes.

The advantage of being incremental is that you can reliably produce something in a reasonable amount of time. This is not incremental, and it's quite risky; be patient.


I use Light Table as my standard IDE and enjoy using it... so it is already finished, for some definitions of "finished".

(Sure, there's always room for more polish...)


Are you mad that they're blogging?


I thought this project was about improving the programming experience for programmers, not students (or interns) who will in all likelihood never excel as programmers.

Edit: after downvotes I decided to read the whole thing afterall, but remain unswayed. This guy has an important opportunity and I sincerely dislike seeing it squandered.


I've been programming long enough that I know the magic incantations to get started in a few languages. But I am still in Anon's shoes any time I learn a new language or library. As the article says, "Even for experts, programming is an exploratory process." So, I'd say everything in this article applies to programmers, too.

Categorizing people into programmers and non-programmers neglects the people who need programming but not at an expert level. Some interns and students won't excel as programmers, but they still can bring programming into what they do instead, especially if efforts like Light Table succeed.

I suppose I shouldn't be surprised by this kind of pushback against reimagining programming — I've seen it as a response to some of Bret Victor's projects. Skepticism is great: Maybe the ideas behind Light Table are wrong or badly implemented. But we gain nothing from glib dismissals that seem to regard any attempt at improving or simplifying programming as beneath them.


I disagree with language like 'reimagining programming' in this context almost as much as I regret seeing Bret Victor's name being dragged in for no reason -- to the best of my knowledge, Bret Victor didn't design Light Table, and doesn't inform its progress.

There is progress to be made, and I'll be excited again when I see some.


I've seen the same "oh, this is only for beginners" reaction to some of Victor's ideas on making data visible and making libraries "discoverable" that seems needlessly dismissive. That's why I mentioned him. Also, Light Table's initial concept was inspired by his work[0].

I'm not always impressed by Light Table, but I appreciate the work the team has put into the problem and think they have good ideas.

[0]: http://www.chris-granger.com/2012/04/12/light-table---a-new-...


The problem with program-as-the-mental-model is that you will invariably hit a barrier (whether that be the halting problem or some simpler manifestation). I'm much more sympathetic to the Don't Kill Math [0] view. You need (abstract) analysis, and no amount of IDE bling is going to eliminate that necessity.

[0] http://www.evanmiller.org/dont-kill-math.html



Yes, I know the history. That's precisely why I'm so disappointed.

I believe in the concept and potential; it's the execution that I'm finding fault with.



