It's a fun talk by Bret and I think he echoes a lot of the murmurings that have been going around the community lately. It's funny that he latched onto some of the same core tenets we've been kicking around, but from a very different angle. I started by gathering data on what makes programming hard; he looked at history to see what made programming different. It's a neat approach and this talk laid a good conceptual foundation for the next step: coming up with a solution.
In my case, my work on Light Table has certainly proven at least one thing: what we have now is very far from where we could be. Programming is broken and I've finally come to an understanding of how we can categorize and systematically address that brokenness. If these ideas interest you, I highly encourage you to come to my StrangeLoop talk. I'll be presenting that next step forward: what a system like this would look like and what it can really do for us.
These are exciting times and I've never been as stoked as I am for what's coming, probably much sooner than people think.
APL. Start there. Evolve a true language from that reference plane. By this I mean one with a true domain-specific (meaning: programming) alphabet (symbols) that encapsulates much of what we've learned in the last 60 years. A language allows you to speak (or type), think, and describe concepts efficiently.
Programming in APL, for me at least, was like entering into a secondary zone after you were in the zone. The first step is to be in the "I am now focused on programming" zone. Then there's the "I am now in my problem space" zone. This is exactly how it works with APL.
I used the language extensively for probably a decade and nothing has ever approached it in this regard. Instead we are mired in the innards of the machine, micromanaging absolutely everything with incredible verbosity and granularity.
I really feel that for programming/computing to really evolve to another level we need to start losing some of the links to the ancient world of programming. There's little difference between what you had to do with a Fortran program and what you do with some of the modern languages in common use. That's not the kind of progress that is going to make a dent.
> Instead we are mired in the innards of the machine micromanaging absolutely everything with incredible verbosity.
This is one area where Haskell really shines. If you want the machine to be able to do what you want without micromanaging how, then you need a way to formally specify what you mean in an unambiguous and verifiable way. Yet it also needs to be flexible enough to cross domain boundaries (pure code, IO, DSLs, etc).
Category theory has been doing exactly that in the math world for decades, and taking advantage of that in the programming world seems like a clear way forward.
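To make that a bit more concrete (a minimal sketch of my own, not something from the talk): the types themselves mark the domain boundaries, and the category-theory abstractions (Functor, Monad, and friends) are what let the same code cross them.

    import Data.Char (toUpper)

    -- Pure logic: the type rules out any IO happening in here.
    shout :: String -> String
    shout = map toUpper

    -- Effectful code: IO is visible in the type and can't be hidden.
    greet :: String -> IO ()
    greet name = putStrLn ("Hello, " ++ shout name)

    main :: IO ()
    main = do
      line <- fmap shout getLine          -- fmap lifts the pure function into IO...
      print (fmap shout (Just "world"))   -- ...and into Maybe, with the same operator
      greet line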
The current state of the industry seems like a team of medieval masons (programmers) struggling to build a cathedral with no knowledge of physics beyond anecdotes that have been passed down the generations (design patterns), while a crowd of peasants watch from all angles to see if the whole thing will fall down (unit tests).
Sure, you might be able to build something that way, but it's not exactly science, is it?
This is the kind of talk from Haskell folks that I find incredibly annoying. Where's Haskell's Squeak? Where's Haskell's Lisp Machine? It doesn't take much poking around to find out that non-trivial interactive programming like sophisticated games and user interfaces is still very much cutting-edge stuff.
I'm sorry, but you're upset because folks are passionate about a language that brings new perspective, and maybe is not exactly as useful in some areas as existing solutions? This is exactly the kind of attachment Bret warns about.
I don't think I expressed attachment to any particular solution or approach - I simply pointed out an extremely large aspect of modern software engineering where Haskell's supposed benefits aren't all that clear. So who's attached?
Are you trying to compare interactive software, one of the dominant forms of programs and widely used by billions of people every day, to formula 1 cars, an engineering niche created solely for a set of artificial racing criteria?
A better analogy would be being mad that the Tesla can't drive on the interstate.
"sophisticated games" pretty specifically implies contemporary 3d gaming, which is not a useful criteria for exploring a fundamental paradigm shift in programming.
The fact that you think a lisp machine is an "extremely large aspect of modern software engineering" certainly makes me feel that you are expressing an attachment to a particular approach.
We have many beautiful cathedrals, don't we? So it is a bona fide fact that you can build something with the current state of the industry. As far as the analogy goes, I would alter it in that the peasants aren't simply watching, but poking the masonry with cudgels. Lastly, scientific methods of building aren't necessarily better: while they follow an order rooted in doctrine, I can quickly think of all those scientifically built rockets that exploded on launch. To play devil's advocate, I'm not convinced that a scientific method is better than the current haphazard one we have in place for development.
It's very hard to find good tutorials on APL because it's not very popular and most of its implementations are closed-source and not compatible with each other's language extensions. The language is most recognizable for its extreme use of non-standard codepoints: every function in APL is defined by a single character, and those characters range from . to most of the Greek alphabet (taking meanings similar to those in abstract math) to things like ⍋ (sort ascending). Wikipedia has a few fun examples if you just want a very brief taste; you can also read a tutorial from MicroAPL at http://www.microapl.com/apl/tutorial_contents.html
It's mostly good for being able to express mathematical formulas with very little translation from the math world - "executable proofs," I think the quote is - and having matrices of arbitrary dimension as first-class values is unusual if not unique. But for any practical purpose it's to Haskell what Haskell is to Java.
> But for any practical purpose it's to Haskell what Haskell is to Java.
Can you elaborate on this? As I understand, the core strengths of APL are succinct notation, built-in verbs which operate on vectors/matrices, and a requirement to program in a point-free style. All of this can be done in Haskell.
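For instance, the classic APL average idiom (+/÷⍴) has a fairly direct point-free rendering in Haskell (a rough sketch on my part, not an authoritative translation):

    import Control.Applicative (liftA2)

    -- APL's (+/ ÷ ⍴): "sum divided by length", with no named argument.
    mean :: [Double] -> Double
    mean = liftA2 (/) sum (fromIntegral . length)

    main :: IO ()
    main = print (mean [1, 2, 3, 4])   -- 2.5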
>A Haskell programmer unfamiliar with APL looks at an APL program and...
And says "what's the big deal?". That's exactly the question, what is the big deal. APL isn't scary, I'm not shouting "I can't make sense of this", I am asking "how is this better than haskell in the same way haskell is better than java?".
I'm not imagined, I am real. I know you were restating the analogy, the problem is that the analogy is wrong. I can't find anything about APL that a haskell developer would find new or interesting or frightening or anything like that.
More esoteric organization/concepts for anyone coming from the C family (which is basically everyone), more out-there notation, more deserving of the title "write-only," and less ability to do anything you might want to do with a real computer beyond using it as a calculator. I wouldn't want to do much work with Haskell's GTK bindings, but at least they exist.
That tutorial is deeply unimpressive. It seems very excited about APL having functions, and not directly mapping to machine-level constructs. In 1962 I can imagine that being impressive (if you weren't familiar with Lisp or ALGOL); today, not so much. The one thing that does seem somewhat interesting is the emphasis it puts on "operators" (i.e., second-order functions). This is obviously not new to anyone familiar with functional programming, but I do like the way that tutorial jumps in quite quickly to the practical utility of a few simple second-order functions (reduce, product, map).
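(For what it's worth, those "operators" are just higher-order functions; here's a quick Haskell rendering of the three the tutorial leads with, with rough APL equivalents in the comments.)

    -- Reduce, product, and map as ordinary higher-order functions.
    total, prod :: [Int] -> Int
    total = foldr (+) 0      -- APL: +/
    prod  = foldr (*) 1      -- APL: ×/

    doubled :: [Int] -> [Int]
    doubled = map (* 2)      -- APL: 2×

    main :: IO ()
    main = print (total [1..5], prod [1..5], doubled [1..5])   -- (15,120,[2,4,6,8,10])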
Like I said, it's hard to find good ones; I didn't say I had succeeded. I learned a bit of it for a programming language design course, but I never got beyond the basic concepts.
Well, in the end it doesn't matter if your language is looking for popularity or not. What matters is what you can do with it. You think a language with weird symbols all around can't win? Just look at Perl.
On a related note, if one plans to sell the Language of The Future Of Programming, I swear this thing will suffer the same fate as Planner, NLS, Sketchpad, Prolog, Smalltalk and whatnot if it cannot help me with the problems I have to solve just tomorrow.
All the decent tutorials that I know of were in book form. Unless someone's scanned them they're gone. I know mine got destroyed in a flooded basement.
If my memory hasn't been completely corrupted by background radiation, I've seen papers as early as the mid 1950s about this notation.
APL started out as a notation for expressing computation (this is not precise, but good enough). As far as I'm concerned, it sits at a level of abstraction higher than Haskell (arguably like a library on top of Haskell).
Now, in the theme of this thread, APL was able to achieve all of this given the constraints at the time.
The MCM/70 was a microprocessor-based laptop computer that shipped in 1974 (demonstrated in 1972, with some prototypes delivered to customers in 1973). It ran APL using an 80 kHz (that's kilo) 8008 (with a whole 8 bytes of stack) and 2 kBytes (that's kilo) of RAM, maxing out at 8 kB (again, that's kilo). This is a small, slow machine that still ran APL (and nothing else). The IEEE Annals of the History of Computing has this computer as the earliest commercial, non-kit personal computer (2003, pp. 62-75). And, I say again, it ran APL exclusively.
Control Data dominated the supercomputer market in the 70s. The CDC 7600 (designed by Cray himself: 36.4 MHz, 65 kWords of memory (a word was some multiple of 12 bits, probably 60, but I'm fuzzy on that), and about 36 MFLOPS according to Wikipedia) was normally programmed in FORTRAN. In fact, this would be a classic machine to run FORTRAN on. However, the available APL implementation was often able to outperform it, almost always when the code was written by an engineer (and I mean a civil, mechanical, or industrial engineer, not a software engineer) rather than someone specialising in writing fast software.
I wish everyone would think about what these people accomplished given those constraints. And think about this world and think again about Bret Victor's talk.
The ones I remember were all books. At the time, I thought this was one of the best books available: http://www.amazon.com/APL-Interactive-Approach-Leonard-Gilma... -- but I don't know if I'd pay $522 for it... actually I do know, and I wouldn't. The paper covered versions are just fine, and a much better price :-)
EDIT: I just opened the drop down on the paper covered versions. Prices between $34.13 and $1806.23!!! Is that real?!? Wow, I had five or six copies of something that seems to be incredibly valuable. Too late for an insurance claim on that basement flood.
Haha it sucks actually - I love talking about this stuff, but I know I really need to save it for my talk so I'm tearing myself to pieces trying to keep quiet.
I guess one thing I will say is that our definition of programming is all over the place right now and that in order for us to get anywhere we need to scale it back to something that simplifies what it means to program. We're bogged down in so much incidental complexity that the definitions I hear from people are convoluted messes that have literally nothing to do with solving problems. That's a bad sign.
My thesis is that, given the right definition, all of a sudden things magically "just work" and you can use it to start cleaning up the mess. Without giving too much away I'll say that it has to do with focusing on data. :)
I feel the same way. As a programmer, I feel like there are tons of irrelevant details I have to deal with every day that really have nothing to do with the exercise of giving instructions to a computer.
That's what inspired me to work on my [nameless graph language](http://nickretallack.com/visual_language/#/ace0c51e4ee3f9d74...). I thought it would be simpler to express a program by connecting information sinks to information sources, instead of ordering things procedurally. By using a graph, I could create richer expressions than I could in text, which allowed me to remove temporary variables entirely. By making names irrelevant, using UUIDs instead, I no longer had to think about shadowing or namespacing.
Also, by avoiding text and names, I avoid many arguments about "coding style", which I find extremely stupid.
I find that people often argue about programming methodologies that are largely equivalent and interchangeable. For example, for every Object Oriented program, there is an equivalent non-object-oriented program that uses conditional logic in place of inheritance. For every curried program, there is an equivalent un-curried program that explicitly names its function arguments. In fact, it wouldn't even be that hard to write a program to convert from one to the other.
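For the currying case in particular, the conversion really is mechanical (a minimal sketch; the names are just illustrative):

    -- Curried and uncurried forms of the same function, plus the
    -- mechanical conversions between them from the Prelude.
    addCurried :: Int -> Int -> Int
    addCurried x y = x + y

    addUncurried :: (Int, Int) -> Int
    addUncurried (x, y) = x + y

    main :: IO ()
    main = print ( addCurried 2 3            -- 5
                 , uncurry addCurried (2, 3) -- 5, the curried version converted
                 , curry addUncurried 2 3 )  -- 5, and converted back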
I'm pretty excited about the array of parallel processors in the presentation though. If we had that, with package-on-package memory for each one, message passing would be the obvious way to do everything. Not sure how to apply this to my own language yet, but I'll think of something.
I have. I should play with them more, since I don't quite get DataFlow yet.
I'm used to JavaScript, so that's what I based my language on. It's really a traditional programming language in disguise, kinda like JavaScript with some Haskell influence. It's nothing like a dataflow language. On that front, perhaps those languages are a lot more avant-garde than mine.
> I'm pretty excited about the array of parallel processors in the presentation though. If we had that, with package-on-package memory for each one, message passing would be the obvious way to do everything.
Chuck Moore, the inventor of Forth, is working on these processors.
>By making names irrelevant, using UUIDs instead, I no longer had to think about shadowing or namespacing.
I've been trying to do something similar with a pet language :) Human names should never touch the compiler, they are annotations on a different layer.
But writing an editor for such a programming environment with better UX and scalability than a modern text-based editor is... an engineering challenge.
It's not perfect, and making lambdas is still a little awkward because I haven't made them resizable. Also, eventually I'd like the computer to automatically arrange and scale the nodes for you, for maximum readability. But I think it's pretty fun to use. It'd probably be even more fun on an iPad.
I'd love to make my IDE as fun to use as DragonBox
I think it's really nice! Usually these flow-chart languages have difficult UI, but this one is pretty easy to mess around in.
It would be good if, while clicking and dragging a new connection line that will replace an old one, the latter's line is dimmed to indicate that it will disappear. Also, those blue nodes need a distinguishing selection color.
It sounds like you're aiming more toward a fun tablet-usable interface, but:
Have you thought about what it would take to write large programs in such an editor? For small fun programs a graph visualization is cool, but larger programs will tend toward a nested indented structure (like existing text) for the sake of visual bandwidth, readability of mathematical expressions, control flow, etc.
Use the arrow keys to move the box around. I suppose that's still a bit primitive, but I'll make some more involved programs once I fix up the scoping model a bit.
When I first started on this project, I thought at some point I would need to make a "zoom out" feature, because you might end up making a lot of nodes in one function. However, I have never needed this. As soon as you have too much stuff going on, you can just box-select some things and hit the "Join" button to get a new function. The restricted workspace actually forces you to abstract things more, and the lack of syntax allows you to reduce repetition more than would be practical in a textual language.
For example, in textual languages, reducing repetition often requires you to introduce intermediate variables, which can actually make the program's text longer, so people avoid doing it. However, in my language you get intermediate variables by connecting two sinks to the same source. The addition to program length is hardly noticeable.
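(For comparison, here is that trade-off in a textual language -- a tiny Haskell sketch, nothing to do with the graph editor itself: factoring out the shared subexpression forces you to invent a name and adds a line, even though the meaning is identical.)

    -- Repeated subexpression vs. a named intermediate.
    costRepeated :: Double -> Double -> Double
    costRepeated w h = (w * h) / 2 + (w * h) * 0.1

    costNamed :: Double -> Double -> Double
    costNamed w h = base / 2 + base * 0.1
      where base = w * h   -- the extra name and line the text forces on you

    main :: IO ()
    main = print (costRepeated 3 4, costNamed 3 4)   -- (7.2, 7.2)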
I'd like to try LabVIEW, but doesn't it cost a lot of money? I guess I'll sign up for an evaluation copy.
The closest things to my language that I have seen are Kismet and UScript. Mine is different though because it is lazily evaluated and uses recursion as the only method of looping.
Some other things that look superficially similar such as Quartz Composer, ThreeNode, PureData, etc. are actually totally different animals. They are more like circuit boards, and my language is a lot more like JavaScript.
Nice, but still classical programming. I personally think if statements are the problem. The easiest languages are trivial ones with no branching. Not Turing complete, but they rock when applicable, e.g. HTML, G-code.
That's true. I intended it to be feature-comparable with JavaScript, since I think JavaScript is a pretty cool language, and that is what it is interpreted in.
I don't think it is possible to make a program without conditional branches.
Somebody posted a link below about "Data Driven" design in C++. In it was an example of a pattern where each object has a "dirty" flag, which determines whether it needs processing, but they found that failing branch prediction here took more cycles than simply removing the branch.
My thought was, instead, what if you created two versions of that method -- one to represent when the dirty flag is true, and another to represent when the dirty flag is false -- and then instead of toggling the dirty flag, you could change the jump address for that method to point to the version it should use. If this toggle happens long enough before the processor calls that method, you would remove any possibility of branch prediction failure =].
I have no idea if this is practical or not, but it is amusing to consider programs that modify jump targets instead of using traditional conditional branches.
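(To make the idea concrete at the language level -- purely a sketch of the control-flow shape, with made-up names, and not a claim about what the hardware does with the resulting indirect call -- you can model "changing the jump target" as swapping the stored action instead of branching on a flag:)

    import Data.IORef

    -- Instead of storing a "dirty" Bool and branching on it at every call,
    -- store the action to run and swap it when the state changes.
    newtype Object = Object { step :: IORef (IO ()) }

    processDirty, processClean :: IO ()
    processDirty = putStrLn "recompute and render"
    processClean = pure ()                 -- nothing to do

    markDirty, markClean :: Object -> IO ()
    markDirty o = writeIORef (step o) processDirty
    markClean o = writeIORef (step o) processClean

    -- The call site has no conditional at all; it just runs whatever is stored.
    runStep :: Object -> IO ()
    runStep o = readIORef (step o) >>= id

    main :: IO ()
    main = do
      o <- Object <$> newIORef processClean
      runStep o          -- prints nothing
      markDirty o
      runStep o          -- prints "recompute and render"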
In actual compiled code, conditional branches are translated into jumps to different targets, and those targets are encoded inline with the instructions. Making the target modifiable would mean fetching it from a register (or worse, memory) and delaying execution until the fetch completes (several cycles minimum on a pipelined machine). With branch prediction, the processor guesses the outcome of the inline conditional branch ahead of time, so we avoid the costly indirect jumps.
I think we more commonly use the latter, which tries to guess which way the code will branch and load the appropriate jump target. It's actually typically very successful in modern processors.
I think we need to shift to a different model ... like liquid flow. Liquid dynamics are non-linear (i.e. computable), yet a river has no hard IF/ELSE boundaries; it has regions of high pressure, etc., which alter its overall behaviour. You can still compute using (e.g. graph) flows, but you don't get the hard edges which are a cause of bugs. Of course it won't be good for building CRUD applications, but it would be natural for a different class of computational tasks (e.g. material stress, neural networks).
(PS: AngularJS does all that dirty-flag checking, if you like that approach)
I don't get it, why are you afraid of scooping yourself?
In the same way that HN frowns upon stealth startups, shouldn't we frown upon 'stealth theories'? If your thoughts are novel and deep enough, revealing something about them will only increase interest in your future talks, since you are definitionally the foremost thinker in your unique worldview. If the idea fails scrutiny in some way, you should want to hear about it now so you can strengthen your position.
What's the downside, outside of using mystery to create artificial hype?
I can't speak for the thread starter, but one downside to prematurely talking about something is confusion. Half-formed thoughts, rambling imprecise language, etc. can create confusion for his audience. The process of editing and preparing for a talk makes it more clear and concise. Maybe he is not yet ready to clearly communicate his concepts.
It seems like one large breakthrough in programming could simply be using the features of a language in a manner that best suits the problem. That's what I get from your blog post: design for what makes sense - not for what looks normal during a review. One thing I envy from LISP is that there seem to be few 'best practices' that ultimately make our applications harder to modify.
I've been thinking a lot about such issues too; particularly the pain points I have when ramping up against new systems. What information is missing that leaves me with questions? Can code deliver something thorough enough to be maintainable as a single source of truth?
I think the differences between reading and writing code are as big as those between sending and receiving packets. It's difficult to write code that conveys the background information in your head that drove the decisions. Not only that, but you also have to juggle logic puzzles as you're doing it. And on the other side, you have to learn new domain languages (or type hierarchies), as well as what the program is supposed to do in the first place.
I think the idea of interacting with code as you build it is great, but how can we do that AND fix the information gap at the same time?
For example, people do seem to assume that programming must involve, in some way, coding. Do we really need to code in some programming-language to be programming?
Changing security settings in a browser, for example, leads to quite different behaviors of the program. Isn't the user of the browser programming, since they change the behavior of the program?
And this leads to....
> with focusing on data
If we focus on data, and hopefully better abstractions for how to manipulate that data, then wouldn't any user be able to alter a program, because they can adjust "settings" at almost any point within the program, in real time?
Wouldn't this then enable a lot more people to become programmers?
The difference between setting parameters and programming is obviously that programming allows the creation of new functions.
"Coding" is literally the act of translating a more-or-less formally specified program (that is, a set of planned actions) into a particular computer language.
However, if being a programmer were only like being a translator, programming wouldn't be too hard for mere mortals. It's the other part -- the one that involves methods, logic, and knowledge like the CAP theorem -- that they have problems with. The fact is, not everyone can rediscover the bubble sort algorithm like we all here did when we were 8 or so. That's why we are programmers, and that's why they are not (and don't even want to be -- but they are nonetheless nice people and have other qualities, like having money). And these problems don't vanish if you switch from control flow to data flow or some bastardized flow; they just change shape.
As some people have mentioned in this HN post, it is hard to define what programming means.
A program is the result of the automation of some process (system) that people have thought up (even things that couldn't exist before computers). Programming is the act of taking that process (system) and describing it in a computing device of some kind.
Programming currently requires some kind of mapping from the "real world" to the "computer world". The current mapping is done primarily with source code. So, it currently seems that people who are good at programming are good at mapping from the "real world" into the "computer world" via coding.
You seem to be making the point that some people are just good at programming because they can do things like "re-discover the bubble sort algorithm" or understand CAP theorem. These are very domain specific problems.
For people who are able to "re-discover inventory control management" they would do a great job of automating it (programming) if they had an easier way to map that process (system) to a computing device.
The ultimate goal (other than maybe AI) is a 1-to-1 mapping between a "real world" process (system) and a computing device that automates it.
I'm working on a system based on Bret Victor's "Inventing on Principle" talk. I believe that to achieve such a system, you need a way to add semantics to code, as well as to make use of proofs, so that you have enough constraints for the environment to be seamlessly self-aware while remaining extensible.
I'm curious what you mean by data though. Is it data in the "big data" sense? What I mean is, are we talking about gathering a lot of data on coding? My approach is based on that, anyway: lots of data on code with a number of different analyzers (static and dynamic) that allows for extraction of common idioms and constraints, while allowing for the system to more easily help the user.
Of course, there's no magic and a lot of times I reach dead-ends, and while I'm eager to have enough to show the world, progress has been kinda slow lately.
Looking forward to your talk; be sure to link it here on HN.
I'm still not sure how you'll make LightTable work (well, scale); hopefully it involves a new programming model to make the problem more well-defined?
We had some great discussions at LIXD a couple of weeks ago, wish you could have been there. Everyone seems to be reinventing programming these days. We are definitely in competition to some extent. The race is on.
Great. I'll send you something about my new programming model when it is written up decently, but it has to do with being able to modularly re-execute effectful parts of the program after their code or data dependencies have changed.
@ibdknox: I started programming Clojure in LightTable the other day. How does programming differ from editing text? Can we use gestures to navigate and produce code? I'm working on visual languages, which work well in certain domains but fail when one needs precise input. To me, the language constructs we use are inherently tied to the production of code (Emacs + LISP). There is a very good reason the guy who built Linux came up with a great versioning system. It is fair to say that Bret does not quite know yet what he is talking about, as he says himself. It's as if something big is going to happen and it is hard to say what exactly it is. I hope LightTable or something like it replaces Emacs and Vim in a couple of years. I think that being able to code in the browser will turn out to be unbelievably important (although it doesn't look that useful today).
This made me feel like you were going to write the content and then randomly post it in the comments section of someone's blog on clocks as garden decorations or something like that.
Can you attend Splash in October? I know academic conferences are probably not your pace, but there will be some good talks and it might be worth your time for networking with some of the academic PL types.
I very much enjoyed Bret's talk, but the visual programming part of his talk was rather half-baked. I say this as someone who has done visual coding professionally in the past. People have been trying to crack the "drawing programs" nut for decades. It's not a forgotten idea. It's so not forgotten that there is a wikipedia page listing dozens of attempts over the years: http://en.wikipedia.org/wiki/Visual_programming_language.
The reason we still code in text is that visual programming is not a hard problem -- it's a dozen hard problems. Think about all of the tools we use to consume, analyze, or produce textual source code. There are code navigators, searchers, transformers, formatters, highlighters, versioners, change managers, debuggers, compilers, analyzers, generators, and review tools. All of those use cases would need to be fulfilled. Unlike diagrams, text is a convenient serialization and storage format: you can leverage the Unix philosophy and use the best of breed of the tools you need. We don't have a lingua franca for diagrams like we do for text files.
It's not due to dogma or laziness that we use text to write code. It's because the above list of things are not trivial to get right and making them work on pictures is orders of magnitude harder than making them work with text.
Modern IDEs are not text editors. They are heavily augmented with syntax highlighting, completion, code-folding, refactoring, squiggly red lines. This requires an IDE that understands your language and effectively parses the tokens as you type. I would suggest that a lot of the features we talk about have already arrived; they're just not explicit, and they're tremendously complex to implement, simply because programmers are old die-hards who refuse to try different ideas.
Then there is the issue of reasoning about working systems. The job of the IDE ends when the software is built. If you encounter a bug, though, having a runtime with enough smarts that you can go in and poke around allows, and even encourages, experimentation, and improves comprehension.
Finally, there's the issue of code organization. A well-architected piece of software is tidy, because everything is in the right place. While a language-aware IDE can make sure you put the words in the right order, it has no concept of the architecture. A higher-level DSL supported directly by the development environment might help. If we could somehow raise the abstraction level of the IDE, certain classes of programming problems could be as easy as filling in a form.
> Modern IDEs are not text editors. They are heavily augmented with syntax highlighting, completion, code-folding, refactoring, squiggly red lines.
How do any of those features make something 'not a text editor'? I'm pretty sure vim is still a text editor, and my vim does all of those things, with the possible exception of refactoring (and I'm not sure I want a program doing my refactoring in the first place).
A plane with auto-pilot may be still a plane, but the pilot's ability has been heavily augmented.
Incidentally, I spoke to a guy who had been developing Java in Emacs for 12 years. He tried Eclipse a month ago and was won over. Large languages like Java -- rightly or wrongly -- benefit from having tight tool integration.
Most people can't describe what they want to a human let alone a computer. This is the skill of the programmer. Coding comes second.
If we can teach kids to analyse process, teaching them a programming language, whatever the paradigm, is trivial.
I have no idea what coding will look like in 40 years (although a very solid percentage of it will be no different to now) but it will be driven as much by fashion as by any perceived need to democratise it.
Of course, the alternative view is that programming is already democratised - I have seen the future and it is VB in Excel spreadsheets. /slits wrists
Languages aren't, per se, elite, coding techniques are. Lots of ordinary non-programmers can successfully code to some degree in C, C++, Python, etc. But there are many advanced techniques that only experienced developers will be able to grapple with. Anyone can buy and use a hammer, but that doesn't mean that using a hammer makes anyone a carpenter.
Do I think that in 40 years or 100 years we will still be coding in a way that is compatible with using vim? Probably. And I don't see how that makes programming less "democratic".
> But there are many advanced techniques that only experienced developers will be able to grapple with.
At one time iterators were considered a technique and a design pattern. Now, they are a part of most languages. They are transparent. They are taken for granted.
Currently, programming takes place within the domain of software development. It is not surprising then that we value advanced techniques within the industry. Just like there are advanced techniques that are used within the domains of electrical engineering, mechanical engineering and biology (just to name a few).
As we get better at our job as programmers, we further make our "advanced techniques" transparent to those that use our systems. Sure, currently, these systems are usually very domain specific. However, there is nothing to say that we can not build better software development environments which are both non-domain specific and, at the same time, hide the underlying complexities that require experienced developers.
In my opinion, these development environments would use a type of visual language enabling a lot more people to program. I am biased because this is a problem I've been working on for quite a few years now.
Programming is expressing what you want. It's much easier to express yourself using language than by drawing. Democratizing programming by removing coding is just like democratizing literature by replacing all the words with pictures.
>It's much easier to express yourself using language than by drawing.
I've done a lot of programming at the white board and it involved a lot of drawing. And I suck at drawing. But I was able to get my ideas across to others.
And visual programming solves this how? People will have to learn how to connect arcane bricks instead of writing arcane text. Programming isn't going to be democratised until we approach natural language interpreters.
Every step in the development process moves those people with the domain-expertise/vision/creative process/etc. further from the solution. Removing steps, like coding, brings the solution closer to the domain experts/visions/create process/etc.
Visual programming makes it a lot easier for people to work collaboratively. For example, those with the domain-expertise can work closer with those that have programming experience in a visual language.
Just a few ways that visual languages could democratize programming.
Visual programming is not going to solve this unless the visualisation matches their domain-specific notation instead of a visual graph that has separate edges for "then" and "else". At that point, why bother with the visual notation instead of just putting it in text they understand equally well?
Ease of Use - It is not possible to have syntax errors. (Logical errors/misunderstandings of the problem being solved are still possible).
I'm not quite sure what "text they understand" means. Are you talking about natural language interpreters (as you mention above)? That would/will be some cool technology and my feeling is that it is a "next step" in software evolution. Maybe, more likely, the next step is a planned or constructed language interpreter (http://en.wikipedia.org/wiki/Constructed_language). Natural language is so tricky (but maybe not for very domain specific problems).
I mean text that approaches natural language if not natural language. I think something like Inform 7 is far more likely to be adopted by that audience than a visual graph that is just an abstraction of loops and functions. I think the benefits of a textual language matching a domain are much greater than a general-purpose visual programming language.
If it targets, say, a visual learner, I think a graph language won't help unless they are already visualising the program as a graph.
What I think is that you are saying something that people (and not just "people," but some of the most brilliant people in the history of computing) have been saying for 20 to 40 years.
I've always liked Connections: https://en.wikipedia.org/wiki/Connections_(TV_series). What is so great about this show is that James Burke is able to point out how new and amazing ideas come about by connecting a few, seemingly unrelated, concepts to make a new idea.
In my opinion, the ability of someone to take a few observations about the world around them and turn them into something new and amazing is what makes them brilliant.
We are now at a point in history where a lot of people are able to take in a lot of different ideas leading to a lot of new discoveries (one of the reasons why I think new technology is now being created at an almost exponential rate).
You seem to be implying that a particular problem can't be solved because brilliant people in the past have not solved it yet. In my opinion, problems aren't solved yet because someone has not "connected the dots" yet.
> We don't have a lingua franca for diagrams like we do for text files.
What is UML, then? If you feel stuck with this then maybe you need to look outside the text = code bubble and get some input on tool design from other sources. I agree that text is a convenient serialization and storage format, but it's a terrible design and analysis medium.
I mean, consider CSound, which is a tool for writing music with computers that has a venerable heritage going back to the 1970s. You have one set of code for defining the characteristics of the sound, and another for defining the characteristics of the notes you play with those sounds: http://www.csounds.com/man/qr/score.htm and http://www.csounds.com/manual/html/index.html
CSound is a moderately good teaching tool, and given its heritage it's an impressive piece of technology. But nobody writes music in Csound except a few computer music professors and the students in their departments that have to do as part of their assignments, and 99% of music composed in CSound is a) dreadful and b) could have been done much faster on either a modular synthesizer or with Max/MSP. Electronic musicians feel the same way about CSound that you as a programmer would feel about an elderly relative that keeps talking about when everything was done with vacuum tubes and toggle switches...you respect it but it seems laughably primitive and has nothing to do with solving actual problems. The very few people that need low-level control on specific hardware platforms work in C or assembler.
I think this is pretty relevant here because one of Bret Victor's more impressive achievements is having written some very impressive operating software for a series of synthesizers from Alesis. I'd be pretty astonished if he even considered CSound for the task.
Far from being stuck in a bubble, I actually spent a couple of years developing code in a UML-driven development environment (as in, I spent my days drawing UML diagrams that automatically turned into executable code). First of all, you cannot write any nontrivial program in UML alone. It is not nearly specific enough. UML is to a working program as a table of contents is to a technical manual. And in case you think I'm extrapolating from one bad experience, I've also used LabVIEW and have seen the parallel difficulties in that language.
Now, I agree that higher levels of abstraction will be needed in the future, but I disagree that visual programming is an obviously superior abstraction. In fact, I believe that people have been earnestly barking up that tree for decades with little success for reasons unrelated to old-fashioned attitudes. There are practical and technical reasons why developing visual programming tools and ecosystems will always be more difficult than developing text-based ones.
Take merging, for example. Merging two versions of a source file is many times over a solved problem (not that there aren't new developments to be made). In contrast, merging two versions of a UML diagram is very much a manual process (to the extent that it's possible at all). Now consider creating a change management tool that allows you to branch and merge UML diagrams. That is orders of magnitude harder yet. These are essential and straightforward use cases that are much more complex in a visual medium. Without these basic features, visual programming will not scale well to even medium-size teams.
I can go into more detail about issues with visual programming if I still haven't made my case. And I would love to hear from people with visual programming experience that have contradicting opinions. It's always possible that I missed something.
I appreciate the additional context and totally get where you're coming from. The only nitpick I'd make is this:
> Merging two versions of a source file is many times over a solved problem
Granted -- but isn't this also a limiting factor? It's not that I don't think anything should ever be reducible to code form, but why is it that a visual mapping of a complete program isn't a standard everyday tool? I mean, it's all very well that we have syntax highlighters showing keywords, variables and so on, but why is it that when I open a program there isn't a tool to automatically show me loops, arrays and so on?
Loops are one of the simplest programming structures; 90% of loops look like:
LOOP foo FROM bar to baz:
something
something
something
profit
foo = foo + 1
END LOOP
I mean, software engineering shouldn't be about syntax, it should be about structure, and yet there don't seem to be many tools around that open up a source file and build branching diagrams and loop modules automatically. Why is that? Why don't we even have structural highlighting rather than syntax highlighting?
Can you elaborate? I see the structure in the indenting. My IDE (Visual Studio) has little lines and + boxes that allow me to collapse and expand code like this. It's useless, because for the most part I care what "something" is, and the collapsed code is not replaced with a nice pseudocode "frange the kibbleflits" statement. I have tools that can generate diagrams showing me class hierarchies, call stacks, and so on. I rarely (almost never) find them useful. Maybe you have something different in mind?
> It's because the above list of things are not trivial to get right...
It is a hard problem but solvable. We've been working on it for a few years. The "hardest" part was figuring out how to design away the need for complex interfaces (complex APIs). Once we solved this problem, it was a lot easier to build out a visual object language and associated framework (or lack thereof).
Something that is a bit difficult to figure out in a visual language is the merging of branches.
I would like to get your input on your experiences with visual coding in the past.
In 2040 someone will discover Haskell, shed tears over why C#++.cloud is so widespread in the industry instead, and use that to conclude the world is in a sorry state. Seriously, don't compare what was published in papers 50 years ago with what business uses today; compare it with what is in papers now. There are lots of interesting things going on all the time -- when was the last time you even checked? Probabilistic programming? Applications of category theory to functional programming? Type theory? Software transactional memory?
Woody Allen made this great movie some time ago, "Midnight in Paris", in which the main character, living in the present, dreams of moving back to the 1920s as the best time for literature ever. When the occasion to really go back appears, though, he discovers the writers of the 1920s thought the best literature was done in the 1890s, and so he has to go back again, then again, ... This talk is like that: sentiment blinding a sober assessment.
I feel like you missed the fact that he is obviously aware of the market/hardware reasons that caused programming to evolve in this manner, but it doesn't change the fact that this current model of programming may be a false evolutionary pathway.
He is pointing out experts tend to deny a perfectly valid way of exploring technology, because it doesn't follow the defined community-accepted standards built on assumptions of hardware and efficiency.
He's not knocking the current model; he's not even saying these other models shouldn't have died. He's saying they shouldn't be forgotten and should often be reexamined in light of new technology which might make a better home for them.
Yes, he is actually, repeatedly. For instance, at 9:30 in the video: "There won't be any, like, markup languages, or stylesheet languages, right? That would make no sense".
Industry is screwy. My perspective is always from heavy industry, where we consider upgrading to PLCs that run on Pentium 90 architecture _amazing_. There's good reason for that reactionism; never fix what ain't broke.
But the same philosophy is used in the softer analytics, where using the state of the art really is better. Sure, giant clunky Excel sheets _work_, but we can build far better charting tools. We can run statistics more easily than MiniTab. Data can be interactive, searchable, and computable instead of rituals and incantations fed to lousy proprietary one-off enterprise buzz-word-a-tron programs.
We _could_ be using analytical tools that shape themselves to the data. Instead, we have to convince management that it's _possible_ to analyze and map data easily in these new ways. But once they see how much more powerful these ideas are - how much faster and cheaper they work - lower mgt. is thrilled. And if upper management is profit oriented, they'll like it too.
But by this argument, no one is ever allowed to criticize the current state of the world compared with the past, lest he be accused of nostalgia-clouded vision.
I don't follow. The notion put forward was "hey people who like haskell, stop improving haskell and start adding visual studio support or else $BIGCORP won't use haskell". Nobody cares if $BIGCORP uses haskell, they are free to cripple themselves if they want. If $BIGCORP actually seriously wanted to use haskell, they could afford to pay someone to add visual studio support for haskell.
Well, there must be a reason why research keeps discovering new ways of computing and programming while the industry is stuck with outdated methods.
I loved that movie, but I don't think it is too relevant here. I mean, you can rediscover and read any literature written in the 20s or the 1890s, which is exactly what our field is not doing.
It's simple: industry and academia are too far apart today.
Look at the languages that come out of academia, then look at the languages that have been invented over the last few decades which have gained traction. The latter list includes a lot of crazy items, things like Perl, PHP, Javascript, Ruby, and Python.
Some of them with their merits but for the most part hugely flawed, in some cases bordering on fundamentally broken. But what do they have in common? They were all invented by people needing to solve immediate problems and they are all designed to solve practical problems. Interestingly, Python was invented while its author was working for a research institute but it was a side project.
The point being: languages invented by research organizations tend to be too distanced from real-world needs of everyday programmers to be even remotely practical. Which is why almost all of the new languages invented over the past 3 decades that have become popular have either been created by a single person or been created by industry.
LLVM and Scala come to mind as PL projects born in academia and enjoying wider adoption. Not all researchers are interested in solving the "real problems out there", but some do, and are successful at it.
I just watched most of this talk while a large C++ codebase was compiling, in the midst of trying to find one of many bugs caused by multiple interacting stateful systems, on a product with so much legacy code that it'll be lucky if it's sustainable for another ten years.
Like Bret's other talk, "Inventing on Principle", this talk has affected me deeply. I don't want this anymore. I want to invent the future.
"'The most dangerous thought you can have a creative person is to think you know what you're doing.'
It's possible to misinterpret what I'm saying here. When I talk about not knowing what you're doing, I'm arguing against "expertise", a feeling of mastery that traps you in a particular way of thinking.
But I want to be clear -- I am not advocating ignorance. Instead, I'm suggesting a kind of informed skepticism, a kind of humility.
Ignorance is remaining willfully unaware of the existing base of knowledge in a field, proudly jumping in and stumbling around. This approach is fashionable in certain hacker/maker circles today, and it's poison."
I think much of the motivation for developing new paradigms stems from growing frustration with tool-induced blindness, for lack of a better term. We spend much of our time chasing that seg-fault error instead of engineering the solution to the problem we're trying to solve.
A new programming paradigm allows us to reframe a problem in a different space, much like how changing a matrix's basis changes its apparent complexity, so to speak.
The ultimate goal, I think, is to come up with a paradigm that would map computational problems, without loss of generality, to what our primate brains would find intuitive. This lowers our cognitive burden when attempting to describe a solution, and also allows us to see more clearly what the cause of a problem may be.

For example, if you're a game developer and you find rendering problems caused by objects intersecting each other, but you're not sure where it happens, it'd be better to visualize them than to pore over a text dump of numerical vector coordinates. The abnormalities would present themselves clearly, even to a layman's eyes. I suspect this is what Victor is trying to get at.

Imagine, if you will, that you have a graphical representation of your code, and a piece of code that could potentially segfault shows up as an irregularity of some form (different texture, different color, different shape, etc.), so you can spot it and fix it right away. The irregularity is not the result of some static error analysis, but instead an emergent property of the graphical presentation rules (the mapping from problem space to graphic space). We're good at spatial visualization, so I wonder if it's valid to come up with a programming language that would leverage more of our built-in capability in that area. This may seem like wishful thinking or even intractable (perhaps due to a certain perception limitation... which we have to overcome using more cognitive resources), but I certainly hope we'll get there in our lifetime.
> The ultimate goal, I think, is to come up with a paradigm that would map computational problems, without loss of generality, to what our primate brains would find intuitive.
One thing I can't help but noticing is that the majority of discussions regarding this talk are focusing on the examples presented.
I thought it was pretty clear that the talk wasn't about whether constraint-based solvers and visual programming environments were the "future of programming." It was a talk about dogma. Bret points out that none of the examples he mentioned are inherently important to what he was trying to get across: they were just examples. The point he was trying to elucidate was that our collective body of knowledge limits our ability to see new ways of thinking about the problems we face.
It is at least somewhat related to the adage that when you have a hammer, every problem looks like a nail. He's just taking a historical view and using irony to illustrate his point. When computer technology reached a certain level of power, there was a blossoming garden of innovative ideas, because the majority of people didn't yet know what you couldn't do.
What I think he was trying to say, and this is partly coloured by my own beliefs, is that beginner's mind is important. Dogma has a way of narrowing your view of the world. Innovation is slow and incremental but there's also a very real need to be wild and creative as well. There's room for both and we've just been focusing on one rather than the other for the last 40 years.
In this discussion I've been trying to make the point that he's missed the mark even in the idea that developer attitude is the inherent barrier preventing these breakthroughs. I believe he's stealing bases here. At least with respect to visual programming, there is objective evidence (that is easily google-able) that this problem is actively being tackled but with very little success. Active and recently failed projects seem to be glaring counterexamples to his broader point, at least with respect to the visual programming domain.
I suspect that my point about presuming developer attitudes are the biggest problem here can be more broadly applied, though I do not have enough experience with constraint-based solvers and his other examples to do more than wildly speculate.
At the end of the video he warns of the dangers of "dogma".
He looks really nervous and impatient in this talk. He seems afraid that it won't be well received. If so, it is interesting to note that this is what dogma in fact leads to... repression of new ideas, fear of free thinkers and the stagnation of true scientific progress. It means guys like Bret Victor will feel awkward giving a talk that questions the status quo.
"Breakthroughs" do not happen when we are all surrounded by impenetrable walls of dogma. I wonder if we today could even recognize a true breakthrough in computing if we saw one. The only ones I see are from the era Bret is talking about. What happens when those are forgotten?
My friends, there is a simple thing I learned in another discipline outside of computing, where I witnessed people doing what others thought impossible: the power of irreverence. This is where true innovation comes from.
It means not only questioning whether you know what you are doing, but questioning whether others do. That frees you up to work on what you want to work on, even when it is in a different direction than everyone else. That is where innovation comes from: irreverence.
Very good summary of the state of the art in the early 70s.
His analysis of the "API" problem reminds me of some of the ideas Jaron Lanier was floating around about ten years ago. I can't recall the name of it, but it was some sort of biologically inspired handshake mechanism between software 'agents'.
What I think such things require is an understanding of what is lacking in order to search for it; as near as I can tell, that requires some fashion of self-awareness. This, as far as I can conceive, recurses into someone writing code, whether it be Planner or XML. But my vision is cloudy on such matters.
I should note that I think Bret is one of the leading thinkers of his (my) generation, and I have a lot of respect for his ideas.
Enough already! Could anyone with $100 million give this guy a team of 100 PhDs to create the new software revolution?
This guy is not a good or great or fabulous computer scientist; this guy is something else entirely. He's a true creative thinker. He doesn't have a vision, he's got tons of them. Every subject he starts thinking about, he comes up with new ideas.
He shouldn't be doing presentations, he should run a company.
Based on his personal writings, it seems like he prefers to be left alone to work on his ideas. It does not seem like he wants to run a company, or really even work with others.
A company? Then we would have just one solution to the problems he sees. I think that just throwing a bunch of ideas at all of us is more effective. We can all think independently and come up with more novel ways to solve those problems.
An interesting talk, and certainly entertaining, but I think it falls very short. Ultimately it turns into typical "architecture astronaut" navel-gazing. He focuses on the shortcomings of "traditional programming" while imagining only the positive aspects of untried methods. To be honest, such an approach is childish and unhelpful. His closing line is a good one, but it's also trite, and the advice he seems to give leading up to it (i.e. "let's use all these revolutionary ideas from the '60s and '70s and come up with even more revolutionary ideas") is not practical.
To pick one example: he derides programming via "text dump" and lauds the idea of "direct manipulations of data". However, there are many very strong arguments for using plain-text (read "The Pragmatic Programmer" for some very excellent defenses of such). Moreover, it's not as though binary formats and "direct manipulations" haven't been tried. They've been tried a great many times. And except for specific use cases they've been found to be a horrible way to program with a plethora of failed attempts.
Similarly, he casually mentions a programming language founded on unique principles designed for concurrency; he doesn't name it, but that language is Erlang. The interesting thing about Erlang is that it is a fully fledged language today. It exists, it has a ton of support (because it's used in industry), and it's easy to install and use. And it also does what it's advertised to do: excel at concurrency. However, there aren't many practical projects, even ones that are highly concurrency dependent, that use Erlang. And there are projects, such as couch db, which are based on Erlang but are moving away from it. Why is that? Is it because the programmers are afraid of changing their conceptions of "what it means to program"? Obviously not, they have already been using Erlang. Rather, it's because languages which are highly optimized for concurrency aren't always the best practical solution, even for problem domains that are highly concurrency bound, because there are a huge number of other practical constraints which can easily be just as or more important.
Again, here we have an example of someone pushing ideas that seem to have a lot of merit in the abstract but in the real world meet with so much complexity and roadblocks that they prove to be unworkable most of the time.
It's a classic "worse is better" scenario. His insult of the use of markup languages on the web is a perfect example of his wrongheadedness. It took me a while to realize that it was an insult because in reality the use of "text dump" markup languages is one of the key enabling features of the web. It's a big reason why it's been able to become so successful, so widespread, so flexible, and so powerful so quickly. But by the same token, it's filled with plenty of ugliness and inelegance and is quite easy to deride.
It's funny how he mentions unix with some hints of how awesome it is, or will be, but ignores the fact that it's also a "worse is better" sort of system. It's based off a very primitive core idea, everything is a file, and very heavily reliant on "text dump" based programming and configuration. Unix can be quite easily, and accurately, derided as a heaping pile of text dumps in a simple file system. But that model turns out to be so amazingly flexible and robust that it creates a huge amount of potential, which has been realized today in a unix heritage OS, linux, that runs on everything from watches to smartphones to servers to routers and so on.
Victor highlights several ideas which he thinks should be at the core of how we advance the state of the art in the practice of programming (e.g. goal based programming, direct manipulations of data, concurrency, etc.) but I would say that those issues are far from the most important in programming today. I'd list things such as development velocity and end-product reliability as being far more important. And the best ways to achieve those things are not even on his list.
Most damningly, he falls into his own trap of being blind to what "programming" can mean. He is stuck in a model where "programming" is the act of translating an idea to a machine representation. But we've known for decades that at best this is a minority of the work necessary to build software. For all of Victor's examples of the willingly blind programmers of the 1960s who saw things like symbolic coding, object oriented design and so forth as "not programming" and more like clerical work, he makes fundamentally the same error. Today testing, integration, building, refactoring and so on are all hugely fundamental aspects of prototyping and critically important to end-product quality as well as development velocity. And increasingly tooling is placing such things closer and closer to "the act of programming", and yet Victor himself still seems to be quite blind to the idea of these things as "programming". Though I don't think that will be the view among programmers a few decades down the road.
I see where you are coming from, but I think you're getting mired in some of the details of the talk that perhaps rub you the wrong way and are therefore missing the larger point. Bret in all his talks is saying the same thing: take an honest look at what we call programming and tell me that we've reached the pinnacle of where we can go with it.
Whether or not you like this specific talk or the examples he has chosen, I think you would probably agree there is a lot of room for improvement. Bret is trying to stir the pot and get some people to break out and try radical ideas.
Many of the things he talks about in this presentation have been tried and "failed" but that doesn't mean you never look at them again. Technology and times change in ways that can breathe life into early ideas that didn't pan out initially. Many forget that dynamic typing and garbage collection were once cute ideas but failures in practice.
He doesn't mention things like testing, integration, building, and refactoring because they are all symptoms of the bigger problem that he's been railing against: namely that our programs are so complex we are unable to easily understand them to build decent, reliable software in an efficient way. So we have all these structures in place to help us get through the complexity and fragility of all this stuff we create. Instead we should be focusing on the madness that causes our software to balloon to millions of lines of incomprehensible code.
Please forgive my liberties with science words. :)
The purpose of refactoring is to remove the entropy that builds up in a system, organization, or process as it ages, grows in complexity, and expands to meet demands it wasn't meant to handle. It's not a symptom of a problem; it's acknowledgement that we live in a universe where energy is limited and entropy increases, where anything we humans call a useful system is doomed to someday fall apart—and sooner, not later, if it isn't actively maintained.
Refactoring is fundamental. Failure to refactor is why nations fall to revolutions, why companies get slower, and why industries can be disrupted. More figuratively, a lack of maintenance is also why large stars explode as supernovas and why people die of age. And as a totally non-special case, it's why programs become giant balls of hair if we keep changing stuff and never clean up cruft.
A system where refactoring is not a built-in process is a system that will fail. Even if we automate it or we somehow hide it from the user, refactoring still has to be there.
What if programming consists of only refactoring? Then there is no separate "refactoring step", just programming and neglect. This is what Bret Victor is getting at. It is about finding the right medium to work in.
We have that already i.e. coding to a test. It sucks because you never seem to grasp the entirety of a program but instead just hack until every flag is green. It doesn't prevent entropy either. Only thing that prevents code entropy is careful and deliberate application of best practices when needed i.e. a shit ton of extreme effort.
Sure, but I think he ends up missing the mark. Ultimately his talk boils down to "let's revolutionize programming!" But as I said that ends up being fairly trite.
As for testing, integration, building, and refactoring I think it's hugely mistaken to view them as "symptoms of a problem". They are tools. And they aren't just tools used to grapple with explosions of complexity, they are tools that very much help us keep complexity in check. To use an analogy, it's not as though these development tools are like a hugely powerful locomotive that can drag whatever sorry piece of crap codebase you have out into the world regardless of its faults. Instead, they are tools that enable and encourage building better codebases, more componentized, more reliable, more understandable, etc.
Continuous integration techniques combined with robust unit and integration testing encourage developers to reduce their dependencies and the complexity of their code down as much as possible. They also help facilitate refactoring which makes reduction of complexity easier. And they actively discourage fragility, either at a component level or at a product/service level.
Without these tools there is a break in the feedback loop. Coders would just do whatever the fuck they wanted and try to smush it all together and then they'd spend the second 90% of development time (having already spent the first) stomping on everything until it builds and runs and sort of works. With more modern development processes coders feel the pain of fragile code because it means their tests fail. They feel the pain of spaghetti dependencies because they break the build too often. And they feel that pain much closer to the point of the act that caused it, so that they can fix it and learn their lesson at a much lower cost and hopefully without as much disruption of the work of others.
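To make that feedback loop concrete, here's a minimal sketch (my own toy example, not anything from the talk or this thread; the module and function names are hypothetical, and it assumes pytest is installed). The point is just that a fragile change is felt on the next commit, not months later during integration:

    # pricing.py -- hypothetical module under test
    def total_price(items, tax_rate=0.08):
        """Sum item prices and apply tax; reject bad input early instead of
        letting it propagate into some distant subsystem."""
        if any(p < 0 for p in items):
            raise ValueError("negative price")
        return round(sum(items) * (1 + tax_rate), 2)

    # test_pricing.py -- run by CI on every push
    import pytest
    from pricing import total_price

    def test_total_price_applies_tax():
        assert total_price([10.0, 5.0]) == 16.2

    def test_negative_prices_rejected():
        with pytest.raises(ValueError):
            total_price([-1.0])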
With any luck these tools will be even better in the future and will make it even easier to produce high quality code closer to the level of irreducible complexity of the project than is possible today.
These aren't the only ways that programming will change for the better but they are examples which I think it's easy for people to write off as not core to the process of programming.
Hrmmm ... you seem to still be tremendously missing the overarching purpose of this speech. Obviously, to delve proficiently into every aspect of programming through the ages and why things are the way they are now (e.g. what 'won' out and why) would require a course and not a talk.
You seem to think that the points brought up in this talk diminish the idea that things now DO work. However, what was stressed was avoiding the trap of discontinuing the pursuit of thinking outside the box.
I am also questioning how much of the video you actually paid attention to (note: I am not questioning how much you watched). I say this because your critique is focused on the topics that he covered in the earlier parts of the video and then (LOL) you quickly criticize him for talking about concurrency (in your previous comment)... I clearly remember him talking about programming on massively parallel architectures without the need for sequential logic control via multiplexing using threads and locks. I imagine, though, it is possible you did not critique this point because it is obvious (to everyone) that this is the ultimate direction of computing (coinciding with the end of Moore's law as well).
Ahhh now that’s interesting, we are entering an era where there could possibly be a legitimate use to trying/conceiving new methods of programming? Who would have thought?
Maybe you just realized that you would have looked extremely foolish spending time on critiquing that point? IDK … excuse my ignorance.
Also you constantly argue FOR basic management techniques and methods (as if that counters Bret's arguments) ... but you fail to realize that spatial structuring of programs would be a visual management technique in itself, one that could THEN have tools developed along with it that would be analogous to modern integration and testing management. But I won't bother delving into that subject as I am much more ignorant on this and more importantly ... I would hate to upset you, Master.
Oh and btw (before accusations fly) I am not a Hero worshiper … this is the first time I have ever even heard of Bret Victor. Please don’t gasp too loud.
I definitely get what OP was trying to say. Bret presented something that sounds a lot like the future even though it definitely isn't. Some of the listed alternatives, like direct data manipulation, visual languages, or non-text languages, have MAJOR deficiencies and stumbling blocks that prevented them from achieving dominance. Though in some cases it basically does boil down to which is cheaper and more familiar.
I think the title of Bret's presentation was meant to be ironic. I think he meant something like this.
If you want to see the future of computing just look at all the things in computing's past that we've "forgotten" or "written off." Maybe we should look at some of those ideas we've dismissed, those ideas that we've decided "have MAJOR deficiencies and stumbling blocks", and write them back in?
The times have changed. Our devices are faster, denser, and cheaper now. Maybe let's go revisit the past and see what we wrote off because our devices then were too slow, too sparse, or too expensive. We shouldn't be so arrogant as to think that we can see clearer or farther than the people who came before.
That's a theme I see in many of Bret's talks. I spend my days thinking about programming education and I can relate. The state of the art in programming education today is not even close to the ideas described in Seymour Papert's Mindstorms, which he wrote in 1980.
LOGO had its failings but at least it was visionary. What are MOOCs doing to push the state of the art, really? Not that it's their job to push the state of the art -- but somebody should be!
This is consistent with other things he's written. For example, read A Brief Rant on the Future of Interaction Design (http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...). Not only does he use the same word in his title ("future"), but he makes similar points and relates the future to the past in a similar way.
"And yes, the fruits of this research are still crude, rudimentary, and sometimes kind of dubious. But look —
In 1968 — three years before the invention of the microprocessor — Alan Kay stumbled across Don Bitzer's early flat-panel display. Its resolution was 16 pixels by 16 pixels — an impressive improvement over their earlier 4 pixel by 4 pixel display.
Alan saw those 256 glowing orange squares, and he went home, and he picked up a pen, and he drew a picture of a goddamn iPad.
[picture of a device sketch that looks essentially identical to an iPad]
And then he chased that carrot through decades of groundbreaking research, much of which is responsible for the hardware and software that you're currently reading this with.
That's the kind of ambitious, long-range vision I'm talking about. Pictures Under Glass is old news. Let's start using our hands."
Okay. Single fingers are an amazing input device because of dexterity. Flat phones are amazing because they fit in my pockets. Text search is amazing because with 26 symbols I can query a very significant portion of world knowledge (I can't search for, say, a painting that looks like a Van Gogh by some other painter, so there are limits, obviously).
Maybe it is just a tone thing. Alan Kay did something notable - he drew the iPad, he didn't run around saying "somebody should invent something based on this thing I saw".
Flat works, and so do fingers. If you are going to denigrate design based on that, well, let's see the alternative that is superior. I'd love to live through a Xerox kind of revolution again.
I've seen some of his stuff. I am reacting to a talk where all he says is "this is wrong". I've written about some of that stuff in other posts here, so I won't duplicate it. He by and large argues to throw math away, and shows toy examples where he scrubs a hard coded constant to change program behavior. Almost nothing I do depends on something so tiny that I could scrub to alter my algorithms.
Alan Kay is awesome. He did change things for the better; I'm sorry if you thought I meant otherwise. His iPad sketch was of something that had immediately obvious value. A scrubbing calculator? Not so much.
Hmm, didn't you completely miss his look, the projector, etc.? He wasn't pretending to stand in 2013 and talk about the future of programming. He went back in time and talked about the four major trends that existed back then.
No. I'm saying there is a reason those things haven't become reality. They have a much greater hidden cost than presented. It is the equivalent of someone dressing in the 20th-century garb of Edison and crying over the cruel fate that befell DC. Much like DC, these ideas might see a comeback, but only because the context has changed. Not being aware of history is one blunder, but failing to see why those things weren't realized is another.
I get it, I really do. And I'm very sympathetic to Victor's goals. I just don't buy it, I think he's mistaken about the most important factors to unlock innovation in programming.
His central conceit is that various revolutionary computing concepts which first surfaced in the early days of programming (the 1960s and '70s) have since been abandoned in favor of boring workaday tools of much more limited potential. More than that, that new, revolutionary concepts in programming haven't received attention because programmers have become too narrow-minded. And that is, quite simply, a fundamentally untrue characterization of reality.
Sure, let's look at concurrency, one of his examples. He bemoans the parallelization model of sequential programming with threads and locks as being excessively complex and inherently self-limited. And he's absolutely correct, it's a horrible method of parallelism. But it's not as though people aren't aware of that, or as though people haven't been actively developing alternate, highly innovative ways to tackle the problem every year since the 1970s. Look at Haskell, OCaml, vector computing, CUDA/GPU coding, or node.js. Or Scala, Erlang, or Rust, all three of which implement the touted revolutionary "actor model" that Victor brandishes.
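For anyone who hasn't run into it, here is a minimal sketch of the actor model that comment refers to, in Python (a toy with made-up names; real systems would use Erlang/OTP, Akka on Scala, and so on). Each actor owns a mailbox and its own private state, and concurrency happens through message passing rather than shared memory and locks:

    import threading
    import queue

    class Actor:
        """An actor processes one message at a time from its mailbox,
        so its state is never touched by two threads at once."""
        def __init__(self):
            self.mailbox = queue.Queue()
            threading.Thread(target=self._loop, daemon=True).start()

        def send(self, msg):
            self.mailbox.put(msg)

        def _loop(self):
            while True:
                msg = self.mailbox.get()
                if msg is None:       # poison pill: stop the actor
                    break
                self.receive(msg)

        def receive(self, msg):
            raise NotImplementedError

    class Counter(Actor):
        def __init__(self):
            super().__init__()
            self.count = 0            # private state, no lock needed

        def receive(self, msg):
            kind, reply_to = msg
            if kind == "incr":
                self.count += 1
            elif kind == "get":
                reply_to.put(self.count)

    if __name__ == "__main__":
        c = Counter()
        for _ in range(1000):
            c.send(("incr", None))
        answer = queue.Queue()
        c.send(("get", answer))
        print(answer.get())           # 1000, with no user-level locking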
Or look at direct data manipulations as "programming". This hasn't been ignored, it's been actively worked on in every way imaginable. CASE programming received a lot of attention, and still does. Various workflow based programming models have received just as much attention. What about Flash? Hypercard? Etc. And there are many niche uses where direct data manipulation has proven to be highly useful. But again and again it's proven to be basically incompatible with general purpose programming, likely because of a fundamental impedance mismatch. A total of billions of dollars in investment has gone into these technologies, it's counterfactual to put forward the notion that we are blind to alternatives or that we haven't tried.
Or look at his example of the Smalltalk browser. How can any modern coder look at that and not laugh? Any modern IDE like Eclipse or Visual Studio can present to the developer exactly that interface.
Again and again it looks like Victor is either blindly ignorant of the practice of programming in the real world or simply adhering to the "No True Scotsman" fallacy: imagining that the ideas he brings up haven't "truly" been tried, not seriously and honestly, that they've just been toyed with and abandoned. Except that in some cases, such as the actor model, they have not just been tried, they've been developed into robust solutions and they are made use of in industry when and if they are warranted. It's hilarious that we're even having this discussion on a forum written in Arc, of all things.
To circle back to the particular examples I gave of alternative important advances in programming (focusing on development velocity and reliability), I find it amusing and ironic that some folks so easily dismiss these ideas because they are so seemingly mundane. But they are mundane in precisely the ways that structured programming was mundane when it was in its infancy. It was easy to write off structured programming as nothing more than clerical work preparatory to actual programming, but now we know that not to be true. It's also quite easy to write off testing and integration, as examples, as extraneous supporting work that falls outside "real programming". However, I believe that when the tooling of programming advances to more intimately embrace these things we'll see an unprecedented explosion in programming innovation and productivity, to a degree where people used to relying on such tools will look on our programming today as just as primitive as folks using pre-structured programming appear to us today.
Certainly a lot of programmers today have their heads down, because they're concentrated on the work immediately in front of them. But the idea that programming as a whole is trapped inside some sort of "box" which it is incapable of contemplating the outside of is utterly wrong with numerous examples of substantial and fundamental innovation happening all the time.
I think Victor is annoyed that the perfect ideal of those concepts he mentions hasn't magically achieved reification without accumulating the necessary complexity and cruft that comes with translating abstract ideas into practical realities. And I think he's annoyed that fundamentally flawed and imperfect ideas, such as the x86 architecture, continue to survive and be eminently practical solutions decade after decade after decade.
It turns out that the real world doesn't give a crap about our aesthetic sensibilities; sometimes the best solution isn't elegant. To people who refuse to poke their head out of the elegance box, the world will always seem as though it turned its back on perfection.
It's always a red flag when people have to say that. Many experts don't profess to understand something which they spent a long time understanding.
Ironically, Bret Victor mentioned, "The most dangerous thought that you can have as a creative person is to think that you know what you're doing..."
The points you mention are bewildering, since in my universe, most "technologists" ironically hate change. And learning new things. They seem to perceive potentially better ways of doing things like a particularly offensive veggie, rant at length rather than even simply taste the damn thing, and at best hide behind "Well it'd be great to try these new things, but we have a deadline now!" Knowing that managers fall for this line each time, due to the pattern-matching they're trained in.
(Of course, when they fail to meet these deadlines due to program complexity, they do not reconsider their assumptions. Their excuses are every bit as incremental as their approach to tech. The books they read — if they read at all — tell them to do X, so by god X should work, unless we simply didn't do enough X.)
It's not enough to reject concrete new technologies. They even fight learning about them in order to apply vague lessons into their solutions.
Fortunately, HN provides a good illustration of Bret Victor's point: "There can be a lot of resistance to new ways of working that require to kind of unlearn what you've already learned, and think in new ways. And there can even be outright hostility." In real life, I've actually seen people shout and nearly come to blows while resisting learning a new thing.
You haven't addressed any of inclinedPlane's criticism of Bret's talk. Rather, your entire comment seems to be variations on "There are people who irrationally dislike new technology."
Well, I don't agree with your premise, that I haven't addressed any of their criticisms.
A main theme underlying their complaint is that there's "numerous examples of substantial and fundamental innovation happening all the time."
But Bret Victor clearly knows this. Obviously, he does not think every-single-person-in-the-world has failed to pursue other computational models. The question is, how does the mainstream programming culture react to them? With hostility? Aggressive ignorance? Is it politically hard for you to use these ideas at work, even when they appear to provide natural solutions?
Do we live in a programming culture where people choose the technologies they do, after an openminded survey of different models? Does someone critique the complectedness of the actor model, when explaining why they decided to use PHP or Python? Do they justify the von Neumann paradigm, using the Connection Machine as a negative case study?
There are other shaky points on these HN threads. For instance, inferring that visual programming languages were debunked, based on a few instances. (Particularly when the poster doesn't, say, evaluate what was wrong with the instances they have in mind, nor wonder if they really have exhausted the space of potential visual languages.)
@cali: I completely agree with your points.
@InclinedPlane is missing the main argument.
Here is my take:
TLDR: Computing needs an existential crisis before current programming zeitgeist is replaced. Until then, we need to encourage as many people as possible to live on the bleeding edge of "Programming" epistemology.
Long Version: For better or for worse, humans are pragmatic. Fundamentally, we don't change our behavior until there is a fire at our front door. In this same sense, I don't think we are going to rewrite the book on what it means to "program," until we reach an existential peril. Intel demonstrated this by switching to multicore processors after realizing Moore's law could simply not continue via a simple increase in clock speed.
You can't take one of Bret's talks as his entire critique. This talk is part of a body of work in which he points out and demonstrates our lack of imagination. Bret himself points to another seemingly irrelevant historical anecdote to explain his work: Arabic numerals. From Bret himself:
"Have you ever tried multiplying roman numerals? It’s incredibly, ridiculously difficult. That’s why, before the 14th century, everyone thought that multiplication was an incredibly difficult concept, and only for the mathematical elite. Then arabic numerals came along, with their nice place values, and we discovered that even seven-year-olds can handle multiplication just fine. There was nothing difficult about the concept of multiplication—the problem was that numbers, at the time, had a bad user interface."
Interestingly enough, the "bad user interface" wasn't enough to dethrone roman numerals until the renaissance. The PRAGMATIC reason we abandoned roman numerals was due to the increased trading in the Mediterranean.
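As a toy illustration of the quote's point (my own example, not Victor's): place value reduces any multiplication to a handful of shifted single-digit products, which is exactly the grip Roman numerals never give you.

    def long_multiply(a: int, b: int) -> int:
        """Schoolbook multiplication: multiply b by each digit of a,
        weighted by that digit's place value, then sum."""
        total, place = 0, 1
        while a:
            a, digit = divmod(a, 10)
            total += digit * place * b   # e.g. 67 * 24 = 60*24 + 7*24
            place *= 10
        return total

    assert long_multiply(67, 24) == 1608   # LXVII times XXIV, without the agony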
Personally, I believe that Bret is providing the foundation for the next level of abstraction that computing will experience. That's a big deal. Godspeed.
Perhaps. But I think he is a visual thinker (his website is littered with phrases like "the programmer needs to see...."). And that is a powerful component of thinking, to be sure. But think about math. Plots and charts are sometimes extremely useful, and we can throw them up and interact with them in real time with tools like Mathcad. It's great. But it only goes so far. I have to do math (filtering, calculus, signal processing) most every day at work. I have some Python scripts to visualize some stuff, but by and large I work symbolically because that is the abstraction that gives me the most leverage. Sure, I can take a continuous function that is plotted and visually see the integral and derivative, and that can be a very useful thing. OTOH, if I want to design a filter, I need to design it with criteria in mind, solve equations and so on, not put an equation in a tool like Mathcad and tweak coefficients and terms until it looks right. Visual processing falls down for something like that.
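To make the "design with criteria in mind" point concrete, here's a hedged sketch (my example; the sample rate and specs are invented, and it assumes SciPy is available). You state the passband/stopband requirements and let the equations pick the filter, rather than tweaking coefficients until a plot looks right:

    from scipy import signal

    fs = 48_000  # Hz, assumed sample rate
    # Criteria: pass below 3 kHz with <1 dB ripple, attenuate above 4 kHz by >60 dB.
    order, wn = signal.buttord(wp=3000, ws=4000, gpass=1, gstop=60, fs=fs)
    b, a = signal.butter(order, wn, btype="low", fs=fs)
    print(f"order={order}")  # the math, not the eye, decides the filter order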
Others have posted about the new IDEs that they are trying to create. Great! Bring them to us. If they work, we will use them. But I fundamentally disagree with the premise that visual is just flat out better. Absolutely, have the conversation, and push the boundaries. But to claim that people that say "you know, symbolic math actually works better in most cases" are resisting change (you didn't say that so much as others) is silly. We are just stating facts.
Take your Arabic numerals example. Roman numerals are, what, essentially VISUAL!! III is 3. It's a horrible way to do arithmetic. Or imagine a 'visual calculator', where you try to multiply 3*7 by stacking blocks or something. Just the kind of thing I might use to teach a third grader, but never, ever, something I am going to use to balance my checkbook or compute loads on bridge trusses. I'm imagining sliders to change the x and y, and the blocks rearranging themselves. Great teaching tool. A terrible way to do math, because it is a very striking but also very weak abstraction.
Take bridge trusses. Imagine a visual program that shows loads in colors - high forces are red, perhaps. A great tool, obviously. (We have such things, btw.) But to design a bridge that way? Never. There is no intellectual scaffolding there (pun intended). I can make arbitrary configurations, look at how colors change and such, but engineering is multidimensional. What do the materials cost? How hard are they to get and transport? How many people will be needed to bolt this strut? How do the materials work in compression vs expansion? What are the effects of weather and age? What are the resonances? It's a huge optimization problem that I'm not going to solve visually (though, again, visual will often help me conceptualize a specific element). That I am not thinking or working purely visually is not evidence that I am not being "creative" - I'm just choosing the correct abstraction for the job. Sometimes that is visual, sometimes not.
So, okay, the claim is that perhaps visual will/should be the next major abstraction in programming. I am skeptical, for all the reasons above - my current non-visual tools provide me a better abstraction in so many cases. Prove me wrong, and I will happily use your tool. But please don't claim these things haven't been thought of, or that we are being reactionary by pointing out the reasons we choose symbolic and textual abstractions over visual ones when we have the choice (I admit sometimes the choice isn't there).
Bret has previously given a talk[1] that addresses this point. He discusses the importance of using symbolic, visual, and interactive methods to understand and design systems. [2] He specifically shows an example of digital filter design that uses all three. [3]
Programming is very focused on symbolic reasoning right now, so it makes sense for him to focus on visual and interactive representations and interactive models often are intertwined with visual representation. His focus on a balanced approach to programming seems like a constant harping on visualization because of this. I think he is trying to get the feedback loop between creator and creation as tight as possible and using all available means to represent that system.
The prototypes I have seen of his that are direct programming tend not to look like LabVIEW; instead they are augmented IDEs that have visual representations of processing that are linked to the symbolic representations that were used to create them. [4] This way you can manipulate the output and see how the system changes, see how the linkages in the system relate, and change the symbols to get different output. It is a tool for making systems represented by symbols, but interacting with the system can come through a visual or symbolic representation.
Part of Bret's theory of learning (which I agree with) is that when "illustrating" or "explaining" an idea it is important to use multiple simultaneous representations, not solely symbolic and not solely visual. This increases the "surface area" of comprehension so that a learner is much more likely to find something in this constellation of representations that relates to their prior understanding. In fact, that comprehension might only come out of seeing the constellation. No representation alone would have sufficed.
Further, you then want to build a feedback loop by allowing direct manipulation of any of the varied representations and have the other representations change accordingly. This not only lets you see the same idea from multiple perspectives -- visual, symbolic, etc. -- but lets the learner see the ideas in motion.
This is where the "real time" stuff comes in and also why he gets annoyed when people see it as the point of his work. It's not; it's just a technology to accelerate the learning process. It's a very compelling technology, but it's not the foundation of his work. This is like reducing Galileo to a really good telescope engineer -- not that Bret Victor is Galileo.
I think he emphasizes the visual only because it's so underdeveloped relative to symbolic. He thinks we need better metaphors, not just better symbols or syntax. He's not an advocate of working "purely visually." It's the relationship between the representations that matters. You want to create a world where you can freely use the right metaphor for the job, so to speak.
That's his mission. It's the mission of every constructivist interested in using computers for education. Bret is really good at pushing the state of the art which is why folks like me get really excited about him! :D
You might not think Bret's talks are about education or learning, but virtually every one is. A huge theme of his work is this question: "If people learn via a continual feedback loop with their environment -- in programming we sometimes call this 'debugging' -- then what are our programming environments teaching us? Are they good teachers? Are they (unknowingly) teaching us bad lessons? Can we make them better teachers?"
The thing is that computing has both reached the limits of where "text dump" programming can go AND has found that text-dump programming is something like a "local maximum" among the different clear options available to programmers.
It seems like we need something different. But the underlying problem might be that our intuitions about "what's better" don't seem to work. Perhaps an even wider range of ideas needs to be considered and not simply the alternatives that seem intuitively appealing (but which have failed compared to the now-standard approach).
I agree with this. To get out of this local trap we are going to need something revolutionary. This is not something you can plow money into, it will come, if indeed it ever comes, from left field. My bet is there is a new 'frame' to be found somewhere out in the land of mathematical abstraction. I think to solve this one we are going to have to get right down to the nitty gritty, where does complexity come from, how specifically does structure emerge from non structure? How can we design such systems?
It's true you couldn't plow money into such a project. But I always wondered why, when confronted with a problem like this, you couldn't hire one smart organizer who hires forty dispersed teams who'd each follow a different lead. And hire another ten teams who'd be tasked with following and integrating the work of the forty (numbers arbitrary, but you get the picture).
I suppose that's how grants are supposed to work already, but it seems these have mostly degenerated into everyone following the intellectual trend with the most currency.
> it turns into typical "architecture astronaut" navel-gazing
I take exception to your critique of Mr Victor's presentation. I am sad to see that your wall of text has reached the top of this discussion on HN. To be honest, it's probably because no one has the time to wade through all of the logical fallacies, especially the ad hominem attacks and needlessly inflammatory language ("falls very short," "architecture astronaut navel-gazing," "untried methods," "frankly childish and unhelpful," "trite," "not practical," etc.)
You seem to be reacting just like the "absolute binary programmers" that Bret predicts. As far as I can gather, you are fond of existing web programming tools (HTML, CSS, JS, etc) and took Bret's criticism as some sort of personal insult (I guess you like making websites).
I think that Bret's talk is about freeing your mind from thinking that the status quo of programming methodologies is the final say on the matter, and he points out that alternative methodologies (especially more human-centric and visual methodologies) are a neglected research area that was once more fruitful in Computer Science's formative years.
Bret's observations in this particular presentation are valid and insightful in their own right. His presentation style is also creative and enjoyable. Nothing in this presentation deserves the type of language that you invoke, especially in light of the rest Bret's recent works (http://worrydream.com/) that are neatly summed up by this latest presentation.
I'm not surprised at the language; it's war, after all. Bret and Alan Kay and others are saying, "We in this industry are pathetic and not even marginally professional." It's hard to hear and sometimes invokes an emotional response.
And what makes it hard to hear is that we know deep in our hearts, that's it's true, and as an industry, we're not really trying all that hard. It used to be Computer Science; now it's Computer Pop.
> Bret and Alan Kay and others are saying, "We in this industry are pathetic and not even marginally professional." It's hard to hear and sometimes invokes an emotional response.
It sounds like sour grapes to me. Everyone else is pathetic and unprofessional because they didn't fall in love with our language and practices.
Indeed, they didn't. And it likely cost the world trillions (I'm being conservative, here). The sour grapes are justified here. To give a few examples:
In the sixties, people were able to build interactive systems with virtually no delay. Nowadays we have computers that are millions of times faster, yet they still lag. Seriously, more than 30 seconds just to turn on the damn computer? My father's Atari ST was faster than my brand new computer in this respect.
Right now, we use the wrong programming languages for many projects, often multiplying code size by at least 2 to 5. I know learning a new language takes time, but if you know only 2 languages and one paradigm, either you're pathetic, or your teachers are.
>In the sixties, people were able to build interactive systems with virtually no delay.
That did virtually nothing. It is easy to be fast when you do nothing.
>I know learning a new language takes time, but if you know only 2 languages and one paradigm, either you're pathetic, or your teachers are.
X86 still dominates the desktop.
Wow, so CS is all about what hardware you buy and what languages you program in? I guess we will just have to agree to disagree on what CS is. While programming languages are part of CS, what language you chose to write an app in really is not.
> > In the sixties, people were able to build interactive systems with virtually no delay.
> That did virtually nothing. It is easy to be fast when you do nothing.
This is kind of the point. Current interactive systems tend to do lots of useless things, most of which are not perceptible (except for the delays they cause).
> Wow, so CS is all about what hardware you buy and what languages you program in?
No. Computer Science is about assessing the qualities of current programming tools, and inventing better ones. Without forgetting human warts and limitations, of course.
On the other hand, programming (solving problems with computers) is about choosing hardware and languages (among other things). You wouldn't want your project to cost 5 times more than it should just because you've chosen the wrong tools.
> You wouldn't want your project to cost 5 times more than it should just because you've chosen the wrong tools.
Yep, if there were really tools out there that could beat what is in current use by a factor of 5, then they would have won; and once they exist, they will win. Because they would have had the time to A) implement something better, and B) use all that extra time to build an easy migration path so that those on the lesser platform could migrate over.
So where is the processor that is 5x better than x86? Where is the language that is 5x better than C, C++, Java, C# (whatever you consider the best of the worst to be)? I would love to use a truly better tool; I would love to use a processor so blazingly fast that it singed my eyebrows.
> This is kind of the point. Current interactive systems tend to do lots of useless things, most of which are not perceptible (except for the delays they cause).
Right because all of us sitting around with our 1/5x tools have time to bang out imperceptible features.
Thanks for saying all that. I was thinking it, but restrained myself since there seemed to be a lot of hero worship over this person going on here. But it needs to be said. Everything in that video is stuff that has been researched for decades. It isn't mainstream largely because it is facile to say 'declarative programming' or what have you, but something entirely different for it to be easier and better. Prolog is still around. Go download a free compiler, and try to write a 3D graphical loop that gives you 60 fps. Try to write some seismic code with it. Try to write a web browser. Not so easy. Much was promised by things like Prolog, declarative programming, logic programming, expert systems, and so on, but again it is easy to promise, hard to deliver. We didn't give up, or forget the ideas, it is just that the payoff wasn't there (except in niche areas where in fact all of these things are going strong, as you would expect).
Graphical programming doesn't work because programs are not 2-dimensional, they are N-dimensional, and you spend all your time trying to fit things on a screen in a way that doesn't look like a tangled ball of yarn (hint: it can't be done). I've gone through several CASE tools over the decades, and they all stink. Not to mention, I don't really think visually, but more 'structurally' - in terms of the interrelations of things. You can't capture that in 2D, and the problems that 2D creates more than overwhelm whatever advantages you might get going from 1D (text files) to 2D.
Things like CSP have never been lost, though they were niche for a while. Look at Ada's rendezvous model, for example.
Right. Personally I've had plenty of experience with certain examples of "declarative programming" and "direct manipulation of data" programming and other than a few fairly niche use cases they are typically horrid for general purpose programming. Think about how "direct manipulation" programming fits into a source control / branching workflow, for example. Unless there's a text intermediary that is extremely human friendly you have a nightmare on your hands. And if there is such an intermediary then you're almost always better off just "directly manipulating" that.
> Think about how "direct manipulation" programming fits into a source control / branching workflow, for example.
Trivially. Since virtually all currently used languages form syntactic trees (the exception being such beasts as Forth, Postscript etc.), you could use persistent data structures (which are trees again) for programs in these languages. Serializing the persistent data structure in a log-like fashion would be equivalent to working with a Git repository, only on a more fine-grained level. Essentially, this would also unify the notion of in-editor undo/redo and commit-based versioning; there would be no difference between the two at all. You'd simply tag the whole thing every now and then whenever you reach a development milestone.
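A minimal sketch of that idea, in Python with hypothetical names: the program is an immutable tree, every edit returns a new root that shares all untouched subtrees, and a plain append-only list of roots doubles as undo history and commit log.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Node:
        kind: str
        children: Tuple["Node", ...] = ()
        value: str = ""

    def replace_child(node: Node, index: int, new_child: Node) -> Node:
        """Return a new node with one child swapped; siblings are shared, not copied."""
        kids = node.children[:index] + (new_child,) + node.children[index + 1:]
        return Node(node.kind, kids, node.value)

    # "if (x) { f(); }" as a tiny syntax tree
    cond = Node("ident", value="x")
    root_v1 = Node("if", (cond, Node("block", (Node("call", value="f"),))))

    # Edit: rename the called function; only the path to the change is rebuilt.
    root_v2 = replace_child(root_v1, 1, Node("block", (Node("call", value="g"),)))

    history = [root_v1, root_v2]                       # fine-grained "commits"
    assert root_v1.children[0] is root_v2.children[0]  # the condition is shared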
Well, there is yarn and then there is yarn. I don't mean spaghetti code, which is its own problem separate from representation. I'm thinking about interconnection of components, which is fine. Every layer of linux, say, makes calls to the same low level functions. If you tried to draw that it would be unreadable, but it is perfectly fine code - it is okay for everyone to call sqrt (say) because sqrt has no side effects. Well, sqrt is silly, but I don't know the kernel architecture - replace that with virtual memory functions or whatever makes sense.
I have actually been thinking about 1D coding vs 2D coding. Isn't 2D describing nD a little bit closer? Like a photograph of a sculpture... a little easier to get the concept than with written description, no matter how eloquent.
Re: the ball of yarn, we're trying to design that better in NoFlo's UI. Think about a subway map that designs itself around your focus. Zoom out to see the whole system, in to see the 1D code.
All I can say is, have you tried to use a CASE tool to do actual coding? I have, forced on me by various MIL-STD compliant projects.
X and Y both talk to A and B. Represent that in 2D without crossing lines.
Okay, you can, sure. If X and Y are at the top, and A and B are at the bottom, twist A and Y, and the crossing in the middle goes away. But, you know, X is related to Y (same level in the sw stack), and I really wanted to represent them at the same level. Oops.
And, I'm sure you can see that all it takes is one additional complication, and you are at a point where you have crossed lines no matter what.
Textually there is no worry about layout; graphically, there is. I've seen engineers spend days and weeks just trying to get boxes lined up, moving things around endlessly as requirements change - you just spend an inordinate amount of time doing everything but engineering. You are drawing, and trying to make a pretty picture. And that is not exactly wasted time. We all know people spend too much effort making PowerPoint 'pretty', and I am not talking about that. I mean that if the image is not readable then it is not usable, so you have to do protracted layout sessions.
Layout is NP-hard. Don't make me do layout to write code.
tl;dr version - code is multi-dimensional, but not in a 'layout' way. If you force me to do 2D layout you force me to work in an unnatural way that is unrelated to what I am actually trying to do. You haven't relaxed the problem by 1 dimension by introducing layout, but multiplied the constraints like crazy (that's a technical math term, I think!)
And then there is the information compression problem. Realistically, how much can you display on a screen graphically? I argue far less than textually. I already do everything I can to maximize what I can see - scrolling involves a context switch I do not want to do. So, in {} languages I put the { on the same line as the expression "if(){" to save a line, and so on. Try a graphical UML display of a single class - you can generally only fit a few methods in, good luck with private data, and all bets are off if methods are more than 1-2 short words long. I love UML for a one-time, high-level view of an architecture, but for actually working in? Horrible, horrible, horrible. For example, I have a ton of tiny classes that do just 1 thing that get used everywhere. Do I represent that exactly once, and then everywhere else you have to remember that diagram? Do I copy it everywhere, and face editing hell if I change something? Do I have to reposition everything if I make a method name longer? Do I let the tool do the layout, and give me an unreadable mess? And so on. The bottom line is you comprehend better if you can see it all on one "page" - and graphical programming has always meant less information on that page. That's a net loss in my book. (This was very hand-wavey; I've conflated class diagrams with graphical programming, for example - we'd both have to have access to a whiteboard to really sketch out all of the various issues.)
Views into 1D code are a different issue, which is what I think you are talking about with NoFlo (I've never seen it). If you can solve the layout problem you will be my hero, perhaps, so long as I can retain the textual representation that makes things like git, awk, sed, and so on so powerful. But I ask: what is that going to buy me as opposed to a typical IDE with solutions/projects/folders/files in one tab, a class browser in another tab, auto-complete and easy navigation (ctrl+right click to go to definition, and so on)? Can I 'grep' all occurrences of a word (I may want to grep comments, this is not strictly a code search)?
Hope this all doesn't come across as shooting you down or bickering, but I am passionate about this stuff, and I am guessing you are also. I've been promised the wonders of the next graphical revolution since the days of structured design, and to my way of thinking none of it has panned out. Not because of the resistance or stupidity of the unwashed masses, but because what we are doing does not inherently fit into 2D layout. There's a huge impedance mismatch between the two which I assert (without proof) will never be fixed. Prove me wrong! (I say that nicely, with a smile)
Sorry for the length; I didn't have time to make it shorter.
I write all of my software in a 2D, interactive, live-executing environment. Yes, layout is a problem. But you get good at it, and then it's not a problem anymore.
Moreover, the UI for the system I use is pretty basic and only has a few layout aids – align objects, straighten or auto-route patch cords, auto-distribute, etc. I can easily imagine a more advanced system that would solve most layout problems.
A 2D editor with all of the power of vim or emacs would be formidable. Your bad experience with "CASE tools" does not prove the rule.
>Sorry for the length; I didn't have time to make it shorter.
favorite phrase
let me try the tl;dr
assembler won over machine code, and in turn lost to the next higher-level thing, because in practical terms it was easier and more practical; reality decided based on constraints..
if it doesn't go mainstream, it means it's not worth it, because it's more expensive...
As a JS hacker I wanted to bring that kind of coding to the browser for kids so I made http://meemoo.org/ as my thesis. Now I have linked up with http://noflojs.org/ to bring the concept to more general purpose JS, Node and browser.
I won't have really convinced myself until I rewrite the graph editor with the graph editor. Working on that now.
bingo, but it can also go the other way, back to the photograph example:
how much time would you need, tangling lines (or any other method you can come up with), to capture every level of detail you are looking for?
now, "no matter how eloquent": if the photo can be made digital, it can be saved to a file and described with a rather simple language, all 0s and 1s, so it can be done, and methods for being that eloquent exist...
what if the programs written as text are actually a representation of some more complex ideas? (IMO that's what they are; code is just the way of ... coding those ideas into text...) and text is visual, remember... (the same abstraction serves both the words and the ideas they represent)
> I'd list things such as development velocity and end-product reliability as being far more important.
Your main thesis is that software and computing should be optimized to ship products to consumers.
The main thesis of guys like Alan Kay is that we should strive to make software and computing that is optimized for expanding human potential.
Deep down most of us got in to computing because it is a fantastic way to manipulate our world.
Bret Victor's talks instill a sense of wonderment and discovery, something that has often been brow-beaten out of most of us working stiffs. The talks make us feel like there is more to our profession than just commerce. And you know what? There is. And you've forgotten that to the point where you're actually railing against it!
> Your main thesis is that software and computing should be optimized to ship products to consumers.
Those were just examples of other things I thought were more important, it wasn't an exhaustive list. However, it's interesting that you focus in on "optimizing to ship products to consumers", when I made mention of no such thing. I mentioned development velocity and end-product reliability. These are things that are important to the process of software development regardless of the scale of the project or the team working on it or the financial implications of the project.
They are tools. Tools for making things. They enable both faceless corporations who want to make filthy lucre by shipping boring line-of-business apps and individuals who want to "expand human potential" or "instill a sense of wonderment and discovery".
Reliability and robustness are very fundamental aspects to all software, no matter how it's built. And tools such as automated builds combined with unit and integration tests have proven to be immensely powerful in facilitating the creation of reliable software.
If your point is that non-commercial software need not take advantage of testing or productivity tools because producing a finished product that runs reliably is unimportant if you are merely trying to "expand human potential" or what-have-you then I reject that premise entirely.
If you refuse to acknowledge that the tools of the trade in the corporate world represent a fundamentally important contribution to the act of programming then you are guilty of the same willful blindness that Bret Victor derides so heartily in his talk.
You know, in some sense those early visionaries were beaten by the disruptive innovators of their day.
I think the argument here is that 1000 little choices favoring incremental advantage in the short term add up to a sub-optimal long term, but I'm not so sure. I have a *NIX machine in my phone. Designers "threw it in there" as the easy path. And it works.
Just trying to show the Linux kernel as an inexpensive building block in this day and age. One that is used casually, in Raspberry Pis, in virtualization, etc.
>> I'd list things such as development velocity and end-product reliability as being far more important.
> Your main thesis is that software and computing should be optimized to ship products to consumers.
No, the main thesis is that it should be optimized to solve problems, and to be as easy to adjust as possible...
>The main thesis of guys like Alan Kay is that we should strive to make software and computing that is optimized for expanding human potential.
we are, even with our current tools; right now you have the opportunity to express yourself to the world in this place, everything done with these limiting tools... IMO the presentation is about exploring whether maybe there is a better approach... quotes on "maybe"
>Come back to the light, fine sir!
All are lights... it's just a matter of the right combination... you don't put the ultra-bright LEDs from your vehicle in your living room, or vice versa...
This reminds me of the UML and Model-Driven Architecture movement of years past, where architecture astronauts imagined a happy little world in which you could just get away from that dirty coding, join some boxes with lines in all sorts of charts, and then have that generate your code. And it would produce code you actually want to ship and that does what you want it to do.
This disdain for writing code is not new. This classic essay about "code as design" from 1992 (!) is still relevant today:
In the presenter's worldview it seems as though a lot of subtle details are ignored or just not seen, whereas in reality seemingly subtle details can sometimes be hugely important. Consider Ruby vs Python, for example. From a 10,000 foot view they almost look like the same language, but at a practical level they are very different. And a lot of that comes down to the details. There are dozens of new languages within the last few decades or so that share almost all of the same grab bag of features in a broad sense but where the rubber meets the road end up being very different languages with very different strengths. Consider, for example, C# vs Go vs Rust vs Coffeescript vs Lua. They are all hugely different languages but they are also very closely related languages.
I suspect that the killer programming medium of 2050 isn't going to be some transformatively different methodology for programming that is unrecognizable to us, it's going to be something with a lot of similarities to things I've listed above but with a different set of design choices and tradeoffs, with a more well put together underlying structure and tooling, and likely with a few new ways of doing old things thrown in and placed closer to the core than we're used to today (my guess would be error handling, testing, compiling, package management, and revision control).
There is just so much potential in plain jane text based programming that I find it odd that someone would so easily clump it into a single category and write it all off at the same time. It's a medium that can embrace everything from Java on the one hand to Haskell or lisp on the other, we haven't come anywhere close to reaching the limits of expressiveness available in text-based programming.
You can cast this entire comment in terms of hex/assembler vs C/Fortran and you get the same logical form.
We haven't come anywhere close to reaching the limits of expressiveness in assembler either, yet we've mostly given up on it for better things.
Try arguing the devil's advocate position. What can you come up with that might be better than text-based programming? Nothing? We're really in the best of all possible worlds?
I don't think it's fair to call him a Non-Coding Architect. Have you seen his other talks, or the articles he's published via his website http://worrydream.com ? Bret clearly codes.
I really wish he did. I think one of the greatest disservices he does himself is not shipping working code for the examples in his presentations. We've seen the what and we're intrigued, but ship something that shows the how so we can take the idea and run with it.
A delay in releasing code would be valuable then. Those too impatient to wait can start hacking on something new now and give lots of thought to this frontier and those that want to explore casually can do so a few months later when the source is released. Releasing nothing is a non-solution. Why make everyone else stumble where you have? That's just inconsiderate.
Bernard of Chartres used to say that we are like dwarfs seated on the shoulders of giants, so that we can see more things, and more distant ones, than they could; not because of any keenness of our own sight, or any height of our bodies, but because we are lifted up and carried aloft by their giant stature.
Bingo. This reminds me of how people no longer have time to get bored, and then innovate by giving their minds some free space to wander.
The classic scenario of the solution to a problem arriving once you give it a break...
Fooling around with a paint brush in your study is fine, but real artists ship.
A bunch of ideas that sound great in theory are just that, it is only by surviving the crucible of the real world that ideas are validated and truly tested. When Guy Steele and James Gosling were the only software developers in the world who could program in Java, every Java program was a masterpiece. It is only once the tool was placed in the hands of mere mortals that its flaws were truly known.
Walk around a good gallery. There are a pretty good number of pieces entitled "Study #3", or something of that sort. An artist is playing around with a tool, or a technique, trying to figure out something new.
Piano music is probably where this concept gets the most attention. Many études, such as those by Chopin, are among the most significant musical works of the era.
In another talk Bret claims that you basically cannot do visual art/design without immediate feedback. I was wondering how he thought people who create metal sculptures via welding, or carve marble, possibly work. It's just trivially wrong to assert that you need that immediate feedback, and it calls all of his reasoning into question.
Good point. I think programmers would be better off dropping the artistic pretensions altogether and accepting that they are much closer to engineers and architects in their construction of digital sandcastles.
You're forgetting about the hundreds, even thousands, of paintings they did that are not in the gallery. Those paintings are the same as "shipping", even though you never see them in the gallery.
You can't play around with a tool or technique without actually producing something. You can talk about how a 47.3% incline on the brush gives the optimal result all day long, but it's the artist that actually paints that matters.
> And there are projects, such as couch db, which are based on Erlang but are moving away from it. Why is that?
That is news to me. CouchDB is knee deep in Erlang and loving it. They are merging with BigCouch (from Cloudant) which is also full on Erlang.
Come to think of it, you are probably thinking of Couchbase, which doesn't really have much "couch" in it except for the name and CouchDB's original author working on it.
> Rather, it's because languages which are highly optimized for concurrency aren't always the best practical solution, even for problem domains that are highly concurrency bound, because there are a huge number of other practical constraints which can easily be just as or more important.
That is true; however, what is missing is that Erlang is optimized for _fault tolerance_ first, then concurrency. Fault tolerance means isolation of resources, and there is a price to pay for that. High concurrency, the actor model, functional programming, immutable data, and run-time code reloading all kind of flow from the "fault tolerance first" idea.
It is funny: many libraries/languages/projects that try to copy Erlang completely miss that one main point and go on implementing "actors", run the good ol' ring benchmark (sketched below), and claim "we surpassed Erlang, look at these results!". Yeah, that is pretty amusing. I want to see them do a completely concurrent GC and hot code reloading (note: those are hard to add on, they have to be baked into the language).
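For readers who haven't run into it, here is a minimal sketch of that ring benchmark, written in Go purely for illustration (the constants are arbitrary): N processes in a ring passing a token around M times. All it measures is raw messaging overhead, which is exactly why it says nothing about the fault-tolerance machinery described above.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const procs = 100 // ring size
	const laps = 1000 // times the token goes around

	first := make(chan int)
	in := first
	// Build the ring: each goroutine forwards whatever it receives to its neighbour.
	for i := 0; i < procs-1; i++ {
		out := make(chan int)
		go func(in <-chan int, out chan<- int) {
			for m := range in {
				out <- m
			}
			close(out)
		}(in, out)
		in = out
	}

	start := time.Now()
	first <- 0 // inject the token
	for lap := 1; ; lap++ {
		m := <-in // token arrives at the end of the chain
		if lap == laps {
			close(first) // tears the ring down
			break
		}
		first <- m // send it around again
	}
	fmt.Printf("%d laps of %d hops in %v\n", laps, procs, time.Since(start))
}
```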
They also seem to miss the preemptive scheduling, built-in flow-control and per-process GC (which leads to minimal GC pauses). Those are impossible to achieve without a purposely built VM. No solution on Sun JVM will ever be able to replace Erlang for applications which require low-latency processing. Similarly, no native-code solution can do so either: you need your runtime to be able to preempt user code at any point of time (i.e. Go is not a replacement for erlang).
> Those are impossible to achieve without a purposely built VM. No solution on Sun JVM will ever be able to replace Erlang for applications which require low-latency processing.
Impossibility claims are very hard to prove and are often wrong, as in this case.
First, commercial hard real-time versions of the JVM with strong timing and preempting guarantees exist and are commonly used in the defense industry. To the best of my knowledge, there are no mission- and safety- critical weapon systems written in Erlang; I personally know several in Java. These are systems with hard real-time requirements that blow stuff up.
In addition, Azul's JVM guarantees no GC pauses larger than a few milliseconds (though it has no preemption guarantees).
But the fact of the matter is that even a vanilla HotSpot VM is so versatile and performant, that in practice, and if you're careful about what you're doing, you'll achieve pretty much everything Erlang gives you and lots more.
People making this claim (Joe Armstrong first among them) often fail to mention that those features that are hardest to replicate on the JVM are usually the less important ones (like perfect isolation of processes for near-perfect fault-tolerance requirements). But when it comes to low-latency stuff, the JVM can and does handily beat Erlang.
P.S.
As one of the authors of said ring-benchmark-winning actor frameworks for the JVM, I can say that we do hot code swapping already, and if you buy the right JVM you also get a fully concurrent GC, and general performance that far exceeds Erlang's.
> First, commercial hard real-time versions of the JVM with strong timing and preempting guarantees exist and are commonly used in the defense industry. To the best of my knowledge, there are no mission- and safety- critical weapon systems written in Erlang; I personally know several in Java. These are systems with hard real-time requirements that blow stuff up.
That's why I said Sun JVM in the first place. Azul and realtime Java are those purposely built VMs I mentioned.
Your claim about the Sun JVM is more interesting. If it is so versatile, why do no network applications exist on the JVM that provide at least adequate performance? Sure, the JVM is blazing fast as far as code execution speed goes; the point is that writing robust zero-copy networking code is so hard on the JVM that this raw execution speed does not help.
I'm not sure what you mean when you say network applications that provide at least adequate performance. Aren't Java web-servers at the very top of every performance test? Isn't Java the #1 choice for low-latency high-frequency-trading applications? Aren't HBase, Hadoop and Storm running on the JVM?
The whole point of java.nio introduced over 10 years ago, back in Java 1.4, is robust zero-copy networking (with direct byte-buffers). Higher-level networking frameworks, like the very popular Netty, are based on NIO (although, truth be told, up until the last version of Netty, there was quite a bit of copying going on in there), and Netty is at the very top of high-performance networking frameworks in any language or environment.
I've spent a great deal of time trying to make a very similar erlang system reach 1/100 of the throughput/latency that the LMAX guys managed in pure java. There are days when I cry out in my sleep for a shared mutable variable.
If you need shared state to pass a lot of data between CPUs, then Erlang might not be the right solution; however, the part that needs to do it can be isolated, implemented in C, and communicated with from BEAM.
What always amuses me about LMAX is the way they describe it (breakthrough! Invention!), while what they "invented" is a ring buffer, the solution everybody arrives at first. It is how all device drivers communicate with peripheral devices, for example, and the fast IPC mechanism people have used in UNIX for decades. Even funnier, it takes less code to implement it in C from scratch than to use the LMAX library.
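For anyone who hasn't seen one, here is roughly what such a single-producer/single-consumer ring buffer boils down to, sketched in Go for illustration (the parent's point is that the C version is similarly short). The real Disruptor adds cache-line padding, batching, and multi-producer support, so treat this as the bare idea rather than a drop-in equivalent.

```go
package ring

import "sync/atomic"

// Buf is a minimal single-producer/single-consumer ring buffer: one
// goroutine calls Put, another calls Get. Size must be a power of two.
type Buf struct {
	head uint64 // next slot to read  (advanced only by the consumer)
	tail uint64 // next slot to write (advanced only by the producer)
	mask uint64
	data []int64
}

func New(size uint64) *Buf {
	return &Buf{data: make([]int64, size), mask: size - 1}
}

// Put appends v, returning false if the buffer is full.
func (b *Buf) Put(v int64) bool {
	head := atomic.LoadUint64(&b.head)
	tail := b.tail
	if tail-head == uint64(len(b.data)) {
		return false // full
	}
	b.data[tail&b.mask] = v
	atomic.StoreUint64(&b.tail, tail+1) // publish the write
	return true
}

// Get removes the oldest value, returning false if the buffer is empty.
func (b *Buf) Get() (int64, bool) {
	tail := atomic.LoadUint64(&b.tail)
	head := b.head
	if head == tail {
		return 0, false // empty
	}
	v := b.data[head&b.mask]
	atomic.StoreUint64(&b.head, head+1) // free the slot
	return v, true
}
```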
Your criticism seems to be framed against where we are at today.
As programmers we have a fragmented feedback cycle regardless of whether we are writing our software in Erlang or Lisp or C++.
While it is true that realistic matters like 'integration' and 'development velocity' are important enough in modern-day programming to determine what path we must take, we shouldn't let them change our destination.
If you were to envision programming nirvana would it be mostly test coverage and scrum boards?
> If you were to envision programming nirvana would it be mostly test coverage and scrum boards?
Far from it. Indeed I think that TDD is vastly over-used and often harmful and SCRUM is more often development poison than anything else. But the fact that these things are popular despite the frequent difficulty of implementing them correctly is, I think, indicative of two things. First, that there is something of serious and fundamental value there which has caused so many people to latch onto such ideas zealously, even without fully understanding where the value in such ideas comes from. And second, that due to their being distanced from the "practice of programming" they are more subject to misinterpretation and incorrect implementation (this is a hard problem in programming as even the fundamentals of object oriented design aren't immune to such problems even though they tend to be baked into programming languages fairly deeply these days).
I think that unquestionably a routine build/test cycle is a massive aid to development quality. It doesn't just facilitate keeping a shipping product on schedule; it has lots of benefits that diffuse out to every aspect of development in an almost fractal fashion. For example, having a robust unit test suite vastly facilitates refactoring, which makes it easier to improve code quality, which makes it easier to maintain and modify code, which makes it easier to add or change features, and so forth. It's a snowball effect. Similarly, I think that unquestionably a source control system is a massive aid to development quality and pace. That shouldn't be a controversial statement today, though it would have been a few decades ago. More so, I think that unquestionably the branching and merging capabilities of advanced source control systems are a huge aid in producing software.
Development velocity has a lot of secondary and higher order effects that impact everything about the software project. It makes it easier to change directions during development, it lowers the overhead for every individual contributor, and so on. Projects with higher development velocity are more agile, they are able to respond to end-user feedback and test feedback and are more likely to produce a reliable product that represents something the end-users actually want without wasting a lot of developer time along the way.
Some people have tried to formalize such "agile" processes into very specific sets of guidelines but I think for the most part they've failed to do so successfully, and have instead created rules which serve a far too narrow niche of the programming landscape and are also in many cases too vague to be applied reliably. But that doesn't mean that agility or increased development velocity in general are bad ideas, they are almost always hugely advantageous. But they need to be exercised with a great deal of thought and pragmatism.
Also, as to testing, it also suffers from the problem of being too distanced from the task of programming. There are many core problems in testing such as the fact that test code tends to be of lower quality than product code, the problems of untested or conflicting assumptions in test code (who tests the tests?), the difficulty of creating accurate mocks, and so on. These problems can, and should, be addressed but one of the reasons why they've been slow to be addressed is that testing is still seen as something that gets bolted onto a programming language, rather than something that is an integral part of coding.
Anyway, I've rambled too long I think, it's a deep topic, but hopefully I've addressed some of your points.
It's funny that you mention testing. TDD/BDD/whatever IS declarative programming, except you're doing the declarative-to-imperative translation yourself.
TDD has always felt sort of wrong to me because it really felt like I was writing the same code twice. Progress, in this regard, would be the spec functioning as actual code.
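One existing approximation of "the spec functioning as actual code" is property-based testing: you state the property once and let the tool generate the cases. Here is a minimal sketch using Go's testing/quick; the property and names are just an example, not anyone's actual spec.

```go
package sortspec

import (
	"sort"
	"testing"
	"testing/quick"
)

// TestSortSpec states a property ("sorting preserves length and yields a
// sorted slice") and lets quick.Check generate random inputs, rather than
// hand-writing example cases twice.
func TestSortSpec(t *testing.T) {
	spec := func(xs []int) bool {
		ys := append([]int(nil), xs...) // copy so the input is untouched
		sort.Ints(ys)
		return len(ys) == len(xs) && sort.IntsAreSorted(ys)
	}
	if err := quick.Check(spec, nil); err != nil {
		t.Error(err)
	}
}
```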
Characterizations like 'wrongheadedness' have no part in this discussion. If his conclusions are wrong you can explain why without generalizing to his nature as a person.
His "computers should figure out how to talk to each other" immediately reminded me the "computers should heal themselves" one finds in "objects have failed" from the same author. Both shells seem equally empty to me.
Also, if you want more fuel, you might find it funny that he refers to GreenArrays in his section about parallel computing. Chuck Moore, the guy behind it, is probably the last and ultimate "binary programmer" on this planet. But at the same time, he invented a "reverse syntax highlighting", where you set the colors of your tokens in order to set their function, in a non-plain-text-source system (see colorForth).
I have no idea why you're calling Chuck Moore a "binary programmer", by the definition given in today's talk.
Forth is anything but machine code. Forth and Lisp both share the rare ability to describe both the lowest and the highest layers of abstraction equally well.
Chuck Moore is definitely an interesting guy. It's hard to stereotype him, but he is definitely closer to the metal than most other language designers.
For one thing, Forth is the machine code for the chips he designs. Moreover, in his various iterations of his systems on the x86, he was never afraid to insert hex codes in his source when he needed to, typically in order to implement his primitives, because he judged that an assembler was unnecessary. At one point he tried to build a system in which he coded in something rather close to object code. This system led him to his colorForth, in which you actually edit the object code with a specialized editor that makes it look like you're editing normal source code.
Forth does absolutely not share the ability to describe both high and low level equally well. Heck, Moore even rejects the idea of "levels" of programming.
Bret Victor's talk wasn't about any particular technology. It was about being able to change your mind. It's not important that "binary programmers" programmed in machine code. It's important that they refused to change their minds. We should avoid being "binary programmers" in this sense.
> For one thing, Forth is the machine code for the chips he designs.
You're right, I should've said Forth isn't just machine code.
> Forth does absolutely not share the ability to describe both high and low level equally well. Heck, Moore even rejects the idea of "levels" of programming.
This is a misunderstanding. He rejects complex programming hierarchies, wishing instead to simply have a programmer-Forth interface and a Forth-machine interface. He describes programming in Forth as building up the language towards the problem, from a lower level to a higher level:
"The whole point of Forth was that you didn't write programs in Forth, you wrote vocabularies in Forth. When you devised an application, you wrote a hundred words or so that discussed the application, and you used those hundred words to write a one line definition to solve the application. It is not easy to find those hundred words, but they exist, they always exist." [1]
Also:
"Yes, I am struck by the duality between Lisp and Lambda Calculus vs. Forth and postfix. But I am not impressed by the productivity of functional languages." [2]
Here's what others have said:
"Forth certainly starts out as a low-level language; however, as you define additional words, the level of abstraction increases arbitrarily." [3]
Do you consider Factor a Forth? I do.
"Factor allows the clean integration of high-level and low-level code with extensive support for calling libraries in other languages and for efficient manipulation of binary data." [4]
Absolutely. I was waiting for him to mention what I think of as the Unix/Plan 9/REST principle the whole time. IMO this is one of the most important concepts in computing, but too few people are explicitly aware of it. Unfortunately he didn't mention it.
Really what Victor is complaining about is the web. He doesn't like the fact that we are hand-coding HTML and CSS in vim instead of directly manipulating spatial objects. (Although HTML is certainly declarative. Browsers actually do separate intent from device-specific details. We are not writing Win32 API calls to draw stuff, though he didn't acknowledge that.)
It has been impressed on me a lot lately how much the web is simply a distributed Unix. It's built on a file-system-like addressing scheme. Everything is a stream of bytes (with some additional HTTP header metadata). There are a bunch of orthogonal domain-specific languages (HTML/CSS/etc vs troff/sed/etc). They both have a certain messiness, but that's necessary and not accidental.
This design is not accidental. It was taken from Unix and renamed "REST". The Unix/Plan 9/REST principle is essentially the same as the Alan Perlis quote: "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures." [1] The single data structure is the stream of bytes, or the file / file descriptor.
For the source code example, how would you write a language-independent grep if every language had its own representation? How about diff? hg or git? merge tools? A tool to jump to source location from compiler output? It takes multiple languages to solve any non-trivial problem, so you will end up with an M x N combinatorial explosion (N tools for each of M languages), whereas you want M + N (M languages + N tools that operate on ALL languages).
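To make the M + N point concrete, here is a toy line-oriented "grep" sketched in Go. It works on source written in any language only because every language already shares the bytes-and-lines representation; give each language its own structured format and this one tool turns into M front ends.

```go
package main

// A toy grep: one tool, any language's source, because they all share the
// stream-of-lines representation.
import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: toygrep PATTERN < file")
		os.Exit(2)
	}
	re, err := regexp.Compile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	sc := bufio.NewScanner(os.Stdin)
	for n := 1; sc.Scan(); n++ {
		if re.Match(sc.Bytes()) {
			fmt.Printf("%d:%s\n", n, sc.Text()) // print matching lines with line numbers
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```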
Most good programming languages have the same flavor -- they are built around a single data structure. In C, this is the pointer + offset (structs, arrays). In Python/Lua it's the dictionary. In R it's the data frame; in Matlab it's the matrix. In Lisp/Scheme it's the list.
Java and C++ tend to have exploding codebase size because of the proliferation of types, which cause the M * N explosion. Rich Hickey has some good things to say about this.
I would posit that Windows and certain other software ecosystems have reached a fundamental scaling limit because of the O(M*N) explosion. Even if you have $100 billion, you can't write enough code to cover this space.
Another part of this is the dichotomy between visually-oriented people and language-oriented people. A great read on this schism is: http://www.cryptonomicon.com/beginning.html . IMO language-oriented tools compose better and abstract better than visual tools. In this thread, there is a great point that code is not 2D or 3D; it has richer structure than can really be represented that way.
I really like Bret Victor's talks and ideas. His other talks are actually proposing solutions, and they are astounding. But this one comes off more as complaining, without any real solutions.
He completely misunderstands the reason for the current state of affairs. It's NOT because we are ignorant of history. It's because language-oriented abstractions scale better and let programmers get things done more quickly.
That's not to say this won't change, so I'm glad he's working on it.
> Most good programming languages have the same flavor -- they are built around a single data structure. In C, this is the pointer + offset (structs, arrays). In Python/Lua it's the dictionary. In R it's the data frame; in Matlab it's the matrix. In Lisp/Scheme it's the list.
Lists are not very important for Lisp, apart from writing macros.
> Java and C++ tend to have exploding codebase size because of the proliferation of types, which cause the M * N explosion. Rich Hickey has some good things to say about this.
Haskell has even more types, and no exploding codebases. The `M * N explosion' is handled differently there.
> For the source code example, how would you write a language-independent grep if every language had its own representation? How about diff? hg or git? merge tools? A tool to jump to source location from compiler output? It takes multiple languages to solve any non-trivial problem, so you will end up with an M x N combinatorial explosion (N tools for each of M languages), whereas you want M + N (M languages + N tools that operate on ALL languages).
You'd use plugins and common interfaces. (I'm all in favour of text, but the alternative is still possible, if hard.)
> Lists are not very important for Lisp, apart from writing macros.
I'm not sure I agree. Sure, in most dialects you are given access to Arrays, Classes, and other types that are well used. And you can choose to avoid lists, just like you can avoid using dictionaries in Python, and Lua. But I find that the cons cell is used rather commonly in standard Lisp code.
You can't --really-- avoid dictionaries in python, as namespaces and classes actually are dictionaries, and can be treated as such.
In Lua, all global variables are inserted into the global dictionary _G, which is accessible at runtime. This means you can't even write a simple program consisting only of functions without involving a dictionary, because they are all added to and executed from that global one.
There were also other languages that could have been mentioned. In Javascript, for instance, functions and arrays are actually just special objects/dictionaries. You can call .length on a function, and you can add functions to the prototype of Array.
I think Haskell handles the combinatorial explosion with its polymorphic types and higher-order abstractions. There are many, many types, but there are also abstractions over types. Java/C++ do not get that. `sort :: Ord a => [a] -> [a]` works for an infinite number of types that have an `Ord` instance.
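For readers who don't read Haskell signatures, roughly the same "one function, unboundedly many types" idea can be sketched with a Go interface; the byLen type below is a made-up example.

```go
package main

import (
	"fmt"
	"sort"
)

// byLen is one of unboundedly many possible types: it only has to say how
// its elements compare, and the library's single sort works on it.
type byLen []string

func (s byLen) Len() int           { return len(s) }
func (s byLen) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
func (s byLen) Less(i, j int) bool { return len(s[i]) < len(s[j]) }

func main() {
	words := byLen{"banana", "fig", "apple"}
	sort.Sort(words)   // sort.Sort has never heard of byLen
	fmt.Println(words) // [fig apple banana]
}
```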
I don't agree that lists are not very important for Lisp, they're essential for functional programming as we know it today.
It's not an either-or. My prediction is that Victor's tools will be an optional layer on top of text-based representations. I'd go as far as to say that source code will always be represented as text. You can always build Visual Studio and IntelliJ and arbitrarily complex representations on top of text. It's just that it takes a lot of engineering effort, and the tools become obsolete as new languages are developed. We HAD Visual Studio for VB; it's just that everyone moved onto the web and Perl/Python/Ruby/JS, and they got by fine without IDEs.
There are people trying to come up with a common structured base for all languages. The problem is that if it's common to all languages, then it won't offer much more than text does. Languages are that diverse.
I don't want to get into a flame war, but Haskell hasn't passed a certain threshold for it to be even considered for the problem of "exploding code base size". That said, the design of C++ STL is basically to avoid the M*N explosion with strong types. It is well done but it also causes a lot of well-known problems. Unfortunately most C++ code is not as carefully designed as the STL.
>I don't want to get into a flame war, but Haskell hasn't passed a certain threshold for it to be even considered for the problem of "exploding code base size".
What threshold?
>It is well done but it also causes a lot of well-known problems.
Like what? And why do you assume those problems are inherent to having types?
>Java and C++ tend to have exploding codebase size because of the proliferation of types, which cause the M * N explosion.
I think haskell and friends demonstrate that your explanation for java and C++ "exploding" is incorrect. Haskell is all about types, lots of types, and making your own types is so basic and simple that it happens all the time everywhere. Yet, there is no code explosion.
@InclinedPlane: I would suggest you ask yourself one question: what is the difference between a programmer and a user? If I code in language XY, I'm already a consumer of a library called XY (and of the operating system and the global network). Most "programmers" today have nothing to do with memory (and the hardware, of course). The next big thing is never just a simple iteration of the current paradigm; the problem with many of the ideas he mentions is that they were not practical for a long time. On the other hand, much of computing has simply to do with conventions (protocols of different kinds).
To add to the UNIX thought, it goes beyond text configuration: the very design of system calls that can fail with an EINTR error code was a kind of worse-is-better design approach.
> Similarly, he casually mentions a programming language founded on unique principles designed for concurrency, he doesn't name it but that language is Erlang.
I haven't seen the talk yet and just browsed the slides, but just from your description Mozart/Oz could also fit the bill, since it was designed for distributed/concurrent programming as well. Furthermore, Oz's "Browser" has some f-ing cool interactive stuff made possible by the specific model of concurrency in the system. I must say that programming in Mozart/Oz feels completely different from Erlang, despite the fact that both have a common origin in Prolog.
<edit: adding more ..>
> He is stuck in a model where "programming" is the act of translating an idea to a machine representation. But we've known for decades that at best this is a minority amount of the work necessary to build software.
There is a school of thought whereby "programming" is the act of coding itself. To put it in other words, it is a process of manipulating a formal system to cause effects in the world. That system could be a linear stream of symbols, or a 2D space of tiles, or any of myriad forms, but in the end much of the "pleasure of programming" is attributable to the possibility of play with such a system.
To jump a bit ahead, consider the Leap Motion controller. What if we had a system built where we can sculpt 3D geometries and had a way to map these "sculptures" to programs for doing various things? I say this 'cos "programming", a lot of the times, feels like origami to me when I'm actually coding. Lisps, in particular, evoke that feeling strongly. So, I'm excited about Leap Motion for the potential impact it can have on "programming".
I think representations are important, and the "school of direct manipulation" misses this point. Just because we have great computing power at our finger tips today, we won't revert to using roman numerals for numbers. One way to interpret the claims of proponents of direct manipulation is that programming ought to be a dialogue between a representation and the effect on the world instead of a monologue or, at best, a long distance call.
Bret has expressed favour for dynamic representations in some of his writings, but I'm not entirely sure that they are the best for dynamic processes. There is nothing uncool about static representations like code. (Well, that's all we've had for ages now, anyway.) What we've been lacking is a variety of static representations, since language has been central to our programming culture and history. What would an alien civilization program in if they had multidimensional communication means?
To conclude, my current belief is that anyone searching for "the one language" or "the one system" to rule them all is trying to find Joshu's "Mu" by studying scriptures. Every system (a.k.a. representation) is going to have certain aspects that it handles well and certain others that it does poorly on. That ought to be a theorem or something, but I'm not sophisticated enough, yet, to formally articulate that :)
Or, as the physicist and Bayesian pioneer E. T. Jaynes wrote:
In any field, the Establishment is seldom in pursuit of the truth, because it is composed of those who sincerely believe that they are already in possession of it.
From Probability Theory: The Logic of Science, E.T. Jaynes, 2003.
> Ignorance is remaining willfully unaware of the existing base of knowledge in a field, proudly jumping in and stumbling around. This approach is fashionable in certain hacker/maker circles today, and it's poison.
> Learn tools, and use tools, but don't accept tools. Always distrust them; always be alert for alternative ways of thinking. This is what I mean by avoiding the conviction that you "know what you're doing".
These two statements have done a better job explaining my feelings on expertise than almost any of my attempts. Thank you, Bret.
> Ignorance is remaining willfully unaware of the existing base of knowledge in a field, proudly jumping in and stumbling around. This approach is fashionable in certain hacker/maker circles today, and it's poison.
Can anyone unpack this statement a bit more?
I'm interpreting it as "don't try new things because you don't know what you're doing", which just so happens to feel like the exact opposite of what Bret is trying to convey.
As I'm interpreting it, it's a cautionary statement against worshipping ignorance. It's brave and difficult to do something that's dissimilar to the ways you've learned and become powerful through performing. It's foolish to dive in without learning all that you can about what those who have been here before discovered.
I don't think it's cautioning against diving in prematurely. It's cautioning against thinking you'll do better than those who have come before by pure virtue of not knowing what they've done.
I don't know that it was a bad thing though. As soon as I saw that I started to think about how that might be possible, or even if it could be possible. Fundamentally there has to be some kind of common discovery protocol underlying it; it just doesn't appear to be possible (yet) to have two unknown systems talk to each other with an unknown protocol. That'd be like two monoglots, a German speaker and a Russian speaker, figuring out how to talk fluently with each other. I suppose it would be possible using gestures and props, but these non-verbal cues could themselves be thought of as a kind of discovery protocol for figuring out the more efficient protocol that enables verbal communication.
You should look into hypermedia APIs. The entire point is to have a discoverable API where the developer doesn't need to know the low-level details. Theoretically, you could write a library to parse, adapt, and act on another API.
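A minimal sketch of that idea, with a hypothetical HAL-ish JSON shape and example URL: the client hard-codes only the entry point and a link-relation name, and discovers the rest from the response.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// resource models a hypothetical hypermedia response: a map of link
// relations to hrefs under "_links".
type resource struct {
	Links map[string]struct {
		Href string `json:"href"`
	} `json:"_links"`
}

// follow fetches a resource and returns the URL advertised for the given
// link relation, instead of a URL baked into the client.
func follow(entry, rel string) (string, error) {
	resp, err := http.Get(entry)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var r resource
	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
		return "", err
	}
	link, ok := r.Links[rel]
	if !ok {
		return "", fmt.Errorf("no %q link advertised by %s", rel, entry)
	}
	return link.Href, nil
}

func main() {
	// Hypothetical entry point; only the root URL and the relation name
	// "orders" are known in advance.
	url, err := follow("https://api.example.com/", "orders")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("discovered:", url)
}
```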
I think there are (at least) two very different kinds of talks: those that are meant to teach you worthwhile things, and those that are meant to inspire you to invent worthwhile things. All the talks I've seen from him are in the latter category.
His talks constantly feature working demos of the ideas he is pushing, subtly demonstrating a lot of well-thought-out interaction design details. If you watch his "Media for Thinking the Unthinkable" (http://vimeo.com/67076984) it's a gold mine of specifics. I've watched it several times and always pick some new ideas for my UI design work.
The difference to a run-of-the-mill talk is that he is showing the details, not telling the details.
I have the exact opposite reaction to the video. He is solving toy problems with toy ideas. I think his page Kill Math (http://worrydream.com/KillMath/) illuminates this point. I don't think he can think symbolically very well (no insult intended, I can't think visually very well). There are certainly times where graphing things make a lot of sense, but to throw out analytical math? Come on. By and large he is getting the "feel" of a system, but he cannot really reason about it, prove things about it, extend it, or design new systems with vision (there are obvious counterexamples).
In another video he shows an IDE where he scrubs constants, and it changes the behavior of the concurrently running program (changing the size of an ellipse or tree branch). It's neat. But, again, toy problem. First of all, we shouldn't be programming with constants. Second, anything complicated will have relationships between the data - scrubbing one value will just end up giving you nonsense. Third, it just doesn't make any sense in many contexts. I work in computer vision currently, and I can't think of anything but the most superficial way I could incorporate scrubbing. He made some comment about how no one could know what a bezier curve is unless they had a nice little picture of it in their IDE to match the function call. That's silly. I actually use splines and other curve fitting in my work, and I have to actually understand the math. Do I use cubic splines, a Hermite interpolation, bezier, or something else? I don't decide that by drawing some pictures - the search space is too big, I'll never cover all the possibilities. I have to do math to figure out the best choice.
In that same video he went on to demonstrate programming binary search using visual techniques. Unfortunately he wrote a buggy implementation, and stood there exclaiming how his visual technique found a different bug. It did, a super trivial one, but it completely failed to reveal the deeper issue. And, there was no real way for his visual method to have found it.
Visualization is a very powerful tool, but it is one tool in the toolchest. There is a scene in the movie Contact with Jodie Foster using headphones to listen to the SETI signal. We all know that is bogus - the search space is far too vast for aural search to work.
His ideas are terribly wrongheaded. Make interfaces to help give us intuition? Absolutely! Use graphics where analytics fail? Of course! But don't conclude that math is a "freakish knack", as he does, or that math is some sort of temple (he calls mathematicians "clergy", and then goes on to throw in an insult that many are just pretending to understand).
I posted in another comment how crazy it would be to have a calculator that scrubs. Well, he shows one on that page. Really? The day bridge designers start using scrubbing apps to design our bridges is the day I'm never crossing a bridge again.
Edit to add: his website is another example of this. I can't find anything on it. There are a bunch of pictures, and my eyes saccade around, but what is here, what is his point? I dunno. I can click, and click, and click, and start to get an idea, but there is always more hidden away behind pictures. It's barely workable as a personal website, and would be a disaster as a way to organize anything larger. I don't mean to pick on it - as an art project or glimpse into how he thinks, it's great. I just point out it illustrates (pun kind of intended) the strengths and limits of visual presentation. You tell me, for example, without grep or google search, whether he has written about coffee.
If you disagree, please reply in pictures only! ;)
It makes me wonder, when I try to see the places where this can be applied in my particular field of work.
If I look at my everyday workflow, it seems like I'm constrained to all the scenarios that he mentions, and I'm aware of how limiting that can be given what the technology and multiple cores could offer...
I'm talking about working on files, not interacting visually with the computer, not letting the computer figure out the stuff... not working in parallel
and then I notice...
how I deliver software to a distributed environment of virtual machines, some running on the same CPU, some on boxes with their own, and realize that maybe the everyday CPU that you buy for your everyday box is that small CPU on the CPU grid he shows....
the network between the CPUs is the set of lines that connects them....
and I notice that I can't remember the last time I wrote a TCP stack for connecting those machines....
so they somehow are figuring out on their own how to talk to each other (notice how this is different from having a goal and trying to achieve it). I still think we are way far from this happening (probably luckily for us)...
so: what if everything he mentions here does somehow exist already, but it requires a shift in the way you see stuff?
It will not help the discussion forward to behave like fans and treat any substantial critique as "you are one of those old fashioned mindless programming dudes".
On the other hand, in the light of Victor's achievements in industry (including "shipping" stuff) one cannot dismiss him as a smooth talking TEDdie either.
Victor has provided many crafted examples of what can be achieved in the fields of engineering, mathematics and programming, or any field of science and technology, if the feedback loop between the tool and its user is improved.
Indeed, this 30-minute talk does not compare to an industrial delivery. It has some theatre and some deliberate exaggerations or unfair treatment of how society has evolved. Such is the nature of talks.
I do not think he sees the current state of affairs as a great mistake. He will surely acknowledge all practical circumstances and conceptual challenges that have made certain inferior designs survive while superior ones did not materialize.
The message is: we shouldn't accept this state of affairs as final or as one that can only be marginally improved. It can still be radically improved. The industry is still fresh - even ideas from the 60s are valid and underachieved.
I see his critique as a positive statement of hope and encouragement, not as a finger pointed at all you silly programmers.
A whole bunch of interesting stuff in there. Undoubtedly I shall spend most of my forthcoming holiday reading up on papers and other works as old as I am and realising - yet again - everything old is new again (except for the bits that have been willfully ignored in favour of being reinvented, badly ;) )
The art of programming is evolving steadily; more powerful hardware becomes available, and compiler technology evolves.
Of course there will be resistance to change, and new compilers don't mature overnight. At the end of the day, it boils down to what can be parsed unambiguously, written down easily by human beings, and executed quickly. If you get off on reading research papers on dependent types and writing Agda programs to store in your attic, that's your choice; the rest of us will be happily writing Linux in C99 and powering the world.
Programming has not fundamentally changed in any way. x86 is the clear winner as far as commodity hardware is concerned, and serious infrastructure is all written in C. There is a significant risk to adopting any new language; the syntax might look pretty, but you figure out that the compiler team consists of incompetent monkeys writing leaking garbage collectors. We are pushing the boundaries everyday:
- Linux has never been better: it continues to improve steadily (oh, and at what pace!). New filesystems optimized for SSDs, real virtualization using KVM, an amazing scheduler, and new system calls. All software is limited by how well the kernel can run it.
- We're in the golden age of concurrency. Various runtimes are trying various techniques: erlang uses a message-passing actor hammer, async is a bit of an afterthought in C#, Node.js tries to get V8 to do it leveraging callbacks, Haskell pushes forward with a theoretically-sound STM, and new languages like Go implement it deep at the scheduler-level.
- For a vast majority of applications, it's very clear that automatic memory management is a good trade-off. We look down upon hideous nonsense like the reference counter in cpython, and strive to write concurrent moving GCs. While JRuby has the advantage of piggybacking on a mature runtime, the MRI community is taking GC very seriously. V8 apparently has a very sophisticated GC as well, otherwise Javascript wouldn't be performant.
- As far as typing is concerned, Ruby has definitely pushed the boundaries of dynamic programming. Javascript is another language with very loosely defined semantics, that many people are fond of. As far as typed languages go, there are only hideous languages like Java and C#. Go seems to have a nice flavor of type inference to it, but only time will tell if it'll be a successful model. Types make for faster code, because your compiler has to spend that much less time inspecting your object: V8 does a lot of type inference behind the scenes too.
- As far as extensibility is concerned, it's obvious that nothing can beat a syntax-less language (aka Lisp). However, Lisps have historically suffered from the lack of a type system and object system: CLOS is a disaster, and Typed Racket seems to be going nowhere. Clojure tries to bring some modern flavors into this paradigm (core.async et al), while piggybacking on the JVM. Not sure where it's going though.
- As far as object systems go, nothing beats Java's factories. It's a great way to fit together many shoddily-written components safely, and Dalvik does exactly that. You don't need a package manager, and applications have very little scope for misbehaving because of the suffocating typesystem. Sure, it might not be pleasant to write Java code, but we really have no other way of fitting so many tiny pieces together. It's used in enterprise for much the same reasons: it's too expensive to discipline programmers to write good code, so just constrain them with a really tight object system/typesystem.
- As far as functional programming goes, it's fair to say that all languages have incorporated some amount of it: Ruby differentiates between gsub and gsub! for instance. Being purely functional is a cute theoretical exercise, as the scarab beetle on the Real World Haskell book so aptly indicates.
- As far as manual memory management goes (when you need kernels and web browsers), there's C and there's C++. Rust introduces some interesting pointer semantics, but it doesn't look like the project will last very long.
Well, that ends my rant: I've hopefully provided some food for thought.
> We're in the golden age of concurrency. Various runtimes are trying various techniques: erlang uses a message-passing actor hammer, async is a bit of an afterthought in C#, Node.js tries to get V8 to do it leveraging callbacks, Haskell pushes forward with a theoretically-sound STM, and new languages like Go implement it deep at the scheduler-level.
No, a better analogy is that we're in the Cambrian explosion of concurrency. We have a bunch of really strange lifeforms all evolving very rapidly in weird ways because there's little selection pressure.
Once one of these lifeforms turns out to be significantly better, then it will outcompete all of the others and then we'll be in something more like a golden age. Right now, we still clearly don't know what we're doing.
We've been doing concurrency for many years now; it's called pthreads. Large applications like Linux, web browsers, webservers, and databases do it all the time.
The question is: how do we design a runtime that makes it harder for the user to introduce races without sacrificing performance or control? One extreme approach is to constrain the user to write only purely functional code and auto-parallelize everything, like Haskell does (it's obvious why this is a theoretical exercise). Another is to get rid of all shared memory and restrict all interaction between threads to message passing, like Erlang does (obviously, you have to throw performance out the window). Yet another approach is to run independent threads and keep polling for changes at a superficial level (like Node.js does; performance and maintainability are shot). The approach that modern languages are taking is to build concurrency as a language primitive built into the runtime (see how Go's proc.c schedules various channels in chan.c; it has a nice race detection algorithm in race.c).
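To make that contrast concrete, here is a toy Go sketch of two of those poles: shared memory guarded by a lock (forget the lock and `go run -race` will flag it) versus confining the state to a single goroutine and communicating over a channel. It shows only the shape of each approach, not their relative performance.

```go
package main

import (
	"fmt"
	"sync"
)

// sharedMemory: many goroutines mutate one counter behind a mutex.
func sharedMemory() int {
	var mu sync.Mutex
	n := 0
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // forget this and the race detector will complain
			n++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return n
}

// messagePassing: the counter lives in exactly one goroutine; everyone
// else asks it to increment over a channel.
func messagePassing() int {
	inc := make(chan struct{})
	done := make(chan int)
	go func() {
		n := 0
		for range inc {
			n++
		}
		done <- n
	}()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			inc <- struct{}{}
		}()
	}
	wg.Wait()
	close(inc)
	return <-done
}

func main() {
	fmt.Println(sharedMemory(), messagePassing()) // 100 100
}
```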
There is more pressure than ever to build applications that leverage more cores to build highly available internet applications. Multi-cores have existed long enough, and are now prevalent even on mobile devices. No radically different solution to concurrency is magically going to appear tomorrow: programmers _need_ to understand concurrency, and work with existing systems.
> We've been doing concurrency for many years now; it's called pthreads.
Sometimes, the major advances come when fresh ideas are infused from the outside. In Darwin's case it was his geological work that inspired his theory. In concurrency maybe it will be ideas from neuroscience.
> No radically different solution to concurrency is magically going to appear tomorrow: programmers _need_ to understand concurrency, and work with existing systems.
The environment is changing. In 2007 the oxygen levels started rising, so to speak: single-threaded CPU scaling hit the wall. It has gone from doubling every 2 years to a few % of increase per year.
We are only at the beginning of this paradigm shift to massively multi-core CPUs. Both the tools and the theory are still in their infancy. In HW there are many promising advances being explored, such as GPUs, Intel Phi, new FPGAs, and projects like Parallella.
The software side also requires new tools to drive these new technologies. Maybe it will be a radical new idea, but more likely some evolved form of the CSP, functional, flow-based, and/or reactive programming models from the 70s, which didn't fit the HW environment of the time, will fill this new niche.
For example, one of the smartest guys I know is working on neuromorphic engineering, creating an ASIC with thousands of cores now that may evolve to millions or billions. If this trilobite emerges on top, whatever language is used to program it might have been terrible in the 70s, or for your "existing systems", but it may be the future of programming.
> Sometimes, the major advances come when fresh ideas are infused from the outside.
I agree with this largely; over-specialization leads to myopia (often accompanied by emotional attachment to one's work).
> In Darwin's case it was his geological work that inspired his theory.
If you read On the Origin of Species, you'll see that Darwin started from very simple observations about cross-pollination leading to hybrid plant strains. He spent years studying various species of animals. In the book, he begins out very modestly, following step by step from his Christian foundations, without making any outrageous claims. The fossils he collected on his Beagle expedition sparked his interest in the field, and served as good evidence for his theory.
> In concurrency maybe it will be ideas from neuroscience.
Unlikely, considering what little we know about the neocortex. The brain is not primarily a computation machine at all; it's a hierarchical memory system that makes mild extrapolations. There is some interest in applying what we know to computer science, but I've not seen anything concrete so far (read: code; not some abstract papers).
> We are only at the beginning of this paradigm shift to massively multi-core CPUs.
From the point of view of manufacturing, it makes most sense. It's probably too expensive to design and manufacture a single core in which all the transistors dance to a very high clock frequency. Not to mention power consumption, heat dissipation, and failures. In a multi-core, you have the flexibility to switch off a few cores to save power, run them at different clock speeds, and cope with failures. Even from the point of view of Linux, scheduling tons of routines on one core can get very complicated.
> In HW there are many promising advances being explored, such as GPUs, Intel Phi, new FPGAs, and projects like Parallella.
Of course, but I don't speculate much about the distant future. The fact of the matter is that silicon-based x86 CPUs will rule commodity hardware for the foreseeable future.
> [...]
All this speculation is fine. Nothing is going to happen overnight; in the best case, we'll see an announcement about a new concurrent language on HN tomorrow, which might turn into a real language with users after 10 years of work ;) I'll probably participate and write patches for it.
For the record, Go (which is considered "new") is over 5 years old now.
I think you missed my point about Darwin. Darwin was inspired by the geologic theory, gradualism, where small changes are summed up over long time periods. It was this outside theory applied to biology that helped him to shape his radical new theory.
Right now threads are the only game in town, and I think you're right. For existing hardware, there probably won't be any magic solution, at least not without some major tradeoff like the performance hit you get with Erlang.
I was thinking about neuromorphic hardware when I mentioned neuroscience. From what I hear the software side there is more analogous to HDL.
Go is a great stopgap for existing thread-based HW. But if the goal is to achieve strong AI, we're going to need some outside inspiration. Possibly from a hierarchical memory system, a massively parallel one.
I wish I could offer less speculation, and more solid ideas. Hopefully someone here on HN will. I think that was the point of the video. To inspire.
There are other options in the systems field like "virtual time" and "time warps", or "space-time memory", or a plethora of optimistic concurrency schemes where you optimistically try to do something, discover there is an inconsistency, rollback your effects, and do it again (like STM, but with real "do it again").
Our raw parallel concurrency tools, especially pthreads and..gack..locks, are horribly error prone and not even very scalable in terms of human effort and resource utilization. That is why we've expended so much effort designing models that try and avoid them.
My point is that we'll continually find better solutions to existing problems (concurrency, or anything else for that matter). There will be a time in the future when we've come up with a solution that's "good enough", and it'll become the de-facto standard for a while (kind of like what Java is today). I don't know what that solution will be, and I don't speculate about it: I'm more interested in the solutions we have today.
Yes, the raw solutions _are_ very painful, which is why they haven't seen widespread adoption. And yes, we are continually trying to enable more programmers.
Yes, many of us are in the field of coming up with "the programming model" to handle this as well as general live programming problems. I'm personally focusing on optimistic techniques to deal with concurrency as well as incremental code changes.
Nothing you've said really invalidates his argument - we are still typing mostly imperative code into text files, it is still very easy to introduce bugs into software, and software development is on the whole unnecessarily complex and unintuitive.
It's heartening to see a renewed interest in functional, declarative and logic based programming today, but also saddening that the poisonous legacy of C has prevented us from getting there sooner.
> It's heartening to see a renewed interest in functional, declarative and logic based programming today, but also saddening that the poisonous legacy of C has prevented us from getting there sooner.
From the point of view of programming a computer, this doesn't make much sense to me personally.
But perhaps the problem is that I first and foremost see that I program a computer, a deterministic machine with limited resources and functionality, rather than "designing a user experience and letting the computer take care of making it run as I describe". Guess I dwell in the depths of hardware/machine-centric programming rather than flying high in user-centric programming.
Unless you're writing IA64 microcode, you don't really program a computer. You explain your desires to a compiler using a vocabulary as expressive as is possible for it to comprehend, and then it uses whatever intelligence is at its disposal to program the computer.† The more intelligent the compiler (e.g. GHC with its stream-fusion), and the more expressive the vocabulary it knows (e.g. Erlang/OTP with its built-in understanding of servers, finite-state machines, and event-handlers) the higher-level the conversation you can have with it is.
Your conversation with the compiler is actually the same conversation a client would have with you, as a software contractor. From the client's perspective, you play the role of the compiler, interrogating and formalizing their own murky desires for them, and then coughing up a build-artifact for them to evaluate. This conversation just occurs on an even higher level, because a human compiler is smarter, and has a much more expressive vocabulary, than a software compiler.
...but the "goal" of compiler and language design should be to make that distinction, between the "software compiler" and the "human compiler", less obvious, shouldn't it? The more intelligence we add to the compiler, and the more expressivity we add to the language, the more directly the programmer can translate the client's desires into code. Until, finally, one day--maybe only after we've got strong AI, but one day--the client themselves will be the one speaking to the compiler. Not because the client will be any better at knowing how to formalize what they want than they ever were (that's the dream that gave us the abominations of FORTRAN, SQL, and AppleScript) but because the compiler will be able to infer and clarify their murky thoughts into a real, useful design--just as we do now. Wouldn't that be nice?
---
† If you use a language-platform that includes garbage-collection, for example, then you're not targeting a machine with "limited resources" at all; garbage-collection is intended to simulate an Abstract Machine with unlimited memory. (http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047...)
Don't talk rubbish. Nobody enjoys spending 10 hours to accomplish something that can be accomplished in an hour. Of course we're trying to build compilers for nicer languages. Programming isn't going to become any less complex or unintuitive by sitting around wishing for better solutions: it's going to happen by studying existing technology, and using it to build better solutions.
What has "prevented" us from getting there sooner is purely our incompetence. It's becoming painfully clear to me that people have absolutely no idea about how a compiler works.
Dunno the OP's original reasoning, but I found this comment flippant and unsubstantiated. I don't code Rust myself, but both it and Go seem extremely promising, and both "specialize" relative to C/C++ without stepping on each other's toes. There is room for both systems languages. As things stand now, if it doesn't last very long, it will be because of some future mistake by its creators or community, not because it loses out to some fitter competitor. AFAICT the multicore future doesn't have room for C/C++, so it's logical that one or more practical systems languages that do consider a multicore future will take the place of C/C++. Go and Rust seem like the most likely candidates on the horizon at this point in time.
Let's take a couple of simple examples of when manual memory management is helpful:
- implement a complex data structure that requires a lot of memory: you can request a chunk of memory from the kernel, do an arena allocation, and choose to allocate/free on your own terms (see the sketch after this list).
- implement a performant concurrency model. You essentially need some sort of scheduler to give various threads access to the shared memory via CAS primitives.
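As a rough illustration of the first bullet, here is a toy bump/arena allocator sketched in Go, with a byte slice standing in for the chunk you would mmap from the kernel in C; the names are hypothetical.

```go
package arena

// Arena is a toy bump allocator: grab one big block up front, hand out
// slices by advancing an offset, and "free" everything at once by
// resetting it.
type Arena struct {
	buf []byte
	off int
}

func New(size int) *Arena { return &Arena{buf: make([]byte, size)} }

// Alloc returns n bytes from the arena, or nil if it is exhausted.
func (a *Arena) Alloc(n int) []byte {
	if a.off+n > len(a.buf) {
		return nil
	}
	p := a.buf[a.off : a.off+n : a.off+n]
	a.off += n
	return p
}

// Reset frees every allocation at once, on the caller's own terms.
func (a *Arena) Reset() { a.off = 0 }
```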
Let's take up the second point first: you have implemented tasks that communicate using pipes without sharing memory (rt/rust_task.cpp). You've exposed the lower-level rt/sync via libextra/sync.rs, but it's frankly not a big improvement over using raw pthreads. The scheduler is a toy (rt/rust_scheduler.cpp), and the memory allocator is horribly primitive (rt/memory_region.cpp; did I read correctly? are you using an array to keep track of the allocated regions?). The runtime is completely devoid of any garbage collection, because you started out with the premise that manual memory management is the way to go: did it occur to you that a good gc would have simplified the rest of your runtime greatly?
Now for the first point: Rust really has no way of accomplishing it, because I don't get access to free(). The best you can do at this point is to use some sort of primitive reference counter (not unlike CPython or shared_ptr in C++), because it's too late to implement a tracing garbage collector. And you just threw performance out the window by guaranteeing that you will call free() every time something goes out of scope, no matter how tiny the memory.
Now, let's compare it to the go runtime: arena-allocator tracked using bitmaps (malloc.goc), decent scheduler (proc.c), decent tracing garbage collector (mgc0.c), and channels (chan.c). For goroutines modifying shared state, they even implemented a nice race-detection tool (race.c).
The fact of the matter is that a good runtime implementing "pretty" concurrency primitives requires a garbage collector internally anyway. It's true that Go doesn't give me a free() either, but at least I'm reassured by the decent GC.
Now, having read through most of libstd, observe:
impl<'self, T> Iterator<&'self [T]> for RSplitIterator<'self, T>
What's the big deal here? The lifetime of the variable is named ('self), and the ownership semantics are clear (& implies a borrowed pointer; not very different from the C++ counterpart). Who is all this benefiting? Sure, you get annoying compile-time errors when you don't abide by these rules, but what is the benefit of using them if there's no tooling built around them (i.e., a GC)? Yes, it's trivially memory-safe and I get that.
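For readers unfamiliar with the notation, here is a rough sketch of the same idea in present-day Rust (the 'self syntax above is from the pre-1.0 compiler); the function name is made up for illustration:

    // The lifetime parameter ties the borrowed output to the borrowed
    // input, so the compiler rejects any use of the result after the
    // input has been freed.
    fn first_half<'a, T>(xs: &'a [T]) -> &'a [T] {
        &xs[..xs.len() / 2]
    }

    fn main() {
        let v = vec![1, 2, 3, 4];
        let h = first_half(&v); // `h` borrows from `v`
        println!("{:?}", h);
        // Uncommenting the next two lines is a compile-time error,
        // because `v` cannot be dropped while `h` is still in use:
        // drop(v);
        // println!("{:?}", h);
    }

Whether those compile-time guarantees are worth the annotation overhead without a GC behind them is exactly the question being raised here.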
Lastly, think about why people use C and C++. Primarily, it boils down to compiler strength. The Rust runtime doesn't look like it's getting there; at least not in its current shape.
> Let's take a couple of simple examples of when manual memory management is helpful:
>
> - implement a complex data structure that requires a lot of memory: you can request a chunk of memory from the kernel, do an arena allocation, and choose to allocate/free on your own terms.
Rust fully supports this case with arenas.
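As a rough illustration of the arena pattern under discussion (this is not the libextra/arena.rs API; the hand-rolled Arena type below is made up for the example, in present-day Rust): allocations go into one backing buffer, callers get handles back, and everything is freed in a single shot when the arena goes away.

    // A minimal, safe arena sketch: values live in one Vec and are
    // addressed by index; nothing is freed individually, and the whole
    // arena is released at once when it is dropped.
    struct Arena<T> {
        items: Vec<T>,
    }

    impl<T> Arena<T> {
        fn new() -> Self {
            Arena { items: Vec::new() }
        }

        // Allocate a value into the arena and return a handle to it.
        fn alloc(&mut self, value: T) -> usize {
            self.items.push(value);
            self.items.len() - 1
        }

        fn get(&self, handle: usize) -> &T {
            &self.items[handle]
        }
    }

    fn main() {
        let mut arena = Arena::new();
        let a = arena.alloc("node a");
        let b = arena.alloc("node b");
        println!("{} -> {}", arena.get(a), arena.get(b));
    } // dropping `arena` frees every allocation in one shot

Real arena implementations hand out references instead of indices and manage raw chunks of memory, but the allocate-in-bulk/free-in-bulk shape is the same.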
> - implement a performant concurrency model: you essentially need some sort of scheduler to give various threads access to shared memory via CAS primitives.
And that's why the new scheduler is written in Rust.
Furthermore, manual memory management is helpful when you are implementing a browser that doesn't want a stop-the-world GC.
> You've exposed the lower-level rt/sync via libextra/sync.rs, but it's frankly not a big improvement over using raw pthreads.
It's just a wrapper around pthreads, for use internally by the scheduler and low-level primitives. It is not intended for safe Rust code to use. Of course it's not a big improvement over pthreads.
> The scheduler is a toy (rt/rust_scheduler.cpp)
That's why it's getting rewritten. You're looking at the old proof of concept/bootstrap scheduler. Please see the new scheduler in libstd/rt. It will probably be turned on in a week or two.
> and the memory allocator is horribly primitive (rt/memory_region.cpp; did I read correctly? are you using an array to keep track of the allocated regions?)
There is a new GC that is basically written, just not turned on by default yet. Furthermore, manually-managed allocations no longer go through that list.
> Rust really has no way of accomplishing it, because I don't get access to free().
Of course you do. `let _ = x;` is an easy way to free any value.
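For what it's worth, in present-day Rust the explicit way to end a value's lifetime early is std::mem::drop (available in the prelude as plain drop); a tiny sketch, with a made-up Noisy type so the free is visible:

    struct Noisy(&'static str);

    impl Drop for Noisy {
        fn drop(&mut self) {
            println!("freed {}", self.0);
        }
    }

    fn main() {
        let x = Noisy("x");
        drop(x);                       // `x` is freed right here, not at end of scope
        println!("x is already gone"); // prints after "freed x"
    }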
> The best you can do at this point is to use some sort of primitive reference counter (not unlike CPython or shared_ptr in C++), because it's too late to implement a tracing garbage collector.
This is just nonsense, sorry. Graydon has a working tracing GC, it's just not turned on by default because of memory issues on 32 bit when bootstrapping. This is not too difficult to fix and is a blocker for 1.0.
Furthermore, did you not see the mailing list discussions where we're discussing what needs to happen to get incremental and generational GC?
> And you just threw performance out the window by guaranteeing that you will call free() every time something goes out of scope, no matter how tiny the memory.
This is what move semantics are for. If you want to batch deallocations like a GC does (which has bad effects on cache behavior as Linus is fond of pointing out, but anyway), move the object into a list so it doesn't get eagerly freed and drop the list every once in a while.
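A minimal sketch of that batching idea in present-day Rust (the graveyard name and the sizes are made up for illustration): each value is moved into a list instead of being freed at the end of its scope, and the whole list is dropped at a moment of the caller's choosing.

    fn main() {
        // Buffers moved in here are not freed individually; clearing the
        // list releases a whole batch at once.
        let mut graveyard: Vec<Vec<u8>> = Vec::new();

        for i in 0..1_000 {
            let buf = vec![0u8; 1024];
            // ... use buf ...
            graveyard.push(buf); // moved, so no free happens here
            if i % 100 == 99 {
                graveyard.clear(); // 100 buffers freed in one batch
            }
        }
    } // anything still in `graveyard` is freed here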
> Lastly, think about why people use C and C++. Primarily, it boils down to compiler strength. The Rust runtime doesn't look like it's getting there; at least not in its current shape.
The benchmarks of the new runtime are quite promising. TCP sending, for example, is faster than both node.js and Go 1.1 in some of our early benchmarks. And sequential performance is on par with C++ in many cases: http://pcwalton.github.io/blog/2013/04/18/performance-of-seq...
Please supply evidence (i.e., code) to back up your one-liners. I assume you're talking about libextra/arena.rs. It's very straightforward; there's a big comment at the top of the file, so I don't have to point out how primitive or sophisticated it is.
> And that's why the new scheduler is written in Rust.
You're talking about libstd/rt/sched.rs. So it uses the UnsafeAtomicRcBox<ExData<T>> primitive (from libstd/std/sync.rs) to implement the queues. The event loop itself is a uvio::UvEventLoop. Looking at the rest of libstd/rt/uv, I see that your core evented I/O is libuv (the same library Node.js is built on). For readers desiring an accessible introduction, see [1]. Otherwise, sched.rs is very straightforward.
> There is a new GC that is basically written
Unless you're expecting some sort of blind worship, I expect pointers to source code. I found libstd/gc.rs, so I'll assume that it's what you're talking about. Let's see what's "basically" done, shall we?
You use the llvm.gcroot intrinsic to extract the roots, and then _walk_gc_roots to reference count. You've also written code to determine the safe points, and have implemented _walk_safe_point. For readers desiring an accessible introduction to GC intrinsics in LLVM, see [2]. The history indicates that essentially nobody has touched gc.rs since it was written by Elliott a year ago, so I'm not going to investigate further.
The reason it's not enabled by default is quite simple: it's not hooked up to the runtime at all. You still have to figure out when to run it.
> Graydon has a working tracing GC
You're not understanding this: the whole point of running an open source project is so you can proudly show off what you've written and get others involved. Your one-liners are not helping one bit.
> did you not see the mailing list discussions where we're discussing what needs to happen to get incremental and generational GC?
No, and that should be the purpose of your reply: to provide links, so people can read about it. I'm assuming you're talking about this [3]. Okay, so you need read and write barriers, and you mentioned something about a hypothetical Gc and GcMut; readers can read the rest of the thread for themselves: I don't see code, so no comments.
> TCP sending, for example, is faster than both node.js and Go 1.1 in some of our early benchmarks.
TCP sending is libuv: logically, can you explain to me how you're faster than node.js? No comments on Go at this point.
> And sequential performance is on par with C++
So you emit relatively straightforward llvm IR for straightforward programs, and don't do worse than clang++. Not surprising.
This is the one link you provided in your entire comment. Learn to treat people with respect: showing a programmer colorful pictures of vague benchmarks instead of code is highly condescending. Yes, I've seen test/bench.
If you're hiding some code in the attic, now is the time to show it.
I find some of your comments very aggressive. Yet you are lecturing people about being condescending. I honestly much prefer pcwalton's tone which I find much less condescending, interestingly.
Pointing at code can indeed be useful, but it looks to me like you are comparing apples to oranges: Rust is not at 1.0 yet, so comparing code that isn't yet production-ready with Go or whatever technology that is already mature is not all that useful.
Saying that, in its current state, Rust is not a good choice for production code is acceptable and fairly obvious. Extrapolating to the point of saying that it is doomed seems like quite an exaggeration to me, and not respectful of the work people are putting into this project.
> Unless you're expecting some sort of blind worship, I expect pointers to source code. I found libstd/gc.rs, so I'll assume that it's what you're talking about. Let's see what's "basically" done, shall we?
I don't really want to draw out this argument, but I called your reply FUD because you were claiming things that were not true, such as that we cannot implement tracing GC.
Hm, a conservative mark-and-sweep that uses tries to keep state. I wonder how the gc task is scheduled, but you're not feeling chatty; so I'll drop the topic.
I made claims based on what I (and everyone else) could see in rust.git; I have no reason to be either overly pessimistic or overly optimistic. At the end of the day, the proof is in the pudding (i.e., the code): we are only debating facts, not hypotheticals.
Either way, it was an interesting read. Sure, I took a karma hit for saying unpopular things, and people feel sour/hurt/[insert irrational emotion here]; that's fine. Nevertheless, I hope the criticism helped people think about some of the issues.
I think you took a karma hit not for saying unpopular things, but for assuming bad faith. One of the lead developers of the Rust language pointed out some gaps or mistakes in your comment about Rust. Instead of appearing eager to correct yourself, you appeared eager to defend your original statements and all but accused him of lying. I'm certain you could have made the same substantive points with a more reasonable/humble tone and not been downvoted.
For example, when you learn new information like "there's a tracing GC in progress" and you want to look at the source code, you could choose to say "Oh, cool! I didn't know that. Could you give a link with more information or source?" instead of lecturing the other commenter about how they are Doing Open Source Wrong.
I don't have a position to defend, and I am nobody to make any statements of any significance: I did a code review, and I was critical about it. If anything, I want the project to succeed. Evidence? [1]
He asked me why I thought Rust wouldn't live for long, and I spent hours reading the code and writing a detailed, coherent comment to the best of my ability. He dismisses my comment as "FUD" [2] and responds with one-liners. The final comment with a link to his blog with colorful graphs was terribly condescending. Him being a lead developer doesn't mean squat to me: a bad argument from him is still a bad argument.
No, I'm not going to stoop to begging for scraps: if I wanted to do that, I'd be using proprietary software; Apple or Microsoft nonsense. In this world, the maintainer is the one who has to make the effort to educate potential contributors. He is clearly doing a terrible job of it, and I pointed that out.
No, I never accused him of lying. I accused him of making a bad argument, and not giving me sufficient information to post a counter-argument, which is exactly what he did.
And no, I did not "defend" my original argument: I posted a fresh review of fresh code (the one in src/libstd/rt, as opposed to the one in src/rt).
On the point of tone. Yes, I've spent many years on harsh mailing lists and my language is a product of that experience. Are you going to discriminate against me because of that, irrespective of the strength of the argument?
I will repeat this once more: the only currency in a rational argument is the strength of your argument; don't play the authority card.
Factually, there have been more commits to the arch/arm tree than the arch/x86 tree in the last six months. It's true that Linaro, Samsung, and many other companies are interested in taking ARM forward as it's great for minimizing power consumption on embedded devices (among other things). I'm not going to speculate about whether x86 or ARM will "win the battle" or whether they will co-exist, but the fact of the matter is that x86 dominates everything from consumer laptops to web infrastructure. It's a very mature architecture, and VT-x is slowly phasing out pvops. The virt/kvm/arm tree is very recent (3 months old): ARM doesn't have virtualization extensions, so I don't know how this works yet. So, yeah: ARM definitely has a long and exciting future.
> C is single-handedly responsible for 99% of all security problems on the Internet.
Collecting evidence to back outrageous claims is left as an exercise to the reader.
> BS
I'm not interested in "transcendental superiority" arguments. CLOS doesn't have users, and hasn't influenced object systems in prevalent languages; period.
> WTF?
Factually, Java is a very popular language in industry, which requires code produced by different programmers to fit together reliably. I personally attribute it to the object system/ typesystem, although others might have a different view.
> I'm not going to speculate about whether x86 or ARM will "win the battle" or whether they will co-exist
I don't care about a 'battle'. It's just that most of the computers around me, probably a dozen, use ARM.
> Collecting evidence to back outrageous claims is left as an exercise to the reader.
That's a trivial task.
> I'm not interested in "transcendental superiority" arguments.
WTF?
> CLOS doesn't have users,
BS.
> and hasn't influenced object systems in prevalent languages; period.
No true Scotsman argument. Actually, for something that is relatively unknown, it has influenced a lot of languages and a lot of researchers. There is a ton of non-CLOS literature and systems trying to adapt things like mixins, the MOP, multiple dispatch, generic functions, ...
That languages like Java don't have any of that natively is not CLOS's fault. Java only recently caught up with some kind of closures. Give the Java maintainers a few more decades. Java does not even have multiple inheritance.
CLOS-style multiple dispatch is also now present in such little-known languages as Haskell, R, C#, Groovy, Clojure, Perl, Julia, and a few others.
To the contrary, Typed Racket is under active development and new Racket libraries are written using it. I don't know where you got the impression that it's going nowhere, but it's incorrect.
I like his overall message, but I wonder about the details. E.g., he attacks the existence of HTML and CSS, but there needs to be some universal format to store the markup and design in. So I guess he's attacking the idea of hand-coding them instead of using a WYSIWYG editor. But you can use something like Dreamweaver, Expression Web, or even more recent web apps like Divshot. I guess the problem is that they're not good enough yet, but that's not because people aren't trying; it's because it is hard to do.
Is this an actual talk he gave in 1973 or is this a spoof or something?
If so, it seems he missed the mark (significantly) on web development.
He said "if in a few decades we get a document format on some sort of web of computers, I am sure we will be creating those documents by direct manipulation - there won't be any markup languages or stylesheets, that will make no sense."
So that is either very sarcastic and cheeky, or straight up wrong.
It's pure cheekiness. He is making the point that we're currently doing all sorts of things that would have seemed backwards to some researchers even 40 years ago.
It's not an actual talk or a spoof. It's probably best described as a farce because he's using a lot of irony to make his points. I guess you could call it sarcasm.
I think he's wrong as well. Often non-technical managers assume that since something is simple to describe, it will be simple to implement. This is the tech talk equivalent of that attitude.
Also, there are CMSs and WYSIWYG webpage creators that operate at various levels of success. Markup languages and stylesheets coexist partly because they meet different use cases. For example, I've never heard of a spec for a WYSIWYG "language", so you're guaranteed to have to deal with vendor lock-in and a lack of portability unless you can then generate some text documents in a standardized language.
Well, implementing the ideas we had 40 years ago using today's dogma took 40 years, so the fact that he doesn't specify his idea on replacing 40 years of engineering cruft in a 20 minute presentation can be overlooked, imho.
We should feel lucky that what we love is such a novel and unexplored field.
I'm quite confident that we will eventually move forward from this seemingly stale period of programming paradigms. Because after all, we all know the frustration brought from the initial stages of learning a new thing; and we all know the much greater awe of mastering it.
I don't think the future of programming is necessarily visual programming. Nature didn't program human bodies visually, and yet we are the most powerful living machines, with powerful operating systems. But we do need to find a new "programming medium", like proteins, that can build up ideas "organically".
Isn't this all just evidence that "better" in most cases is a small margin? You can hate X, and prefer Y, but in most cases the X guys will finish their project. Methodology based crash and burns are pretty rare. And the things that are "not terrible" are not separated by that much.
This is awesome. It makes me realize how much of the time I am just applying the same formula over and over again and not really being creative. The flip side, I would argue is that reinventing the wheel all the time is expensive. There's a reason why standards have formed.
If you try to reinvent the wheel, you will get another wheel. I think what Bret is talking about is wondering whether we could create airplanes and spaceships instead!
I just learned today that Smalltalk was the inspiration for a lot of what NeXT ended up doing with Objective C, which makes so much sense. At the end of the day, Xcode is just another set of text files in many ways, but in so many others it's so much more.
This seems to be a callout to a point Alan Kay has been making for a while. RDF might be a good way to dip your toes in the water, but Alan Kay has gone further and called out languages such as "Linda" and related "tuple-space" research as directions in automatically figuring this stuff out.
For my own part, I would also look into metaphor-based research. By this I mean that looking to biology for metaphors worked really well for OO programming, so do the same thing here. Humans have been dealing with mapping their native language onto another language for centuries now. I am sure that amongst anthropologists and linguists there is a pretty good body of research on how "first-contact" communication has been accomplished in the past, and people have probably tried to distill general principles from it as well. There is probably a lot of fertile field here to till from a computer science perspective. NASA might even have sponsored some interesting research in this area: how we designed the Voyager plaque, for instance.
APIs are currently the core fabric by which communication occurs within a computing system. As long as we use them, we end up with specialization of communication between computing systems.
This specialization, in my opinion, is the root cause problem in programming computing systems.
Bret Victor had this to say "The only way it (communication between systems) can scale, they (computers) have to figure out (dynamically), a common language".
Here I feel he is missing a key point. It is not a common language we are looking for, but a common architecture by which information is communicated between systems. Or, in this case, a non-architecture or anti-API by which communication takes place between systems.
Yeah, all the fundamental things were invented and researched before I was born, and everything is still relevant even in the midst of the J* mass hysteria.
EDIT: Here's the link to the StrangeLoop talk mentioned above: https://thestrangeloop.com/sessions/tbd--11