
It's not about choosing one or the other; it's about allowing both. I can use symbols (though not sentences or other usefully descriptive language), but do I have an opportunity to represent those symbols at all? No.

I'm not saying we should forsake language; if you look at the now very out-of-date Aurora demo, all the operations have sentence descriptions. This certainly isn't an all-or-nothing thing. If it makes sense to visualize some aspect of the system in a way that is meaningful to me, I should be able to do so - that is, after all, how people often solve hard problems.



In your example you used an ace of spades. Your picture took up half my screen. I can't imagine trying to actually manipulate logic when each element is taking up half my screen - can you?

Instead, I can just create a variable called AceSpades. It's not as... philosophical? But it's a million times more practical. Instead of needing an artist to come and draw up a new symbol for a concept I've created, I can just write it in text as a variable name. A lot of graphics-based languages have been tried, but they just don't scale to general problems. They work extremely well as limited languages for very constrained problems, but as soon as you need a new concept the complexity goes out the window compared to a traditional text-based language. Why? Same reason as before - defining a concept in text is easier than designing a new symbol.

You touch on this in your blog too, in how most of what programmers do is glue things together. You didn't really define what "glue things together" means, though. I think it means "define new concepts using existing concepts". E.g., we take a MouseInput and a Sphere and create a new SphereSlice. With a text language we're just gluing the input library together with our geometry library. With a symbol-based language, we have to actually define the new symbols and concepts of what a slice of a sphere is.
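
To make that concrete, here's roughly the kind of glue I mean. Every name in this sketch (MouseInput, Sphere, SphereSlice, slice_at_cursor) is made up for illustration, not taken from any real library:

    # Rough sketch; all names below are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class MouseInput:        # "from" the imagined input library
        x: float
        y: float

    @dataclass
    class Sphere:            # "from" the imagined geometry library
        radius: float

    @dataclass
    class SphereSlice:       # the new concept, defined in terms of the old ones
        sphere: Sphere
        height: float        # where the slicing plane cuts the sphere

    def slice_at_cursor(mouse: MouseInput, sphere: Sphere, screen_height: float) -> SphereSlice:
        t = mouse.y / screen_height                  # 0.0 at the top of the screen, 1.0 at the bottom
        return SphereSlice(sphere, (2 * t - 1) * sphere.radius)   # height ranges over -r .. +r

The "gluing" is just that one function: two existing concepts in, one new concept out, all named in text.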


"[Graphical languages] work extremely well as limited languages for very constrained problems"

Even that might need a citation...


[I'm curious why I got downvoted above - did I miss something?]

I don't think that needs a citation, though. Any GUI is a graphical language for a constrained problem. I could use a generic text language to post this comment to HN, or I could use a specialized graphical language that provides me with this resizable text input box and reply button. It's extremely constrained in this case. If it had GUI elements for doing formatting, or the ability to post comments to other websites, it would be a less constrained graphical language. As you reduce constraints, the graphical language would either need the ability to create new concepts or have many additional concepts predefined. So to answer your question about a citation: this very comment box is my citation. It works better than a purely text-based input for submitting this comment.

Obviously for a general language you can't predefine all the required concepts, which means they need to be user-defined. Having users define concepts in a graphical language is a difficult task, as it requires creating uniquely human-recognizable symbols for the new concepts.

You have two ways to get those new human-recognizable symbols: either an AI generates them, or a human must. AI is nowhere near able to generate symbols for concepts it doesn't understand, as it would need to be a true AI with the ability to learn and understand new concepts. Having your graphical language's users define new concepts in effect makes those users into language designers. This is a bigger problem than it sounds, because language design is an extremely difficult problem, and I personally don't want to be designing a language when I'm trying to solve a problem - I'll no doubt get the language design wrong if I'm focused more on the problem than on the language design.


"Any GUI is a graphical language for a constrained problem."

With that broad a definition (and I don't think it's horribly unreasonable), I agree it doesn't need a citation (even if I think GUIs are overapplied).


A few examples of graphical programming languages are LabVIEW, Scratch, and Lego Mindstorms (NXT). (I'm not advocating graphical programming, just providing examples.)

Edit: maybe graphical GUI editors such as those in Xcode and Visual Studio could be considered tools for programming languages that are partially graphical and partially text-based.


Would you include PLC programming? It's safe to say that's been pretty successful in industrial settings.


Yeah, I'm most familiar with graphical programming by way of hearing people complain about LabVIEW.


I would be interested in seeing you take a larger chunk of code and convert it to a more symbol-rich representation. Showing a single card, especially a very large one, isn't a good representation of the idea. I would also appreciate a description of how exactly I would insert these symbols into the editor.

I will spot you that I won't natively know the language in question. In turn, I warn you that the most likely criticism I will make is that you've greatly increased the cognitive load of what I have to understand to understand your code without a corresponding payoff, even accounting for fluency in the vocabulary. (I say this not to be a jerk, but precisely to issue fair warning so you can head it off at the pass.) I will also spot fluency in your paradigm of choice... while I hope that the result is not a superficial syntax gloss on top of fold & map, I am happy to accept that I would need to know what those things are.

(I've come to start issuing the same challenge to anyone who thinks a visual programming language is the answer to our programming complexity problems, for instance. Don't draw me three boxes and two lines showing a simple map transform. Draw me something not huge, but nontrivial, say, the A-star algorithm. Then tell me it's better. Maybe it is, if you work on it enough, but don't scribble out the equivalent of "map (+1) [1, 2, 3]" and tell me you've "fixed" programming. Trivial's trivial in any representation.)
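
For comparison, here's roughly what the textual baseline for that challenge looks like - a minimal A* in Python, where the neighbors and heuristic callables (and the little grid example at the bottom) are assumed inputs I invented, not anything from a specific library. This is the bar a visual representation has to clear:

    import heapq
    from itertools import count

    def a_star(start, goal, neighbors, heuristic):
        # neighbors(node) yields (next_node, step_cost); heuristic(a, b) estimates remaining cost.
        tie = count()                          # tie-breaker so the heap never compares nodes
        frontier = [(heuristic(start, goal), 0, next(tie), start, None)]
        came_from = {}                         # node -> parent; doubles as the closed set
        best_cost = {start: 0}
        while frontier:
            _, cost, _, node, parent = heapq.heappop(frontier)
            if node in came_from:
                continue                       # already expanded via a cheaper path
            came_from[node] = parent
            if node == goal:                   # walk the parent chain back to the start
                path = [node]
                while came_from[path[-1]] is not None:
                    path.append(came_from[path[-1]])
                return path[::-1]
            for nxt, step in neighbors(node):
                new_cost = cost + step
                if new_cost < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost + heuristic(nxt, goal),
                                              new_cost, next(tie), nxt, node))
        return None                            # goal unreachable

    # 4-connected grid example: two walls, Manhattan distance as the heuristic.
    walls = {(1, 1), (1, 2)}
    def grid_neighbors(p):
        x, y = p
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (x + dx, y + dy)
            if q not in walls and 0 <= q[0] <= 3 and 0 <= q[1] <= 3:
                yield q, 1

    manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    print(a_star((0, 0), (3, 3), grid_neighbors, manhattan))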


Sure, there are plenty of cases where visualization is helpful. But I see so many blog posts about it, and not much in the way of actual progress.

Take the card again. It's your example, after all. I cannot think of any way to use that to, say, write a small AI to play poker. I suppose I could see a use in a debugging situation for my 'hand' variable to display a little 5@ symbol (where @ is the suit symbol). But okay, let's think about that. What does it take to get that into the system?

No system 'knows' about cards. So I need a graphics designer to make a symbol for a card. I surely don't want an entire image of a card, because I have 20 other variables I am potentially interested in, which is why in this context a 5@ makes sense (like you would see in a bridge column in a newspaper). So somebody has to craft the art, we have to plug it into my dev system, we need to coordinate it with the entire team, and so on. Then, it is still a very custom, one-off solution. I use enums, you use ints, the Python team is just using strings like "5H" - it goes on and on. I don't see a scalable solution here.

Well, I do see one scalable solution. It is called text. My debugger shows a textual depiction of my variable, and my wetware translates that. I'm a good reader, and I can quickly learn to read 54, "5H", or FiveHearts as representations of that card. Will I visually "see" the value of a particular hand as quickly? Probably not, unless I'm working this code a lot. But I'll take that over firing up a graphics team and so on.
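
For instance, a throwaway sketch of what that text-only route can look like (the Card class and its compact repr below are made up, not from any real codebase):

    from enum import Enum

    # Throwaway Card type with a compact, bridge-column style repr;
    # these names are invented for illustration.
    class Suit(Enum):
        SPADES = "S"
        HEARTS = "H"
        DIAMONDS = "D"
        CLUBS = "C"

    class Card:
        def __init__(self, rank, suit):
            self.rank = rank              # "A", "2".."9", "T", "J", "Q", "K"
            self.suit = suit

        def __repr__(self):
            return f"{self.rank}{self.suit.value}"   # what print and the debugger show

    hand = [Card("5", Suit.HEARTS), Card("A", Suit.SPADES)]
    print(hand)                           # [5H, AS] - readable at a glance, no artwork needed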

I do plenty of visualizations. It is a big reason for me using Python. If I want to write a Kalman filter, the first thing I'm doing is firing up matplotlib to look at the results. But again, this is a custom process. I want to look at the noise, I want to look at the size of the Kalman gain, I want to plot the filter output vs the covariance matrices, I want to... program. Which I do textually, just fine, to generate the graphics I need.
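
Roughly the workflow I mean, with made-up numbers: a scalar Kalman filter plus the handful of matplotlib calls that go with it.

    import numpy as np
    import matplotlib.pyplot as plt

    # Scalar Kalman filter tracking a constant value from noisy measurements.
    # Every number here is made up for illustration.
    np.random.seed(0)
    truth = 1.0
    zs = truth + np.random.normal(0.0, 0.5, size=100)   # noisy measurements

    R, Q = 0.25, 1e-4        # measurement variance, process variance
    x, P = 0.0, 1.0          # initial estimate and its variance

    estimates, gains = [], []
    for z in zs:
        P = P + Q                # predict
        K = P / (P + R)          # Kalman gain
        x = x + K * (z - x)      # update with the measurement
        P = (1 - K) * P
        estimates.append(x)
        gains.append(K)

    fig, (top, bottom) = plt.subplots(2, 1, sharex=True)
    top.plot(zs, ".", label="measurements")
    top.plot(estimates, label="filter output")
    top.axhline(truth, linestyle="--", label="truth")
    top.legend()
    bottom.plot(gains, label="Kalman gain")
    bottom.legend()
    plt.show()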

I've dealt with data flow type things before. They are a royal pain. Oh, to start, it's great. Plop a few rectangles on the screen, connect them with a few lines, and wow, you've designed a NAND gate, or maybe a filter in MATLAB, or is it a video processing toolchain? Easy peasy. But when I need to start manipulating things programmatically it is suddenly a huge pain.

I am taking time out of writing an AI to categorize people based on what they are doing in a video (computer vision problem) to post this message. At a rudimentary level graphical display is great. It is certainly much easier for me to see my results displayed overlaid on the video, as opposed to trying to eyeball a JSON file or something. But to actually program this highly visual thing? I have never, ever heard anything but hand waving as to how I would do that in anything other than a textual way. I really don't think I would want to.

Anyway, scale things up in a way that I don't have to write so many matplotlib calls and you will have my attention. But I just haven't seen it. I've been programming since the early 80s, and graphical programming of some form or another has been touted as 'almost here'. I still haven't seen it, except in highly specialized disciplines, and I don't want to see it. "Pictures are worth a thousand words" because of compression. It's like PCA: distill a bunch of data down to a few dimensions. Sometimes I really want that, but not when programming, where all the data matters. I don't want a low-order representation of my program.


> So I need a graphics designer to make a symbol for a card.

I think this is the crux of the debate. The point isn't high-quality visualizations; it's about bringing the simple little pictures you'd draw to solve your problem directly into the environment. Can you draw a box and put some text in it? Tada! Your own little representation of a card.

I'm not suggesting that you hire people out to build your representations :) This is about providing tools for understanding. Maybe you don't see value in that, and there's no reason you can't just keep seeing things as plain raw text (that's just a representation itself).

> Anyway, scale things up in a way that I don't have to write so many matplotlib calls and you will have my attention.

Give us a bit and I think we can provide a whole lot more than just that. But we'll see!


I enjoyed watching the demo and reading the post. I hope you continue to think about this and innovate.

Something that I feel is missing is the abstraction quality of programming. That is, the idea that I typically have very little use for a particular graphic when writing a program. I'm trying to express "whenever the user hits this button, flip over the top card in this set, move it over here, and then make the next card the top card" or whatever.
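
That sentence already maps pretty directly onto text; a made-up sketch (none of these names come from a real framework):

    from dataclasses import dataclass

    # Invented names, just to show the sentence above as plain text.
    @dataclass
    class Card:
        rank: str
        face_up: bool = False

    def on_button_press(pile, target):
        card = pile.pop()        # take the top card in this set
        card.face_up = True      # flip it over
        target.append(card)      # move it over here
        # whatever is now last in `pile` is implicitly the new top card

    stock, waste = [Card("K"), Card("5")], []
    on_button_press(stock, waste)
    print(stock, waste)          # [Card(rank='K', face_up=False)] [Card(rank='5', face_up=True)]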

Some of Bret's demos look to me like he's thinking directly about this, and trying to discover where the abstraction fits in, and how direct manipulation can help to basically "see" that the abstraction is working. Perhaps that's a good guide to where direct manipulation could really help -- for anything relatively complex, it's a big pain to see that code works. A direct manipulation system to basically flip through possibilities, especially into edge cases, and make sure they work as intended would definitely help out. I don't know whether that's the final way you want to express the system -- language is really powerful, even a million years later! -- but a way to see what the language does would be really awesome.


I'm optimistic that your team is making real progress behind the scenes, but please remember that when you say 'do some math' some of us think 'discontinuous Galerkin' instead of 'add one'. Not that everyone needs to, but one reason the early pioneers made such great progress is that they were building tools to solve truly challenging problems. The fact that we can build TODO lists in 40 seconds today is incidental.


Just use Unicode, and a programming language that uses the full power of Unicode symbology in its syntax. E.g.

♠♣♥♦ × A23456789TJQK
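
That cross product spells out the whole deck in a couple of lines of, say, Python (or anything else whose strings take Unicode):

    suits = "♠♣♥♦"
    ranks = "A23456789TJQK"
    deck = [rank + suit for suit in suits for rank in ranks]
    print(len(deck))     # 52
    print(deck[:5])      # ['A♠', '2♠', '3♠', '4♠', '5♠']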


Please don't. People are already terrible at naming things; I for one am not going to try the entire Unicode table to find out which symbol you chose for "MetadataService". Plain text is fine: it's searchable, readable, and somewhat portable (minus the line-ending debacle).

If you need something more, vim has the "conceal" feature, which can be used to replace a given piece of text with another (e.g. show ⟹ instead of =>) on the lines the cursor is not on. Would you be better off if there was an option to do this for variable/class/method names? I'm not sure.


> vim can be used to replace a given text with another (eg show ⟹ instead of =>)

If you use the short ⇒ to substitute for => (rather than long ⟹ as in your example), as well as many other Unicode symbols, then the overall code can be much shorter and thus more understandable.

The spec for the Fortress programming language made a point of not distinguishing between Unicode tokens in the program text and the ASCII keys used to enter them. Perhaps that's the best way to go?


Why do you think that "much shorter" implies "more understandable"?

I think we have a lot of experience to suggest otherwise.

Anyone who has had to maintain old Fortran or C code will likely know what I mean. With some early implementations limiting variable and function identifiers to 8 characters or fewer, we'd see a proliferation of short identifiers. Such code is by far some of the hardest to work with, due to variable and function names that are short to the point of being almost meaningless.

Then there are languages like APL and Perl, which make extensive use of symbols. APL has seen very limited use, and Perl code is well-known for suffering from maintenance issues unless extreme care is taken when initially creating the code.

Balance is probably best. We don't want excessively long identifiers, as is often the case in Java, but we surely don't want excessively short ones, either.


As somebody who spent some years writing Perl code, I don't feel that having a few well-defined ASCII symbols was such an issue. The problems with Perl are that symbols change depending on the context (e.g., an array @items needs to be accessed via $items[$i] to get an item at position $i, to tell Perl it is a scalar context), and weak typing. Even with changing symbols, it makes it easier to distinguish between scalars, arrays and hashes, especially with syntax highlighting. As opposed to languages like Haskell or Scala, in which library designers are free to display their creativity with such immediately obvious operators as '$$+-'.

Edited to add that I agree with your overall point. Shorter is not always clearer. It can be a benefit to have a few Unicode symbols displayed via 'conceal' but it's not (at least in my experience) a major productivity gain. And the number needs to be kept small. If I want Unicode symbol soup, I'll play a roguelike.


If you're using Unicode: 🂡🂾🃍🃛🂠

https://en.wikipedia.org/wiki/Unicode_Playing_Card_Block


I think the problem is that the card example is a bad one. 5H is already acceptable for nearly every case, since there is so little data in the image.

Also, it is probably good to remember that most of the good examples of doing this have probably already been done; debug visualizations in physics engines are a great example, a perfect way of showing incredibly complex data.

The only way to expand on that would be to add time and make it easier to isolate a piece of data.


Try writing a sudoku solver with constraint-based programming.

You 'teach' the computer the rules of the game and the computer works to figure out allowed values.

https://en.wikipedia.org/wiki/Constraint_programming
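
A rough sketch of what that can look like in Python, assuming the third-party python-constraint package (the givens are just a standard example puzzle):

    # pip install python-constraint
    from constraint import Problem, AllDifferentConstraint

    rows, cols = "ABCDEFGHI", "123456789"
    cells = [r + c for r in rows for c in cols]

    # 9x9 puzzle, row by row; 0 means "the solver figures it out".
    puzzle = ("530070000" "600195000" "098000060"
              "800060003" "400803001" "700020006"
              "060000280" "000419005" "000080079")

    problem = Problem()
    for cell, ch in zip(cells, puzzle):
        problem.addVariable(cell, [int(ch)] if ch != "0" else list(range(1, 10)))

    # Teach it the rules: every row, column and 3x3 box holds nine different values.
    for r in rows:
        problem.addConstraint(AllDifferentConstraint(), [r + c for c in cols])
    for c in cols:
        problem.addConstraint(AllDifferentConstraint(), [r + c for r in rows])
    for rs in ("ABC", "DEF", "GHI"):
        for cs in ("123", "456", "789"):
            problem.addConstraint(AllDifferentConstraint(), [r + c for r in rs for c in cs])

    solution = problem.getSolution()       # dict like {'A1': 5, 'A2': 3, ...}
    print([solution["A" + c] for c in cols])   # first row of the solved grid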



