
Sure, there are plenty of cases where visualization is helpful. But I see so many blog posts about it, and not much in the way of actual progress.

Take the card again. It's your example, after all. I cannot think of any way to use that to, say, write a small AI to play poker. I suppose I could see a use in a debugging situation for my 'hand' variable to display a little 5@ symbol (where @ is the suit symbol). But okay, let's think about that. What does it take to get that into the system?

No system 'knows' about cards. So I need a graphics designer to make a symbol for a card. I surely don't want an entire image of a card, because I have 20 other variables I am potentially interested in, which is why in this context a 5@ makes sense (like you would see in a bridge column in a newspaper). So somebody has to craft the art, we have to plug it into my dev system, we need to coordinate it with the entire team, and so on. Then it is still a very custom, one-off solution. I use enums, you use ints, the Python team is just using strings like "5H" - it goes on and on. I don't see a scalable solution here.

Well, I do see one scalable solution. It is called text. My debugger shows a textual depiction of my variable, and my wetware translates that. I'm a good reader, and I can quickly learn to read 54, "5H", FiveHearts as being the representation of that card. Will I visually "see" the value of a particular hand as quickly? Probably not, unless I'm working this code a lot. But I'll take that over firing up a graphics team and so on.
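To make that concrete, here is a rough Python sketch of how far plain text gets you with zero graphics work - the Card type and glyph table here are my own invention:

    # A minimal sketch: the suit glyphs are ordinary Unicode characters,
    # and the debugger just shows whatever __repr__ returns.
    SUITS = {"S": "♠", "H": "♥", "D": "♦", "C": "♣"}

    class Card:
        def __init__(self, rank, suit):
            self.rank = rank    # e.g. "5"
            self.suit = suit    # e.g. "H"

        def __repr__(self):
            return f"{self.rank}{SUITS[self.suit]}"

    hand = [Card("5", "H"), Card("A", "S")]
    print(hand)   # [5♥, A♠]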

I do plenty of visualizations. It is a big reason for me using Python. If I want to write a Kalman filter, first thing I'm doing is firing up matplotlib to look at the results. But again, this is a custom process. I want to look at the noise, I want to look at the size of the Kalman gain, I want to plot the filter output vs the covariance matrices, I want to.... program. Which I do textually, just fine, to generate the graphics I need.
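That workflow, roughly - a toy one-dimensional filter with made-up numbers, just to show the shape of it:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    truth = 10.0
    zs = truth + rng.normal(0.0, 2.0, size=50)   # noisy measurements

    # scalar Kalman filter: state estimate x with variance P
    x, P = 0.0, 500.0
    R, Q = 4.0, 0.01       # measurement noise, process noise
    xs, Ks = [], []
    for z in zs:
        P += Q                # predict step (constant state model)
        K = P / (P + R)       # Kalman gain
        x += K * (z - x)      # update with the new measurement
        P *= (1.0 - K)
        xs.append(x)
        Ks.append(K)

    plt.plot(zs, ".", label="measurements")
    plt.plot(xs, label="estimate")
    plt.plot(Ks, label="Kalman gain")
    plt.axhline(truth, linestyle="--", label="truth")
    plt.legend()
    plt.show()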

I've dealt with data flow type things before. They are a royal pain. Oh, to start, it's great. Plop a few rectangles on the screen, connect them with a few lines, and wow, you've designed a NAND gate, or maybe a filter in MATLAB, or is it a video processing toolchain? Easy peasy. But when I need to start manipulating things programmatically it is suddenly a huge pain.

I am taking time out of writing an AI to categorize people based on what they are doing in a video (computer vision problem) to post this message. At a rudimentary level graphical display is great. It is certainly much easier for me to see my results displayed overlaid on the video, as opposed to trying to eyeball a JSON file or something. But to actually program this highly visual thing? I have never, ever heard anything but hand waving as to how I would do that in anything other than a textual way. I really don't think I would want to.
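(And the overlay itself is mundane - something like this with OpenCV, where the detection tuple format is invented for illustration:)

    # Roughly what "results overlaid on the video" amounts to; the
    # (x, y, w, h, label) detection format is my own, not a real API.
    import cv2

    def draw_detections(frame, detections):
        for (x, y, w, h, label) in detections:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        return frame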

Anyway, scale things up in a way that I don't have to write so many matplotlib calls and you will have my attention. But I just haven't seen it. I've been programming since the early 80s, and graphical programming of some form or another has been touted as 'almost here' the whole time. I still haven't seen it, except in highly specialized disciplines, and I don't want to see it. "Pictures are worth a thousand words" because of compression. It's like PCA: distill a bunch of data down to a few dimensions. Sometimes I really want that, but not when programming, where all the data matters. I don't want a low order representation of my program.



> So I need a graphics designer to make a symbol for a card.

I think this is the crux of the debate. The point isn't high quality visualizations, it's about bringing the simple little pictures you'd draw to solve your problem directly into the environment. Can you draw a box and put some text in it? Tada! Your own little representation of a card.

I'm not suggesting that you hire people out to build your representations :) This is about providing tools for understanding. Maybe you don't see value in that, and there's no reason you can't just keep seeing things as plain raw text (that's just a representation itself).

> Anyway, scale things up in a way that I don't have to write so many matplotlib calls and you will have my attention.

Give us a bit and I think we can provide a whole lot more than just that. But we'll see!


I enjoyed watching the demo and reading the post. I hope you continue to think about this and innovate.

Something that I feel like is missing is the abstraction quality of programming. That is, the idea that I typically have very little use for a particular graphic when writing a program. I'm trying to express "whenever the user hits this button, flip over the top card in this set, move it over here, and then make the next card the top card" or whatever.
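That sentence is already almost the program. A rough sketch (every name here is invented):

    from dataclasses import dataclass

    @dataclass
    class Card:
        name: str
        face_up: bool = False

    def on_button_press(pile, table):
        card = pile.pop()     # take the top card off this set
        card.face_up = True   # flip it over
        table.append(card)    # move it over here
        # the next card in `pile` is now the top card, for free

    pile = [Card("2♠"), Card("5♥")]
    table = []
    on_button_press(pile, table)   # 5♥ lands face up on the table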

Some of Bret's demos look to me like he's thinking directly about this, and trying to discover where the abstraction fits in, and how direct manipulation can help to basically "see" that the abstraction is working. Perhaps that's a good guide to where direct manipulation could really help -- for anything relatively complex, it's a big pain to see that the code works. A direct manipulation system to basically flip through possibilities, especially into edge cases, and make sure they work as intended would definitely help out. I don't know whether that's the final way you want to express the system -- language is really powerful, even a million years later! -- but a way to see what the language does would be really awesome.


I'm optimistic that your team is making real progress behind the scenes, but please remember that when you say 'do some math' some of us think 'discontinuous Galerkin' instead of 'add one'. Not that everyone needs to, but one reason the early pioneers made such great progress is that they were building tools to solve truly challenging problems. The fact that we can build TODO lists in 40 seconds today is incidental.


Just use Unicode, and a programming language that uses the full power of Unicode symbology in its syntax. E.g.

♠♣♥♦ × A23456789TJQK
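Even without Unicode operators, that product is a one-liner in Python:

    # The × is literal: 4 suits by 13 ranks gives all 52 cards.
    deck = [rank + suit for suit in "♠♣♥♦" for rank in "A23456789TJQK"]
    print(len(deck))    # 52
    print(deck[:3])     # ['A♠', '2♠', '3♠']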


Please don't. People are already terrible at naming things; I for one am not going to search the entire Unicode table to find out which symbol you chose for "MetadataService". Plain text is fine: it's searchable, readable, and somewhat portable (minus the line ending debacle).

If you need something more, vim has the "conceal" feature which can be used to replace (on the lines the cursor is not on) a given text with another (eg show ⟹ instead of =>). Would you be better off if there was an option to do this for variable/class/method names? I'm not sure.


> vim can be used to replace a given text with another (eg show ⟹ instead of =>)

If you use the short ⇒ as a substitute for => (rather than the long ⟹ in your example), along with many other Unicode symbols, the overall code can be much shorter and thus more understandable.

The spec for the Fortress programming language made a point of not distinguishing between Unicode tokens in the program text and the ASCII keys used to enter them. Perhaps that's the best way to go?


Why do you think that "much shorter" implies "more understandable"?

I think we have a lot of experience to suggest otherwise.

Anyone who has had to maintain old Fortran or C code will likely know what I mean. With some early implementations limiting variable and function identifiers to 8 characters or fewer, we saw a proliferation of cryptic short identifiers. Such code is by far some of the hardest to work with, due to variable and function names that are short to the point of being almost meaningless.

Then there are languages like APL and Perl, which make extensive use of symbols. APL has seen very limited use, and Perl code is well-known for suffering from maintenance issues unless extreme care is taken when initially creating the code.

Balance is probably best. We don't want excessively long identifiers, as is often the case in Java, but we surely don't want excessively short ones, either.


As somebody who spent some years writing Perl code, I don't feel that having a few well-defined ASCII symbols was such an issue. The problems with Perl are that the sigils change depending on context (e.g., a single element of the array @items is accessed as $items[$i], because the element itself is a scalar), and weak typing. Even with changing sigils, it makes it easier to distinguish between scalars, arrays and hashes, especially with syntax highlighting. As opposed to languages like Haskell or Scala, in which library designers are free to display their creativity with such immediately obvious operators as '$$+-'.

Edited to add that I agree with your overall point. Shorter is not always clearer. It can be a benefit to have a few Unicode symbols displayed via 'conceal' but it's not (at least in my experience) a major productivity gain. And the number needs to be kept small. If I want Unicode symbol soup, I'll play a roguelike.


If you're using Unicode: 🂡🂾🃍🃛🂠

https://en.wikipedia.org/wiki/Unicode_Playing_Card_Block
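The block is laid out arithmetically (each suit gets a row of 16 code points starting at the U+1F0A0 card back), so generating glyphs is simple:

    # Sketch of the block's layout: ranks run A,2..9,T,J,C,Q,K, where
    # C is the knight, which a standard 52-card deck skips.
    BASE = 0x1F0A0                    # U+1F0A0 is the card back 🂠
    SUIT_ROW = {"♠": 0x00, "♥": 0x10, "♦": 0x20, "♣": 0x30}
    RANKS = "A23456789TJCQK"

    def glyph(rank, suit):
        return chr(BASE + SUIT_ROW[suit] + 1 + RANKS.index(rank))

    print(glyph("A", "♠"), glyph("K", "♥"), glyph("Q", "♦"), glyph("J", "♣"))
    # -> 🂡 🂾 🃍 🃛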


I think the problem is that the card example is a bad one. 5H is already acceptable for nearly every case, since there is so little data in the image.

Also, it's probably good to remember that most of the good examples of doing this have already been done; debug visualizations in physics engines are a great example, and a perfect way of showing incredibly complex data.

The only ways to expand on that would be to add a time dimension and to make it easier to isolate a particular piece of data.


Try writing a sudoku solver with constraint-based programming.

You 'teach' the computer the rules of the game and the computer figures out the allowed values, as in the sketch below.

https://en.wikipedia.org/wiki/Constraint_programming
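A bare-bones sketch of the idea in pure Python - no solver library; the only rule we encode is all-different over rows, columns and 3x3 boxes, and propagation does the rest (elimination alone cracks easy puzzles; harder ones need search on top):

    def units(r, c):
        # every cell is constrained with its row, column and box peers
        row = [(r, j) for j in range(9)]
        col = [(i, c) for i in range(9)]
        br, bc = 3 * (r // 3), 3 * (c // 3)
        box = [(br + i, bc + j) for i in range(3) for j in range(3)]
        return row + col + box

    def propagate(grid):
        # grid maps (row, col) -> set of still-allowed digits
        changed = True
        while changed:
            changed = False
            for cell, cands in grid.items():
                if len(cands) == 1:
                    v = next(iter(cands))
                    for peer in units(*cell):
                        if peer != cell and v in grid[peer]:
                            grid[peer].discard(v)   # rule v out for the peer
                            changed = True
        return grid

    # A well-known easy puzzle as an 81-char string, 0 for blanks:
    puzzle = ("530070000" "600195000" "098000060"
              "800060003" "400803001" "700020006"
              "060000280" "000419005" "000080079")
    grid = {(i // 9, i % 9): ({int(ch)} if ch != "0" else set(range(1, 10)))
            for i, ch in enumerate(puzzle)}
    propagate(grid)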



