(in the picture above "string" is selected, and "find" is slightly highlighted because it's on the same level. this helps visualize the tree and plan your movements)
It actually started as my final project during undergrad. Here's my 78-page thesis on it (unfortunately Portuguese, but has English pictures):
Structured editing is much, much, MUCH better suited for programming than plain text. Syntax preservation and source-display separation are game changers, and they are not the only benefits.
Unfortunately it has a Vim-level difficulty curve and people don't realize how much time they currently waste on syntax mucking. This makes marketing kinda hard, even with a fully operational and well polished implementation.
Fun fact: structured editing also works with non-programming structured text, like JSON, HTML and CSS!
> Structured editing is much, much, MUCH better suited for programming than plain text. Syntax preservation and source-display separation are game changers, and they are not the only benefits.
You can do a lot of things with a parser and a rich code editor...e.g.
Parsing is just a detail; if you can get it right, you don't necessarily need a structured or projectional editor. And it's not even the "big" detail, which would definitely be type checking, and structural editing isn't going to help you much there (that is, if you want any kind of fluidity).
Interesting research, especially because it has a lower learning curve. I see it as a halfway point between fully structured and fully textual.
The large screenshot[0] is a textbook example of a structured editor, but the cursor moves around as if it were text. It looks like the document is modeled as a combination of complete structures plus a few incomplete pieces of text. That's interesting because you get the benefits of structured display without paying for structured input.
However, I'm not sure I would use this. It's not clear how to call the Δ function, for example. You can consult the manual or ctrl+c the character, but that's not desirable. Also, does erasing the dot after ≱ yield ≥ or ≯?
I do like structured input. You type an awful lot less and there's no meddling with commas and parentheses. The learning curve is tough, but Vim users are here to prove people are willing. If structured editing lives up to the hype, of course.
Anyway, I'm always glad to see research in this area.
> Also, does erasing the dot after ≱ yield ≥ or ≯?
Yes. Actually, in my newest prototype, the dots disappear and we treat ≱ as having 3 characters (the first is !, the second is >, the third =). Deleting the first character is deleting the ! (you get ≥), or deleting the last character is deleting the = (you get ≯).
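That composed-symbol model can be sketched in a few lines. This is a hypothetical illustration, not the prototype's actual code, and the compose table is made up for the example:

```python
# Sketch: each display glyph maps to an underlying character sequence,
# and deletions operate on that sequence before re-composing a glyph.
# The table below is illustrative, not the prototype's real data.
COMPOSE = {
    "!>=": "\u2271",  # not greater-or-equal
    ">=": "\u2265",   # greater-or-equal
    "!>": "\u226F",   # not greater-than
}

def delete_char(underlying: str, index: int) -> str:
    """Delete one underlying character, then re-compose the glyph."""
    remaining = underlying[:index] + underlying[index + 1:]
    return COMPOSE.get(remaining, remaining)

# Deleting the leading '!' of "!>=" yields "≥";
# deleting the trailing '=' yields "≯".
print(delete_char("!>=", 0))  # ≥
print(delete_char("!>=", 2))  # ≯
```

So the glyph behaves like three characters for editing purposes while still displaying as one.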
I think the issue with high learning curve tools is not that the learning curve is high, it's that the benefit seems (rightly) dubious at the beginning.
You can find videos of people whizzing along with emacs or vim, and it's not difficult to see the productivity gain. So it'd be cool if you had a video of what it looks like to program with this thing.
Thank you, and you're absolutely correct. I think a side-by-side video comparison would work even better, because it's not immediately clear how much time is spent moving commas around, playing with whitespace or hunting parentheses.
That said, I'm not trying to convert people just yet. It still needs polishing and support for a few more languages (right now it only does Lua, Python and basic Lisp, which is not very exciting).
1) Premature open-sourcing complicates commercial projects, and a man gotta eat.
2) Because it would be a first, any flaws in the implementation will be assigned to the concept of structured editing itself, hurting the development of alternative implementations. Even if my vision of it fails, I want other people to have a shot without fighting incorrectly formed opinions.
In the end, it's the ideas and not the source that are important. It's like developing for VR now, or mobile a few years ago. The space of possible interfaces is so big that most of the effort goes into combing it.
Something I notice about the UI you use is that it tries very hard to emphasize the structured-ness of the underlying engine. Perhaps the middle ground goes back towards Emacs(or rather, a new iteration of the Emacs concept) - plain old text editing is the default, but the underlying editing engine is always working in structured form.
Ultimately, I have to concur with the people calling out the lack of need; if my sustained average is 80 lines of code per day, I'm not being bottlenecked by my typing time, but I am bottlenecked by typing errors that cascade to runtime. I would welcome seeing more modes that simplify specific, necessary-but-error-prone tasks like copy-paste-modify.
I hear what you're saying. But I've been programming for 35 years. I've found that getting syntax right is at worst a tiny percentage of my time/energy/brain cost when doing programming. Once out of the newb phase. Once I've fully grokked the language and I've had enough ramp-up that I'm in "the zone" it's nearly effortless to type syntax-perfect code on my 1st attempt. Even getting built-in library calls and macros right, on 1st attempt, becomes nearly effortless, once in that frame of mind.
It's the think-design-code-test loop that is the biggest driver of my time/energy/brain cost, not syntax conformance. Conformance becomes like a musician who has played a certain instrument for enough years: you acquire muscle/eye memory for which note requires which finger/body/mouth positions and behaviors, nearly a one-to-one map, autonomous.
Do agree that anything that enforces syntax correctness, upfront, is helpful for newbies, especially "forever newbie" use cases, the non-expert users.
Maybe you had to hold Ctrl+Shift and mash the right arrow to select the first tuple, and then Ctrl+X. Or you typed "d2t,", paying attention to the inner comma. Probably between 10 and 20 movements, all over the keyboard. Three or four seconds, and to me it feels like using a blunt knife.
And yet you are just moving one element down a list. You do it all day with parameters in a function, statements in a block, items in a literal list. Which is why it has a dedicated command in my editor: "Move down" (https://i.imgur.com/wvcduDk.png). It's a single key. Works in all cases mentioned, for all supported formats.
I'm not saying a structured editor is for people who forget semicolons. It's for people who have to work with semicolons.
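The "Move down" command described above reduces to a trivial tree operation once the code is held as structure rather than text. A minimal sketch (the function name and list-of-strings representation are illustrative, not the editor's actual internals):

```python
# Sketch: "move down" as a structural edit. The element and its next
# sibling are swapped in the syntax tree, so separators (commas,
# whitespace) never need touching.
def move_down(siblings: list, index: int) -> int:
    """Swap siblings[index] with the next sibling; return the new index."""
    if index + 1 >= len(siblings):
        return index  # already last: nothing to do
    siblings[index], siblings[index + 1] = siblings[index + 1], siblings[index]
    return index + 1

# The tuple example from above, as a list of argument nodes:
args = ["(0, 0)", "(0, h)", "(w, h)", "(w, 0)"]
move_down(args, 0)
print(args)  # ['(0, h)', '(0, 0)', '(w, h)', '(w, 0)']
```

Because the same operation applies to any list-shaped node, one key can cover function parameters, statements in a block, and literal list items alike.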
That's a lovely example. I was going to post that I don't like structure editors, but your example is two keystrokes in Emacs: c-m-F and c-m-T -- which I now (after 35+ years) realize is a structure editor.
(one of the first things I worked on when I started at PARC was adapting an Emacs clone since I didn't like the Interlisp structure editor. Little did I know!).
Nice example, but...I work with graphics code a lot (I'm a game developer), as well as in tons of other domains. That operation wouldn't be worth the mental storage (and certainly not a key!) to memorize. I can't think of the last time I would have needed it. Maybe when getting a pair of function parameters in the wrong order?
The example you gave wouldn't ever be a program I'd write. Magic numbers? Hard-coded rendering? Rendering something that can't be edited by an artist? All bad. I understand that it's just an example, but can you come up with one that would be valid in production code?
I know you were looking for a simple example, but in addition to the comments others have made about syntax not being a big deal for an experienced developer, this looks like a helper operation for developers who are doing it wrong to begin with.
Aside from that, years of user-experience research shows that modal editors are worse than the alternative (if you can hit one key, that means you can't type text in that mode; sorry vim-lovers, but it's true).
Yes I tried your experiment, and I could get it down to 7 keystrokes in my editor. I type quickly, so that's a couple of seconds at most; less than a second if I'm in the zone. With how rarely I do that operation, I'd not bother trying to optimize it more (another comment mentioned premature optimization).
But if I did need to do it a lot, creating a new macro that did the right thing wouldn't be hard; in a couple minutes I could bind a simple version of the command to a key permanently. Also, if I were modifying many similar lines to change them, I have options: Refactoring tools (if we're talking a function signature change), simple keyboard macros, or smart multi-file search-and-replace regular expression conversions.
And all of this in a "text" editor with a low barrier to entry (unlike vim OR emacs) -- though one that understands some structure at least.
Although, you'd think vim would have a general "transpose text object" operator. If it were bound to some ŧ, then you could just do %ŧ% (here emulating Emacs' C-M-f C-M-t)
Short of having a transpose verb, here is what I came up with (using the arg text object):
dia # 1. delete argument under caret
df # 2. delete separator (comma and space)
% # 3. move after argument (i.e. to end of tuple)
p # 4. paste separator
"2p # 5. paste argument
I'm not doing this out of a desire to golf, but to analyse and point out that what vim lacks in this specific case is how to semantically do step 3 (ga/gA?) and possibly step 2 (because what if there's no space? a "separator", like "surround", might be useful), upon which one could easily and semantically build a "ŧa" (transpose argument) sequence (which could also become ŧ2a, ŧ3a to swap with the second or third next argument, and capitalize the a to swap with the previous).
That said vim is still "only" a text editor, but it goes a long way at being a general purpose one with semantic operations.
again... I hear you. been there, done that, have the T-shirt, over 35 years of coding in multiple domains, including GUIs, graphics, games, code-driven visual layouts, etc.
and I'm saying with a low-overhead deterministic-optimized editor like vim, and the right person at the keyboard, this can be done very quickly and accurately. and again, to continue your example, the hard part is not getting the syntax correct it's ensuring the resulting visual image -- your example suggests a vectory visual artifact ala OpenGL or SVG, etc. -- has the right shape and position. Syntax is something my brain/eye system just tells me, instantly, RIGHT or WRONG. our brains are great at this.
I'm not saying you're wrong. I'm saying for non-newb, non-lame programmers it's a use case that optimizes for a cost that's one of the smallest costs imposed on the programmer. not unlike "premature optimization".
I'm not saying syntax-enforcing keystrokes are a bad thing. I do think there are benefits to having a set of syntax-generic consistent keystrokes, like vim, across all the various syntaxes one has to deal with, day in and day out. If the only thing I ever had to edit was C files or JSON, that's it, nothing else, then yes having a C or JSON-semantic keystroke-restricted inescapable mode (with prompts, wizards, etc.) would be a help. (Which arguably is how all the big fat modern IDEs have evolved towards anyway.) But I'm very aware of the phenomenon where one can gain in the small but lose in the large. Local maxima, etc.
Also benefits to having screen match print, etc. Being grep-friendly, diff-friendly, textual VCS-optimized friendly, etc.
Local maxima. The sneakiest wrongs are right in the small.
How many programmers are out there that are "lame"? Honestly, I still to this day work with code written by developers with years of experience who still get syntax wrong. Often I forget that most skills follow a bell-curve, and most developers are not one-with-their-machine-and-language -- hell, even I'm probably not, although some days I might feel like it. I think having a system that removes that barrier would allow that vast sea of average programmers to move closer to that sublime moment where your thoughts are transcribed in code, perfectly, in one go.
> How many programmers are out there that are "lame"?
from a few decades of observation in the wild, I'm sad to say that it is a surprisingly large percentage. the market demand for programmers seems to exceed the supply of those of us who truly can.
I do think the "killer app" use case for the modalities you're talking about are data entry kinds of use cases. Where a non-expert user, perhaps one who has to deal with a variety of formats, very randomly, is required to type things in that strictly conform to a perfectly-defined syntax. And he/she has a gigantic amount of data to enter manually, by hand, in a short time. Higher throughput is better, but also highest correctness is the other dimension. Tiny percentage of time overhead spent on thinking, design, test. Mostly on data entry by hand. Then yes, your approach starts to yield disproportionately higher benefits.
Arguably one reason why XML/XSD/XSL took off. It was not just yet-another-structured-text-format like CSV and JSON. It also had an "official" way to express and constrain an application data format, and generic query and mapping languages. Great for low brain, high volume, high repetition use cases. However... programming itself is high brain, lower volume, low repetition.
An advantage of using something that isn't text is that our tooling can also be better. I think we've all had the experience of git diffs being complete messes due to some variables moving around and git matching the wrong parentheses.
I agree that with a language like python (or most all languages) syntax doesn't become an issue, but if we can reduce complexity on one part of the editor, maybe we can introduce stronger _general_ refactoring tools (not just language-specific ones).
Anyways I think there's some good research happening in the domain of program editing (light table being one thing, albeit derived from some other tools), because the objective isn't to write hello world quickly, but to be able to quickly and confidently iterate on larger codebases (much like what vim lets you do quickly for simpler formats)
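The "wrong parentheses" problem above is exactly what structure-aware tooling sidesteps: two snippets that differ only in layout parse to the same tree, so a structural diff would report no change. A toy illustration using Python's own `ast` module:

```python
import ast

# Two versions of the same assignment, formatted differently.
# A line-based diff flags a change; comparing the parsed trees
# (ast.dump omits line/column info by default) shows there is none.
a = "point = (x,\n         y)"
b = "point = (x, y)"

same = ast.dump(ast.parse(a)) == ast.dump(ast.parse(b))
print(same)  # True
```

A real structural diff tool would of course do far more (matching moved subtrees, etc.), but this captures the core idea.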
>Once out of the newb phase. Once I've fully grokked the language and I've had enough ramp-up that I'm in "the zone" it's nearly effortless to type syntax-perfect code on my 1st attempt. Even getting built-in library calls and macros right, on 1st attempt, becomes nearly effortless, once in that frame of mind.
And yet, it's this "nearly" that creates all the huge C buffer overflow and memory corruption bugs...
> I hear what you're saying. But I've been programming for 35 years. I've found that getting syntax right is at worst a tiny percentage of my time/energy/brain cost when doing programming.
I don't want structural editing because I can't remember the syntax of a language. I want it because I want to edit the code more efficiently. I don't know if I'll really be more efficient, but that's nonetheless what I want.
People who use plainer editors like Gedit or Notepad++ might scoff at Emacs or Vim users. But if they think power-editor users use them because they can't write or navigate text, they're really missing the point. The point is to navigate and edit text more efficiently.
The very first programming text editing interface I ever used was the ZX Spectrum's BASIC editor. The Spectrum had a peculiar modal entry system, where entering any keyword was always a single keypress. The keyboard had all the BASIC keywords written on and around the keys (see http://en.wikipedia.org/wiki/ZX_Spectrum#/media/File:ZXSpect...).
As you typed a BASIC command, the cursor would switch between being a flashing 'K' (keyword entry mode: the next key you press will enter the white keyword on that key), or in expression entry mode, a flashing 'L' (lowercase letter entry) or 'C' (capital letter entry). Other keywords were accessible using the 'Symbol Shift' key (to enter the symbols or function names in red on the key caps), or by pressing symbol shift and shift you could put it into 'Extended' mode and enter the red or green keywords above or below the keys.
The upshot of this was that Spectrum BASIC never had to lex code outside of basic expression parsing - and the only ASCII lexing it had to handle was expressions containing literals and variable names. Any keyword token was stored just as a single character token in memory - the program was stored tokenized and ready to be interpreted. It was fundamentally impossible to type certain classes of syntax error.
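The tokenized storage scheme can be sketched roughly as follows. The token values below match the Spectrum's published character set as far as I recall, but treat them as illustrative; the encoding function is entirely hypothetical:

```python
# Sketch of tokenized BASIC storage: each keyword is stored as a
# single token byte, so the program is effectively pre-lexed and
# certain syntax errors cannot be typed at all.
KEYWORDS = {"PRINT": 0xF5, "GOTO": 0xEC, "LET": 0xF1}

def store_line(parts):
    """Encode a list of (kind, value) pieces into a byte-value list."""
    out = []
    for kind, value in parts:
        if kind == "kw":
            out.append(KEYWORDS[value])        # one token per keyword
        else:
            out.extend(ord(c) for c in value)  # literals stay as characters
    return out

line = store_line([("kw", "PRINT"), ("text", ' "HI"')])
print(line)  # [245, 32, 34, 72, 73, 34]
```

Note that `PRINT` occupies one byte regardless of its length, and the interpreter never needs to re-lex it.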
I remember being kind of surprised when I came across the BBC Micro and C64 where you had to type BASIC keywords out in full. It felt crazy - how could the computer handle you mistyping 'prnit'? How come the computer let you type lines of BASIC in that were wrong?
I still think there's something slightly broken about the fact that text editors let you type syntactically invalid code.
That brings back memories. Despite the horrible squishy keys that made text entry painful, the keyword entry was pretty efficient once you got used to it.
It wasn't quite my first coding experience though - I had tinkered on a TRS80, so I didn't quite have that same feeling of shock when moving on to something else.
The most complete example of a structured editing system I know of is Doug Engelbart's NLS system from the Mother of All Demos (from 1968!) [0]. In the system, text, drawings and code are all structured hyperdocument data. It is quite different from most software we use today. As seen in the demo, structured editing works well not just on code but also on text.
I have been experimenting with implementing some of these ideas in the browser[1]. I primarily use the system to take text notes and write down my ideas. You can't program in it yet, but part of the program is driven by data structures that are created, edited and manipulated in the system itself. Instead of looking at and manipulating the raw data, however, you can render the data to look like a pseudo-DSL.
If you had a really good, intuitive tree editor, I would imagine it would be easily adaptable to all kinds of interfaces and scenarios. Mobile, web, VR...
People who write Lisp in Emacs seem to swear by Paredit.[1]
I've not used it so I can't respond to either "really good" or "intuitive", but let's see if this works: To the Paredit user who just clicked the comments link followed by Ctrl+F paredit: what do you think of it?
> To the Paredit user who just clicked the comments link followed by Ctrl+F paredit: what do you think of it?
How the hell did you see me doing that? :O. I searched for it after reading the top-level thread, since a lot of comments are basically describing Paredit.
I'm using Paredit to write Lisp and I really miss this style in other languages. It takes some time to get used to - I finally grokked it after spending ~1 hour (two pomodoros) on structuring and restructuring a block of Lisp code. But after that hour of practice, writing code feels much different.
Lisp code is an explicit tree structure, and what Paredit does is enforce that structure. It lets you move things up and down the tree, or left and right at the same level, automatically maintaining the structure (keeping your parens balanced). It properly handles cutting and pasting parts of a tree. After you internalize those features, you really start to think of code in terms of trees instead of text representation.
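The reason Paredit-style editing can't unbalance parentheses falls out of the representation: if code is held as a tree rather than text, every edit is a tree edit, and printing the tree always emits matched parens. A sketch (nested Python lists standing in for Lisp forms; this is not Paredit's implementation):

```python
# Sketch: Lisp forms as nested lists. Edits rearrange nodes, and the
# printer re-derives the parentheses, so they can never be unbalanced.
def to_text(node):
    """Print a form tree back out as Lisp-style text."""
    if isinstance(node, list):
        return "(" + " ".join(to_text(c) for c in node) + ")"
    return node

tree = ["+", ["*", "a", "b"], ["/", "c", "d"]]
# Transpose the two subexpressions (a typical Paredit-style move):
tree[1], tree[2] = tree[2], tree[1]
print(to_text(tree))  # (+ (/ c d) (* a b))
```

The transpose touched no delimiters at all; balance is preserved by construction.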
Paredit is one of the reasons I find writing emacs-lisp really enjoyable, it made me dislike Python's lack of braces since I could no longer fly through my code as fast :-)
Note: there is now https://github.com/Fuco1/smartparens which might supersede paredit, supporting all kinds of languages; I haven't given it a good try yet though.
A lot of the infrastructure of the current web sprung out of Dave Winer's experiments with Frontier. From RSS/OPML/podcasts to weblogs to RPC-over-HTTP, Dave and his employees at UserLand (which included Brent Simmons and IIRC Aaron Swartz) were pioneers that contributed a lot to the Internet we currently live in.
I've personally been pulled into another project, but I do think there's plenty of unexplored space in the structured code editor arena. Paredit is good, but I'd like to see a more visual way of dealing with the structure.
The challenge is, of course, that a general-purpose tree editor is not likely to be efficient for editing code. You have to resist the urge to over-generalize. A good structured code editor isn't likely to be useful for anything besides coding.
I've been hearing this sort of thing for years (decades?), but I have yet to see someone put a toolchain together that works well using these concepts. It would be interesting to see someone try it instead of just talking about how it would be a good thing.
Pretty much anything that doesn't require years of training as a programmer uses a mixture of structured editing and small chunks of raw text. If you look at it by sheer number of users, structured tools are winning and have been for a long time.
Except for programmers. And when people try to do what we do, but in Excel, it's bad. Or if they try to do it in Labview, it's also bad. (I've personally seen both.)
These tools can't do, structurally, what git (or even cvs) does with plain text, or what vim or emacs (or even notepad!) do with plain text.
When people try to do what we do, but in python/emacs/git, it's bad. It's not like if you banned excel the same people would suddenly produce a beautifully factored python program. They just wouldn't have anything at all.
Right now we have structured tools with easy learning curves but low ceilings, and raw text tools with high ceilings but with a learning curve like being punched in the face with a brick wall. There is definitely room to explore in between the two.
Come back when you can write excel as an excel macro. The argument that spreadsheets are easy for non programmers to use is not an argument for replacing programmers current tools with structured tools. Nor an argument against.
Come back when you can write a database in sql, or a browser in html. Hell, try writing a browser in javascript.
How much of what the average programmer does looks like implementing a programming language and how much is just throwing some UI over a CRUD database? Why insist that a tool is only worthwhile if it can do both? There is plenty of room for tools that just solve the kinds of problems that most people have and do it without requiring years of training.
The vast majority of knowledge workers still rely on tools like Excel and Labview even when they are grossly unsuited for the task at hand, because the alternatives we offer require far too much training.
No-one is going to take your emacs away, we're just trying to figure out what the other 99% of the population is going to use.
Instead of Mat.Inverse(), there's an operator that actually means "take the inverse of this matrix". Likewise, the power operator in J, if given infinity as the exponent, will apply a function to a value until the fixed-point is reached. Haskell's infix operators also steer towards this direction of expressiveness.
It's pretty liberating to be able to use programming symbols to represent computations in the same way the integral symbol represents integration. However, the barrier to literacy is inevitably raised.
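J's power operator with an infinite exponent can be sketched in a few lines: apply a function repeatedly until the value stops changing. This is a generic illustration of the fixed-point idea, not J's actual semantics in every edge case:

```python
# Sketch: "apply until fixed point", the behavior J's power operator
# exhibits when given infinity as the exponent.
def fixpoint(f, x):
    """Apply f repeatedly until the result stops changing."""
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

# Example: repeated integer halving converges to the fixed point 0.
print(fixpoint(lambda n: n // 2, 100))  # 0
```

Note this only terminates when a fixed point is actually reached; J users face the same caveat.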
With previously working code above and below, I start to declare foo. In the process I introduced unmatched '{', '(' and '"', and am referencing the not-yet-(fully-)declared foo. This is routine editing, but it causes huge problems for tools like:
* typechecking
* go to source
* code folding
* autocomplete
* etc
Visual Studio and C# aren't without problems, but they are without this problem. This works fine.[1] I'd be quite surprised if it didn't also work in VS-supported languages such as VB.net, F#, and JavaScript - but I can't attest to it.
[1] - I personally find it annoying that VS complains about unmatched punctuation the moment I insert the opening one. "Jesus, VS, give me a minute!" And there are other parts of VS and/or C# that I (and probably you) find annoying. But the above is a Solved Problem, if you choose your tools carefully.
It's not a solved problem if your project is gigantic or if you're doing cross-platform development and don't want to or can't use the build system of the IDE or if it's C++.
To me solving the problem has a higher bar - you should be able to open a large project for the first time and have usable intelligent editor support within a second. I'm even okay with the results of autocomplete being incorrect (to a degree) if they are fast.
I think IDEs make the wrong tradeoffs. They require you to use their build system and insist on parsing all the code in your project before providing anything useful. And guess what: none of the IDEs that I tried for C++ provides something as basic as a fuzzy file finder by default. Most don't even have plugins for that, and those that do have very bad and slow implementations.
For me something like Sublime Text with a few plugins works great. And before anyone goes yada yada about semantic autocompletion and all that - you can get that too at a very cheap cost - look at YouCompleteMe for vim. For project wide searching I find it easier and more reliable to use grep (or git-grep, ag, ST built in search, etc.) than relying on the IDE's find symbol functionality.
> It's not a solved problem if your project is gigantic or if you're doing cross-platform development and don't want to or can't use the build system of the IDE or if it's C++.
I can't comment on whether the problem is solved for C++ because I haven't done C++ development in several years. But, for the other potential roadblocks you listed: no, they're all solved.
> To me solving the problem has a higher bar - you should be able to open a large project for the first time and have usable intelligent editor support within a second.
Well, therein lies the rub. With the "intelligence on a novel project within one second" requirement, if it hasn't already been solved then you will never find it to be.
Many people are fine with a tool that has a large(r) start-up cost if saves them time (and/or headache) during run-time. Also, I don't think - though I can't prove - that it's common for people to open and close large projects in IDEs often enough that start-up time is a primary concern.
Combine that with increased tool feature count and project complexity as time goes on, as well as limited resources for tool development, plus the current start-up times being Good Enough, and the result is a problem that won't get solved.
> [IDEs] require you to use their build system...
I'm not sure what you mean. In my experience, you can use IDEs as glorified text editors - and build from the command line or some other external build tool.
In some sense pointing out 'text'ness of the text tools is the wrong thing to do (e.g., 'text' is something that contrasts with 'binary'). All of Excel, LabView, Scratch etc (as jamii pointed out in the other comment, as examples of structured editors), could have text file formats behind the scenes which could still be diffed, grepped etc. (unless the author made the distinction between text and ASCII/Unicode).
I think the author has made an interesting point, but I would add that the "time has come" not to abandon text tools (which I don't think has worked; I think the trend is moving away from Excel to R and python+pandas in data science, etc.), but to have a hybrid system, where the text representation of the "view" is visible to the programmer all or most of the time.
For example:
- we either have plain text "LaTeX" editing where we edit our document/source "blindfolded", and only get to see the result when we compile and view the pdf.
- we use a WYSIWYG tool like Word (and some WYSIWYG LaTeX editors) where we stick to the view and are scared (sometimes it's impossible) to dig into the text representation
- what we need is something like a hybrid, where a "view" line and a "text" line are interleaved, or "view" on the left side and "text" on the right side. And we could edit either of the two, and the other should update.
I've been thinking along these lines for quite a while, and if I get a chance I intend to create some kind of system based on that.
Macromedia Dreamweaver had a nice split pane option where you could view both the html and rendered page at the same time. You could edit either side at will, and they both would stay in sync. Some things were much easier just dragging and dropping them into place, whereas other items it was easier tweaking text to get it to look right.
This article makes exactly the same mistake that pretty much everyone else suggesting programmer tooling beyond plain text has made: assuming it has to be either or, that if we want more sophisticated tools that use a more complex representation, we have to give up text.
The suggestion is a nonstarter. Network effects alone would prevent the world making such a move, and it's a good thing, because the loss wouldn't be just a particular existing tool, it would be the entire universe of tools that work with text, that the world has spent decades developing, most of which any individual has never even heard of, let alone thought about how to replace.
It's also completely unnecessary. If you want a tool that lets you view and edit your code in a fancy table format, you can write one. All it needs to do is parse the existing source code into whatever internal format it wants and write it out again afterwards. Yes I know the author criticizes parsers, but really, writing reliable parsers has been done often enough to make it clear that it's a solvable problem. And would it not be better to have some of what you're looking for in an actual tool than all of it in an imaginary one?
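The "parse into an internal format, manipulate, write back out" workflow the comment describes is routine today. A minimal sketch using Python's own `ast` module (`ast.unparse` needs Python 3.9+; note a real tool would also preserve comments and formatting, which this does not):

```python
import ast

# Round-trip: parse source to a tree, manipulate the tree however the
# tool wants, then write source back out.
source = "total=price*qty+  tax"
tree = ast.parse(source)
# ...any structured manipulation would happen here, on the tree...
print(ast.unparse(tree))  # total = price * qty + tax
```

Even with no manipulation at all, the round trip normalizes the formatting, which hints at how a table-view tool could layer any display it likes over ordinary text files.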
It's probably because I have spent way more time in front of an IDE than in (advanced) maths classes that I find the code example vastly more readable. I had no idea what the integral symbol on the left meant, while the function name makes it obvious. Likewise, the parentheses make it easy for me to understand what gets parsed first.
I was working on a text-free structural editor a couple of years ago which people seemed to like (and probably now think of as vaporware). I was writing it under kind of ridiculous conditions, though, and needed to take a break. I've moved its project page here recently (with video): http://symbolflux.com/projects/tiledtext
I had a realization a couple weeks ago about how to simplify some core pieces (which were making undo/redo... insane), and started getting back into the code. I wish I could dedicate myself to working on it full-time, but I need to be cautious about coding too much. So, it's either I work a shitty minimum-wage kind of job, or don't really do side projects. Thinking of leaving professional software again in order to keep the side projects :/
colorForth adds at least the color dimension to a programming language. Words have different meaning depending on the color used. This reduces the amount of punctuation and the need for "reserved" words as we see in traditional languages.
This is just a display issue, you could display text in different styles such as underlined or bold, for example. A good text editor can easily do this.
One of the advantages of structured code editors is that you don't have to store plain text; you can use a code per construct. I guess this is one of the reasons the Sinclair ZX81 (1KB RAM) used a structured editor.
The article claims that parsing isn't necessary with structured editing. We're working on some abstract data structure representing our language's syntax. This might be an AST, like any programming language in common use today, or maybe in the future we've thought up some better way to represent programming languages within the compiler/interpreter. Since we're working directly with the AST or whatever, we don't need to parse! We just do whatever actions the programmer wants to do on our data structure.
Now, let's talk about how parsing works in your language of choice in 2015. We're going to take a string, and we'll turn it into a representation of our program. If we're lucky, our language implementation isn't awful, and our parser will be a function from some sort of Unicode-encoded text into an AST. OK, maybe our input is ASCII or some other text encoding, but the point is the same. Either way, we're taking some keypresses from the programmer, and turning them into a data structure representing the program.
Let's abstract a bit. Maybe in the future we don't use keyboards. Instead, we have whatever peripheral you like. This peripheral is capable of sensing some sort of action from the user, and turning it into actions within the computer. So now, our parser is a function from some user action to a data structure representing the program.
This sounds an awful lot like "we just do whatever actions the programmer wants to do on our data structure". The question is, how do we figure out what the user wants? The answer is, we parse it! It doesn't matter if we're parsing text or not. We will always need some way of determining the programmer's intention from the signals we get through their peripheral device. Viewed through this lens, the camera requires a parser just as much as the keyboard does, just as much as the mousepad does, just as much as the microphone does. We will always need a way to convert the unstructured thoughts of a human programmer into the formal language of a compiler's internals. Regardless of what representation we choose for any part of this process, it's going to be subject to the article's quote:
> This situation is a recipe for disaster. The parser often has bugs: it fails to handle some inputs according to the documented interface. The quoter often has bugs: it produces outputs that do not have the right meaning. Only on rare joyous occasions does it happen that the parser and the quoter both misinterpret the interface in the same way.
No. There is a large difference between parsing commands which affect the structure of a program and directly parsing the structure itself. For one, commands that affect the structure in an invalid way can be rejected. Second, commands can be stored so that a history of source code manipulations can be replayed (which while possible via text ala git is very lossy and subject to merge issues - as stated in the article). In addition, commands are limited by context. So when parsing the user's input we now have a limited context to work within greatly improving accuracy and also simplifying the input space of the user.
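The distinction drawn above can be made concrete: edits are commands applied to the tree, so invalid ones can be rejected before they touch anything, and valid ones are recorded for lossless replay. A sketch with entirely illustrative names:

```python
# Sketch: structural edits as validated, replayable commands.
class SwapArgs:
    """Command: swap two arguments of a call node (a plain list here)."""
    def __init__(self, i, j):
        self.i, self.j = i, j

    def apply(self, args):
        if not (0 <= self.i < len(args) and 0 <= self.j < len(args)):
            raise ValueError("rejected: indices outside the node")
        args[self.i], args[self.j] = args[self.j], args[self.i]

history = []

def run(cmd, node):
    cmd.apply(node)      # raises (and is not recorded) if invalid
    history.append(cmd)  # otherwise kept for lossless replay

call_args = ["dst", "src", "len"]
run(SwapArgs(0, 1), call_args)
print(call_args)         # ['src', 'dst', 'len']

try:
    run(SwapArgs(0, 9), call_args)
except ValueError:
    pass                 # invalid edit rejected before touching the tree
print(len(history))      # 1
```

Contrast this with text: a character-level edit history can produce intermediate states that aren't programs at all, which is exactly what makes text-based merges lossy.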