> Simple debugging. This is an area where AI techniques could be quite helpful.
> Detecting and explaining common syntax and runtime errors is a crucial step in
> teaching a new language. Tools which provide this type of support could increase
> the interest in this language in the mainstream. Our current debugging support is
> minimal and provides only the basic commands: step, skip, and continue.
I have given some thought to this in the past. Has there ever been a serious attempt at this?
Sometimes the error messages for really simple mistakes (missing semicolon, wrong punctuation, etc.) are obtuse and almost completely meaningless. Of course, if it were easy to give a simple, clear error message in such cases, the resulting message would already be simple, so it's other complexities that get in the way. The biggest problem I see is that labelled training data, e.g. source code that fails compilation paired with the specific actual error, is unlikely to exist, and having an expert provide small, self-contained examples is unlikely to yield enough training data to be effective on real-world errors, even when novice programs are fairly small.
Edit: I was just thinking about this, and with prudent use of a VCS, labelled training data would just be pairs of the form &lt;commit that fails, next commit that compiles&gt;. It would be interesting to see if it's possible to pull useful data from existing history, or if some level of discipline from committers is necessary in practice. Really, I'm just thinking about the boundary between supervised and unsupervised learning here.
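To make the idea concrete, here is a minimal sketch of the pairing step, assuming we have already walked the history and recorded whether each commit compiles (the commit IDs and the `mine_pairs` helper are hypothetical, just for illustration):

```python
def mine_pairs(history):
    """Given a chronological list of (commit_id, compiles) tuples,
    return <failing commit, next commit that compiles> pairs as
    candidate labelled training examples."""
    pairs = []
    pending = []  # failing commits still waiting for a fixing commit
    for commit_id, compiles in history:
        if compiles:
            # This commit "fixes" every failing commit seen since the
            # last successful build.
            pairs.extend((bad, commit_id) for bad in pending)
            pending = []
        else:
            pending.append(commit_id)
    # Failing commits at the tip of history have no fix yet and are dropped.
    return pairs

history = [("a1", True), ("b2", False), ("c3", False),
           ("d4", True), ("e5", False)]
print(mine_pairs(history))  # [('b2', 'd4'), ('c3', 'd4')]
```

Note that this pairs an entire run of failing commits with a single fixing commit, which already hints at the "restraint" question: small, frequent commits would give much cleaner labels than long broken stretches.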
Among the differences between Felleisen's "Student Languages" and Racket proper is the higher quality of their error messages. However, that's not a function of AI but of the much smaller size of the DSLs and their more narrowly defined purpose.