Maybe with LLMs it won't matter much anymore? Maybe in the near future we'll have a Copilot for Crystal as a VS Code plugin, and nothing else will matter much, just the quality of that plugin?
> We're going to write a program in a "new" language that is a mix of Ruby and INTERCAL. We're going to take the "come from" statement and use it to allow "hijacking" the return of a function. Furthermore, we're going to do it conditionally. "come from <method> if <condition>" will execute the following block if <method> was executed and <condition> is true. In <condition>, "result" can be used to refer to the result of executing <method>.
And that was enough to get it to understand my example code and correctly infer what it was intended to return.
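The original example code isn't shown here, but the described semantics are easy to pin down. Below is a minimal plain-Ruby sketch (module, class, and method names are my own, not from the thread) that emulates "come from <method> if <condition>" by wrapping the target method, so a handler can conditionally hijack its return value:

```ruby
# Emulates the hypothetical "come from <method> if <condition>" construct
# in plain Ruby by redefining the target method with a wrapper.
module ComeFrom
  def come_from(method_name, condition, &handler)
    original = instance_method(method_name)
    define_method(method_name) do |*args, &blk|
      result = original.bind(self).call(*args, &blk)
      # Hijack the return value if the condition holds for the result.
      condition.call(result) ? handler.call(result) : result
    end
  end
end

class Calculator
  extend ComeFrom

  def halve(n)
    n / 2
  end

  # Roughly: "come from halve if result.odd?", returning 0 instead.
  come_from(:halve, ->(result) { result.odd? }) { |result| 0 }
end

calc = Calculator.new
puts calc.halve(10) # 10 / 2 = 5, odd  => hijacked, prints 0
puts calc.halve(8)  #  8 / 2 = 4, even => passes through, prints 4
```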
Given how little that took, I don't think you need much code in a new language to get a reasonable starting point, as long as you can give examples and document the differences. Then use it on a few large projects that have good test suites, and work through any breakage.
If the LLM understands the language, it can aid in building the libraries and ecosystem, because it can also translate code. I just tested this by having ChatGPT translate one of my Ruby scripts to Python, for example.
I don't like Crystal all that much, but it's similar enough to Ruby that if ChatGPT can handle Ruby->Python, it can handle Ruby->Crystal with relatively little work.
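To illustrate the overlap (my own toy example, not model output): a snippet like the one below runs unmodified as both Ruby and Crystal, which is why the translation step is mostly about types and library differences rather than syntax.

```ruby
# Valid as both Ruby and Crystal: identical method syntax, blocks, and ranges.
def fizzbuzz(n)
  (1..n).each do |i|
    if i % 15 == 0
      puts "FizzBuzz"
    elsif i % 3 == 0
      puts "Fizz"
    elsif i % 5 == 0
      puts "Buzz"
    else
      puts i
    end
  end
end

fizzbuzz(15)
```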
But it doesn't need to handle it flawlessly, because every new library you translate and fix up gives you a new codebase you can use to fine-tune it.
At that point, why bother with high-level languages at all? A sufficiently good AI should be able to read a specification or test suite and directly generate a binary that passes those tests.
Maybe. Or maybe they'll become the lowest common denominator for validation and for seeing what is actually happening. Judging by recent AI advances, programming languages also seem to work well as an intermediate language for humans<->machines and machines<->machines.