Looks very powerful, especially if you can popularize it. Maybe mention the plugins and the other platforms besides macOS on the website. Also, the chat widget wouldn't let me tap reply on my Nexus Android phone.
I wonder if the parsing, schemas, transformations, etc. could be a starting point for an advanced neural-network AI programmer, perhaps if you combined them with an interactive system that converts text to schemas.
For example, say you had a way (a skill?) to convert a schema to a web form. The AI would then need to handle speech-to-schema changes such as "add a field for Middle Name". Then perhaps there is a skill to go from the web form to a save-data request, another to accept the request, another to update the database schema, and so on.
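Just to make that concrete, here's roughly what I'm imagining once the speech layer has turned that phrase into a structured operation. This is plain JavaScript; the schema shape and the applyChange helper are made up for illustration and aren't anything Optic-specific:

```javascript
// Hypothetical sketch: a tiny "Person" schema plus a structured change
// produced by some speech-to-schema layer. All names are invented.
const personSchema = {
  title: 'Person',
  type: 'object',
  properties: {
    firstName: { type: 'string' },
    lastName: { type: 'string' }
  }
};

// "Add a field for Middle Name" parsed into a structured operation
const change = { op: 'addField', name: 'middleName', fieldType: 'string' };

function applyChange(schema, change) {
  if (change.op === 'addField') {
    return {
      ...schema,
      properties: {
        ...schema.properties,
        [change.name]: { type: change.fieldType }
      }
    };
  }
  return schema;
}

const updated = applyChange(personSchema, change);
// A "schema -> web form" skill would then regenerate the form from `updated`.
```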
But the AI might be more useful for interpreting more complex schemas involving nesting, grouping, workflows, etc.
Even apart from an advanced domain-specific AI, this foundation of parsing, schemas, and transformations seems like a good start for speech-based or other higher-level program-generation tools. I can envision a web page that generates a CRUD application, from mobile front end to Firebase back end, and spits out code you can then add more advanced features to.
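For a single "Person" entity, the Firebase side of that generated output might be nothing fancier than this (a hand-written sketch using the firebase-admin Firestore API; a real generator would also emit the mobile front end and routes):

```javascript
// Illustrative sketch of generator output for a "Person" entity
// backed by Firestore via firebase-admin.
const admin = require('firebase-admin');
admin.initializeApp();

const db = admin.firestore();
const people = db.collection('people');

const createPerson = (data) => people.add(data);                // Create
const getPerson    = (id) => people.doc(id).get();              // Read
const updatePerson = (id, data) => people.doc(id).update(data); // Update
const deletePerson = (id) => people.doc(id).delete();           // Delete

module.exports = { createPerson, getPerson, updatePerson, deletePerson };
```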
So embracing open source seems like a smart strategy, because it lets other developers add mappings that you can then leverage in your own app- or program-generation tools, which you can sell.
To me the interactive, web-based CRUD application generator is low-hanging fruit, but it would be more interesting to combine your tech with a Google Duplex-style interactive, speech-based AI-programmer front end. Probably only Google and a relatively small number of companies could pull something like that off, though.
Thanks for sharing all these awesome ideas. I can't say I know enough about NLP to evaluate how feasible some of your speech-to-code ideas are, but I do think Optic is a great base for any project trying to do meta-coding.
I'd love to see a two-way code generator built on Optic for a simple CRUD builder like you describe. It could let people choose the libraries they want from a list and design their schemas, routes, and queries at some higher level of abstraction. Unlike current generator tools, you could actually make changes to the high-level models and get Optic to update the underlying code for you, even after you've made changes to it.
That whole field could be huge, and I think Optic's major contribution will be abstracting away all the dirtiness of code gen. People who build this next generation of tools could interact purely with Swagger-like JSON.
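To sketch what I mean by Swagger-like JSON (the exact shape here is invented for illustration, not Optic's actual format), the high-level model a builder exposes might look something like:

```javascript
// Invented example of a "Swagger-like" high-level model a CRUD builder
// could expose. Library names and field layout are only placeholders.
const projectModel = {
  libraries: { server: 'express', db: 'mongoose' },   // picked from a list
  schemas: {
    Person: {
      fields: {
        firstName: 'string',
        lastName: 'string',
        email: { type: 'string', unique: true }
      }
    }
  },
  routes: [
    { method: 'POST', path: '/people',     action: 'create', schema: 'Person' },
    { method: 'GET',  path: '/people/:id', action: 'read',   schema: 'Person' }
  ],
  queries: [
    { name: 'findByEmail', schema: 'Person', where: { email: '$email' } }
  ]
};
// Editing this model and having the underlying code update (and vice versa)
// is the two-way part.
```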
Is this something you'd be interested in collaborating on?
I think ordinary NLP wouldn't get you very far at all, but it seems the Google Duplex people have some advanced tricks. It's still probably a bit of a stretch, though.
A two-way code generator for CRUD applications is a lot more realistic for a start. I am interested in trying to collaborate on that, although I can't promise how much time I will have because of other obligations.
But it would be fun to at least get it started. You began describing something; if you want, maybe you could elaborate a bit more on how you think this type of tool would work, in a document online somewhere or just in a reply. It would be fun to play around with Optic with that kind of goal. I'll need to read the docs and experiment some to make sure I understand the system before I get too far along in coding anything. Mainly I use Node and JavaScript these days, so theoretically that could work for a web-based CRUD builder... I guess by talking to the Scala server over REST?
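Just so I have something concrete in mind, I'd picture the Node side as a thin REST client along these lines; the endpoint, port, and payload shape are pure guesses on my part until I've read the docs:

```javascript
// Guesswork sketch of a Node client talking to Optic's Scala server over REST.
// The URL, port, and request/response shapes are assumptions on my part,
// not Optic's documented API.
const axios = require('axios');

async function requestGeneration(highLevelModel) {
  // hypothetical endpoint; whatever the real server actually exposes
  const response = await axios.post('http://localhost:3000/generate', highLevelModel);
  return response.data; // presumably the generated / updated code artifacts
}

module.exports = { requestGeneration };
```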
Optic uses React/JS for all its GUIs. We're planning to have micro-editors that pop up and help you work with specific kinds of code, so this might just run from within Optic somehow.
All fun ideas to brainstorm. Let’s take it offline. Email me at aidan@useoptic.com