We paid a lot of attention to this exact problem at Delver; talking to potential users, it was a big concern, especially for reporting/BI. We'd never give results until we had a single unambiguous parse of a question, but that required a core of quite old-fashioned NLP, with machine learning only really used around the edges. So ultimately we'd always generate a series of parse trees with embedded semantics that composed together, but the parser could consult things like external ontologies or word embeddings to work out the domain's exact analogue for something the user said (and confirm with them that the new wording was what they meant).
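For a rough sense of the "single unambiguous parse" rule, here's a toy sketch (all names and example queries are hypothetical, not Delver's actual code): each candidate parse carries its composed, executable semantics, and the system only answers when exactly one parse survives, otherwise it asks the user to confirm a reading.

```python
# Toy sketch of "only answer on a single unambiguous parse".
# Everything here is illustrative; a real system would have a grammar
# and ontology lookups instead of hardcoded candidates.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Parse:
    reading: str                      # human-readable gloss of this parse
    semantics: Callable[[], object]   # composed, executable meaning

def candidate_parses(question: str) -> list[Parse]:
    # Stand-in for a real parser: "top customers" is ambiguous
    # (ranked by revenue? by order count?), so it yields two parses.
    if "top customers" in question:
        return [
            Parse("customers ranked by revenue", lambda: ["Acme", "Globex"]),
            Parse("customers ranked by order count", lambda: ["Initech", "Acme"]),
        ]
    if "total revenue" in question:
        return [Parse("sum of revenue across all orders", lambda: 1_250_000)]
    return []

def answer(question: str) -> Optional[object]:
    parses = candidate_parses(question)
    if len(parses) == 1:
        # Exactly one parse: safe to execute its semantics.
        return parses[0].semantics()
    # Zero or many parses: never guess; echo each reading back
    # to the user for confirmation instead.
    for p in parses:
        print(f"Did you mean: {p.reading}?")
    return None

print(answer("total revenue"))   # unambiguous, so it answers
print(answer("top customers"))   # ambiguous, so it asks and returns None
```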
It was reasonably cool but I was a terrible CEO and slowly killed it:
https://youtu.be/a2ufq6CdCYw