In my (admittedly limited) experience, the difficulty of working with Selenium doesn't stem so much from finding appropriate selectors as from brittleness: network delays/timeouts, objects not yet loaded into the DOM, dynamically generated class names (think scraping obfuscation).
I do see a niche for a system (AI or not) that deals with that last case (i.e. one that automatically grabs some correlated selectors to fall back on), or that serves as a quick tool to scrape static sites.
However (and this may be out of your proposed scope, and that's fine) I see more value in something that can construct a sort of DOM timeline, in order to know accurately what is available and when. If you start "recording" network and user events from page load, you may be able to reconstruct a) which nodes the user interacted with, and b) which preconditions there are for those nodes to be consistently available, no matter which stochastic delays/errors are present.
This is tricky and time-consuming even when done manually, so I'm not sure it can be AI-implemented. But that's maybe an idea to explore down the line :)
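For concreteness, here's a minimal sketch of what the recording side of that timeline could look like, assuming a browser context with MutationObserver and PerformanceObserver available; the event names and timeline shape are made up for illustration, not anyone's actual tool:

```typescript
// Illustrative DOM/network/user-event timeline recorder (browser-side sketch).
type TimelineEvent =
  | { t: number; kind: "dom-added"; selector: string }
  | { t: number; kind: "dom-removed"; selector: string }
  | { t: number; kind: "network"; url: string; duration: number }
  | { t: number; kind: "user"; type: string; selector: string };

const timeline: TimelineEvent[] = [];
const now = () => performance.now();

// Rough selector for logging purposes only (id/class based, not guaranteed unique).
function describe(el: Element): string {
  if (el.id) return `#${el.id}`;
  const cls = typeof el.className === "string" && el.className.trim()
    ? "." + el.className.trim().split(/\s+/).join(".")
    : "";
  return el.tagName.toLowerCase() + cls;
}

// 1. DOM mutations: which nodes appear/disappear, and when.
new MutationObserver(records => {
  for (const r of records) {
    r.addedNodes.forEach(n => {
      if (n instanceof Element)
        timeline.push({ t: now(), kind: "dom-added", selector: describe(n) });
    });
    r.removedNodes.forEach(n => {
      if (n instanceof Element)
        timeline.push({ t: now(), kind: "dom-removed", selector: describe(n) });
    });
  }
}).observe(document.documentElement, { childList: true, subtree: true });

// 2. Network activity: resource timings tell us what finished loading and when.
new PerformanceObserver(list => {
  for (const e of list.getEntries() as PerformanceResourceTiming[]) {
    timeline.push({ t: e.responseEnd, kind: "network", url: e.name, duration: e.duration });
  }
}).observe({ type: "resource", buffered: true });

// 3. User events: which nodes the user actually interacted with.
for (const type of ["click", "input", "submit"]) {
  document.addEventListener(type, e => {
    if (e.target instanceof Element)
      timeline.push({ t: now(), kind: "user", type, selector: describe(e.target) });
  }, { capture: true });
}
```

Correlating the "dom-added" entries for interacted-with nodes against the preceding "network" entries is where the preconditions would come from; that part is the genuinely hard bit.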
This is actually very interesting. I like how it's done. Great job.
We do something similar, but we also understand the surrounding box (context), so we can say: we should be on the Sign in box, and a button called Sign in should be present. If we are on a different page and cannot find anything similar, we give better error messages, like: we should have been on the Sign in page but we are on Sign up, etc.
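A minimal sketch of that kind of context check (the helper name and selectors here are hypothetical, not the commenter's actual implementation):

```typescript
// Hypothetical context-aware assertion: report *where* we actually are
// instead of a bare "element not found".
function expectContext(boxName: string, boxSelector: string, buttonText: string): void {
  const box = document.querySelector(boxSelector);
  if (!box) {
    const heading = document.querySelector("h1, h2, [role=heading]");
    const actual = heading?.textContent?.trim() ?? "an unknown page";
    throw new Error(`Expected to be on the "${boxName}" box, but we appear to be on "${actual}".`);
  }
  const button = Array.from(box.querySelectorAll("button"))
    .find(b => b.textContent?.trim() === buttonText);
  if (!button) {
    throw new Error(`On the "${boxName}" box, but no "${buttonText}" button is present.`);
  }
}

// Usage: assert the Sign in box and its Sign in button before interacting.
expectContext("Sign in", "form.sign-in", "Sign in");
```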
Any one of those will have bits of the other two in it. MDC establishes the mindset; the other two are deep dives into setting fees and drafting proposals, the learnings from both of which served me very well.
This study is in the context of previously assumed U-shaped relationship: low equality == high birth rates, medium equality == low birth rates, high equality == high birth rates. The study shows that the last part doesn't seem to be true.