"At first it is helpful to be able simply to look at what you want and have it occur without further action; soon, though, it becomes like the Midas Touch. Everywhere you look, another command is activated; you cannot look anywhere without issuing a command."
Yeah, Apple Vision doesn't have this problem, because eye tracking is used just for pointing, not for clicking on items.
Vision Pro's pinch-touch is essentially the same as this paper's
> some form of ‘‘clutch’’ to engage and disengage the monitoring
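Loosely, that clutch is what you get for free in a visionOS app. Here's a minimal SwiftUI sketch (assuming the standard system behavior where gaze only drives the hover highlight and the pinch arrives as an ordinary tap; `GazeClutchDemo` and the label text are just made-up names, not anything from the paper):

```swift
import SwiftUI

// Minimal sketch of the gaze-plus-clutch pattern on visionOS.
// Gaze only highlights the target (the system renders the hover effect);
// the action runs only when the user pinches, which the system delivers
// as a tap. Looking alone never commits anything.
struct GazeClutchDemo: View {
    @State private var count = 0

    var body: some View {
        Text("Pinched \(count) times")
            .padding()
            .contentShape(Rectangle())   // define the hover/hit region
            .hoverEffect(.highlight)     // gaze feedback: pointing, not committing
            .onTapGesture { count += 1 } // fires on the pinch "clutch", never on gaze alone
    }
}
```

As I understand it, the app never even sees raw gaze data; the system draws the highlight itself and only reports the committed pinch, which is exactly the engage/disengage "clutch" the paper is describing.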
The paper does talk about other challenges with look-to-select, even if it's colored by the thinking of its day:
> Unlike a mouse, it is relatively difficult to control eye position consciously and precisely at all times.
You have to remember that the historical setting for much of this research was helping paralyzed people communicate, where pushing a button, or the modern pinch-touch, was not always an option.
> They expect to be able to look at an item without having the look cause an action to occur.
So this doesn't seem to be a problem that Vision Pro has?