> I'm sure that the engineers are flagging voice requests that happen more than once
As a user, I expect to see feedback declaring that it's having trouble and specifically requesting permission to send my unparseable requests to some review queue. I expect to see feedback indicating what could resolve the issue: was it something I said? Was there too much noise in the background? Was there a software fault, and will a fix be deployed at such-and-such date?
The fact that I do not get any feedback whatsoever when things go wrong leads me to have no faith that the problem will be resolved in any satisfactory manner. When combined with a complete lack of consumer/business interaction options in general (w.r.t. Google), it leads to some very dissatisfied consumers.
You think engineers are flagging voice requests that happen more than once? How would engineers have access to that data in the first place if it's supposed to be anonymous and private?
>The fact that I do not get any feedback whatsoever when things go wrong leads me to have no faith that the problem will be resolved in any satisfactory manner. When combined with a complete lack of consumer/business interaction options in general (w.r.t. Google), it leads to some very dissatisfied consumers.
I guess you can ask for your money back? Let's stay grounded in reality here: Google is paying money on your behalf to improve your experience. They could, and from a financial perspective maybe should, try getting you to annotate your own data. But all that annotated data gets pooled and used to improve the models. I don't think their goal here is to give you faith that they are resolving your issue. The algorithm is going to get some things wrong. It's a matter of improving the overall accuracy and precision of the algorithm.
>You think engineers are flagging voice requests that happen more than once? How would engineers have access to that data in the first place if it's supposed to be anonymous and private?
100% they have to be. It's too costly and time intensive to slog through all the data. Probably they have a flag that goes off when a voice request is not intelligibly interpreted so many times in a row, or when a user has to go do something manually after repeatedly making a request. This flags the interaction for manual review. Then it's run through some algorithm to strip it of identifying data. Then it gets put in front of a warm body for review.
Trust me that the company wants to avoid putting things in front of warm bodies as much as possible. It's expensive.
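Just to make the guess concrete, here's a minimal sketch of the kind of heuristic I'm imagining. Every name and threshold in it is made up for illustration (VoiceInteraction, transcript_confidence, the 0.4 confidence floor, the 3-strike limit); it doesn't reflect anything Google actually does.

```python
from dataclasses import dataclass, field


@dataclass
class VoiceInteraction:
    device_id: str
    transcript_confidence: float   # 0.0..1.0 score from the recognizer
    user_corrected_manually: bool  # user gave up and did the task by hand


def anonymize(interaction: VoiceInteraction) -> dict:
    # Strip identifying data before anything reaches a human reviewer.
    return {
        "transcript_confidence": interaction.transcript_confidence,
        "user_corrected_manually": interaction.user_corrected_manually,
        # audio/transcript would be scrubbed of names, contacts, etc. here
    }


@dataclass
class SessionTracker:
    low_confidence_streak: int = 0
    flagged_for_review: list = field(default_factory=list)

    def observe(self, interaction: VoiceInteraction,
                confidence_floor: float = 0.4,
                streak_limit: int = 3) -> None:
        # Count consecutive requests the recognizer couldn't make sense of.
        if interaction.transcript_confidence < confidence_floor:
            self.low_confidence_streak += 1
        else:
            self.low_confidence_streak = 0

        # Flag for manual review if the user hit a wall repeatedly, or
        # bailed out and completed the task by hand after a failed request.
        if (self.low_confidence_streak >= streak_limit
                or interaction.user_corrected_manually):
            self.flagged_for_review.append(anonymize(interaction))
            self.low_confidence_streak = 0
```

The point of a filter like this is exactly the cost argument: only the small fraction of interactions that trip the heuristic gets queued, and identifying fields get scrubbed before a reviewer ever sees the sample.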
> Let's stay grounded in reality here: Google is paying money on your behalf to improve your experience.
Let's stay grounded in reality here: Google is not doing this for me or on my behalf. Google just invests a bit to take a bunch of very valuable user data and then monetizes it for far more than it took to obtain. This relies on the fact that most users are unaware of how valuable their data is.
As a business model there's nothing illegal about this. But in any other sense it's no different from tricking an uneducated individual into selling their kidney for $2,000 just so you can resell it for $20,000 and pretend you're doing them a favor by giving them money.
And as long as the vast majority of customers are in the dark regarding the value of their data and what they're actually trading when using such a system, then yes, this is Google (and not only Google) abusing ignorance to line their pockets.
Can you imagine how valuable voice data could be if it could be mined to show what products, politics, opinions are being discussed in the real world?
Can you imagine how valuable your voice data would be to a marketing campaign which you didn't even know you had participated in?
Can you imagine how much valuable information is contained in users' vocal reactions during A/B testing? Every little "hmmm" or "how do I go backwards?" or "what does this button do?" that people don't even realize they're saying aloud.
Can you imagine how much that violates someone's privacy?
> other than by selling devices that use voice recognition
That's one way, but what's the actual question? My point is they are not doing this as a favor to the user (as GP seemed to suggest); they are doing it for a profit by getting the user's data far too cheaply. And for this they rely on the user staying unaware of the value of their data, how it will be used, and how much is collected in the first place.
The voice recognition tech itself (and any ancillary parts) can be sold or licensed to others so they can build similar systems. The actual data obtained through voice recognition can be used exactly like any other data Google collects. They literally have access to what you say around their microphone. It's hard not to see ways they could make money from this.
Twenty years ago, monetizing free search was just as much of a mystery to many, including seasoned investors.