There is nothing wrong with putting a scientific basis under your anecdotal experience, so that you gain the ability to reproduce your successes. But it must be done with the best available scientific methods and by sharp minds; statistics alone won't do it magically.
My (probably wrong) opinion is that they hired data scientists who know nothing about social-science research, and these statisticians are trying to substitute data gathering and statistics for actual research. If something doesn't work, then instead of refining their research techniques they gather more data. They have hit the ceiling of this paradigm, but they don't know it.
There are other options beyond data. For example, you can find representatives of different categories of users and study how they use your software (or the software you compete with). You don't need millions of representatives; a dozen is enough for most practical purposes, provided you pick them carefully so they form the most diverse set possible. You can even have no real representatives at all and imagine them instead -- personas are a real technique of UX professionals; I heard about it in a conference talk by people who use it professionally. If that doesn't seem rigorous enough, one could dig into Judea Pearl and build a formal quantitative model out of it. One can even measure the differences between that model and reality, and it wouldn't necessarily require annoying telemetry.
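To make the "measure the model against reality" idea concrete, here is a minimal sketch. All personas, shares, and probabilities below are invented for illustration: you encode your assumptions about each persona as numbers, marginalize them into a predicted outcome, and compare that prediction with a small hand-recruited sample instead of mass telemetry.

```python
# Toy sketch (all numbers are invented assumptions, not real data):
# encode persona assumptions as a simple quantitative model, then
# check the model's prediction against a dozen observed users.

PERSONAS = {
    # persona: (assumed share of user base, assumed P(task completed))
    "novice": (0.50, 0.40),
    "casual": (0.35, 0.70),
    "power":  (0.15, 0.95),
}

def predicted_completion_rate(personas):
    """Marginalize over personas: sum of share * P(success | persona)."""
    return sum(share * p for share, p in personas.values())

def observed_completion_rate(outcomes):
    """Fraction of successes in a small hand-recruited sample."""
    return sum(outcomes) / len(outcomes)

# A dozen carefully chosen participants (1 = completed the task).
sample = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0]

model = predicted_completion_rate(PERSONAS)
reality = observed_completion_rate(sample)
print(f"model={model:.2f} observed={reality:.2f} gap={abs(model - reality):.2f}")
```

A large gap tells you your persona assumptions are wrong and which direction to revise them; a dozen observations is noisy, but it is feedback on the model rather than raw behavioral surveillance.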
I wouldn't be surprised if there are techniques I've never heard of that can compete with statistical data processing: I'm not a UX specialist, I was just curious about the field at some point, because it lies on the boundary of two interests of mine -- psychology research and software development. But Mozilla seems unaware of them all. They gather data instead.