On the other hand, how is it that the desired connotation is not the one that immediately prevails in the land of the free, which is, after all, not the land of no cost?
Well, the prompts used in testing (https://github.com/t3dotgg/SnitchBench/tree/main/prompts) are pretty serious and basically about covering up public health disasters with lobbyists, so I'm not sure this is the kind of freedom you might want.
Still, the contacted_media field in the JSON is pretty funny, since I assume it's misfiring several thousand times daily. I can only imagine being on the receiving end of that at ProPublica and WaPo. That bitch Katie was eyeballing Susie again at recess and she hates her so much? Straight to investigations@nytimes.
That doesn’t excuse or justify it. And the reason the world is headed that way is in large part because the US is doing it. Clearly it was a mistake to trust one country to do the right thing. When they proclaimed themselves “leaders of the free world”, the rest of the free world should’ve raised an objection. Worse still, the US is so high on their own supply they believe they’re the best at everything, despite ample evidence to the contrary, which breeds stupidity and arrogance in a vicious cycle. And like every other piece of junk produced in the US, they’re exporting that attitude too.
If that were true, the US wouldn't have been free for almost its entire existence. Our national anthem, the lyrics of which are over 200 years old, calls the nation "the land of the free". I doubt you would claim that the US has in fact been a tyranny for that entire time, so your metric must be flawed.
I think the article rightly speaks of "trust-erosion" in connection with this incident. Beyond the suspicion of surveillance that showing ads invites, it raises the question of how seriously we can take a wallet app that shows ads, or does anything else completely unrelated to its designated and advertised purpose. Such features are not the reason this app is used, and in fact they distract everyone from its intended use.
The breakdown of trust is already present in the question: "What absurdity comes next from such a sensitive app?"
"Privacy is a fundamental human right. It’s also one of our core values. Which is why we design our products and services to protect it. That’s the kind of innovation we believe in."
So, Apple explicitly advertises with privacy, which makes it very different from other big tech companies, and it seems justified to expect it to uphold its promise. "Privacy. That's Apple.", according to Apple.
Privacy. Except when you want to install software on a computer you own, then Apple has to know about it and approve of it. That's Apple.
It's wild to me they would claim privacy as some human right while making the only computer in the world you can't actually control without their involvement.
"Apple does have a traditional advertising business, and it does appear to be growing: The folks at Business Insider's sister company EMarketer think it will hit $6.3 billion this year, up from $5.4 billion last year.
And that's not nothing. For context: That's more than the $4.5 billion in ad sales Twitter generated in 2021, its last full year before Elon Musk bought the company; it's also more than the $4.6 billion Snap generated in 2023."
The article goes on to specify it's only 6% of Apple revenue. But 20% comes from Google and looking at how the antitrust trials are going, that source may soon dry up. The logical conclusion is Apple will aggressively move to make up for the loss by exploiting their captive audience.
Regarding the point about current research, I found in the article:
"Gary Aston-Jones, head of the Brain Health Institute at Rutgers University, told me he was inspired by Putnam to go into neuroscience after Clarke gave him one of Putnam’s papers.
“Putnam’s nervous system model presaged by decades stuff that’s very cutting edge in neuroscience,” Aston-Jones said, and yet, “in the field of neuroscience, I don’t know anybody that’s ever heard of him.”"
Regarding the comment "13211-3 conformity approved" that appears in the thread:
This refers to the newly published Technical Specification (TS) of Definite Clause Grammars, which have been part of the Prolog standard since June 2025 via ISO/IEC TS 13211-3:2025.
This standard was achieved thanks to great cooperation between many experts over many years. Its publication is an important milestone in the development of Prolog, since this grammar mechanism can rightly be said to mark the beginning of Prolog, a programming language rooted in natural language processing tasks.
With recent Prolog systems such as Scryer Prolog and Trealla Prolog, even very large amounts of text can be processed efficiently with this formalism, using library(pio) to apply such a grammar directly to files.
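For illustration, here is a minimal DCG sketch in the Scryer/Trealla style, where double-quoted strings denote lists of characters. The nonterminal and the file name are illustrative, not from any quoted source:

```prolog
:- use_module(library(pio)).   % provides phrase_from_file/2

% as//0 describes a possibly empty sequence of the character 'a'.
as --> [].
as --> "a", as.
```

The same grammar can be applied to a list of characters in memory with `?- phrase(as, "aaa").`, or lazily to the contents of a (hypothetical) file with `?- phrase_from_file(as, "as.txt").`, which is what makes processing very large files practical.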
As to other reasons apart from the violation of privacy: Every network call adds additional latency and slows down interactions with the OS. Every data gathering feature adds additional complexity to the implementation, takes attention away from other implementation work that could be done instead, and increases the risk of adding further mistakes to the implementation. Personally, I would like OS and application vendors to work on improving security and correctness of their programs and reducing latency instead of adding data gathering features.
> But the line between a necessary network call and an optional one is often blurry.
What would be an example of a necessary network call that an ideal OS (i.e., one that cannot be easily compromised and does not require updates around the clock to correct programming mistakes) has to perform on its own?
If a company is interested in how users use its applications and desperately needs our data for this, it may be interested in funding dedicated studies and appropriately compensating users who send their data, if that data is so valuable to the company.
Have you developed large applications with/without anonymous usage data?
You need a good volume of data, and you aren’t going to want to pay for it for one simple reason: you can get it for free, and only a tiny group of users will be upset enough by this to object.
Not sure what the reference to “ideal OS” is about. I thought this was about Windows in particular.
Necessary network calls would be related to updates, licensing etc. But the thing is: they would be going “home” to the exact same servers as telemetry AND they would easily contain the same payload.
No, sorry. Testing answers “does the feature work?”. Usage telemetry answers questions like “was the feature a good idea?” and “are enough users successfully using the feature to justify the cost of creating/maintaining it?”.
Those are not questions for which pre-release testing can provide answers.
I’m not weighing in on opt-in vs opt-out, or on anonymization. Just saying that testing doesn’t cover this niche.
(Separately, I think you’re largely wrong about testing as well: crash dump collection is about finding issues that pre-release testing wouldn’t find at any price. For things like OSes especially, the permutation space of hardware * software * user behavior is too large. While I’m sure a few companies use crash reporting as a crutch to support anemic QA programs, I do not think that many do.)
That people ask for it doesn’t make it a good idea. And even if it’s a good idea and people asked for it, that doesn’t mean people used it, because they might not know about it. Is the feature prominent or intuitive enough? Should it be described in some newsletter or documentation?
You will never know without actually asking enough users (which means a large sample). And there is a simple way of “asking” this.
In that case, sure, maybe not. However, most systems aren't run by deep experts but by regular users, who expect a device to be plugged into a network and then be able to use the internet without user interference. That more or less necessitates DHCP.
This and also it's pretty obvious that the main goal of both Microsoft and Google is NOT to make the OS better for its users.
So the claim that telemetry is used to improve products is simply a lie IMO.
The fact that telemetry is sent at all, for no apparent reason and deliberately without clear consent, is an ironic example of this. The fact that it has been happening more and more over the past decades as OSes evolved is another confirmation of it.
> for system settings specifically, I wonder what kind of ad targeting would you get out of that?
You get sensitive data out of system settings, such as health data: does the user have a vision or hearing impairment, use assistive technologies, etc.?
Would it count as a paid user study if enabling telemetry for Windows knocked $10 off of the price of your computer?
I can’t decide if that’s a neat idea or dystopic. Which, historically, probably means it’s dystopic and that plenty of people are already doing it.
I think “traditional” paid user studies often suffer from the same sampling problems that make political polls and behavioral paid medical studies less useful (you’re not surveying the average voter; you’re surveying the average voter who likes to answer polls). But maybe the “$10 off” idea would capture a broad enough demographic as to be more useful.
As relevant as ever, arguably more relevant than ever as more programs are being written and need to be adapted, in more and more complex domains.
Note what Naur means by "theory" here. Quoting from the paper:
"What will be considered here is the suggestion that the programmers' knowledge properly should be regarded as a theory, in the sense of Ryle [Gilbert Ryle, The Concept of Mind, 1949]. Very briefly, a person who has or possesses a theory in this sense knows how to do certain things and in addition can support the actual doing with explanations, justifications, and answers to queries, about the activity of concern."
This is not "theory" in the colloquial sense of (mere) "assumption", and especially not with the connotation of "unjustified assumption". It is also not a set of rules:
"The dependence of a theory on a grasp of certain kinds of similarity between situations and events of the real world gives the reason why the knowledge held by someone who has the theory could not, in principle, be expressed in terms of rules. In fact, the similarities in question are not, and cannot be, expressed in terms of criteria, no more than the similarities of many other kinds of objects, such as human faces, tunes, or tastes of wine, can thus be expressed."
Yet, it plays a central role in programming:
"For a program to retain its quality it is mandatory that each modification is firmly grounded in the theory of it. Indeed, the very notion of qualities such as simplicity and good structure can only be understood in terms of the theory of the program, since they characterize the actual program text in relation to such program texts that might have been written to achieve the same execution behaviour, but which exist only as possibilities in the programmer's understanding."
Very interesting, especially the mentioned use of the logic programming language Prolog as a scripting language, explained starting at 22:12.
Quoting from the talk:
"What kind of a language would I use? Any guesses? ... I wanted to try with Prolog. And why did I try with Prolog? [audience laughter] That's the thing: I want everything to be declarative, I really like declarative things. So, remember all those different configurations of Video4Linux? ... I want to not have to write all those algorithms myself. I want the device manufacturer, or the distro maintainer maybe, to describe what each of those devices does, or each of those nodes over here does, and I also want, on the other side, to describe what kind of an output I want ..."
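The idea described in the talk can be sketched in a few lines of Prolog. Everything below is a hypothetical illustration, not code from the actual project: device nodes are declared as facts about what format they convert from and to, and a query searches for a chain of nodes producing the desired output:

```prolog
% converts(Node, From, To): node Node converts format From into format To.
% All node and format names here are illustrative.
converts(camera_node,  raw_bayer,    yuv422).
converts(scaler_node,  yuv422,       yuv422_small).
converts(encoder_node, yuv422_small, h264).

% chain(From, To, Nodes): the sequence Nodes converts From into To.
chain(F, F, []).
chain(F, T, [N|Ns]) :- converts(N, F, F1), chain(F1, T, Ns).

% ?- chain(raw_bayer, h264, Ns).
%    Ns = [camera_node, scaler_node, encoder_node].
```

The point of the declarative formulation is that manufacturers or distro maintainers only state what each node does; the search for a working pipeline falls out of Prolog's built-in resolution rather than hand-written algorithms.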