> It has always bothered me that by "spectrum" they mean not the sort of continuous thing that spectra actually are, but instead some disjoint set of "colors" any one of which might describe a person.
Wasn't Newton making the point that we normally perceive and treat colors as qualitatively different, but that they're in fact caused by a single underlying mechanism that can take on any of a continuous range of quantities?
Thus using the term "spectrum disorder" would be making precisely the same point, to describe a set of apparently qualitatively different disorders that are in fact caused by some underlying mechanism with a range of quantities? (To be clear, I don't know if any so-called spectrum disorders actually meet this criterion, and it's probably more complicated than that, but it seems to be the reason the term was chosen.)
> It seems that Amazon are playing this much like Microsoft - seeing themselves as more of a cloud provider, happy to serve anyone's models, and perhaps only putting a moderate effort into building their own models (which they'll be happy to serve to those who want that capability/price point).
Or, as a slight variation of that, they think the underlying technology will always be quickly commoditized and that no one will ever be able to maintain much of a moat.
I think anyone sane will have come to the same conclusion a long time ago.
It's a black box with input/output in text; that's not a very good moat.
Especially given that DeepSeek-type events can happen, because you can just train off of your competitors' outputs.
I've tried out Gemini 2.5/3 and it generally seems to suck for some reason (problems with lying/hallucinating and with following instructions), but ever since Bard first came out I've thought Google would have the best chance of winning, since they have their own TPUs, YouTube (insane video/visual/audio data), Search (indexed pages), and their Cloud/DCs, and they can stick it into Android/Search/Workspace.
Meanwhile OpenAI has no existing business; they only have API/subscriptions as revenue, and they're relying on Nvidia/AMD hardware.
I really wonder how things will look once this gold rush stabilizes
To me it just looks like unacceptable carelessness, not an indictment of the alleged "lack of explicitness" versus something like gRPC. Explicit schemas aren't going to help you if you're so careless that, right at the last moment, you allow untrusted user input to reference anything whatsoever in the server's name space.
But once that particular design decision is made, it is only a matter of time before that happens. The one enables the other.
The fact that React embodies an RPC scheme in disguise is quite obvious if you look at the kind of functionality that is implemented; some of it simply cannot be done any other way. But then you should own that decision and add all of the safeguards that such a mechanism requires; you can't bolt those on after the fact.
Isn't that still "acqui-hiring" according to common usage of the term?
Sometimes people use the term to mean that the buyer only wants some/all of the employees and will abandon or shut down the acquired company's product, which presumably isn't the case here.
But more often I see "acqui-hire" used to refer to any acquisition where the expertise of the acquired company is the main reason for the acquisition (rather than, say, an existing revenue stream), and the buyer intends to keep the existing team dynamics.
Acquihiring usually means that the product the team is working on will be shut down and the team members will be put to work on other parts of the acquiring company.
That is part of the definition given in the first paragraph of the Wikipedia article, but I think it’s a blurry line when the acquired company is essentially synonymous with a single open source project and the buyer wants the team of experts to continue developing that open source project.
The team is continuing to develop the open source project that was synonymous with the company, but they're explicitly no longer going to try to monetize it. I think that squarely counts as an acquihire according to common usage.
> you still have a higher chance of hitting something slightly off the middle than the perfect 100/100
That's because "something slightly off the middle" is a large group of possible results. Of course you can assemble a group of possible results that has a higher likelihood than a single result (even the most likely single result!). But you could make the same argument for any single result, including one of the results in your "slightly off the middle" group. Did you get 97 heads? Well you'd have a higher likelihood of getting between 98 and 103 heads. In fact, for any result you get, it would have been more likely to get some other result! :D
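For anyone who wants the actual numbers behind that, here's a minimal Python sketch of the binomial arithmetic (the 97 and the 98 to 103 window are just the figures from the example above; everything else is standard):

  from math import comb

  def p_heads(k, n=200):
      # Probability of exactly k heads in n flips of a fair coin.
      return comb(n, k) / 2 ** n

  print(p_heads(100))                             # ~0.056: the single most likely outcome
  print(p_heads(97))                              # ~0.052: exactly 97 heads
  print(sum(p_heads(k) for k in range(98, 104)))  # ~0.33: the whole 98..103 group taken together

Any modest group of outcomes beats any single outcome, including the most likely one.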
> But you could make the same argument for any single result
Isn't that the point? The odds of getting the "most likely result" are lower than the odds of getting not the most likely result. Therefore, getting exactly 100/100 heads and tails would be unlikely!
But as I said, getting any one specific result is less likely than getting some other possible result. And the disparity in likelihoods is even greater for any specific result other than the 50% split.
I think the disagreement is about what that unlikeliness implies. "Aha! You got any result? Clearly you're lying!"... I'm not sure how far that gets you.
There's probably a dorm-room-quality insight there about the supreme unlikeliness of being, though: out of all the possible universes, this one, etc...
Try thinking of it this way: you're in high school and your stats teacher gives you a homework assignment to flip a coin 200 times. You respect her and don't want to disappoint her, but at the same time the assignment is pointlessly tedious and you want to write down a fake result that will convince her you actually did it.
A slightly imperfect split is more likely to convince your teacher that you did the assignment. Intuitively this should be obvious.
> "Remember, if you flip a coin 200 times and it comes heads up exactly 100 times, the chances are the coin is actually unfair. You should expect to see something like 93 or 107 instead".
Inverting the statement makes it read something like this:
You are more likely to not get 100/100 than you are to get exactly 100/100
...which is exactly what I was saying. Nobody is arguing that there is a single value that might be more likely than 100/100. Rather, the argument is that a 100/100 result is suspiciously fair.
You can use combinatorics to calculate the likelihood. If your PRNG is in a cycle of length N in its state space (assuming N>200), and half the state space corresponds to heads (vs tails), then the likelihood would be (N/2 choose 100)^2/(N choose 200) versus your baseline likelihood (for a truly random coin) of (200 choose 100)/2^200.
Graphing here https://www.wolframalpha.com/input?i=graph+%28%28N%2F2+choos... and it does look like it's only a slight improvement in likelihood, so I did overstate the claim. A more interesting case would be to look at some self-correcting physical process.
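If anyone wants to reproduce that comparison without Wolfram Alpha, here's a rough Python sketch of the same two formulas (the "pool of N states, half of them heads" model is the assumption from the comment above, and the N values are arbitrary):

  from math import comb

  def p_exact_split_prng(N, flips=200):
      # Hypergeometric likelihood of a perfect 100/100 split when the 200
      # outcomes are drawn without replacement from a pool of N states,
      # exactly half of which map to heads.
      half, k = N // 2, flips // 2
      return comb(half, k) ** 2 / comb(N, flips)

  def p_exact_split_fair(flips=200):
      # Binomial baseline: independent flips of a truly fair coin.
      return comb(flips, flips // 2) / 2 ** flips

  print(p_exact_split_fair())          # ~0.056
  for N in (400, 1000, 10000):
      print(N, p_exact_split_prng(N))  # a bit higher, approaching ~0.056 as N grows

(For very short cycles the bump is more noticeable; for large N it converges to the fair-coin number.)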
If a student was tasked with determining some physical constant with an experiment and they got it exactly right to the 20th decimal place, I'd check their data twice or thrice. Just saying. You can continue believing it was the most likely value ;)
Okay, but it doesn't make sense to arbitrarily group together some results and compare the probability of getting any 1 result in that group to getting 1 particular result outside of that group.
You could just as easily say "you should be suspicious if you flip a coin 200 times and get exactly 93 heads, because it's far more likely to get between 99 and 187 heads."
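Plugging those numbers into the same binomial formula, just to make the comparison explicit (the 93 and the 99 to 187 range are taken from the sentence above):

  from math import comb

  def p_heads(k, n=200):
      return comb(n, k) / 2 ** n

  print(p_heads(93))                              # ~0.035: exactly 93 heads
  print(sum(p_heads(k) for k in range(99, 188)))  # ~0.58: anywhere from 99 to 187 heads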
It's suspicious when it lands on something that people might be biased towards.
For example, you take the top five cards, and you get a royal flush of diamonds in ascending order. In theory, this sequence is no more or less probable than any other sequence being taken from a randomly shuffled deck. But given that this sequence has special significance to people, there's a very good reason to think that this indicates that the deck is not randomly shuffled.
In probability terms, you can't just look at the probability of getting this result from a fair coin (or deck, or whatever). You have to look at that probability, the prior probability that the coin (deck, etc.) is biased, and the probability that a biased coin would produce the outcome you got.
If you flip a coin that feels and appears perfectly ordinary and you get exactly 100 heads and 100 tails, you should still be pretty confident that it's unbiased. If you ask somebody else to flip a coin 200 times, and you can't actually see them, and you know they're lazy, and they come back and report exactly 100/100, that's a good indicator they didn't do the flips.
> It's suspicious when it lands on something that people might be biased towards.
Eh, this only makes sense if you're incorporating information about who set up the experiment in your statistical model. If you somehow knew that there's a 50% probability that you were given a fair coin and a 50% probability that you were given an unfair coin that lands on the opposite side of its previous flip 90% of the time, then yes, you could incorporate this sort of knowledge into your analysis of your single trial of 200 flips.
You can certainly do the frequentist analysis without any regard to the distribution of coins from which your coin was sampled. I’m not well studied on this stuff, but I believe the typical frequentist calculation would give the same results as the typical Bayesian analysis with a uniform prior distribution on “probability of each flip being heads.”
I guess it depends on exactly what kind of information you want. Frequentist analysis will give you the probability of getting an exact 100/100 split in a world where the coin was fair. That probability is about 0.056. Or you can go for p values and say that it's well within the 95% confidence interval, or whatever value. But that's not very useful on its own.

What we typically want is some notion of the probability that the coin is fair. This is often confused with the probability of getting the result given a fair coin (e.g. 5% probability X would happen by chance, therefore 95% probability that the null hypothesis is false) but it's very different.

In this context, the question people are interested in is "how likely is it that Fleming/Mendel p-hacked their results, given the suspicious perfection of those results?" Analogous to "how likely is it that the coin is fair, given the exact even 100/100 split we got?" And for that, you need some notion of what unfairness might look like and what the prior probability was of getting an unfair coin.
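To make that last point concrete, here's a toy Bayes'-rule sketch in Python. Only the ~0.056 comes from the binomial calculation; the 30% "lazy reporter" likelihood and the 90% prior are invented purely for illustration. They're exactly the kind of extra modelling assumptions the frequentist number alone doesn't give you:

  from math import comb

  # P(exactly 100 heads in 200 flips | fair coin, honestly reported)
  p_data_given_fair = comb(200, 100) / 2 ** 200    # ~0.056

  # Hypothetical alternative: a lazy reporter who skips the flips and writes
  # down a perfect 100/100 split, say, 30% of the time (made-up number).
  p_data_given_lazy = 0.30
  prior_honest = 0.90                              # also made up

  posterior_honest = (prior_honest * p_data_given_fair) / (
      prior_honest * p_data_given_fair + (1 - prior_honest) * p_data_given_lazy
  )
  print(posterior_honest)   # ~0.63: suggestive, but far from proof of fakery

Change either made-up number and the posterior moves a lot, which is the point: the 0.056 by itself doesn't answer the question people actually care about.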
I’d be curious what the point is even if it were written by humans with some evidence of non-zero effort, but posting something with no point and no effort is really puzzling.
This would have the exact problem mentioned immediately after the paragraph you quoted. Every computer, phone, etc. would need specific setup. The author is clear about their goal:
> I wanted something cleaner: a solution that works for every device on my network, automatically, without any client-side configuration.