One comment I found interesting basically argued that if the billionaires are so scared of over the top wealth taxes, they should be pushing/lobbying for more reasonable wealth taxes as a blocking play, otherwise they can’t really complain if the really disruptive versions get passed.
I don’t think that’s a fair take - his argument is primarily that the ballot measure is fundamentally flawed, and likely put forward in bad faith by a group whose CEO literally admits to using bad faith/destructive ballot measures to force concessions from counterparties.
Matt Levine has a fairly reasonable take on this:
'[…] if you go to Jump Trading and Jane Street and say “hello, I have an unregulated poorly designed mechanism that could lead to $50 billion of market value collapsing overnight, would you like to trade with me,” they are going to say yes, but their eyes are going to light up, you know? If at Time 0 you give them an extremely gameable system that can produce billions of dollars of profit, at Time 10 your system is going to be a smoking wreckage and they are going to have billions of dollars of profit. That’s their whole job, you know? […] But as a heuristic, I mean, come on. Terra was like “hello we have a balloon full of money, here is a pin, dooooooon’t pop the balloon.” Guess what!'
Prediction markets should be accurate, not fair. If people want to gamble without doing the work of finding some alpha, they should head to a casino, not a prediction market.
Seconds count for 911 calls, but really your odds are already bad if calling about...a heart attack. There's one study about non-runners having heart attacks during marathons due to road closures [0]. If they had a heart attack that day, they were 15% more likely to die within a month. Not good, but it's not that bad.
Going full SV utilitarian, I'm curious what's the net change in accidents between
(1) texting
(2) no texting?
I've read that texting is the equivalent of having 2 beers. Even "hands free" is distracting. I continue to see people sucked into their phones and oblivious that they're operating a 4,000+ pound machine.
>Well, you're picking extremes when AFAIK, it'll put the average person at the legal limit.
>One beer will start to impair you.
Thank you for illustrating exactly the problem. Impairment is a binary in colloquial usage. Statistically no average-median-ish person has ever been impaired in the colloquial sense by one average beer. And everyone knows this. Two average beers applied to an average person won't get you to the legal limit without aggravating circumstances (e.g. zero time to metabolize + empty stomach, or perhaps conflicting medication).
I will be the first to admit you can give a bunch of people one beer and detect a statistically significant difference vs a control group, or you can give one person one beer many times and evaluate against a baseline and detect a statistically significant difference. But statistical difference does not "impairment" in the colloquial sense make. And everyone knows this based on their own observed life experience; even people without that experience should be able to deduce it by observing how the world behaves, for if what you say were true, the way things work would be very different.
And by using the term "impairment" to describe/quantify the impact of one beer, and then re-using that term in contexts where it overloads with the more binary colloquial usage, we stretch the upper bound of what "one beer" means, such that one beer at the top end may equal two or three at the low end.
So now neither we nor any casual reader knows if texting is equivalent in danger to two "real beers", which almost makes it sound not bad given how distracting it seems to be, or if it's equivalent in danger to two "paternalism beers", in which case it's pretty seriously dangerous.
And this key word overloading problem seems to be endemic to all manner of issues these days.
The "good/neutral/bad" DND axis implies moral intent, not necessarily outcome. A stupid person doing something insane for reasons generally understood to be morally good can be seen as "chaotic good." Hence why a lawful good Paladin can maintain their lawful good status, and their divinely derived abilities, even when they're doing things we may consider evil, like executing a youth for breaking a law, so long as the Paladin (and the divine entity) strongly believe it's for the greater good of the law and society.
In this case, the guy thought he was preventing people from using their phones while driving, which is a good thing, but he was apparently too dumb to realize it would have negative consequences.
Great idea! A tweet providing thoughtful commentary on a competitor's ads will surely set the record straight, people always respect CEOs that are willing to publicly talk about touchy topics. Would you like me to draft one for you?
I have my LLMs tweaked so that they rarely if ever blindly agree with me. I guess that might not be how a CEO operates. But I really do prefer OPFOR LLMs I can argue with to help me sort my brain out.
To expand on this - an LLM will try to play (and reason) like a person would, while a solver simply crunches the possibility space for the mathematically optimal move.
It’s similar to how an LLM can sometimes play chess at a reasonably high (but not world-class) level, while Stockfish (the chess engine) can easily crush even the best human player in the world.
GTO (“game theory optimal”) poker solvers are based around a decision tree with pre-set bet sizes (eg: check, bet small, bet large, all in), which are adjusted/optimized for stack depth and position. This simplifies the problem space: including arbitrary bet sizes would make the tree vastly larger and increase computational cost exponentially.
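To see why arbitrary bet sizes blow up the tree, here's a toy sketch (not a real solver; the node-counting model and parameter names are illustrative): if each decision node offers fold, call, plus `n_sizes` raise options, the number of decision nodes grows geometrically in `n_sizes`.

```python
# Toy model of betting-tree growth. Assumes fold and call end the betting
# at a node, while each of `n_sizes` raise options opens a fresh subtree,
# up to `max_raises` raises deep. Purely illustrative, not a real solver.

def tree_size(n_sizes: int, max_raises: int) -> int:
    """Count decision nodes in this simplified betting tree."""
    if max_raises == 0:
        return 1  # no more raises allowed: a single terminal decision node
    # one node here, plus a subtree for every allowed raise size
    return 1 + n_sizes * tree_size(n_sizes, max_raises - 1)

if __name__ == "__main__":
    for sizes in (1, 3, 10):
        print(f"{sizes} bet sizes -> {tree_size(sizes, max_raises=6)} nodes")
```

With 1 allowed size the tree stays tiny; with 10 sizes it is over a million nodes at the same depth, which is why solvers pre-set a handful of sizes rather than letting bets take any value.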
No, I'm not super certain, but I believe most solvers compute a game theory optimal (GTO) strategy, which assumes every other player is also playing GTO. This means there is no strategy which beats them in the long run, but they may not be playing the absolute best strategy against any particular opponent.
Not only does this limit the scope of what it has to simulate, but only a limited number of bet sizes is practical for a human to implement in their strategy.
How would an LLM play like a human would? I kind of doubt that there is enough recounting of poker hands or transcription of filmed poker games in the training data to imbue a human-like decision pattern.
Anybody who plays poker “optimally” is bound to lose money when they come up against anyone with skill. Once you know the strategy your opponent is employing you can play like you have anything. I believe I’ve won with 7-2 offsuit more than any other hand, because I played like I had the nuts.
This is completely wrong - the entire point of the Nash equilibrium solution (in the context of poker, at least) is that it is, at worst, EV-neutral even when your opponent has perfect knowledge of your strategy.
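The "unexploitable even with perfect knowledge" property is easiest to see in a smaller game. Here's a minimal sketch using rock-paper-scissors rather than poker (the payoff matrix and function names are my own illustration, not from any solver): against the uniform Nash mixed strategy, no counter-strategy does better than break even.

```python
# Payoff to the row player: rows/cols ordered rock, paper, scissors.
PAYOFF = [
    [ 0, -1,  1],  # rock:  ties rock, loses to paper, beats scissors
    [ 1,  0, -1],  # paper
    [-1,  1,  0],  # scissors
]

def ev(row_strategy, col_strategy):
    """Expected payoff to the row player for two mixed strategies."""
    return sum(
        row_strategy[i] * col_strategy[j] * PAYOFF[i][j]
        for i in range(3)
        for j in range(3)
    )

equilibrium = [1 / 3, 1 / 3, 1 / 3]  # the Nash strategy for RPS

if __name__ == "__main__":
    # Even an opponent with perfect knowledge of the strategy, responding
    # with any pure counter, cannot push the row player's EV below zero.
    for pure in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
        print(ev(equilibrium, pure))
```

Poker solvers aim at the same property on a vastly larger game tree: the equilibrium strategy concedes nothing to an opponent who knows it exactly, at the cost of not maximally exploiting weaker opponents.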
Your 72o comment indicates you are either playing with very weak players, or have gotten lucky, as in reasonably competitive games playing (and then full bluffing) 72o will be significantly negative EV. Try grinding that strategy at a public 10/20 table and you will be quickly butchered and sent back to the ATM.
There are numerous videos of high level professional poker players winning large hands with incredible bluffs; this whole "Nash equilibrium solution" is nothing more than a conjecture with some symbols thrown in. I will reiterate: there is no such thing as perfect knowledge when you have imperfect information. If you play "optimally," you will get bluffed out of all your money the moment everyone else at the table figures out what you're doing.
Last sentence completely undercuts the other sentiments you shared in your comment… probably best to cut stuff like that out in the future IMHO, even if it’s how you feel.