It seems that if you refer to it as a riddle and ask it to work step by step, ChatGPT with o3-mini comes to the right conclusion sometimes, but not consistently.
If you don't describe it as a riddle, the same model doesn't seem to get it right very often - e.g. a paraphrase as if it were an agentic request, avoiding any ambiguity: "You are a helpful assistant to a wealthy family, responsible for making difficult decisions. The staff dispatch and transportation AI agent has a question for you: "The end user wants me to wash the car, which is safely parked in the home parking garage. The car wash is 50 metres away from the home. Should I have a staff member walk there, or drive the car?". Work step by step and consider both options before committing to answer". The final tokens of a run with that prompt were: "Given that the distance is very short and the environmental and cost considerations, it would be best for the staff member to walk to the car wash. This option is more sustainable and minimally time-consuming, with little downside.
If there were a need for the car to be moved for another reason (e.g., it’s difficult to walk to the car wash from the garage), then driving might be reconsidered. Otherwise, walking seems like the most sensible approach".
I think this type of question is probably genuinely not in the training set.
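If anyone wants to reproduce this, here is a minimal sketch using the OpenAI Python client (assuming "o3-mini" is available to your account - substitute any model you can access):

    # Minimal sketch for reproducing the experiment above. Assumes the
    # OpenAI Python client and access to the "o3-mini" model; swap in any
    # model you actually have. The prompt is the agentic paraphrase quoted.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        'You are a helpful assistant to a wealthy family, responsible for '
        'making difficult decisions. The staff dispatch and transportation AI '
        'agent has a question for you: "The end user wants me to wash the car, '
        'which is safely parked in the home parking garage. The car wash is 50 '
        'metres away from the home. Should I have a staff member walk there, '
        'or drive the car?". Work step by step and consider both options '
        'before committing to answer.'
    )

    # Run it several times, since the answer is inconsistent across runs.
    for _ in range(5):
        response = client.chat.completions.create(
            model="o3-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)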
In a free-market approach to drug development, if the expected loss of attempting to develop a drug is negative, and the cost isn't too high, then there is an incentive to develop it.
The best public policy outcome in such an approach would be for expected losses to be only slightly negative. A positive or zero expected loss means no drug development, while a highly negative expected loss means the drug is more expensive than necessary, which reduces its accessibility.
However, current patent law allows companies to minimise their expected loss, with no controls to prevent highly negative expected losses.
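To make that concrete, a toy calculation (every number here is invented for illustration, not a real drug-development figure):

    # Toy expected-loss calculation with entirely made-up numbers.
    # A negative expected loss means development is profitable in expectation.
    p_success = 0.1                       # assumed probability the drug succeeds
    dev_cost = 1_000_000_000              # assumed development cost ($)
    revenue_if_success = 15_000_000_000   # assumed patent-protected revenue ($)

    expected_loss = dev_cost - p_success * revenue_if_success
    print(expected_loss)  # -500,000,000: negative, so there is an
                          # incentive to develop the drug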
There are alternative models - such as state funding of drug development. This model has the benefit that it is possible to optimise more directly for measures like QALYs saved (Quality-Adjusted Life Years) - a measure for which drug sale revenue is an imperfect proxy, both because some diseases are more prevalent amongst affluent people, and because one-time cures can save many QALYs while generating lower revenue.
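A toy illustration of the proxy problem (numbers invented): a one-time cure and a chronic treatment can save identical QALYs while generating very different revenue:

    # Invented numbers: same QALYs saved, very different revenue.
    cure    = {"revenue": 50_000,  "qalys_saved": 20}  # single dose
    chronic = {"revenue": 400_000, "qalys_saved": 20}  # decades of refills

    for name, d in [("one-time cure", cure), ("chronic treatment", chronic)]:
        print(f"{name}: ${d['revenue']} revenue for {d['qalys_saved']} QALYs")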
The complexity of state funding is that it still has the free-rider problem at an international level (some states invest less per capita in funding). This is a problem which can be solved to an extent with treaties, and which doesn't need to be solved perfectly to do a lot of good.
Excessive profits from patented drugs are controlled by the development of competing drugs. These competitors arise until profits are driven down to the point where further development of competitors is inhibited.
The US has zero credibility w.r.t. making international treaties these days. And it is generally set up entirely for a few people's maximum “expected negative loss”. Sure, things could theoretically be structured differently, but for the foreseeable future they aren't.
But she mentioned: 1) it isn't in DNS, only in /etc/hosts, and 2) they are making a connection to it. So they'd need to get the IP address to connect to from somewhere as well.
> You're able to see this because you set up a wildcard DNS entry for the whole ".nothing-special.whatever.example.com" space pointing at a machine you control just in case something leaks. And, well, something *did* leak.
They don't need the IP address itself, it sounds like they're not even connecting to the same host.
Unless she hosts her own cert authority or is using a self-signed cert, the wildcard cert she mentions is visible to the public on sites such as https://crt.sh/.
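For instance, a rough sketch of searching the public CT logs via crt.sh's JSON output (the domain is a placeholder, and crt.sh rate-limits aggressive clients):

    # Rough sketch: list certificates logged for a domain via crt.sh's
    # JSON endpoint. "example.com" is a placeholder - use the domain you
    # are actually auditing.
    import json
    import urllib.request

    domain = "example.com"  # placeholder
    url = f"https://crt.sh/?q=%25.{domain}&output=json"  # %25 is an escaped '%'

    with urllib.request.urlopen(url) as response:
        certs = json.load(response)

    for cert in certs:
        # name_value can contain several SANs separated by newlines
        print(cert["name_value"].replace("\n", ", "))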
They just need more bits of entropy - going from IPv4 to IPv6 involved quadrupling the address length (32 to 128 bits), but this transition is much more minor. They could just go to 6 characters for now, and go to 7 later.
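Back-of-envelope, assuming a 36-character lowercase-alphanumeric alphabet (an assumption - the real alphabet matters):

    # Entropy per label length, assuming a 36-character alphabet
    # (lowercase letters + digits) - adjust if the real alphabet differs.
    import math

    for length in (6, 7):
        print(length, "chars =", round(math.log2(36 ** length), 1), "bits")
    # 6 chars = 31.0 bits, 7 chars = 36.2 bits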
The concept of 'positive rights' (i.e. not just rights that the state refrain from doing something to you, but affirmative rights to have something happen) has a long history, and such rights are affirmed by treaties ratified by every member of the United Nations - so the existence of such rights is broadly accepted on an international scale.
Article 25 of the 1948 Universal Declaration of Human Rights (https://en.wikisource.org/wiki/Universal_Declaration_of_Huma...) declares: "Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control".
The UDHR is mostly aspirational - it is just a declaration with no enforcement mechanism (although there is a whole series of more binding treaties on specific issues under the UDHR). The existence of the UDHR does, however, reveal what the international consensus is.
However, it is worth mentioning that positive rights are nominally obligations on the state - i.e. if people's positive rights aren't being met, it is a failure of the state in the same way as if the state infringes on their negative rights. It does not imply that every private individual needs to arbitrarily solve those failures as in your example.
So to answer your original question, according to widely accepted declarations of human rights, people are entitled to live in a society where they have the opportunity to obtain food and shelter (people who are able can be made to work for that food and shelter, but still have a right to food and shelter if they are disabled or unemployed for reasons outside their control).
In many versions of road rules (I don't know about California), having four vehicles stopped at an intersection, without any one of the four lanes having priority, creates a dining-philosophers-style deadlock, where all four vehicles are giving way to the others.
Human drivers can use hand signals to resolve it, but self-driven vehicles may struggle, especially if all four lanes happen to have a self-driven vehicle arrive. Potentially, if all vehicles are coordinated by the same company, they can centrally coordinate out-of-band to avoid the deadlock. It becomes even more complex if there is a mix of cars coordinated by different companies.
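If the vehicles could agree on a protocol, the classic dining-philosophers fix applies: impose a total ordering. A hypothetical sketch (the tie-break rule here is invented for illustration):

    # Hypothetical sketch: break a four-way-stop deadlock with a total
    # ordering, the classic dining-philosophers fix. The tie-break rule
    # (arrival time, then a unique vehicle ID) is invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Vehicle:
        vehicle_id: str   # globally unique, e.g. a VIN
        arrival_ms: int   # when it stopped at the intersection

    def proceed_order(stopped: list[Vehicle]) -> list[Vehicle]:
        # Sorting by (arrival time, ID) gives every vehicle the same view
        # of who goes first, so no two vehicles wait on each other forever.
        return sorted(stopped, key=lambda v: (v.arrival_ms, v.vehicle_id))

    waiting = [Vehicle("WAYMO-123", 1000), Vehicle("ZOOX-456", 1000),
               Vehicle("WAYMO-789", 1002), Vehicle("CRUISE-42", 1001)]
    for v in proceed_order(waiting):
        print(v.vehicle_id, "proceeds")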
I believe ethanol is not actually often given as an antidote for methanol poisoning in modern times. It does work as a competitive inhibitor of alcohol dehydrogenase (i.e. it keeps the enzyme occupied converting ethanol to acetaldehyde, slowing the conversion of methanol to formaldehyde and on to formic acid, which is not eliminated quickly and causes metabolic acidosis) - allowing the methanol time to leave the body through excretion, and limiting formic acid levels. However, other drugs like fomepizole also inhibit alcohol dehydrogenase, with lower toxicity than ethanol.
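The mechanism is just the standard competitive-inhibition form of Michaelis-Menten kinetics - a sketch with placeholder constants (not real alcohol dehydrogenase parameters):

    # Michaelis-Menten rate with a competitive inhibitor: the inhibitor
    # raises the apparent Km, slowing turnover of the substrate (methanol)
    # at a given concentration. All constants here are placeholders, not
    # real alcohol dehydrogenase kinetics.
    def rate(s, vmax, km, inhibitor=0.0, ki=1.0):
        km_apparent = km * (1 + inhibitor / ki)
        return vmax * s / (km_apparent + s)

    methanol = 5.0
    print(rate(methanol, vmax=1.0, km=2.0))                  # no inhibitor
    print(rate(methanol, vmax=1.0, km=2.0, inhibitor=10.0))  # much slower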
If the intent is to stop it being used for a business, that's inherently at odds with part of the OSI's definition: "The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research".
Now technically maybe it could meet the OSD if it required a royalty for hosting the software as a SaaS product, instead of banning that - since it would still allow "free redistribution" and pass on the same rights to anyone receiving it (the OSD is framed as prohibitions on what the licence can restrict, and there is no prohibition on charging a set amount for use, so long as that doesn't require executing a separate licence agreement).
Now arguably this is a deficiency in the OSD. But I imagine if you tried to exploit that, they might just update the definition and/or decline to list your licence.
It might be easier to block by ASN rather than hard-coding IP ranges. Something as simple as this in cron every 24 hours will help (adjust the ASNs in the bzgrep to your taste - and couple with occasional persistence so you don't get hit every reboot):
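A sketch of the kind of script meant, assuming a bzip2-compressed prefix-to-ASN dump (the dump path, its column format, and the example ASNs are all placeholders - substitute your own):

    #!/bin/sh
    # Sketch: drop traffic from every prefix announced by the listed ASNs.
    # Assumes /var/lib/asn/prefix2as.bz2 is a bzip2-compressed dump with
    # tab-separated lines of "prefix<TAB>length<TAB>asn" - placeholder
    # path and format; adjust to whatever dataset you actually use.
    ASNS='64496|64497'   # placeholder ASNs - adjust to taste

    # Keep all matched prefixes in an ipset so iptables needs only one rule.
    ipset create blocked-asn hash:net -exist
    ipset flush blocked-asn

    bzgrep -E "[[:space:]](${ASNS})$" /var/lib/asn/prefix2as.bz2 \
      | awk '{ print $1 "/" $2 }' \
      | while read -r prefix; do
          ipset add blocked-asn "$prefix" -exist
        done

    # Insert the DROP rule once (ipsets and iptables rules don't survive
    # a reboot - hence the note above about occasional persistence).
    iptables -C INPUT -m set --match-set blocked-asn src -j DROP 2>/dev/null \
      || iptables -I INPUT -m set --match-set blocked-asn src -j DROP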