Having a front door physically allows anyone on the street to come knock on it. Having a "no soliciting" sign is an instruction clarifying that not everybody is welcome. A web site should operate in the same fashion: robots.txt is the equivalent of such a sign.
And, despite what ideas you may get from the media, mere trespass without imminent threat to life is not a justification for deadly force.
There are some states where the requirements for self-defense do not include a duty to retreat if possible, either in general ("stand your ground" laws) or specifically in the home ("castle doctrine"), but all the other requirements for self-defense (imminent threat of certain kinds of serious harm, proportional force) remain part of the law in those states, and trespassing in disregard of a "no soliciting" sign would not, by itself, satisfy those requirements.
>No one is calling for the criminalization of door-to-door sales
Ok, I am, right now.
It seems like there are two sides here that are talking past one another: "people will do X, and you accept it if you do not actively prevent it when you can" and "X is bad behavior that should be stopped, and it shouldn't be the burden of individuals to stop it". As someone who leans toward the latter, I find the former just sounds like a restatement of the problem being complained about.
Yes, because most of the things that people talk about (ChatGPT, Google SERP AI summaries, etc.) currently use tools in their answers. We're a couple years past the "it just generates output from sampling given a prompt and training" era.
It depends - some queries will invoke tools such as search, some won't. A research agent will be using search, but then summarizing and reasoning about the responses to synthesize a response, so then you are back to LLM generation.
The net result is that some responses are going to be more reliable (or at least coherently derived from a single search source) than others. But to the casual user, maybe to most users, it's never quite clear what the "AI" is doing, and it's right enough, often enough, that they tend to trust it, even though that trust is only justified some of the time.
That's not the problem. The problem is that you're adding a data dependency on the CPU loading the first byte. The branch-based one just "predicts" the number of bytes in the codepoint and can keep executing code past that. In data that's ASCII, relying on the branch predictor to just guess "0" repeatedly turns out to be much faster as you can effectively be processing multiple characters simultaneously.
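To make the distinction concrete, here is a rough C sketch (mine, not taken from any of the decoders being discussed) of the two loop shapes. Both just count codepoints, assume valid UTF-8, and skip error handling; the only point is where the address of the next load comes from.

    #include <stddef.h>
    #include <stdint.h>

    /* "Branchless": the advance comes from a table lookup on the byte just
     * loaded, so the address of the next load depends on the data of the
     * current load. The loop forms a serial chain of roughly one load
     * latency per codepoint, no matter how predictable the input is. */
    size_t count_codepoints_table(const uint8_t *p, const uint8_t *end)
    {
        /* codepoint length indexed by the top 5 bits of the lead byte */
        static const uint8_t len[32] = {
            1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, /* 0xxxxxxx: ASCII       */
            1,1,1,1,1,1,1,1,                 /* 10xxxxxx: not a lead  */
            2,2,2,2,                         /* 110xxxxx: 2 bytes     */
            3,3,                             /* 1110xxxx: 3 bytes     */
            4,                               /* 11110xxx: 4 bytes     */
            1                                /* 11111xxx: invalid     */
        };
        size_t n = 0;
        while (p < end) {
            p += len[*p >> 3];   /* next load address depends on *p */
            n++;
        }
        return n;
    }

    /* Branchy: the common ASCII case is a well-predicted branch, so the
     * predictor effectively guesses the advance and the core can issue the
     * following loads speculatively instead of waiting on each byte. */
    size_t count_codepoints_branchy(const uint8_t *p, const uint8_t *end)
    {
        size_t n = 0;
        while (p < end) {
            uint8_t b = *p;
            if (b < 0x80)      p += 1;  /* ASCII fast path */
            else if (b < 0xE0) p += 2;
            else if (b < 0xF0) p += 3;
            else               p += 4;
            n++;
        }
        return n;
    }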
I am pretty sure CPUs can speculatively load as well. When the pipeline sees a repeated load instruction, it should be able to dispatch and perform all of the loads in the pipeline. The nice thing is that there is no execution hazard here, because all of the speculative loads are usable, unlike the 1% of cases where the branch fails and the whole pipeline gets flushed.
No, that's not the stance for electrical utilities (at least in most developed countries, including the US): the vast majority of weather events cause localized outages. The grid as a whole has redundancies built in; distribution (to residential and some industrial customers) does not. The grid expects failures of some power plants, transmission lines, etc., and can adapt with reserve power or, in very rare cases, with partial degradation (i.e. rolling blackouts). It doesn't go down fully.
You're measuring a cached compile in the subsequent runs. The deps.compile probably did some native compilation in the dep folder directly rather than in _build.
No, their results are correct. It roughly halved the compilation time on a newly generated Phoenix project. I'm assuming the savings would be larger on projects with multiple native dependencies that take a long time to compile.
rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=1 mix deps.compile

    ________________________________________________________
    Executed in   37.75 secs      fish           external
       usr time  103.65 secs     32.00 micros  103.65 secs
       sys time   20.14 secs    999.00 micros   20.14 secs

rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=5 mix deps.compile

    ________________________________________________________
    Executed in   16.71 secs      fish           external
       usr time    2.39 secs      0.05 millis    2.39 secs
       sys time    0.87 secs      1.01 millis    0.87 secs

rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=10 mix deps.compile

    ________________________________________________________
    Executed in   17.19 secs      fish           external
       usr time    2.41 secs      1.09 millis    2.40 secs
       sys time    0.89 secs      0.04 millis    0.89 secs
Oh, interesting. I guess `time` is only reporting the usr/sys time of the main process rather than the child workers when using PARTITION_COUNT higher than 1?
I don't understand how this is worth losing your mind over. Toggling this setting is expensive on the backend: opting in means "go and rescan all the photos"; opting out means "delete all the scanned information for this user". As a user, just make up your mind and set the setting. They let you opt in, they let you opt out, they just don't want to let you trigger tons of work every minute.
> The New Deal delayed the recovery from the Depression to 10 years or so.
This is categorically wrong: the WW2 GDP boom started in 1939, by which point we'd already been out of the Great Depression (1936 was the first year that real GDP was above the previous peak of 1929). Regardless, that point is only 6 years after the New Deal took effect, meaning a delay of 10 years would require reversing the flow of time.
Friedman has a different take on this in "A Monetary History of the United States". There was a severe contraction in 1937-38. 1939 saw a huge influx of gold from foreign arms purchases, which finally took the country out of the Depression. See the chart on page 530. 1936 was a false dawn.
"It is a measure of the severity of the preceding contraction that, despite such sharp rises, money income was 17 per cent lower in 1937 than at the preceding peak eight years earlier and real income was only 3 per cent higher. Since population had grown nearly 6 per cent in the interim, per capita output was actually lower at the cyclical peak in 1937 than at the preceding cyclical peak. There are only two earlier examples in the recorded annual figures, 1895 and 1910, when per capita output was less than it was at the preceding cyclical peaks in 1892 and 1907, respectively. Furthermore, the contraction that followed the 1937 peak, though not especially long, was unusually deep and proceeded at an extremely rapid rate, the only occasion in our record when one deep depression followed immediately on the heels of another." pg 493