
Having a front door physically allows anyone on the street to walk up and knock on it. Having a "no soliciting" sign is an instruction clarifying that not everybody is welcome. A web site should operate in a similar fashion: robots.txt is the equivalent of such a sign.


No soliciting signs are polite requests that no one has to follow, and door-to-door salesmen regularly walk right past them.

No one is calling for the criminalization of door-to-door sales and no one is worried about how much door-to-door sales increases water consumption.


If a company was sending hundreds of salesmen to knock at a door one after the other, I'm pretty sure they could successfully get sued for harassment.


Can’t Americans literally shoot each other for trespassing?


Generally, legally, no, not just for ignoring a “no soliciting” sign.


But they’re presumably trespassing.


And, despite what ideas you may get from the media, mere trespass without imminent threat to life is not a justification for deadly force.

There are some states where the considerations for self-defense do not include a duty to retreat if possible, either in general ("stand your ground" laws) or specifically in the home ("castle doctrine"), but all the other requirements for self-defense (imminent threat of certain kinds of serious harm, proportional force) remain part of the law in those states, and trespassing by/while disregarding a "no soliciting" sign would not, by itself, satisfy those requirements.


> door-to-door salesmen regularly walk right past them.

Oh, now I understand why Americans can't see a problem here.


> No one is calling for the criminalization of door-to-door sales

Ok, I am, right now.

It seems like there are two sides here talking past one another: "people will do X, and if you don't actively prevent it when you can, you accept it" versus "X is bad behavior that should be stopped, and stopping it shouldn't be the burden of individuals." As someone who leans toward the latter, the former just sounds like a restatement of the problem being complained about.


> No one is calling for the criminalization of door-to-door sales

Door-to-door sales absolutely are banned in many jurisdictions.


And a no soliciting sign is no more cosmically binding than robots.txt. It's a request, not an enforceable command.


Tell me you work in an ethically bankrupt industry without telling me you work in an ethically bankrupt industry.


Yes, because most of the things that people talk about (ChatGPT, Google SERP AI summaries, etc.) currently use tools in their answers. We're a couple years past the "it just generates output from sampling given a prompt and training" era.


It depends - some queries will invoke tools such as search, some won't. A research agent will be using search, but then summarizing and reasoning about the responses to synthesize a response, so then you are back to LLM generation.

The net result is that some responses are going to be more reliable (or at least coherently derived from a single search source) than others. But at least to the casual user, and maybe to most users, it's never quite clear what the "AI" is doing, and it's right enough, often enough, that they tend to trust it, even though that trust is only justified some of the time.


Perhaps those were different iterations of the technique over time. Start with marking cards to identify face cards, then move on to the x-ray table.


That's not the problem. The problem is that you're adding a data dependency on the CPU loading the first byte. The branch-based one just "predicts" the number of bytes in the codepoint and can keep executing code past that. In data that's ASCII, relying on the branch predictor to just guess "0" repeatedly turns out to be much faster as you can effectively be processing multiple characters simultaneously.
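
To picture the difference, here's a minimal C sketch; it's mine, not anything from the linked code, and the table layout and function names are made up purely for illustration. The table version makes the next step depend on a load of the lead byte (and then the table entry), while the branchy version lets the predictor guess "ASCII, 0 continuation bytes" and keep speculating ahead:

    #include <stdint.h>
    #include <stdio.h>

    /* Continuation-byte counts indexed by the top nibble of the lead byte:
       0x0-0x7 ASCII, 0x8-0xB continuation (invalid as a lead),
       0xC-0xD 2-byte sequence, 0xE 3-byte, 0xF 4-byte. */
    static const uint8_t cont_by_nibble[16] = {
        0, 0, 0, 0, 0, 0, 0, 0,   /* 0xxxxxxx: ASCII            */
        0, 0, 0, 0,               /* 10xxxxxx: invalid lead     */
        1, 1,                     /* 110xxxxx: one more byte    */
        2,                        /* 1110xxxx: two more bytes   */
        3                         /* 11110xxx: three more bytes */
    };

    /* Table-driven: how far to advance depends on a load, so the next
       iteration can't be resolved until the data actually arrives. */
    static inline int cont_bytes_lookup(uint8_t lead) {
        return cont_by_nibble[lead >> 4];
    }

    /* Branch-based: on mostly-ASCII input the first branch is predicted
       correctly almost every time, so the CPU keeps executing past it
       without waiting for the byte. (Error handling omitted.) */
    static inline int cont_bytes_branchy(uint8_t lead) {
        if (lead < 0x80) return 0;
        if ((lead & 0xE0) == 0xC0) return 1;
        if ((lead & 0xF0) == 0xE0) return 2;
        return 3;
    }

    int main(void) {
        const char *s = "h\xC3\xA9llo";   /* "héllo"; \xC3\xA9 is 'é' */
        size_t i = 0, codepoints = 0;
        while (s[i] != '\0') {
            codepoints++;
            i += 1 + (size_t)cont_bytes_branchy((uint8_t)s[i]);
        }
        printf("%zu codepoints in %zu bytes\n", codepoints, i);
        return 0;
    }

The per-byte work is tiny either way; the difference is whether the CPU has to wait to know where the next character starts.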


I am pretty sure CPUs can do speculative loads as well. If the pipeline sees repeated load instructions, it should be able to dispatch and execute all of them in flight. The nice thing is that there's no chance of a hazard here, because all of the speculatively loaded data is usable, unlike the ~1% of cases where the branch prediction fails and the whole pipeline has to be flushed.


No, that's not the stance for electrical utilities (at least in most developed countries, including the US): the vast majority of weather events cause localized outages. The grid as a whole has redundancies built in; distribution to residential (and some industrial) customers does not. The grid expects failures of some power plants, transmission lines, etc., and can adapt with reserve power or, in very rare cases, with partial degradation (i.e. rolling blackouts). It doesn't go down fully.


Spain and Portugal had a massive power outage this spring, no?


Yeah, and it has a 30 page Wikipedia article with 161 sources (https://en.wikipedia.org/wiki/2025_Iberian_Peninsula_blackou...). Does that seem like a common occurrence?


You're measuring a cached compile in the subsequent runs. The deps.compile probably did some native compilation in the deps folder directly rather than in _build.


No, their results are correct. It roughly halved the compilation time on a newly generated Phoenix project. I'm assuming the savings would be more extensive on projects with multiple native dependencies that have lengthy compilation.

    rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=1 mix deps.compile
    ________________________________________________________
    Executed in   37.75 secs    fish           external
       usr time  103.65 secs   32.00 micros  103.65 secs
       sys time   20.14 secs  999.00 micros   20.14 secs

    rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=5 mix deps.compile
    ________________________________________________________
    Executed in   16.71 secs    fish           external
       usr time    2.39 secs    0.05 millis    2.39 secs
       sys time    0.87 secs    1.01 millis    0.87 secs
    
    rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=10 mix deps.compile
    ________________________________________________________
    Executed in   17.19 secs    fish           external
       usr time    2.41 secs    1.09 millis    2.40 secs
       sys time    0.89 secs    0.04 millis    0.89 secs


Similar result on one of my real projects that's heavier on the Elixir dependencies but that only has 1 additional native dependency (brotli):

    mise use elixir@1.19-otp-26 erlang@26
    
    rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=1 mix deps.compile
    ________________________________________________________
    Executed in   97.93 secs    fish           external
       usr time  149.37 secs    1.45 millis  149.37 secs
       sys time   28.94 secs    1.11 millis   28.94 secs
    
    rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=5 mix deps.compile
    ________________________________________________________
    Executed in   42.19 secs    fish           external
       usr time    2.48 secs    0.77 millis    2.48 secs
       sys time    0.91 secs    1.21 millis    0.91 secs


Oh, interesting. I guess `time` is only reporting the usr/sys time of the main process rather than the child workers when using PARTITION_COUNT higher than 1?


They pushed it out while the vehicle was parked. The bug seemed not to break the vehicle immediately, but only after some time driving.


I don't understand why this would make anyone lose their mind. Toggling this setting is expensive on the backend: opting in means "go and rescan all the photos"; opting out means "delete all the scanned information for this user". As a user, just make up your mind and set the setting. They let you opt in, they let you opt out, they just don't want to let you trigger tons of work every minute.


If this was the case, they would leave it in the off state after you run out of toggles. The reality is that it will magically turn on every month.


I don't understand how you think repeating this nonsense excuse for an argument will achieve anything.


> The New Deal delayed the recovery from the Depression to 10 years or so.

This is categorically wrong: the WW2 GDP boom started in 1939, by which point we'd already been out of the Great Depression (1936 was the first year that real GDP was above the previous peak of 1929). Regardless, that point is only 6 years after the New Deal took effect, meaning a delay of 10 years would require reversing the flow of time.

Source: https://alfred.stlouisfed.org/series?seid=GDPCA (I can't figure out how to hotlink to a specific time range so you'll have to plug it in yourself).


Friedman has a different take on this from "Monetary History of the United States". There was a severe contraction in 1937-38. 1939 saw a huge influx of gold from foreign arms purchases, which finally took the country out of the Depression. See the chart on page 530. 1936 was a false dawn.

"It is a measure of the severity of the preceding contraction that, despite such sharp rises, money income was 17 per cent lower in 1937 than at the preceding peak eight years earlier and real income was only 3 per cent higher. Since population had grown nearly 6 per cent in the interim, per capita output was actually lower at the cyclical peak in 1937 than at the preceding cyclical peak. There are only two earlier examples in the recorded annual figures, 1895 and 1910, when per capita output was less than it was at the preceding cyclical peaks in 1892 and 1907, respectively. Furthermore, the contraction that followed the 1937 peak, though not especially long, was unusually deep and proceeded at an extremely rapid rate, the only occasion in our record when one deep depression followed immediately on the heels of another." pg 493


Sure they do. Amazon built AWS well after it was big. Apple built the iPhone. Microsoft built VS Code, just to name a few examples.

