That's not really the full story. The US didn't come up with the moon goal; it was already the Soviets' plan, which is why JFK announced it publicly in a speech: to force them into a public prestige battle. The Soviets had a habit of failing in private. If they achieved something, they'd announce it afterwards; if they failed, they kept quiet. The US broadcast launches on TV and pre-announced its goals, which was a major propaganda effort and much more effective than post-flight announcements.
Anyone who thinks Finder is the worst file browser hasn't used Windows for 25+ years. Explorer can't even search for files on the hard drive of the computer it's running on.
Explorer runs its thumbnail processing on the same thread as the UI, so if you have a lot of pictures in a directory it'll just hang indefinitely. Sometimes, if there are too many files, it won't display any at all.
If Explorer crashes, your desktop and taskbar disappear too, since they run in the same process.
This reads like a semi-incoherent essay from someone who doesn't really understand what complexity is and has a chip on their shoulder about something completely unrelated to the topic at hand.
Yeah, and coming from someone with as much experience and industry knowledge as dannybee, I find that perspective very puzzling.
Painting the situation as "well, Google has influence because they work the hardest" is just bizarre. Having been in some standards / committee meetings, I can say everyone in those rooms works very hard... but someone's hard work is not enough.
The latter is definitely true, and I don't claim otherwise.
But that isn't the argument actually being made here, and it isn't supported with any evidence.
That would be a reasonable argument, but it's also always true: we are humans, not robots, and that's how humans work in any group setting. So it's neither particularly interesting nor particular to this case that social and other factors matter as much as pure technical merit or hard work.
But again, this isn't the argument the post makes. Instead, in this case, the argument being made is (basically) "Nobody in those rooms is operating in good faith, they are instead deliberately trying to make it harder for newcomers. They also only have any power at all through illegitimate means in the first place".
It would be a tremendous amount of work, and it would take years. Meanwhile, the problems are avoidable. It's not exactly the "Rust way" to just remember and avoid problems, but everything in language design is a compromise.
> Why use Eyra? It fixes Rust's set_var unsoundness issue. The environment-variable implementation leaks memory internally (it is optional, but enabled by default), so setenv etc. are thread-safe.
I think glibc made the same trade-off. It makes sense for most types of programs, but there are certainly classes of programs that wouldn't accept it.
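To make the trade-off concrete, here's a minimal sketch of the leak-on-update idea, assuming that's roughly what's meant; it's not Eyra's or glibc's actual implementation, and the names are made up. Old values are intentionally never freed, so a reference handed out before an update can never dangle, at the cost of leaking one string per set:

```rust
// Hypothetical sketch of "leak on update" for a thread-safe environment map.
// Not Eyra's or glibc's real code; it only illustrates the memory-for-safety trade-off.
use std::collections::HashMap;
use std::sync::Mutex;

static ENV: Mutex<Option<HashMap<String, &'static str>>> = Mutex::new(None);

fn leaky_set_var(key: &str, value: &str) {
    // Leak the new value so that any &'static str previously returned by
    // leaky_get_var stays valid forever, even after later updates.
    let leaked: &'static str = Box::leak(value.to_owned().into_boxed_str());
    let mut guard = ENV.lock().unwrap();
    guard.get_or_insert_with(HashMap::new).insert(key.to_owned(), leaked);
}

fn leaky_get_var(key: &str) -> Option<&'static str> {
    ENV.lock().unwrap().as_ref().and_then(|m| m.get(key).copied())
}

fn main() {
    leaky_set_var("MODE", "fast");
    let old = leaky_get_var("MODE").unwrap();
    leaky_set_var("MODE", "slow"); // "fast" is leaked, so `old` still points at valid memory
    println!("{old} -> {}", leaky_get_var("MODE").unwrap());
}
```

A program that sets environment variables in a hot loop would leak without bound, which is why this choice makes sense for most programs but not all of them.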
When you squash a branch, you'll have 200+ lines of new code for a new feature. The diff is not a quick way to get a summary of what's happening, so you should put the "what" in your commit messages.
One thing to be said for Prusa is that their support is actually knowledgeable and experienced. You're not going to get a tier 1 support person who has never touched a printer and is just reading from a script.
> while generally achieving somewhat better results
I agree with this.
I'd also like to add that my Prusa Mk3s+ is significantly slower than my P1S. Also, without the MMU it still cost more than my P1S with AMS. Choosing a Prusa is a philosophical choice, because it's certainly not about convenience, speed, versatility (considering you need to buy a separate enclosure and a pricey MMU), bed size, or price. It's a choice you make because you're okay with spending a lot more to support an open platform where you can flash your own firmware without voiding your warranty, not because you want a better experience.
The mk4 and mk3 are vastly different machines. If you want to compare the P1S, do it against a contemporary machine. Of course a machine released several years after the mk3 is faster.
If I were starting today I'd definitely choose the Core One over the P1S (thanks to this rug pull). It's vastly more expensive, the MMU isn't worth it from what I've heard, and the build volume is significantly smaller, but I don't think I'd go with Bambu after this week.
I wouldn't buy any new Prusa printer until it's been in the wild for at least a year; they tend to be very buggy at launch.
They also have no multimaterial support at launch: the MMU3 won't work with the Core One until they release an update, for which they've not yet given a timeline.
> TikTok is the exact same garbage you can get from Insta, YouTube, for fuck's sake even LinkedIn has video now
As a user of both TikTok and Instagram Reels, I found that TikTok had an actually good algorithm that would show you interesting things once you showed interest. I found a lot of film students with great reviews through TikTok. Instagram and YouTube don't care to show you that kind of thing.
IDK, I feel that if you're doing 5000 HTTP calls to another website it's kind of good manners to fix that. But OpenAI has never cared about the public commons.
Nobody in this space gives a fuck about anyone outside of the people paying for their top-tier services, and even then, they only care about them when their bill is due. They don't care about their regular users, don't care about the environment, don't care about the people that actually made the "data" they're re-selling... nobody.
Yeah, even beyond common decency, there are pretty strong incentives to fix it: not fixing it is a fantastic way of getting your bot's fingerprint onto Cloudflare's shitlist.
Kinda disappointed by Cloudflare - it feels like they only have quite basic logic. Why would anomaly detection not catch these large payloads?
There was a zip-bomb-like attack a year ago where you could send one gigabyte of the letter "A", compressed by brotli into a very small payload, through Cloudflare to backend servers, basically something like the old HTTP Transfer-Encoding (which has been discontinued).
Attacker --1kb--> Cloudflare --1GB--> backend server
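To get a sense of the asymmetry, here's a rough sketch using the `brotli` crate (API assumed from the crate; exact sizes will vary with quality settings) that compresses 1 GiB of the letter "A" and prints how tiny the resulting payload is. It only illustrates the compression ratio, not the actual exploit path through Cloudflare:

```rust
// Rough illustration of the amplification: highly repetitive data compresses
// to a tiny fraction of its size, so a few KB on the wire can expand to ~1 GiB
// on whatever has to decompress it. Assumes the `brotli` crate as a dependency.
use std::io::Write;

fn main() -> std::io::Result<()> {
    let chunk = vec![b'A'; 1 << 20]; // 1 MiB of "A"
    let mut compressed = Vec::new();
    {
        // buffer size 4096, quality 5 (keeps this quick), window size 2^22
        let mut enc = brotli::CompressorWriter::new(&mut compressed, 4096, 5, 22);
        for _ in 0..1024 {
            enc.write_all(&chunk)?; // 1024 * 1 MiB = 1 GiB uncompressed
        }
    } // encoder finishes the stream when dropped
    println!("uncompressed: {} bytes", 1u64 << 30);
    println!("compressed:   {} bytes", compressed.len());
    Ok(())
}
```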
Obviously the servers that received the decompressed HTTP request from the Cloudflare web proxies were getting killed, but Cloudflare didn't even accept it as a valid security problem.
AFAIK there was no magic AI security-monitoring anomaly-detection thing that blocked anything. Sometimes I'd love to see the old web application firewall warnings for single and double quotes, just to see if the thing is still there. But maybe it's a misconfiguration on the Cloudflare user's side, because I can remember they at least had a WAF product in the past.
> But maybe it's a misconfiguration on the Cloudflare user's side, because I can remember they at least had a WAF product in the past
They still have a WAF product, though I don't think anything in the standard managed ruleset will fire just on quotes; the SQLi and XSS checks are a bit more sophisticated than that.
From personal experience, they will fire a lot if someone uses a WAF-protected CMS to write a post about SQL.