I've actually not worked anywhere that has used CrowdStrike. It's usually ruled out as too expensive (I've mostly worked in the public sector). I've had very good experiences with SentinelOne and Microsoft Defender, and terrible experiences with Trellix and Sophos.

"Oopsy" aside, is CrowdStrike really that much better than the competition?
I only worked at one shop that used CrowdStrike, but TBH it's definitely the least shitty of the competitors I've had to deal with...
It's enterprise software. The people using the software and the people choosing the software are not the same people. In many cases they only buy it to satisfy a contractual or regulatory requirement and then the primary criterion is which one costs less or which one's sales reps give the best kickbacks, with considerations like "is it any good" not really playing a major role.
I have trouble believing that 6.5ms of increased latency would be perceptible to any more than a fraction of a percent of the most elite gamers. Most of the people claiming that this level of difference is impacting their gameplay are victims of confirmation bias.
David Eagleman has done some work with drummers. Granted, the auditory system might be a bit more precise than the visual one, or maybe drummers are just weird. On the other hand, vim taking 30 milliseconds to start (ugh) and having sluggish cursor motion is why I'm on vi now. Haven't tried Wayland. Maybe in a few years, once it's more portable and more developed? (And how many years has it already been out?)
> “I was working with Larry Mullen, Jr., on one of the U2 albums,” Eno told me. “ ‘All That You Don’t Leave Behind,’ or whatever it’s called.” Mullen was playing drums over a recording of the band and a click track—a computer-generated beat that was meant to keep all the overdubbed parts in synch. In this case, however, Mullen thought that the click track was slightly off: it was a fraction of a beat behind the rest of the band. “I said, ‘No, that can’t be so, Larry,’ ” Eno recalled. “ ‘We’ve all worked to that track, so it must be right.’ But he said, ‘Sorry, I just can’t play to it.’ ”
> Eno eventually adjusted the click to Mullen’s satisfaction, but he was just humoring him. It was only later, after the drummer had left, that Eno checked the original track again and realized that Mullen was right: the click was off by six milliseconds. “The thing is,” Eno told me, “when we were adjusting it I once had it two milliseconds to the wrong side of the beat, and he said, ‘No, you’ve got to come back a bit.’ Which I think is absolutely staggering.”
It doesn't need to be perceptible to cause a difference in a game.
Suppose two players notice each other at the same time (e.g. as would naturally happen when walking around a corner in a shooter), first to shoot wins, and their total latencies are identical Gaussians with a standard deviation of 100ms. The latency difference between them is then Gaussian with a standard deviation of 100√2 ≈ 141ms, so a 6.5ms reduction in latency is worth roughly an extra 2% chance of winning the trade (Φ(6.5/141) ≈ 0.518). Maybe you won't notice this on a moment-by-moment basis, but over enough encounters the impact should be statistically measurable.
In Elo terms, a ~2% gain in win rate is around a 10-15 point increase (simplifying by assuming that single Gaussian duel is the entire game). That's small, but if you were a hardcore player and all it took to raise your Elo by that much was using a better monitor/mouse/OS... why not? Doing that is cheap compared to the time investment required to gain the same amount through practice (unless you're just starting).
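A quick sanity check of that arithmetic, using the same assumptions as above (100ms per-player standard deviation, 6.5ms edge):

```python
# Sanity check: each player's total latency ~ N(mu, 100 ms); one player
# shaves off 6.5 ms. Figures are the assumptions from the comment above.
import math
import random

SIGMA = 100.0  # per-player latency std dev (ms), assumed
EDGE = 6.5     # latency advantage (ms)

# The latency *difference* is Gaussian with std SIGMA * sqrt(2), so
# P(win) = Phi(EDGE / (SIGMA * sqrt(2))).
z = EDGE / (SIGMA * math.sqrt(2))
p_win = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(f"closed form: {p_win:.3f}")  # ~0.518

# Same result by Monte Carlo.
trials = 1_000_000
wins = sum(random.gauss(0, SIGMA) - EDGE < random.gauss(0, SIGMA)
           for _ in range(trials))
print(f"simulated:   {wins / trials:.3f}")

# Elo gap that corresponds to this win probability.
print(f"Elo points:  {400 * math.log10(p_win / (1 - p_win)):.0f}")  # ~13
```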
Also, I think you'd be surprised what people can perceive in a context where they're practiced. Speedrunners hit frame-perfect tricks in 60fps games. That's not reaction time, but it does intimately involve consistent control latency between practice and execution.
> Suppose two players notice each other at the same time (e.g. as would naturally happen when walking around a corner in a shooter)
This is not true for third-person games. Depending on whether it's a left-side or right-side peek, and on your angle of approach, players see each other asymmetrically.
For example, Fortnite is a right-side-peek game. Peeking right is safer than peeking left, because less of your body is exposed before your camera turns the corner.
I believe distance also plays a part in the angles.
Yeah, network latency, client-side prediction, and accuracy will also play huge roles. The actual distributions will be very complex, but in general, reacting faster is going to be better.
Do people not play deathmatches at LAN parties anymore? An edge like that is huge if the game lasts long enough for someone to end up leading by 200. ;)
I would postulate that 100% of professional (i.e. elite) competitive gamers would be able to tell the difference. See this old touchscreen demonstration: https://www.youtube.com/watch?v=vOvQCPLkPt4
As for the difference between a 60Hz and a 120Hz monitor: it's instantly noticeable just by moving the mouse in Windows (just look at the distance the cursor jumps between updates as it moves). Would you argue that all gaming monitors are placebo?
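For scale, a quick calculation (the 1000 px/s sweep speed is an arbitrary assumption):

```python
# Distance the cursor jumps between screen updates at various refresh
# rates, for a mouse sweeping at an assumed 1000 px/s.
SPEED_PX_PER_S = 1000

for hz in (60, 120, 144, 240):
    print(f"{hz:>3} Hz: {SPEED_PX_PER_S / hz:5.1f} px between cursor positions")
# 16.7 px gaps at 60 Hz vs 8.3 px at 120 Hz: the halved gap is what
# makes the cursor trail look visibly smoother.
```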
I actually would. Gaming monitors are the equivalent of fancy audiophile gear. It's a way to fleece people by making them think they can perceive a difference that isn't really there.
Those sorts of latencies actually are noticeable! As an example, 6.5ms latency between a virtual instrument and its UI is definitely noticeable.
I didn’t think it was. But it is. I promise!
It’s not necessarily a reaction-time game-winning thing. It’s a feel.
With virtual instruments, my experience is that when you get down to ~3ms you don’t notice the latency anymore… but!, when you go below 3ms, it starts feeling more physically real.
You may think 6.5ms of input latency is imperceptible. But combine it with the rest of the stack (monitor refresh rate, local network latency, RTT between client and server, time for the server to register input from the client and calculate the "winner"), and it becomes the diff between an L and a W. In the case of pros, the diff between a multimillion-dollar cash prize and nil.
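As a toy illustration of that stack (every figure below is an assumption for the sake of argument, not a measurement):

```python
# Toy end-to-end latency budget, from input to the server scoring a hit.
# All figures are illustrative assumptions.
stack_ms = {
    "input device + OS": 6.5,
    "game/render pipeline": 10.0,
    "monitor refresh + pixel response": 12.0,
    "network RTT to server": 30.0,
    "server tick + hit registration": 8.0,
}

total = sum(stack_ms.values())
for part, ms in stack_ms.items():
    print(f"{part:32s} {ms:5.1f} ms")
print(f"{'total':32s} {total:5.1f} ms")
print(f"6.5 ms is {6.5 / total:.0%} of this stack")  # ~10%
```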
There are noticeability thresholds where this could push you over. In fighting games, if you have the reactions to whiff-punish moves with N frames of recovery, this may push you to only being able to punish moves with N+1 frames of recovery, which can really impact your ranking. 6.5ms is a little over a third of a 60Hz frame.
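To make the frame math concrete (the 250ms reaction time below is an illustrative assumption):

```python
# Whether 6.5 ms of added latency costs a whole frame of punish window
# at 60 fps. The 250 ms reaction time is an assumed, illustrative value.
import math

FRAME_MS = 1000 / 60   # ~16.7 ms per frame
REACTION_MS = 250      # assumed reaction time

for extra in (0.0, 6.5):
    frame = math.ceil((REACTION_MS + extra) / FRAME_MS)
    print(f"+{extra} ms latency -> input lands on frame {frame}")
# 250.0 ms lands on frame 15; 256.5 ms slips to frame 16. When your
# response sits near a frame boundary, 6.5 ms (~0.39 of a frame) is
# exactly the nudge that turns an N-frame punish into N+1.
```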
We can't have people doing things like searching for Tiananmen Square or Mao Zedong, or talking about how Taiwan and Hong Kong want complete independence from China.
I'm sure a big part of the cost is the additional infrastructure and manpower to implement all of China's censorship, tracking, etc.
I tried to get this to work a couple of times and gave up. I was trying to rebind a mouse button (back) to a macro, and after a while I just ended up using G Hub on a Mac and applying the settings to the mouse's onboard config. I like the idea of Solaar, but the initial learning curve was more effort than I wanted to put in to rebind a single key.
If the button already has a function (like back) assigned to it, I think the input-remapper[0] software would work. That's what I use with my DeathAdder.
Claude 3 + Kagi is the exact combo I use. Claude has been my go-to since discovering it doesn't have the same level of curation/censorship that models like ChatGPT and Gemini have.
I picked up Kagi the day they released their $10/mo plan, and I don't see myself dropping it unless their quality degrades or the price increases substantially. The search result I want is typically in the first few results, while with Google it was ads in the first few slots, then SEO spam, and then MAYBE the result I want near the end of the page.
Kagi users with the Ultimate plan can access most of the major LLMs via the !chat bang too (OpenAI, Gemini, Claude, and Mistral). Regular plans have access to the cheaper models only. Then there's FastGPT, which runs when the search query contains a question mark.
Really makes it easy to use when it’s all accessed from the address bar.
I've had Kagi for several months now and have no desire to go back. Brave is horrible, DDG is bad, and Google is bad AND Google. I decided it was time to trim subscriptions a few months ago and chose to ditch ChatGPT and keep Kagi.
I recently traded a friend my Nvidia 3070 for his Radeon 6700 XT, because I'd returned to Linux a few months ago and was tired of Nvidia. Nvidia support will likely get much better as NVK grows, but I think it's better to just not use their products unless you want to have Microsoft spywareOS installed on your computer.
I've had one or two upgrade problems in the last 10 years, but otherwise the Nvidia drivers have worked great for me. My biggest complaint is they dropped support for the GPU in my Macbook, and I had to install the nouveau drivers (which I can never spell correctly).
At least it's not from the FSF, and GPUs aren't gendered, or you'd have to choose from multiple gendered drivers:
- "gnuveau" for one masculine GPU.
- "gnuvelle" for one feminine GPU.
- "gnuveaux" for multiple masculine GPUs.
- "gnuvelles" for multiple feminine GPUs.
They really don't want this to happen, which I think is a big part of the push behind the "AI is dangerous" narrative. They want to put in place regulations and 'safeguards' that will prohibit any open-source, uncensored, or otherwise competitive models.
My graphics card is an old AMD card, so I haven't done much experimenting with LLMs beyond what's online. Do the open source models available to run locally have censorship baked into them? Or are they just so much smaller than what the big corporations are doing that they're essentially censored through omission?
The open models have varying levels of censorship (Llama 2 Chat would make Oliver Cromwell say "too far"), but it doesn't really matter because the retraining it takes to fix them is within the capabilities of single hobbyists.
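For a sense of what "within the capabilities of single hobbyists" means in practice, here's a minimal LoRA fine-tuning sketch; the model name, toy dataset, and hyperparameters are all illustrative assumptions, not a tested recipe:

```python
# Sketch of a hobbyist LoRA fine-tune, the kind of "retraining" described
# above. Model name, dataset, and hyperparameters are assumptions.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto")

# LoRA freezes the base weights and trains small low-rank adapters,
# which is why this fits on a single consumer GPU.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))
model.print_trainable_parameters()  # typically well under 1% of params

# Placeholder dataset: the instruction/response pairs a hobbyist curates.
texts = ["### Instruction: ...\n### Response: ..."]
ds = Dataset.from_dict({"text": texts}).map(
    lambda ex: tok(ex["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    args=TrainingArguments("lora-out", per_device_train_batch_size=1,
                           num_train_epochs=3, learning_rate=2e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
model.save_pretrained("lora-out")  # the adapter itself is only a few MB
```

The point being: the compute and data involved are a weekend project, not a lab budget.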