Lenovo sells ThinkPad T-series laptops as “Linux-supported.” However, if you happen to buy an AMD version of one of these laptops, you may be surprised by how poorly the Wi‑Fi works. It’s been several years since the T14 Gen4 was released, and yet the Wi‑Fi is still not stable.
QLC retention is reported to be around 1 year in an unpowered state. I would assume that the drive does background refresh, though. No idea what effect that has on total drive lifetime. It is still mean that if you use it for cold storage it has to be powered.
A drive's write endurance rating is derived at least in part from the JEDEC standard data retention requirements: 1 year at 30 °C for consumer drives, 3 months at 40 °C for enterprise drives, IIRC. Thus, a drive that has reached the end of its rated write endurance can be expected to have those retention characteristics. A drive that hasn't been subjected to that much wear will have significantly longer retention.
Why is it mean? Why would you want to use a technology that is unsuitable for cold storage for cold storage? You won't even get the power / IOPS benefit if all it does is an infrequent replication of data and is then switched off.
I believe it has read speeds of 13 GB/s, not 3 (unless you are referring to an equivalent array of 10 HDDs). It will almost certainly be used to store training datasets and model weights, which I assume are good use cases for fast sequential reads.
Yup, it was also posted in the other thread on GPS the other day, and it is quite a bit better than OP's article, particularly because it doesn't give a false account of the relativistic effects involved:
> Satellites at the GPS altitude travel at the speed of about 2.4 mi/s relative to Earth, which slows the clock down, but they’re also in weaker gravity which causes the clock to run faster. The latter effect is stronger which in total results in a gain of around 4.4647 × 10⁻¹⁰ seconds per second, or around 38 microseconds a day.
> Unfortunately, this is where many sources make a mistake with their interpretation of that result. It’s often erroneously claimed that if GPS didn’t correct for these relativistic effects by slowing down the clocks on satellites, the system would increase its error by around 7.2 mi per day as this is the distance that light travels in those 38 microseconds.
> Those assertions are not true. If relativistic effects weren’t accounted for and we let the clocks on satellites drift, the pseudoranges would indeed increase by that amount every day. However, as we’ve seen, an incorrect clock offset doesn’t prevent us from calculating the correct position.
(Nevertheless there are of course relativistic effects to account for, which Ciechanow proceeds to mention and which are explained in more detail in the other link I shared here: https://news.ycombinator.com/item?id=47861535 )
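The figures in the quote are easy to sanity-check. A quick back-of-the-envelope in Python, using only the numbers quoted above (the mile conversion factor is the standard one, added here for illustration):

```python
# Sanity check on the quoted GPS figures.
rate = 4.4647e-10            # net relativistic clock gain, seconds per second
seconds_per_day = 86_400
drift = rate * seconds_per_day        # daily clock drift, ~38.6 microseconds
c = 299_792_458                       # speed of light, m/s
pseudorange_error = c * drift         # light-travel distance in that drift
print(f"{drift * 1e6:.1f} us/day -> {pseudorange_error / 1609.344:.1f} mi/day")
```

Which reproduces both the ~38 µs/day clock drift and the ~7.2 mi/day pseudorange growth the quote mentions (and, as the quote explains, the latter does not translate into a position error of that size).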
> If you want to go much deeper, Bartosz Ciechanowski's interactive explainer on GPS is the gold standard. It covers signal modulation, orbital mechanics, and receiver architecture in far more detail than we do here.
You don't need to belittle someone else's work. It's a series of articles, and the author has two more articles that aren't related to Ciechanowski's articles at all.
It's implicit state that's also untyped - it's just a String -> String map with no canonical single source of truth about which environment variables are consulted, when, why, and in what form.
Such state should be strongly typed, have a canonical source of truth (which can then also be reused to document the environment variables the code supports, and e.g. allow reading the same options from configs, flags, etc.), and then be explicitly passed to the functions that need it, e.g. as function arguments or as members of an associated instance.
This makes it easier to reason about the code (the caller will know that some module changes its functionality based on some state variable). It also makes it easier to test (both from the mechanical point of view of having to set environment variables which is gnarly, and from the point of view of once again knowing that the code changes its behaviour based on some state/option and both cases should probably be tested).
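A minimal sketch of that pattern in Python (all names here are illustrative, not from any particular codebase): the environment is read exactly once at the program's edge into a typed config object, and everything downstream only sees explicit arguments.

```python
import os
from dataclasses import dataclass

# Single canonical description of the environment the program consults.
@dataclass(frozen=True)
class Config:
    cache_dir: str
    verbose: bool

def load_config(env=os.environ) -> Config:
    # Read the environment once, at the edge; defaults live here too.
    return Config(
        cache_dir=env.get("APP_CACHE_DIR", "/tmp/cache"),
        verbose=env.get("APP_VERBOSE", "0") == "1",
    )

def do_work(cfg: Config) -> str:
    # Behaviour depends only on the explicit argument - trivial to test
    # without touching os.environ at all.
    return "verbose" if cfg.verbose else "quiet"

print(do_work(load_config({"APP_VERBOSE": "1"})))  # prints "verbose"
```

Because `load_config` takes the map as a parameter, tests can pass a plain dict instead of mutating the real process environment.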
That's exactly why access to global mutable state should be limited to as small a surface area as possible, so that 99% of code can be locally deterministic and side-effect free, only using values that are passed into it. That makes testing easier too.
Environment variables can change while the process is running and are not memory safe (though I suspect Node tries to wrap access with a lock). Meaning if you check a variable at point A, enter a branch, and check it again at point B, it's not guaranteed they will have the same value. This can cause you to enter "impossible" conditions.
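A contrived sketch of that hazard in Python; the second write stands in for a mutation by another thread between the two checks (the variable name is made up):

```python
import os

os.environ["FLAG"] = "on"
a = os.environ.get("FLAG")      # check at point A
os.environ["FLAG"] = "off"      # stand-in for a concurrent mutation
b = os.environ.get("FLAG")      # re-check at point B
print(a, b)                     # the two checks disagree: on off
```

Any code that assumes `a == b` after the first check has silently entered an "impossible" state.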
Wait, is it expected for them to be able to change? According to this SO answer [0] it's only really possible through GDB or "nasty hacks" as there's no API for it.
Rust cannot help you if a race condition crosses an API boundary. No matter what language you use, you have to think about the system as a whole. Failure to do that results in bugs like this.
The bigger problem here is that it seems like the Rust utilities were rushed out without extensive testing or security analysis, simply because they are written in Rust. And this isn't the first serious flaw because of that.
Doesn't surprise me coming from Canonical though.
At least that's the vibe I'm getting from [1] and definitely [2]
> Performance is a frequently cited rationale for “Rewrite it in Rust” projects. While performance is high on my list of priorities, it’s not the primary driver behind this change. These utilities are at the heart of the distribution - and it’s the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me.
> The Rust language, its type system and its borrow checker (and its community!) work together to encourage developers to write safe, sound, resilient software. With added safety comes an increase in security guarantees, and with an increase in security comes an increase in overall resilience of the system - and where better to start than with the foundational tools that build the distribution?
So yes, it sounds like the primary official reason is "enhanced resilience and safety". Given that, I would be interested in seeing the number of security problems in each implementation over time. GNU coreutils does have problems from time to time, but... https://app.opencve.io/cve/?product=coreutils&vendor=gnu only seems to list 10 CVEs since 2005. Unfortunately I can't find an equivalent for uutils, but just from news coverage I'm pretty sure they have a worse track record thus far.
> Performance is a frequently cited rationale for “Rewrite it in Rust” projects.
Rewrite from what? Python/Perl? If the original code is in C there _might_ be a performance gain (particularly if it was poorly written to begin with), but I wouldn't expect wonders.
Could be. The thing is, it kinda doesn't matter; what matters is, what will result in the least bugs/vulnerabilities now? To which I argue the answer is, keeping GNU coreutils. I don't care that they have a head start, I care that they're ahead.
That's short sighted. The least number of bugs now isn't the only thing that matters. What about 5 years from now? 10 years? That matters too.
To me it seems inarguable that eventually uutils will have fewer bugs than coreutils, and also making uutils the default will clearly accelerate that. So I don't think it's so easy to dismiss.
I think they were probably still a little premature, but not by much. I'd probably have waited one more release.
It's extremely early to say if things are rushed or not. It's unsurprising that newer software has an influx of vulnerabilities initially, it'll be a matter of retrospectively evaluating this after that time period has passed.
It's a little different with software since you don't usually have the code or silicon wearing out, but aging software does start to have a mismatch with the way people are trying to use it and the things it has to interact with, which leads to a similar rise of "failure" in the end.
It's not even about API boundaries, it's about logic and the language isn't really responsible for that.
Expecting it to prevent that would be as gullible as expecting it to prevent a TOCTOU or any other kind of non-trivial vulnerability.
That's why, even though I appreciate the role of these slightly safer languages, I still have a bit of a knee-jerk reaction to the exaggerated claims of their benefits and of how much of a piece of crap C is.
Spoiler: crappy programmers write crappy code regardless of the language, so maybe we should focus on teaching students to think about the code they're writing from a different perspective, and to focus on safety and maintainability rather than "flashiness".
So I was saying that Rust monolithicism is NOT based on ignorance and naivety.
Do you see what I mean by nuance? I think you just glanced at the comment, saw that there were negative words around Rust, and lossily compressed it into "Rust bad".
You can bump /proc/$firefox_pid/oom_score_adj to make it a likely target. The easiest way is to make a wrapper script that bumps the score and then starts Firefox. All children will inherit the score.
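A minimal sketch of that wrapper in Python (the helper name is made up; on Linux, `oom_score_adj` ranges from -1000 to 1000, and children inherit it across fork/exec):

```python
import os

def bump_oom_score(pid="self", score=1000, proc="/proc"):
    # Higher oom_score_adj makes the kernel's OOM killer prefer this
    # process; raising our own score requires no privileges. The `proc`
    # parameter exists only so the function can be tested off-/proc.
    with open(f"{proc}/{pid}/oom_score_adj", "w") as f:
        f.write(str(score))

# Wrapper usage (sketch, not executed here): bump ourselves, then exec
# the real browser so the whole Firefox process tree inherits the score:
#   bump_oom_score()
#   os.execvp("firefox", ["firefox"])
```

Using `exec` rather than spawning a child keeps the wrapper out of the process tree while still passing the inherited score down.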