Lack of DC fast charging makes the range even more limiting. It takes 2.7 hours to add another 150 miles, i.e. roughly 55 miles of range per hour of charging. Modern EVs can add 150 miles of range in 10-15 minutes, which works out to 600-900 miles of range per hour.
It's a recreational vehicle for booting around to and from the country club and out to the fancy places that European gentlemen go on afternoon Sunday drives to impress their mistresses.
Oh that reminds me, I should go check my lottery ticket.
Debian's tooling for packaging Cargo probably got better, so this isn't as daunting as it used to be.
Another likely factor is that the unit of "package" is different in Rust/Cargo than it traditionally is in C and Debian, so 130 crates aren't as much code as 130 Debian packages would have been.
The same amount of code, from the same number of authors, will end up split into more, smaller packages (crates) in Rust. Where a C project would split itself into components internally, in a way that's invisible from outside (multiple `.h` files, in subdirectories or sub-makefiles), Rust/Cargo projects split themselves into crates (in a Cargo workspace and/or a monorepo), and each crate happens to be externally visible as a package. These typically aren't full-size dependencies, just separate compilation units. It's like cutting a pizza into 4 or 16 slices: you get more slices, but that doesn't make the pizza bigger.
From a security perspective, I've found that splitting large projects into smaller packages actually helps with reviewing the code. Each sub-package is more focused on one goal, with a smaller public API, so it's easier to see whether it's doing what it claims to than if it were part of a monolith with a larger internal API and more state.
Except that the hardware doesn’t necessarily offer perfect integer scaling. Oftentimes, it only provides blurry interpolation that looks less sharp than a corresponding native-resolution display.
The monitor may or may not offer perfect scaling, but at least on Windows the GPU drivers can do it on their side, so the monitor receives a native-resolution signal that's already been pixel-doubled correctly.
Most modern games already have built-in scaling options. You can set the game to run at your screen’s native resolution but have it do the rendering at a different scale factor. Good games can even render the HUD at native resolution and the graphics at a scaled resolution.
Games are not what I had in mind. Last time I checked, most graphics drivers didn’t support true integer scaling (i.e. nearest-neighbor, no interpolation).
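For reference, "true" integer scaling is simple to state precisely: every output pixel is a verbatim copy of exactly one input pixel, with no blending. A minimal sketch of a nearest-neighbor integer upscaler (assuming 32-bit pixels; the names and buffer layout are illustrative, not any driver's actual code):

#include <stddef.h>
#include <stdint.h>

/* Upscale src (w x h, 32-bit pixels) into dst (w*n x h*n) by an
 * integer factor n: each source pixel becomes an n x n block of
 * identical pixels, with no interpolation anywhere. */
static void integer_upscale(const uint32_t *src, uint32_t *dst,
                            size_t w, size_t h, size_t n)
{
    for (size_t y = 0; y < h * n; y++) {
        const uint32_t *src_row = src + (y / n) * w;
        uint32_t *dst_row = dst + y * (w * n);
        for (size_t x = 0; x < w * n; x++)
            dst_row[x] = src_row[x / n];
    }
}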
With very high-PPI displays, gamma-correct interpolated scaling is far better than nearest-neighbor scaling.
The idea is to make the pixels so small that your eyes can't resolve individual pixels anyway. Interpolation appears correct because you're viewing it through a low-pass filter: the physical limit of your eyes.
Reverting to nearest neighbor at high PPI would introduce new artifacts, because the aliasing would create unpleasant, unnatural frequencies in the image.
Most modern GPU drivers (Nvidia's in particular) will do fixed-multiple scaling if that's what you want. Nearest neighbor is not good, though.
Extra "wasted" capacity has many benefits for EV battery packs.
It allows distributing load across more cells. It allows using cells with a lower C-rating, which typically have better energy density and longer lifespan.
Distributing load reduces energy loss during charging and makes cooling less demanding, while allowing higher total charging power, which means adding more range per minute.
The excess capacity also makes it easier to avoid fully charging and discharging cells, which prolongs the life of NMC cells.
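To put rough numbers on the load-distribution point: resistive loss scales with the square of per-cell current, so spreading the same charging current across more parallel cells cuts heat superlinearly per cell. A back-of-the-envelope sketch (the cell resistance and pack sizes are made-up illustrative numbers):

#include <stdio.h>

/* Toy model: the same total charging current split across N parallel
 * cells. Per-cell loss is I^2 * R, so doubling the cell count
 * quarters the per-cell loss and halves the total loss. */
int main(void)
{
    const double total_current = 200.0; /* amps into the pack */
    const double cell_r = 0.02;         /* ohms per cell (made up) */

    for (int cells = 50; cells <= 200; cells *= 2) {
        double i_cell = total_current / cells;
        double watts = cells * i_cell * i_cell * cell_r;
        printf("%3d parallel cells: %.1f A/cell, %.1f W of heat\n",
               cells, i_cell, watts);
    }
    return 0;
}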
(but I agree that cars are an inefficient mode of transport in general)
Nothing in the C standard requires bytes to have 8 bits either.
There's a massive gap between what C allows, and what real C codebases can tolerate.
In practice, you don't have room to store lengths alongside pointers without disturbing sizeof and pointer/integer casts. Fil-C and ASan have to smuggle that information out of band.
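To spell out why in-band metadata is off the table: real C code observes pointer representation directly, so a "fat pointer" carrying its own length would be visible immediately. A sketch of the assumptions such codebases bake in (the FatPtr struct is a hypothetical in-band layout, not anything Fil-C actually uses):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical fat pointer that stores its length in band. */
struct FatPtr {
    void  *addr;
    size_t len;
};

int main(void)
{
    /* Real codebases assume a pointer is one machine word... */
    assert(sizeof(void *) == sizeof(uintptr_t));

    /* ...and that pointers survive round-trips through integers. */
    int x = 42;
    uintptr_t bits = (uintptr_t)&x;
    assert(*(int *)bits == 42);

    /* An in-band fat pointer breaks the first assumption outright:
     * sizeof doubles, which changes struct layouts, ABIs, and any
     * code that memcpy()s pointers around. Hence: out of band. */
    assert(sizeof(struct FatPtr) > sizeof(void *));
    return 0;
}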
Note that Fil-C is a garbage-collected language that is significantly slower than C.
It's not a target for writing new code (you'd be better off with C# or golang); it's more like sandboxing with WASM, except that Fil-C crashes more precisely.
From the topic starter: "I've posted a graph showing nearly 9000 microbenchmarks of Fil-C vs. clang on cryptographic software (each run pinned to 1 core on the same Zen 4). Typically code compiled with Fil-C takes between 1x and 4x as many cycles as the same code compiled with clang"
Thus, Fil-C compiled code is 1 to 4 times as slow as plain C. That is not in the "significantly slower" ballpark where most interpreters live. The ROOT C/C++ interpreter, for example, is 20+ times slower than compiled binary code.
Cryptographic software is probably close to a best case scenario since there is very little memory management involved and runtime is dominated by computation in tight loops. As long as Fil-C is able to avoid doing anything expensive in the inner loops you get good performance.
> best case scenario since there is very little memory management involved and runtime is dominated by computation in tight loops.
This describes most C programs and many, if not most, C++ programs. Basically, this is how C/C++ code is written: by avoiding memory management, especially in tight loops.
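Concretely, the pattern is to hoist allocation out of the hot path entirely, so the inner loops never touch the allocator. A minimal sketch:

#include <stdio.h>
#include <stdlib.h>

#define N 1024

/* Typical C style: allocate the scratch buffer once up front and
 * reuse it, so the hot loops below are pure computation with no
 * malloc/free anywhere inside them. */
int main(void)
{
    double *scratch = malloc(N * sizeof *scratch);
    if (!scratch)
        return 1;

    double total = 0.0;
    for (int iter = 0; iter < 1000; iter++) {
        for (int i = 0; i < N; i++)
            scratch[i] = (double)i * iter;
        for (int i = 0; i < N; i++)
            total += scratch[i];
    }
    printf("%f\n", total);

    free(scratch);
    return 0;
}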
This depends heavily on what problem domain you're talking about. For example, a DBMS is necessarily going to shuffle a lot of data into and out of memory.
It depends. Consider DuckDB or another heavily vectorized columnar DB: there's a big part of the system (SQL parser, storage chunk manager, etc.) that's not especially performance-sensitive and a set of tiny, fast kernels that do things like predicate-push-down-based full table scans, ART lookups, and hash table creation for merge joins. DuckDB is a huge pile of C++. I don't see a RIIR taking off before AGI.
But you know what might work?
Take current DuckDB, compile it with Fil-C, and use a new escape hatch to call out to the tiny unsafe kernels that do vectorized, high-speed columnar data operations on fixed memory areas (buffers) that the safe code set up on behalf of the unsafe kernels. That's how it'd probably work if DuckDB were implemented in Rust today, and it's how it could be made to work with Fil-C without a major rewrite.
Granted, this model would require Fil-C's author to become somewhat less dogmatic about having no escape hatches whatsoever, but I suspect he'll un-harden his heart as his work gains adoption and legitimate use cases for an FFI/escape hatch appear.
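To make the shape of that split concrete, here's a purely hypothetical sketch. Fil-C has no escape hatch or FFI like this today, and FILC_UNSAFE_KERNEL is an invented marker, not a real attribute; the point is only the division of labor between checked shell and unchecked kernel:

#include <stddef.h>
#include <stdint.h>

/* HYPOTHETICAL: pretend this marks a function compiled without
 * Fil-C's checks. No such mechanism exists; illustration only. */
#define FILC_UNSAFE_KERNEL

/* The unsafe kernel: a tight, vectorizable scan over a fixed
 * buffer that the safe code allocated and sized on its behalf. */
FILC_UNSAFE_KERNEL
static size_t count_matches(const int32_t *col, size_t n, int32_t key)
{
    size_t hits = 0;
    for (size_t i = 0; i < n; i++)
        hits += (col[i] == key);
    return hits;
}

/* The safe (checked) shell owns allocation, bounds, and lifetimes,
 * and hands the kernel only a pointer + length it has validated. */
size_t scan_column(const int32_t *col, size_t n, int32_t key)
{
    if (col == NULL || n == 0)
        return 0;
    return count_matches(col, n, key);
}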
> DuckDB is a huge pile of C++. I don't see a RIIR taking off before AGI.
While I'm not a big fan of rewriting things, all of DuckDB has been written in the last 10 years. Surely a rewrite with the benefit of hindsight could reach equivalent functionality in less than 10 years?
For one, DuckDB includes all of SQLite (and many other dependencies). It knows how to do things like efficiently query Parquet files in S3. It's expansive: a Swiss Army knife for working with data wherever it's at.
SQLite is a "self-contained system" depending on no external software except the C standard library for the target OS:
> A minimal build of SQLite requires just these routines from the standard C library:
> Most builds also use the system memory allocation routines:
> malloc(), realloc(), free()
> Default builds of SQLite contain appropriate VFS objects for talking to the underlying operating system, and those VFS objects will contain operating system calls such as open(), read(), write(), fsync(), and so forth
I was thinking less about the DB data itself and more about temporary allocations that have to be made per-request. The same is true for most server software. Even if arenas are used to reduce the number of allocations you're still doing a lot more memory management than a typical cryptographic benchmark.
Most databases do almost no memory management at runtime, at least not in any conventional sense. They mostly just DMA disk into and out of a fixed set of buffers. Objects don't have a conventional lifetime.
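In code, "a fixed set of buffers" typically means a buffer pool allocated once at startup; pages move between disk and these fixed slots for the life of the process, so no buffer ever has a conventional malloc/free lifetime. A stripped-down sketch (sizes and names are illustrative):

#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE  8192
#define POOL_PAGES 128

/* A DBMS-style buffer pool: one allocation at startup, after which
 * disk pages are read into and written out of these fixed slots.
 * No individual buffer is ever freed while the server runs. */
struct BufferPool {
    unsigned char pages[POOL_PAGES][PAGE_SIZE];
    int page_id[POOL_PAGES]; /* disk page held by each slot, -1 = free */
};

struct BufferPool *pool_create(void)
{
    struct BufferPool *bp = malloc(sizeof *bp);
    if (bp)
        for (int i = 0; i < POOL_PAGES; i++)
            bp->page_id[i] = -1;
    return bp;
}

/* Fetch a disk page into the pool (actual disk I/O elided). */
unsigned char *pool_fetch(struct BufferPool *bp, int page_id)
{
    for (int i = 0; i < POOL_PAGES; i++) {
        if (bp->page_id[i] == page_id)
            return bp->pages[i]; /* already resident */
        if (bp->page_id[i] == -1) {
            bp->page_id[i] = page_id;
            memset(bp->pages[i], 0, PAGE_SIZE); /* stand-in for a read */
            return bp->pages[i];
        }
    }
    return NULL; /* pool full; a real DBMS would evict a page here */
}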
Along with the sibling comment, microbenchmarks should not be used as authoritative data when the use case is full applications. For that matter, highly optimized Java or Go may be "1 to 4 times as slow as plain C". Fil-C has its merits, but they should be described carefully, just as with any technology.
I maintain that microbenchmarks are not convincing, but you have a fair point that GP's statement is unfounded, and now I've made a reply to GP to that effect.
WASM is a sandbox. It doesn't obviate memory safety measures elsewhere. A program with a buffer overflow running in WASM can still be exploited to do anything that program can do within its WASM sandbox, e.g. disclose information it shouldn't. WASM ensures such a program can't escape its container, but memory safety bugs within a container can still be plenty harmful.
You can buffer overflow in Fil-C and it won't detect it unless the entire buffer is its own stack or heap allocation with nothing following it (and even then the size needs to be a multiple of 16 bytes, because that's padding Fil-C allows you to overflow into). So it arguably isn't much different from WASM.
Quick example:
#include <stdio.h>  /* printf */
#include <stddef.h> /* size_t */

typedef struct Foo {
    int buf[2];       /* two elements: valid indices are 0 and 1 */
    float some_float; /* buf[2] lands here in memory */
} Foo;

int main(void) {
    Foo foo = {0};
    /* the loop runs one past the end of buf: i == 2 overflows into some_float */
    for (size_t i = 0; i < 3; ++i) {
        foo.buf[i] = 0x3f000000;
        printf("foo.buf[%zu]: %d\n", i, foo.buf[i]);
    }
    printf("foo.some_float: %f\n", foo.some_float);
}
This overflows into the float without causing any panic, and prints 0.5 for the float (0x3f000000 is the IEEE-754 bit pattern of 0.5f).
At least WASM can be added incrementally. Fil-C is all or nothing: it cannot be used without rebuilding everything. In that respect a sandbox ranks lower in comprehensiveness but higher in practicality, and that's the main issue with Fil-C. It's extremely impressive, but it's not a practical solution for C's memory safety issues.
What language do people considering C for a new project actually consider? Rust is the obvious one, which we aren't going to discuss, because then we won't be able to talk about anything else. Zig is probably almost as well loved and defended, but it isn't actually memory safe, just much easier to be memory safe in. As you say, C# and Go; maybe also F# and OCaml. If we're just writing simple C-style stuff, none of those would look all that different. Go has some UB related to concurrency that people run into, but most of these simple utilities are either single-threaded or fine-grained parallel, which is pretty easy to get right. Julia too, maybe?
I keep ignoring Nim for some reason. How fast is it with all the checks on? The benchmarks for it, Julia, and Swift typically turn off safety checks, which is not how I would run them.
Since anything/0 = infinity, these kinds of things always depend upon what programs do and, as a sibling comment correctly observes, how much the checks interfere with SIMD autovectorization and several other things.
That said, as a rough guideline, nim c -d=release can certainly be almost the same speed as -d=danger and is often within a few (single-digit) percent. E.g.:
A GC'd language isn't necessarily significantly slower than C; you should qualify your statements. Moreover, this is a variant of C, which means the programs are likely less liberal with heap allocations. It remains to be seen how much of a slowdown Fil-C imposes under normal operating conditions. And although it is indeed primarily suited to existing programs, its use in new programs isn't necessarily worse than, e.g., C# or Go. If performance is the deciding factor, probably use Rust, Zig, Nim, D, etc.
You seem to think of "Rust enthusiasts" as some organized group with a goal of writing Rust for the sake of it. Rust is long past that extremely-early-adopter phase.
What you're seeing now is developers who are interested in writing a better version of whatever they're already working on, and they're choosing Rust to do it. It's not a group of "Rust enthusiast" ninjas infiltrating projects. It's more and more developers everywhere adopting Rust as a tool to get their job done, not to play language wars.
Nah, I called out Redox, and another commenter pointed out ripgrep as an even better example of what I'd prefer to see; those are also by what I would call Rust enthusiasts. I don't think of them as a monolithic group.
Where we disagree is that I would not call injecting Rust into an established project "writing a better version". I would love it if they did write a better version, so we could witness its advantages before switching to it.
They are referring to adopting the Sequoia PGP library, which is written in Rust. There are plenty of benefits to using Sequoia, which you can investigate now; no need to wait for the integration to happen. Not coincidentally, the RPM package manager also adopted Sequoia PGP.
First off, the mail references far more Rust adoption than just Sequoia, but since you bring it up: here is how RPM adopted Sequoia in Fedora-land. There was a proposal, a discussion with developers about the ramifications (including making sure the RPMs built on all architectures), and there were votes and approvals. Costs, benefits, and alternatives were analyzed. Here's a page that links to the various votes and discussions: https://fedoraproject.org/wiki/Changes/RpmSequoia
Can't you see how much more thought and care went into this than is on display in this Debian email (the "if your architecture is not supported in 6 months then your port is dead" email)?
The problem here is that C is too basic and dated, with inadequate higher-level abstractions, which makes writing robust and secure software extra difficult and laborious. "Learning the underlying hardware" doesn't solve that at all.
Debian supports dozens of architectures, so it needs to abstract away architecture-specific details.
Rust gives you as much control as C for optimizing software, but at the same time neither Rust nor C really exposes the actual underlying hardware (on purpose). They target an abstract machine with undefined behaviors that don't behave like the hardware. Their optimisers will stab you in the back if you assume you can just do what the hardware does. And even if you could write directly for every logic gate in your hardware, that still wouldn't help with the fragility and tedium of writing secure parsers and correct package validation logic.
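A classic concrete example of the optimiser "not doing what the hardware does": on mainstream CPUs, signed integer addition wraps around, but in C it's undefined behavior, so the compiler may assume it never happens and delete your overflow check entirely:

#include <limits.h>
#include <stdio.h>

/* Looks like a sane overflow check if you reason about the hardware
 * (two's-complement wraparound). But signed overflow is undefined
 * behavior in C, so the compiler is allowed to assume x + 1 never
 * wraps and optimize this function to always return 0. */
int will_overflow(int x)
{
    return x + 1 < x;
}

int main(void)
{
    /* With optimizations on (e.g. gcc -O2), this commonly prints 0,
     * even though INT_MAX + 1 wraps to INT_MIN on the hardware. */
    printf("%d\n", will_overflow(INT_MAX));
    return 0;
}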