There are no good reasons we don't do this in the standards themselves. C, C++, and POSIX should all be working on editions that add safer APIs and mark unsafe APIs as deprecated, to start a long-term migration. We know how to do this; we've had plenty of success with it before. There are real engineering concerns, sure, but they're not reasons not to do it. Compilers and library chains can retain support for the less safe variants for plenty of time.
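For concreteness, here's a minimal sketch of what that migration path could look like in plain C, assuming a GCC/Clang-style toolchain with __attribute__((deprecated)); safe_strcpy and legacy_strcpy are made-up names for illustration, not anything from an actual standard:

    /* The old API stays available but warns at compile time; a
       bounds-checked variant is offered alongside it. */
    #include <errno.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Bounds-checked replacement: refuses to copy if dst is too small. */
    static int safe_strcpy(char *dst, size_t dst_size, const char *src)
    {
        if (dst == NULL || src == NULL || dst_size == 0)
            return EINVAL;
        size_t len = strlen(src);
        if (len >= dst_size)
            return ERANGE;            /* would overflow: fail loudly */
        memcpy(dst, src, len + 1);
        return 0;
    }

    /* The legacy call keeps working for now, but every use produces a
       warning, so code can migrate over a long deprecation window. */
    __attribute__((deprecated("use safe_strcpy with an explicit size")))
    char *legacy_strcpy(char *dst, const char *src)
    {
        return strcpy(dst, src);
    }

    int main(void)
    {
        char name[16];
        if (safe_strcpy(name, sizeof name, "standards") == 0)
            puts(name);
        return 0;
    }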
The reason this wasn't done by the standards committees is that they spent decades refusing to admit there was even a problem they could help fix. And if there was a problem, it was easily avoided by just writing better code. And if writing better code wasn't enough, well, it was certainly too expensive to provide as a debug option. And if it wasn't too expensive to provide as a debug option, the implementors should really lead the way first. And on and on.
The C committee at least seems to get it now. The C++ committee still doesn't, led in large part by Bjarne.
This is a misrepresentation based on a misunderstanding of how standardization works. The C standards committee has long recognized the need for better safety and has been careful to keep it possible to implement C safely. But the process is that vendors implement something first and then come together during standardization to make it compatible; the committee is not a government that prescribes top-down what everybody has to do. Vendors did not bother to provide safer C implementations, and safety features (such as bounds checking) did not get much attention from users in the past. So I am happy to see that there is now more interest in safety, because as soon as there are solutions we can start putting them into the standard.
(We can do some stuff before this, but it is always a bit of a fight with the vendors, because they do not like it at all if we tell them what to do, especially the clang folks.)
Stop mixing C and C++. Tons of people on Unix still hate C++ (Motif a bit less) for being un-Unixy and mega-complex, even more so today. Die-hard Unix and C people created Plan 9 and now Go, which is maybe the other successor to C, after Inferno and Limbo, where programming is much simpler than the whole C and POSIX clusterfux (even Plan 9 and 9front itself could be called a "Unix 2.0").
C++ is something else. Heck, it's often far more bound to the Windows world (and for a while to Be/Haiku) than to Unix itself, by a huge stretch.
It is probably worth noting that C++, like C/Unix, originated at AT&T Bell Labs and was originally referred to as "C with classes." Classes were implemented using a preprocessor.
Unix's own creators called Unix "dead and rotten", with the eulogy delivered by Perl, and Plan 9/9front and Inferno obliterated it. Ditto for C+POSIX versus Plan 9's C (and 9front's) and for Inferno and Limbo, the grandparents of Golang, which Pike and co. see as the toolset C++ should have been.
Golang is like Windows NT. C++ is like Windows ME: it might have its use cases in RT performance and multimedia because it has far fewer layers than NT (and does much better on a single core), but it crumbles down fast and makes it really easy to shoot yourself in the foot. Windows 2000 and XP killed it for good.
Some day Golang will be performant enough (even with CSP) across multiple cores that all the 'performance' advantages C++ supposedly brings won't be needed at all.
Even C# can be as good as C++ today in tons of cases (AOT and emulators like Ryujinx are not a bluff), and so can SBCL for Common Lisp if you fine-tune the compilation options.
To clarify, I do agree with you that C and C++ have been two distinct languages for a very long time. And C++ doesn’t have much in common with POSIX.
What I disagree with is the idea that C++ was developed completely independently of C (and Unix) - it originated at Bell Labs and was initially just an extension of C with classes. If you looked at the document I linked to, you would see that Bjarne Stroustrup thanks Dennis Ritchie in it for being a source of good ideas and useful problems. I don’t think I need to explain who Dennis Ritchie was for C and Unix.
Yup, and it's not just the standards committees. Look at TR 24731 as an example: an absolute no-brainer for security, adding (shock, horror!) bounds checking to long-standing trouble-prone APIs, that's been around for 20 years, and the response from most compiler writers/library authors has been "lalalalala I'm not listening I'm not listening". Even then it only got as far as it did due to relentless pressure from Microsoft; coming from anyone else it'd have been rejected outright.
Having said that, some of it may be due to "it's from Microsoft, we can't ever use it". I'm actually surprised not to have seen any anti-MS diatribes in the discussion so far.
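For anyone who hasn't looked at it, the TR 24731-1 (now Annex K) shape is roughly the following. This is only an illustrative sketch, and the feature-test check is the whole sad story, since very few C libraries actually define __STDC_LIB_EXT1__ (glibc never shipped it, and MSVC's _s functions are close relatives rather than strictly conforming):

    /* Opt in to the bounds-checked interfaces before including <string.h>. */
    #define __STDC_WANT_LIB_EXT1__ 1
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char dst[8];
    #ifdef __STDC_LIB_EXT1__
        /* Annex K variant: the destination size is part of the call.
           An oversized source would trip the runtime-constraint handler
           instead of silently overflowing dst. */
        errno_t err = strcpy_s(dst, sizeof dst, "hello");
        printf("strcpy_s returned %d: %s\n", (int)err, dst);
    #else
        /* Classic API: the library has no idea how big dst really is. */
        strncpy(dst, "hello", sizeof dst - 1);
        dst[sizeof dst - 1] = '\0';
        printf("no Annex K in this libc: %s\n", dst);
    #endif
        return 0;
    }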
Anything needs to be demonstrated and used in practice before being included in the standard. The standard is only meant to codify existing practices, not introduce new ideas.
It's up to compiler developers to ship first, standardize later.
That produces a bit of a chicken-and-egg problem for a stdlib overhaul. Compilers and libc implementations don't have a strong reason to implement safer APIs, because if an API is non-standard then projects that want to be portable won't use it, but it won't get standardized unless implementations add it first.
So the best hope is probably for a third-party library with safer APIs to get popular enough that it becomes a de facto standard.
I think the real failing is that new language features then must be prototyped by people who have a background in compilers. That's a very small subset of the overall C community.
I don't have any clue how to patch clang's front end. I'm not a language or compiler person. I just want to make stuff better. There needs to be a playground for people like me, and hopefully lib0xc can be that playground.
By adding to the language itself, you mostly make stuff worse. The major reason C is useful is its quite stable syntax and semantics. The language is typically not the place where you want to add things; it's much better (and much easier) to invent function APIs. See how they shake out, and if they're good you might get some adoption.
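And the nice thing is that this kind of API can be prototyped in any ordinary library today, with zero compiler front-end work. A rough sketch; buf_t and buf_append are hypothetical names, just to show the shape:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* A buffer that carries its own bounds around. */
    typedef struct {
        char  *data;
        size_t len;
        size_t cap;
    } buf_t;

    /* Appends n bytes of src; returns false instead of overflowing. */
    static bool buf_append(buf_t *b, const char *src, size_t n)
    {
        if (b == NULL || src == NULL || n > b->cap - b->len)
            return false;
        memcpy(b->data + b->len, src, n);
        b->len += n;
        return true;
    }

    int main(void)
    {
        char storage[8];
        buf_t b = { storage, 0, sizeof storage };
        printf("fits: %d\n", buf_append(&b, "hi", 2));            /* 1 */
        printf("fits: %d\n", buf_append(&b, "way too long", 12)); /* 0 */
        return 0;
    }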
A vast number of C++ programs import C and POSIX headers directly, so the language level distinction you wish to make isn’t all that relevant to the subject matter.
Lawyers I have spoken to have stated strongly that they believe collective works doctrine will provide strong protections for most mature and sizable software. I see no mention of these considerations here.
Cachy pushed a Limine update last weekend without any testing.
It broke everyone with secure boot signing.
Head Proton versions are great, but games tend to turn into a laggy mess after a couple of hours and need regular restarts.
It's decent, but it's not all roses at all, and I wouldn't inflict it on non-techies yet.
That's not a statement from a lawyer, and it's confused. There is one true thing in there, which is that at least under US law the LLM output may not be copyrightable due to insufficient human involvement, but the rest of the implications are poorly extrapolated.
There are lots of portions of code today, even prior to AI authorship, that are already not copyrightable due to the way they are produced. The existence of such code does not decimate the copyright of the overall collective work.
I got a SARS virus flying to Udon Thani in 2019. We were seated next to two Thai guys who were so sick they could barely sit up straight. We offered them help and treats because they looked like they were about to vomit.
Plane lands, next day I'm sick. I was laid up for 2 weeks with fever, the shits, and I had a weird spontaneous cough for over 1 month after I got better.
I bet most of that plane got sick, and it was so damn avoidable.
The problem is there can be huge penalties for not flying when you booked. You might not be able to rebook your flight, hotel, or days off, so you're stuck either getting everyone sick, or being out thousands of dollars, or not going on vacation at all.
Only if that occurs and it's a substantial enough body of output that it is itself copyrightable and not covered by fair use. The confluence of those conditions is intentionally rare.
Deployments like Bedrock are nowhere near SOTA operational efficiency, 1-2 orders of magnitude behind. The hardware is much closer, but pipeline, scheduling, cache, recomposition, routing, etc. optimizations blow naive end-to-end architectures out of the water.
Many techniques are documented in papers, particularly those coming out of the Asian teams. I know of work going on in western providers that is similarly advanced. In short, read the papers.