Hacker News | raggi's comments

There are no good reasons we don't do this in the standards themselves. C, C++, and POSIX should all be working on editions that add safer APIs and mark unsafe APIs as deprecated, to start a long-term migration. We know how to do this, and we've had a lot of success with it. There are real engineering concerns, sure, but they're not reasons not to do it. Compilers and library chains can retain support for the less safe variants for plenty of time.

The reason this wasn't done by the standards committees is that they spent decades refusing to admit there was even a problem they could help fix. And if there was a problem, it was easily avoided by just writing better code. And if writing better code wasn't enough, well it was certainly too expensive to provide as a debug option. And if it wasn't too expensive to provide as a debug option, the implementors should really lead the way first. And on and on.

The C committee at least seems to get it now. The C++ committee still doesn't, led in large part by Bjarne.


This is a misrepresentation based on a misunderstanding of how standardization works. The C standards committee has long recognized the need for better safety and carefully made it possible for C to be implemented safely. But the process is that vendors implement something and then come together during standardization so that it is compatible; the committee is not a government that prescribes top-down what everybody has to do. Vendors did not bother to provide safer C implementations, and safety features (such as bounds checking) did not get much attention from users in the past. So I am happy to see that there is now more interest in safety, because as soon as there are solutions we can start putting them into the standard.

(We can do some stuff before this, but it is always a bit of a fight with the vendors, because they do not like it at all if we tell them what to do, especially the clang folks.)


Stop mixing up C and C++; tons of people on Unix still hate C++ (Motif a bit less) for being un-Unixy and mega-complex, even more so today. Die-hard Unix and C people created Plan 9 and now Go, which is maybe the other successor to C, after Inferno and Limbo, where programming is far simpler than the whole C and POSIX clusterfuck (even Plan 9 and 9front itself can be called a "Unix 2.0").

C++ is something else. Heck, it has often been far more bound to the Windows world (and for a while Be/Haiku) than to Unix itself, by a huge stretch.


It is probably worth noting that C++, like C/Unix, originated at AT&T Bell Labs and was originally referred to as "C with classes." Classes were implemented using a preprocessor.

https://www.tuhs.org/Archive/Documentation/TechReports/USG_L...


Unix's creators called Unix "dead and rotten", with the eulogy delivered by Perl, and Plan 9/9front and Inferno obliterated it. Ditto with C+POSIX against Plan 9's C (and 9front), and Inferno's Limbo, the grandparents of Golang, which Pike and company see as the tool set C++ should have been.

Golang is like Windows NT. C++ is like Windows ME: it might have had its cases for RT performance and multimedia because of having far fewer layers than NT (and it was much better on a single core), but it crumbled fast and it was really easy to shoot yourself in the foot. Windows 2000 and XP killed it for good.

Some day Golang will be performant enough (even with CSP) on multiple cores that all the performance advantages C++ supposedly brings aren't needed at all.

Even C# can be as good as C++ today in tons of cases (AOT and emulators like Ryujinx are not a bluff), and even SBCL for Common Lisp too if you fine-tune the compiler options.


To clarify, I do agree with you that C and C++ have been two distinct languages for a very long time. And C++ doesn’t have much in common with POSIX.

What I disagree with is the idea that C++ was developed completely independently of C (and Unix) - it originated at Bell Labs and was initially just an extension of C with classes. If you looked at the document I linked to, you would see that Bjarne Stroustrup thanks Dennis Ritchie in it for being a source of good ideas and useful problems. I don’t think I need to explain who Dennis Ritchie was for C and Unix.


I agree, but are you responding to me?

Yup, and it's not just the standards committees. Look at TR 24731 as an example: an absolute no-brainer for security, adding (shock, horror!) bounds checking to long-standing trouble-prone APIs, and it's been around for 20 years, yet the response from most compiler writers/library authors has been "lalalalala I'm not listening I'm not listening". Even then it only got as far as it did due to relentless pressure from Microsoft; from anyone else it would have been rejected outright.

Having said that, some of it may be due to "it's from Microsoft, we can't ever use it". I'm actually surprised not to have seen any anti-MS diatribes in the discussion so far.


Despite all the security-denial attitude, WG21 is doing much better than WG14.

Still looking forward to the day C supports something like std::string, std::string_view, std::span, std::array.

Which, starting with C++26, finally have a standards-compliant story for having bounds checks enabled by default.


The C charter has a rule of "no invention".

Anything needs to be demonstrated and used in practice before being included in the standard. The standard is only meant to codify existing practices, not introduce new ideas.

It's up to compiler developers to ship first, standardize later.


That produces a bit of a chicken-and-egg problem for a stdlib overhaul. Compilers and libc implementations don't have a strong reason to implement safer APIs, because if an API is non-standard then projects that want to be portable won't use it, but it won't get standardized unless they do add safer APIs.

So the best hope is probably for a third-party library with safer APIs to get popular enough that it becomes a de facto standard.


I think the real failing is that new language features must then be prototyped by people who have a background in compilers. That's a very small subset of the overall C community.

I don't have any clue how to patch clang's front end. I'm not a language or compiler person. I just want to make stuff better. There needs to be a playground for people like me, and hopefully lib0xc can be that playground.


By adding to the language itself, you mostly make stuff worse. The major reason C is useful is its quite stable syntax and semantics. The language is typically not the area where you want to add things. It's much better (and much easier) to invent function APIs and see how they shake out; if they're good you might get some adoption.

Well, there is Annex K, which is based on a previous Microsoft effort. It is almost universally considered terrible, and few people implemented it.

Immediately what I thought of when I saw /microsoft.

Not all of the APIs were brain-dead. They just ignored all previous developments and in the proposal they didn't even remove the C++-related language.


There are only two kinds of standards: ones that prioritize stability and backwards compatibility over usefulness and security, and ones nobody uses.

C and POSIX aren't related to C++ at all.

A vast number of C++ programs import C and POSIX headers directly, so the language level distinction you wish to make isn’t all that relevant to the subject matter.

Lawyers I have spoken to have stated strongly that they believe collective works doctrine will provide strong protections for most mature and sizable software. I see no mention of these considerations here.

multiple times


Cachy pushed a Limine update last weekend without any testing. It broke everyone with Secure Boot signing. The latest Proton versions are great, but games tend to turn into a laggy mess after a couple of hours and need regular restarts.

It's decent, but it's not all roses at all, and I wouldn't inflict it on non-techies yet.


Ah, I disabled Secure Boot assuming it's pointless and wouldn't work with Arch and dual booting anyway. Maybe I have more to learn.

Perhaps CachyOS should maintain LTS metapackages for more than just the kernel: video drivers, boot managers, and whatnot.

For a "non-gamer" I would probably keep them on Fedora or even Debian.


Same for me in Firefox and Chrome. I'm sure it's one of the DNS blocklists I have and some really crappy marketing tracking code.

Edit: confirmed, loads with a public DNS provider that has no blocklists.


That's not a statement from a lawyer, and it's confused. There is one true thing in there, which is that at least under US law the LLM output may not be copyrightable due to insufficient human involvement, but the rest of the implications are poorly extrapolated.

There are lots of portions of code today, prior to AI authorship, that are already not copyrightable due to the way they are produced. The existence of such code does not decimate the copyright of the overall collective work.


Ok, but how about kicking sick people off of flights, particularly transcontinental ones?


I'm behind this 100%.

I got a SARS virus flying to Udon Thani in 2019. We were seated next to two Thai guys who were so sick they could barely sit up straight. We offered them help and treats because they looked like they were about to vomit.

Plane lands, next day I'm sick. I was laid up for 2 weeks with fever, the shits, and I had a weird spontaneous cough for over 1 month after I got better.

I bet most of that plane got sick, and it was so damn avoidable.


The problem is there can be huge penalties for not flying when you booked. You might not be able to rebook your flight or hotel or days off, so you're stuck either getting everyone sick, or perhaps being out thousands of dollars, or not going on vacation at all.


Then they should have containment suits on the plane. If they see someone THAT sick, stick em in the suit.


> how about kicking sick people off of flights

Difficult for the airline to do, given the myriad of adjacent health-privacy rules.


What if we asked the President to give us a quick rundown of each passenger's health?


What's the threshold for sick?

It'll never happen because everything around travel is too hard to reschedule.


This. LLMs aren't that special; access, _maybe_, but there's plenty of access to terrible rumor mills.


If that occurs, and it's a substantial enough body of output that it is itself copyrightable and not covered by fair use. The confluence of those conditions is intentionally rare.


Deployments like Bedrock have nowhere near SOTA operational efficiency; they're 1-2 OOM behind. The hardware is much closer, but pipeline, scheduling, cache, recomposition, routing, etc. optimizations blow naive end-to-end architectures out of the water.


Do you have evidence for any of this, or are you repeating a bunch of buzzwords you’ve heard breathlessly repeated on Twitter?


Many techniques are documented in papers, particularly those coming out of the Asian teams. I know of work going on in western providers that is similarly advanced. In short, read the papers.


Evidence?

