Hacker News | notquiteright's comments

It’s not that easy. The federal government is in fact spending over $100 billion in the next decade laying fiber (and that’s only partial subsidies, so the total investment will be even higher) and it still won’t be enough to get fiber everywhere.


Based on what happened last time:

- the telcos pocketed the money and did very little actual deployment, and the FCC did nothing

- the telcos lobbied the FCC to deny Starlink funding when Starlink actually delivered a usable rural service, and got them to declare it "not broadband"

At least the FCC hasn't been completely useless in mobile broadband. I'm sure if 5G starts approaching usable competition with Cable/Telcos, the FCC will kill it for them.


Not sure why the taxpayers are footing the bill.


Because it’s not profitable to build fiber out to many places.


The point of this definition of broadband is to guide FCC subsidies. They’re saying 100/20 is the goal for what everyone should have in every single home and business in the country, and that it’s so crucial that the taxpayer should pay for it if need be.


Taxpayers pay for libraries too, despite most not using them. What's the point of the state or taxes if not to push society in a better direction?

20Mbps isn't even a lot.


Federal policy is not targeted at your personal use case. 20mbit up is an entirely satisfactory BASELINE for residential internet access in 2024. You’re so far into a bubble that you can’t even see it.


I personally like libraries, but you have a point, maybe we should put the money that goes toward library taxes to better use.


We’re talking about you saying the FCC should define broadband as requiring gigabit upload speeds minimum.


Yet the US spends a greater amount EVERY year on building fiber into ever more remote areas (between FCC and NTIA programs, over $10 billion per year) than it has cost Starlink TOTAL to build a network that covers the entire country. Starlink needs more satellites to increase the max number of subscribers they can support, but their costs are actually going down.

Digging trenches in the ground is expensive. It’s a big country. You need immense amounts of labor, and at the end of the day you just have a barely profitable fiber line, if that. Many of the new fiber lines will only survive on further government subsidies. Meanwhile, Elon builds a spaceship to launch satellites and people pay him handsomely for access to it. That’s not to mention that construction is heavily unionized, requires locality-by-locality planning permission and rights of way, etc.


I knew these were all “I don’t know”. Edit: but yes, deceptive phrasing by the quiz author. If you don’t know that, then yeah, you’re probably not a great C programmer. Adjust your mental model: C is meant to be useful on a very wide variety of platforms, including ones that don’t follow the conventions (ASCII character table, size of byte, alignment requirements, etc.) of the most common platforms.

Even including knowledge of what is IB and UB, C is still simpler than most languages in common use these days.


If “the standard doesn’t define it” is a reason for “I don’t know”s, then all of Rust is “I don’t know”. Seems silly.


Rust still behaves in a consistent way; anything undefined by the C standard can and often will change between platforms, compilers, and optimisation levels.


Undefined per se is not a bad thing. It means "the compiler can make choices based on certain assumptions in the name of performance". The problem with C is that you must have a comprehensive dictionary in your brain with tons of corner cases to know what is or is not undefined in any given compiler setting.

If C had a consistent set of rules, and/or could easily tag something as undefined à la "unsafe", or had some sort of visual reminder signal (like function name prefixes or suffixes), that would go a long way to making it better.


> The problem with C is that you must have a comprehensive dictionary in your brain with tons of corner cases to know what is or is not undefined in any given compiler setting.

I'm reminded of a quote: [0]

> Ada has made you lazy and careless.

> You can write programs in C that are just as safe by the simple application of super-human diligence.

To your point about performance:

> compiler can make choices based on certain assumptions in the name of performance

I don't think the performance argument really applies with modern optimising compilers. The (too often overlooked) Ada language is safer than C, but has about the same performance, provided you avoid the features with runtime overhead. Similarly I don't think the performance of Rust, and in particular its Safe Rust subset, suffers much for its lack of undefined behaviour.

It's true that, say, Java, doesn't perform as well as C even today, but Java requires a slew of runtime checks and is hostile to micro-optimised memory-management. In Ada, things like array bounds checks can be enabled for debug builds but disabled for production builds, which isn't easy to do in C.

> If C could have a consistent set of rules, and/or easily tag something as undefined a la "unsafe" or have some sort of visual reminder signal (like using function name prefixes or suffixes)

This essentially can't be done. Even the MISRA C coding style, which aims to help with this kind of thing, can't completely guarantee to eliminate undefined behaviour from C codebases. To illustrate the challenge of 'containing the risk' with the C language, it's undefined behaviour to do this:

    int i;        /* i never initialised */
    int j = i;    /* reading an indeterminate value: undefined behaviour */
Fortunately other languages do a much better job at offering truly safe subsets (Rust and D for instance).

[0] https://people.cs.kuleuven.be/~dirk.craeynest/quotes.html


> The problem with C is that you must have a comprehensive dictionary in your brain with tons of corner cases to know what is or is not undefined in any given compiler setting.

The cases of undefined behavior in the C standard are independent of compiler settings or options.

> If C could have a consistent set of rules …

The C language has a well-defined standard, but the presence of undefined behavior is a deliberate aspect of that standard.


Deliberateness is not the same as consistency.

You can have well-defined standards that are wildly inconsistent. For example, in Python, in file A:

    def get(dict, key):
      return dict[key]
In file B:

    def get(key, dict):
      return dict[key]

Imagine working in this codebase!


I’m not saying it’s a bad thing, but it is something to be aware of


And between program runs, and even between different executions within the same process. Those two are what separated IB from UB under the original interpretation, back in the last century.

But nowadays UB is something much more complex and dangerous than "I can't ever know the results of this".


Right. Personally, I think those could have been better phrased as "I can't tell". As for C's simplicity, 100% what you said.


> Even including knowledge of what is IB and UB, C is still simpler than most languages in common use these days.

That depends on what you mean by simple. C is a fairly small language, but it has far more than its share of footguns.

In terms of compiler engineering, sure, C is comparatively simple.


> C is still simpler than most languages in common use these days.

Having fewer keywords and implementing fewer fancy concepts doesn't make my day-to-day life easier; it usually makes my day harder.


Would you write a web app in it? (Just out of interest. I'm not trying to be an ass.)


No, wrong level of abstraction. Why do manual memory management in a realm where it doesn’t really matter? In addition to the excessive amount of work (also why I wouldn’t choose Rust for the job), you’re opening yourself up to a whole class of errors for no good reason.


I've written plenty of C web apps back in the old cgi-bin days, for no particular reason, and haven't found the memory management an issue. cgi-bin is a great example of a lifetime-managed execution environment where you can just use the heap as one big arena allocator, with malloc() having unchanged semantics (or, if you're feeling fancy, going to a bump allocator), and free() being no-op'd (or, more realistically, just omitted when writing code). Yeah, your high water mark memory usage might be a bit higher than a more managed approach, but the OS is a perfect garbage collector, and the fastest possible garbage collector, for short-lived executables.

The bigger issue is the mediocre string processing libraries that are so common to C.


I wrote my first web app in C (1999ish). Emitting HTML was way easier than any GUI libraries I had encountered up to that point.

I probably wouldn't write one in C today, but webapps are one of the easier string-heavy things to write in C[1]; you can get away with using a hierarchical memory manager and just free everything after the request is complete.

1: Having to do a lot of string manipulation is usually a good sign "you shouldn't be using C". There's a reason awk was written in 1977.


if it runs on a microcontroller, then yes


It would have been great if they just made a perfect, de-googled version of Chrome with extra features. Instead, they violate your privacy more than Google ever dreamed.


Using a decryption password on boot is less secure than TPM + measured boot/secure boot. Specifically, it’s vulnerable to a two-touch attack. In the first touch, the attacker replaces your bootloader with one that looks identical but steals your password. On the second touch, they now use the password to steal your data.


If the attacker can install a custom boot loader the system is already defective by design.


If the attacker can replace your bootloader, why can't they just get the decryption key from the kernel later? And if you did have Secure Boot, then using a password with encryption at rest is just as secure: you can't change the bootloader and you can't change the OS (since it's encrypted), so you can't exfiltrate the password. The end result is that the TPM doesn't have a practical benefit.


The bit about "two touches" seems to imply physical access, so in absence of TPM the attacker can replace your bootloader with little effort vs with TPM they'd need to break TPM.


You can fix this by asking for the password before letting the attacker replace the bootloader.


Sorry, I missed the bit about Secure Boot.

Yes, with Secure Boot and password your data is safe. But you have to type the password to boot your system, which is impractical for remote and headless systems, or even local systems that need to be available remotely.


You would still use the TPM to verify the software chain. But don't use the TPM to auto-unlock disks. That's the part that feels like a bad idea.


The issue is that data disks and system disks get conflated. For the system disk (anything outside of /home) you generally only care about signing, which FDE does as a side effect. Each user should have their own disk/partition/subvolume with a distinct key that is retrieved via PAM.

This achieves two things: I know that I am typing my password into the OS that I or a trusted third party compiled (not one planted by a hacker), and my home directory gets decrypted as part of my normal login routine.

