
> I’ve been in the D community for a decade and a half. People keep telling us they don’t like the GC. They keep saying GC is a dealbreaker. They want predictable memory usage and cleanup times. Somehow, none of this is ever an issue with C# and Java. Somehow, the fact that any memory allocator, including glibc’s, can incur arbitrary delays never matters. Well, fine! Fine. Whatever. I disagree with the choice, but as an offshoot of an offshoot, I really can’t afford alienating folks. I’m tired of arguing this point. Nobody has ever said that reference counting was a dealbreaker. So reference counting it is.

I think this is a bit off. It is an issue with Java and C#, but they don't position themselves as C++ replacements the way D does. Even so, you still get lots of complaints about missing RAII and GC pauses (witness how well Go's low-latency pauses have been received).

And yes, glibc could introduce arbitrary delays, but in practice it generally doesn't. Allocation is way faster than GC.

Maybe he was talking about long pauses when you destroy a big C++ object (e.g. a big `std::map`)? That can definitely cause annoyingly long pauses due to deallocation. But he already identified the critical factor - it's predictable. You can fix it deterministically.

Anyway, reference counting is a decent choice. It can be very fast (especially if you are only reference counting big objects and not little integers etc.)



I've come around more to "yeah GC is a problem actually" since I wrote that snippet. It really depends on your use case though.


Yeah I agree. There are plenty of use cases where it isn't a problem, but in those cases you would probably not pick D/C++/Rust/Zig anyway. Or you don't have to at least (I still use Rust in cases where a GC is fine because it's such a great language).


> glibc could introduce arbitrary delays, but it generally doesn't.

Yeah it does - that's why gamedevs and HFT shops don't use any malloc in the fast path, glibc or otherwise.


Incorrect. They don't use malloc in the fast path because allocation is slow, not because it introduces arbitrary delays like GC pauses do. To be clear:

* Not allocating (or stack/bump allocation): extremely fast, completely deterministic.

* malloc: pretty fast, in theory arbitrarily slow but in practice it's well bounded

* GC pauses: very slow (until Go anyway)


> GC pauses: very slow (until Go anyway)

If by "slow" you mean "high latency", then this isn't true. Plenty of GC algorithms provide guaranteed sub-millisecond pauses.


It wasn't the norm until Go focused on it, though, and "sub-millisecond" is not a high bar. Malloc is much, much faster than that.


> It wasn't the norm until Go focused on it though

It's still not the norm, but the point is that those collectors were available.

> and "sub milliseconds" is not a high bar. Malloc is much much faster than that.

Malloc is not guaranteed to return in that time frame. The sub-millisecond latency of these GCs is a real-time guarantee.


> Malloc is not guaranteed to return in that time frame

No but in practice it always does.


It doesn't though, which is why malloc is rarely used in embedded and realtime systems.


If you can avoid allocations where speed matters, then the GC won't slow you down either, as (at least in D) it cannot be triggered if you don't allocate.


The difference is that it's deterministic. If you avoid allocation in C/C++/Zig/Rust then you know nothing is going to slow you down, and at worst you have a couple of allocations that won't cause a big spike.

With GC you can try to avoid allocations, but you might miss a couple and get occasional frame stutters anyway. Also, avoiding allocation is a whole lot harder in languages that use a GC for everything than in languages that provide an alternative.

The real solution for games is explicit control over GC runs - tell it not to collect under any circumstances until the frame is finished rendering; then it can collect until the next frame starts. I assume Unity does this, for example. Still, games are only one application where you don't want big pauses - and one that happens to have convenient, regular points where you probably aren't doing anything.


> The difference is that it's deterministic. If you avoid allocation in C/C++/Zig/Rust then you know nothing is going to slow you down, and at worst you have a couple of allocations that won't cause a big spike.

> With GC you can try to avoid allocations, but you might miss a couple and get occasional frame stutters anyway every now and then. Also avoiding allocation is a whole lot harder in languages that use a GC for everything than languages that provide an alternative

Something that I think doesn't get the attention it deserves in Rust is how explicit it makes allocations at the type level. You don't have a pointer that maybe points to the heap and maybe to the stack (or, in the case of GC, maybe is on the stack right now but won't be when you change how it's used later), or a slice where you need to track down whether it originated as a reference to a fixed-size array or was dynamically allocated. Instead, you either have a type you know is a reference, like &T or &str or a slice, or a type you know was allocated on the heap, like a Box or an Arc or a String or a Vec. I've seen people spend a lot of time profiling projects in other languages to track down where they can optimize their memory usage. I wonder if people who are skeptical of languages with expressive type systems might see the benefits more if they were presented less in terms of what the type system provides you directly and more in terms of what it makes available for tooling to take advantage of. In the case of using the type system to track allocations, the advantage might seem limited to your own code and not helpful for dependencies unless you're willing to dive into their code, but it's easy to overlook that having the information available statically makes it possible for tooling to utilize it, and that applies just as much to dependencies.


In D you have 2 options to deal with that:

- As you described, you can disable the GC while you do your thing. That's not the solution I would recommend, but if you call GC.disable then the GC won't collect anything until you re-enable it (or the program runs out of memory).

- You can mark a function as @nogc, and then the compiler will prevent you from allocating anything that could trigger GC allocation.

There is some level of support for @nogc in the language, and a few libraries to help; the issue is more the standard library, which relies a lot on the GC.


Go's GC throttles allocation if it can't keep up, and it causes bad tail latencies just as much as the GC in .NET or Java - or worse if you have high memory traffic, because Go's GC has far lower throughput.



