
Please don't get me wrong, as I don't want to start a flame war here, but why do they call D a "systems programming language" when it uses a GC? Or is it optional? I'm just reading through the docs. They do have a command line option to disable the GC, but anyway, this GC thing is, imho, a no-go when it comes to systems programming. It reminds me of Go, which started as a "systems programming language" too but later switched to the more realistic "networking stack".

Regards,




Systems programming languages with GC have existed since the late 1960s, ALGOL 68RS being one of the first.

Since then, a few remarkable ones have been Mesa/Cedar, Modula-2+, Modula-3, Oberon(-2), Active Oberon, Sing#, and System C#.

The reasons most of them haven't won the hearts of the industry so far weren't only technical, but also political.

For example, Modula-3 research died the moment Compaq bought DEC's research labs, and more recently System C# died when MSR disbanded the Midori research group.

If you want to learn how a full workstation OS can be written in a GC-enabled systems programming language, check out the Project Oberon book.

Here is the revised 2013 version; the original is from 1992.

https://people.inf.ethz.ch/wirth/ProjectOberon/index.html


D's garbage collector is both written in D and entirely optional, that alone should qualify it as a systems language.

The GC can be disabled with @nogc; the command-line flags are only needed if you want to disable it for the whole program, or if you want warnings about where allocations happen. https://godbolt.org/g/IQ0O06


> https://godbolt.org/g/IQ0O06

(And, of course, the entire main() disappears on -O1 and above: https://godbolt.org/g/FAWtak.)


The GC is only relevant if you allocate using the GC, because that is the only time the GC can run. If you use @nogc on your functions, you are guaranteed to have no GC allocations. You can use D as a better C: no GC, but the other good features. You can even avoid the D runtime completely if you want.


Thanks for your helpful answer.

...and I don't understand why some people have downvoted my question. Anyway, I'll continue reading the docs :)


I don't understand the downvotes either.

Anyway, the docs are not that great, so Mike Parker has started a blog post series about the GC.[1] If you have questions, drop them in the d.learn forum.[2] They're pretty friendly (most of the time).

[1] http://dlang.org/blog/2017/03/20/dont-fear-the-reaper/ [2] https://forum.dlang.org/group/learn


Didn't downvote, but found "Please don't get me wrong, as I don't want to start a flame here, but…" redundant and slightly annoying. ;)


Why does GC disqualify a language as a systems language?

P.S. I think of a systems language as one that runs directly on the machine, e.g. Swift, C, Go. They operate at the "system" level.


You can definitely do systems programming with a GC, but it does get in the way sometimes. Drivers will most likely need to bypass it entirely; more importantly, libraries to be embedded in other programs (including scripting languages) now have to deal with two garbage collectors rather than one.


> Why does GC disqualify a language as a systems language?

AFAIK a "systems programming language" should have deterministic performance, which Go obviously doesn't. But different people might define "systems" differently.


The people who define Go as a "systems language" use their own terribly useless definition of "systems language". Per wiki: "For historical reasons, some organizations use the term systems programmer to describe a job function which would be more accurately termed systems administrator."

So for them, a language that can be used by DevOps engineers is a systems language. While for sane developers, a systems language is one with deterministic attributes and the ability to be compiled to native code with no runtime, or a minimal one. I.e. one you can code "a system" in.


> I.e. one you can code "a system" in

Yeah, but "a system" doesn't necessarily mean "an operating system". There are lots of kinds of systems. Most people I know who I've talked about this with consider middleware-ish development (think message queuing systems, application servers, etc.) an aspect of "systems programming".


People have been writing OSes in GC-enabled systems programming languages since the late 1960s.

Would you consider the employees of the UK Royal Navy, Xerox PARC, ETHZ, DEC, Compaq, and Microsoft to be insane developers?


Also, does the Go standard "require" a mark+sweep?

If you wrote an implementation that used reference counting, it could be "deterministic" and wouldn't require a runtime.


Sure, but then you'll leak memory whenever there are cycles in your data structures.


Also, it depends on how you define an "OS". If you're in the GNU/BSD/Windows/macOS camp (where init, ls, sh, cat, etc. are part of the OS), you can have Go (as well as Java) in the OS.


I think Go should be pretty deterministic if you don't allocate or free at all at runtime.

Just like C: you lose determinism with malloc() and free(). As any embedded/kernel developer knows, malloc() often takes an unacceptably long time. So can free().


Go does allocate and free at runtime; it just isn't explicit, because it's a garbage-collected language and the garbage collector handles most of the freeing for you. Different GC'd languages handle allocations differently: some have a keyword, some allocate whenever you create an instance of a type over a certain size.

With C, C++, and Rust, allocating memory often boils down to calling something equivalent to malloc and free. While the cost is not precisely known, modern OSes provide guarantees on the time to execute relative to the size requested. This is almost always such a simple and fast operation that allocation gets optimized only after the algorithms and data structures have been tuned and allocation is a known bottleneck. Many applications never get to that stage of optimization (games almost always do: stupid fixed per-frame time budget).

Consider the amount of work the GC does and you'll understand why any GC is generally considered non-deterministic: https://blog.golang.org/go15gc


Does modern C++ or Rust allow you to say "deallocate this pointer here" (like free or delete)?

I was under the impression that Rust (and safe_ptr) deallocate at scope end, which could also cause framerate issues (unless you do ugly scope hacks).

I do agree that you're unlikely to bump into this issue, though.


Yes, you can explicitly call drop ( https://doc.rust-lang.org/std/mem/fn.drop.html ) in Rust, which just uses move semantics to force it out of scope. A similar function could be implemented in C++, and called like drop(std::move(value)).


There are a couple of options in C++.

You can just use delete, as you mentioned; it is a C++-only construct as far as I know.

For the standard library smart pointers, you can release() the owned raw pointer and delete it yourself (just deleting the result of get() would double-free once the smart pointer is destroyed or reset). I would consider this a code smell and ask hard questions of the authors of such code.

The simplest thing to do is to add new scopes. You can introduce as many blocks with { and } as you like. It is a common pattern to lock a mutex with a class that acquires the mutex in its constructor and releases it in its destructor; the standard library includes std::lock_guard[0]. To ensure the smallest possible use of the lock, a new scope can be introduced around just the critical section, with the first line of the block passing the mutex to the scope guard. This is about as small and efficient as it gets, while being exception-safe and easy to write. Hopefully it is also easy to read.

You can introduce new scopes with std::shared_ptr or std::unique_ptr as well. This seems common and reasonable.

[0] - http://en.cppreference.com/w/cpp/thread/lock_guard


With Rust there is the drop() function in the standard library (just a no-op function that takes an owned value as an argument), which lets you shorten the lifetime of a value without having to do ugly things with blocks.


There are techniques used in GC'd languages to reduce or even eliminate runtime allocations: basically, you reuse objects and pre-allocate collections rather than creating new ones. These techniques are common in games programming, and libraries like libgdx[0] (for Java) offer object pools to make this easier. I know some Java server applications use similar techniques.

[0]https://github.com/libgdx/libgdx/wiki/Memory-management#obje...


I can get non-deterministic performance even using C. Also, if I use the Boehm GC with C, does that disqualify C as a systems language?


Is GC mandatory in C? It is in Go. See the difference? C doesn't come with a GC.


GC in D is NOT mandatory.


Could one write a driver in D?


There's a fairly recent growth in the D community of a "better C" movement, that is, people using D for lowest-level systems development, device drivers, etc. This seems to have lit a fire under initiatives to reduce D's GC dependency, and its runtime dependency in general. I don't follow this too closely, but it seems they are still at the "hacks and experiments" stage in terms of real-world use (e.g., people writing custom runtimes that stub out the GC, etc.).

DConf (http://dconf.org/2017/schedule/) is in three weeks, and there are a number of related talks. Once the videos are out, you might get a better sense of the current "D as a better C" landscape.


Here is someone writing a kernel in D: https://github.com/Vild/PowerNex



Ah, the memories...


Yes – there are a number of research and/or hobby projects that implement kernels, drivers, etc. in D.


Yes


Thanks!

Still reading the docs... D seems to have a very clean design. No baggage from some "glorious" past (the '70s, PDP, IBM 360, etc.).



