Show HN: Crown – A flexible game engine written from scratch in C++ (github.com/dbartolini)
193 points by dbartolini on June 22, 2017 | 170 comments


While I find the "sane C++" approach pragmatic and practical all things considered, I'm firmly in the "time to use a better language if possible" camp.

The problem with approaches requiring extra discipline is: it's an extra mental burden to bear while programming. Also, you'll always be limited by the fact that you're working in a less pure ecosystem and will likely end up using libraries written in "not very sane C++" anyway.

We seem to have flexible compiler SDKs these days (LLVM etc.), why isn't there a strict "sane C++" or "orthodox C++" subset available as a custom language or compiler option yet?


> why isn't there a strict "sane C++" or "orthodox C++" subset available as a custom language or compiler option yet?

Probably because sanity is (1) subjective, and (2) in this case doesn't really come from disabling features (restricting the language). He is still using most of the C++ features (overloads, operator overloading, templates), but only in semantic contexts that are obvious for them (i.e. templates for collections, overloaded operators for vectors and matrices). Therefore it might be a little bit hard to create a compiler front end that would understand which class acts as a collection or whether a given structure is an implementation of a well-known mathematical concept.


The first step would be to actually define such a subset. And what happens when this subset calls into "full" C++? If you don't want to lose interoperability you have to be pretty conservative with the things you disallow.

In particular since C++ kept the C-style preprocessor "just copy and paste that file in there" includes it would be tricky to handle "sane C++" including "full C++", especially since templated C++ tends to put a massive amount of code in the headers (think boost for instance, which is mostly .hpp "headers"). You'd have to tell the compiler to switch the "sane" flag on and off within the same translation unit depending on the original source of the code. Nothing impossible, but not exactly elegant. Alternatively you could use a new `extern "sane-C++" { ... }`-type block around all your code to tell the compiler what to do.

At any rate you'll have to make sure that your sane subset can always inter-operate with the "wild" C++ without any cross-contamination.

But really I think the main problem is that you'd have trouble getting a consensus on what would be your sane subset. Some devs will tell you to get rid of exceptions altogether, others will tell you that multiple inheritance is the work of the devil. Some will want to ban raw pointers (or at least severely gimp them). And some will want none of that but something else instead.


> But really I think the main problem is that you'd have trouble getting a consensus on what would be your sane subset.

This can probably be solved by adding flexible configuration options (C++ compilers already have plenty). In practice, project leaders would then decide on a set of such options, similar to what happens with code style guidelines.

Perhaps some day a majority of people will agree on a "good" subset of C++ / option set and it will become standardized as a new language or dialect.


Because catering a C++ compiler towards people who can't (and won't) actually learn C++ is stupid.


People are working towards that objective, see for example Jonathan Blow's Jai.


I'm really excited about this language. Here is a link where Blow live codes and explains features:

https://www.youtube.com/playlist?list=PLmV5I2fxaiCKfxMBrNsU1...


Debatably, this exists. With GCC or Clang there are several command line flags that can help.

If you use the flag -Werror all warnings will be errors.

Then add -Wall, which enables all unambiguously good warnings. This will stop a whole lot of things that skirt the type system, prevent silly rounding bugs, and in general reduce the bug surface of your code.

Then if you enable "-pedantic" more errors are found, but not all are clearly improvements. I think most of them are good and I think most would agree that mandating the "override" keyword is good, but not all would agree with every signed to unsigned comparison warning.

Put together, this means adding "-Werror -Wall -pedantic" (or "/W4" for MSVC) to the command line, or more likely to the makefile/CMakeLists.

I personally advocate enabling all of these and a similar set from MSVC, then adding a set of robust warning suppression macros to your code and suppressing the warnings that really make no sense to fix. This has prevented a ton of bugs in my code, it minimizes the amount of premature optimization I see in code that tries to cast from one type to another by relying on some unspecified binary compatibility, and in general it makes writing C++ really enjoyable.
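
For illustration, a minimal sketch of what such suppression macros can look like (the macro names here are made up, not from any particular codebase):

    // Cross-compiler warning suppression, illustrative names only.
    #if defined(_MSC_VER)
    #  define WARNINGS_PUSH()         __pragma(warning(push))
    #  define WARNINGS_POP()          __pragma(warning(pop))
    #  define WARNING_DISABLE_MSVC(n) __pragma(warning(disable : n))
    #  define WARNING_DISABLE_GCC(w)
    #else
    #  define PRAGMA(x)               _Pragma(#x)
    #  define WARNINGS_PUSH()         PRAGMA(GCC diagnostic push)
    #  define WARNINGS_POP()          PRAGMA(GCC diagnostic pop)
    #  define WARNING_DISABLE_MSVC(n)
    #  define WARNING_DISABLE_GCC(w)  PRAGMA(GCC diagnostic ignored w)
    #endif

    // Usage: suppress one specific warning only around the code that triggers it.
    WARNINGS_PUSH()
    WARNING_DISABLE_GCC("-Wsign-compare")
    WARNING_DISABLE_MSVC(4018) // signed/unsigned mismatch
    // ... offending third-party include or legacy code here ...
    WARNINGS_POP()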


The best argument I've seen for using orthodox C++ is when writing something you want to compile to both native platforms and web using emscripten, while aiming for good performance and small distribution size.

For example, this is what Oryol achieves, targeting OpenGL on native and WebGL on web. I don't think anything could beat the "Orthodox C++" approach for that purpose, aside from using C (which is what I'm using for a project right now).

https://github.com/floooh/oryol/


I think what you're referring to is D.


There was an attempt at a somewhat standardized C++ subset called Embedded C++. Even within embedded circles I don't think it ever got much traction, presumably because it was _too_ limited.


For embedded systems there is MISRA C. It is a set of guidelines that is included in some compilers and analysis tools. MISRA C++ exists too but I don't know how well supported it is.

It is, however, a rather restrictive standard with critical systems in mind. Usually, you don't want such restrictions in regular software development, you want the full power of the language.

Ensuring sanity is better served with static analyzers (linters) that can be tailored to the specific needs of the project and be regularly updated with new rules.


I guess the reason is that a combination of static analysis and code formatting does most of the same job.

But I would also love to have it as a compiler switch.


I'm not trying to be too critical, but the first thing I look for whenever I come across a new game engine is an actual game implemented in it. Unfortunately, almost none of these engines ever seem to get around to providing an MVP.


Bingo. A game engine that has no dogfood project(s) just shows that it has no applied use and hasn't run into issues with its own design yet.


I partially agree with "Orthodox C++". Templates should be used only when needed and to make the code simpler (not like the abomination used in most of the Boost libraries).

But also, I don't see any sane reason to reinvent the wheel and reimplement basic stuff like thread/mutex classes when the C++ version works well. Or to use "NULL" instead of "nullptr", or that pre-processor macro garbage instead of templates/constexpr, etc.

I think some modern C++ features, if well used, make the code much clearer and more expressive.


When I started the project C++11 was not supported well on most compilers. I'm fine with new features when they don't limit my freedom.

I'm fine with nullptr, it is going to replace NULL very soon.


You should just replace it and define a nullptr macro for older compilers.
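
For what it's worth, a crude sketch of such a fallback; it simply maps nullptr to 0, so it loses the type safety of the real keyword (fuller emulations use a nullptr_t-like class):

    // Only define the macro when the compiler predates C++11's nullptr keyword.
    #if defined(__cplusplus) && __cplusplus < 201103L && !defined(nullptr)
    #  define nullptr 0
    #endif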


If you come to finance, most of the developers love premature templatization; it makes them feel like they know something. Not sure if that can be attributed to their insecurity about C++ coding skills, but it gets really ridiculous at times.


I have a friend who runs a hedge fund; not really insecure at all, but he uses templates vigorously. More stuff happens at compile time rather than at runtime. In critical sections, inheritance can be prohibitively expensive because of the virtual function lookup tables involved.


Thrashing your instruction cache (I$) because you have too many template instantiations can be prohibitively expensive too; it's just harder to measure.


Just how many nested templates are we talking about here? You can have that problem (obviously) with or without templates. Most sane template designs don't exist when the program is actually run (debugging them when they don't work is another matter).


If you're using templates for compile-time computations, it's maybe time for C++14 or even C++17's simpler constexpr functions (if possible and available, and if allowed to, of course).
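
For illustration, a minimal sketch contrasting the two styles on a trivial compile-time computation (the constexpr version needs C++14 or later):

    // Template recursion, pre-constexpr style:
    template <unsigned N>
    struct Factorial { static const unsigned value = N * Factorial<N - 1>::value; };
    template <>
    struct Factorial<0> { static const unsigned value = 1; };

    // C++14 constexpr: ordinary-looking code, still evaluated at compile time.
    constexpr unsigned factorial(unsigned n)
    {
        unsigned result = 1;
        for (unsigned i = 2; i <= n; ++i)
            result *= i;
        return result;
    }

    static_assert(Factorial<5>::value == 120, "template version");
    static_assert(factorial(5) == 120, "constexpr version");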


I guess they start over each time the requirements change? :)


What does this have to do with the link?


What's wrong with using templates vigorously?


Some things that are not so nice about templates:

- dozens to hundreds of compiler error lines for a single error, where it's hard to find out what the real problem is (IDEs often point to the wrong line)

- Code is hard to follow. E.g. try to figure out from the Boost Asio source code which code is actually used if you do an async_read(socket). I personally gave up after the second level of template substitutions, and only have a chance of following the execution path in the debugger.

- Besides go-to-definition, other IDE features also don't work really well with templates. E.g. no autocompletion for constructors with make_shared.


As a bonus you can meditate deeply on all the consequences of using templates during the vastly increased compilation time that they are going to introduce.


D has templates, yet their error messages seem fine for debugging what's going on.


That's because D's standard library doesn't use templates in the same way as the STL (no iterators, fewer policies). Also, D is actually vaguely modern and has proper static reflection, static asserts and static if: writing template constraints is easy, unlike in C++ (even with Concepts/-lite).


All three of these seem to be compiler/tool UI problems and not problems with templates themselves.


Well, let me give you an example that I have seen recently. There was this function that was supposed to convert a numeric value to a string. So the numeric type was templatized. Then, with a combination of enable_if and static_asserts, there were checks to avoid doubles/floats and negative numbers. So what's left besides unsigned integers?


explicit overloads or concepts
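
For illustration, a minimal sketch of the explicit-overload option for the number-to-string case described above (names are made up):

    #include <string>

    std::string to_string_unsigned(unsigned value)           { return std::to_string(value); }
    std::string to_string_unsigned(unsigned long value)      { return std::to_string(value); }
    std::string to_string_unsigned(unsigned long long value) { return std::to_string(value); }

    // The cases the enable_if/static_assert dance was rejecting can be deleted
    // explicitly; callers get a short "use of deleted function" error instead
    // of pages of substitution failures.
    std::string to_string_unsigned(double) = delete;
    std::string to_string_unsigned(int)    = delete;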


New hobbyist engines pop up from time to time and then they fade into memory. Why is this going to be different?

Not to be too grumpy, but game engines are a weird thing because they are such complex beasts and usually the people using them have invested a lot of time in learning them. It's fun to make a new engine, but it probably doesn't have the community or interest to cover the hard 10 to 20% that inevitably comes up. Seems like the time is better invested in one of the super big AAA engines or in writing your own.

What's the plan for 2 to 3 years from now?

https://news.ycombinator.com/item?id=5442366

Polycode had some interest around here 4 years ago, with lots of the same goals, but I haven't really heard from it since. Similarly, how does this compare to Godot or even something like Torque?


Yeah, writing game engines can be a great framework for learning all kinds of useful and interesting concepts (and trying out experimental things!) but an actual, usable engine needs a ton of "boring" stuff regarding tooling and asset pipeline that hobby engines usually miss. When an artist creates shiny visuals in <some editor> it is expected to look somewhat like that in the engine, and then you find yourself deep in some FBX/whatever SDK/your own loader + shaders wondering how the pipeline is supposed to work and how all the usual formats seem to suck in some way. Add animations, IK etc. and suddenly there is a lot of work to do that production quality engines solve.

That is not to say it is always needed though if the game is really simplistic. But there are a lot of engines capable of rendering instanced bouncing OBJs out there.


While in general you are right about the boring stuff (although some people find writing tools far from boring - e.g. me, I should actually write fewer tools if I ever want to finish some of my own stuff :-P), the shiny visual bits aren't usually "just" imported. In the (commercial, not personal) engines I worked on, the artists used their 3D mesh editor of choice (Max and Maya) to make the meshes but only bothered with the basic texturing in that 3D mesh editor. The materials were created inside the engine's own editor, since 3dsmax/Maya's materials both do not make much sense to fully replicate and they lack functionality that the engine's own renderer (and material system) can provide.

In the last (commercial) engine I worked on, the pipeline was to export the mesh from the 3D mesh editor (3dsmax or Maya) to a custom, easy-to-parse format and then import it from the editor to a more compact format that was faster and easier to work with. Then the artists would create the materials and other resources that the engine needed from inside the engine's own tools. The imported resource remembered the original file so that artists could simply export again and ask the editor to reimport stuff (at a later point we made the editor automatically monitor the directories for changes - both Windows and Linux provide functionality for this - so the artists would simply export from their 3D mesh editor and the engine's editor would reimport the meshes automatically).

In the previous (commercial) engine I worked on, things were simpler in that we only supported 3ds Max (although the 3dsmax SDK was far from simple; if it wasn't for SymbianOS it would be one of the worst SDKs I've worked with... but that is another story) and we exported animations and meshes directly to a custom format the engine expected. The exporter also had a "preview" feature to allow the artists to preview the exported files in a standalone viewer that used the engine's renderer, to make sure that things looked fine (in that case we actually did try to use 3ds Max's materials, although in hindsight that was a mistake since even with the viewer the artists often assigned textures incorrectly - we should have relied on 3ds Max as little as necessary instead of making it the primary content editor).


The use of Orthodox C++ is more interesting than yet another game engine, I think.

It speaks to language design and communities, and to approaches to development practices.


Do features or concepts kicked around in the "hobbyist" engines ever influence or inform the mainstream engines?


These are features and concepts kicked around by mainstream engines. In fact, this work is inspired by Bitsquid/Stingray.


Interesting! I'd be interested in better understanding the motivation behind Orthodox C++. In particular, you seem to dump most of the C++ standard library:

"Don't use anything from STL that allocates memory, unless you don't care about memory management."

I now mostly avoid templatization in my own code unless there's a really good reason. But the standard library often lets me avoid explicit memory allocation. Would love to hear more about the motivation for this (and other aspects of your C++ usage).

Also, if you have a demo of the engine in use that would be fun to see!


Generally speaking, many C++ game engines avoid the STL stuff and reimplement their own, more predictable containers, often with custom allocation schemes. The engine at the last game company I worked at, for example, had its own containers and memory allocator and allowed you to define the allocation category and pool per allocator and per object class (so, e.g., dynamic strings would be isolated to their own pool to avoid fragmenting the heap).
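
For illustration, a minimal sketch of that explicit-allocator style; the names are illustrative, not any particular engine's actual API:

    #include <stddef.h>
    #include <stdint.h>

    class Allocator
    {
    public:
        virtual ~Allocator() {}
        virtual void* allocate(size_t size, size_t align = 16) = 0;
        virtual void  deallocate(void* ptr) = 0;
    };

    // Containers take the allocator explicitly, so the caller decides which
    // pool/category (strings, physics, rendering, ...) each one lives in.
    template <typename T>
    struct Array
    {
        explicit Array(Allocator& a) : _allocator(&a), _data(NULL), _size(0), _capacity(0) {}

        Allocator* _allocator;
        T*         _data;
        uint32_t   _size;
        uint32_t   _capacity;
    };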

Related, Andrei Alexandrescu had a great talk about allocators in C++ a couple of years ago: https://www.youtube.com/watch?v=LIb3L4vKZ7U

Also related to Orthodox C++, the same engine also ousted exceptions and RTTI. I'm not sure about the reason for exceptions, but C++'s RTTI was simply inadequate and was replaced with a custom one made using macros (similarly to wxWidgets and MFC) that allowed automatic object serialization and reflection, which was used for all saving and loading, for exposing objects to the editor automatically with a common UI, and for exposing classes and objects to the (custom) scripting language with very little setup.

Interestingly, most of the stuff the engine had to reinvent seems to be a first-class citizen in the D language. Andrei's allocators also seem to be available (experimentally) there.

Personally I prefer plain old C (C89 even, although with a few commonly available or easily reproducible extras like stdint) because I see C++ as too complex for what it is worth. However, D seems to provide more power with less complexity and increasingly makes me want to try it, especially the new "better C" mode that DMD has got (which I think is somewhat the D equivalent of the Orthodox C++ that is linked from the page).


I felt the same way dipping my toes in C++ for a few years. C99 is definitely my preferred language. But when in Rome...


I'm kind of curious about doing this more often, but it's the lack of clean collections that puts me off.

What do you do regarding collections? (Dynamic arrays, hashmaps)?


> What do you do regarding collections? (Dynamic arrays, hash maps)?

I don't do anything!

I use the most appropriate data structure and the minimal transformation necessary to get the work done. I use maths and higher-level tools to verify my designs, but the implementation, when required to be soft-realtime, needs to exploit as much mechanical sympathy from my target platform as possible.

You might be interested in https://dataorientedprogramming.wordpress.com

Cheers!


Indeed collections are a stinky bit, but fortunately they are not the majority of the code. Generally I either do a "list of pointers" (for example http://runtimeterror.com/rep/engine/artifact/0a8bb29493c782f... from my own C engine) or I define `DECLARE_LIST(type)` and `IMPLEMENT_LIST(type)` macros which declare types and functions for handling those (note that with the word "list" in both cases I mean a conceptual list of items, not the data structure; in practice it isn't a linked list but a vector). The latter can be faster, more flexible (e.g. you can specify how the comparisons are done so that you can use == for simple stuff, strcmp for strings, memcmp for structs or custom calls for more complex structures) and more type safe, but on the other hand it can be very annoying to write, debug and extend, which is why I rarely do it. Another way is to use an include trick where you do something like

    #define TYPE int
    #include "list_template.h"
    #undef TYPE
with `list_template.h` using TYPE wherever a data type would be needed and defining inline (C99) and/or static (C89) functions so that they can be redefined in multiple files (or have a dedicated C file that includes the above header with all data types and an additional macro that enables the implementation). This is basically sort of implementing templates in C.

The void pointer approach is the simplest and most macro-free (despite me using macros here, I'm a bit macro happy sometimes :-P) but at the same time you are limited to pointers. In practice I've found that most of the time this is enough, which is why I still haven't replaced that yet. But there are cases where I'd prefer to be able to have a list of structs instead of pointers to structs, both because it is simpler (no need to define a custom free function) and faster (fewer indirections), so I'll most likely replace that code with another approach (most likely the macro that defines the types, not the include header).

But if there is a single feature I'd like to see brought from C++ to C, it would be templates, even if they are single depth. I don't even care about classes or the other stuff (classes are nice to have, but not necessary as long as the compiler can figure out that the template parameters to structs and functions with the same name refer to the same type when used together).
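
For illustration, a minimal sketch of the DECLARE_LIST/IMPLEMENT_LIST idea described above (names and layout are illustrative, and error handling is omitted):

    #include <stdlib.h>

    #define DECLARE_LIST(T) \
        typedef struct { T* items; size_t count; size_t capacity; } T##_list; \
        void T##_list_push(T##_list* l, T value);

    #define IMPLEMENT_LIST(T) \
        void T##_list_push(T##_list* l, T value) \
        { \
            if (l->count == l->capacity) { \
                l->capacity = l->capacity ? l->capacity * 2 : 8; \
                l->items = (T*)realloc(l->items, l->capacity * sizeof(T)); \
            } \
            l->items[l->count++] = value; \
        }

    /* One pair of lines per element type; usage: int_list l = {0}; int_list_push(&l, 42); */
    DECLARE_LIST(int)
    IMPLEMENT_LIST(int)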


> dynamic strings would be isolated to their own pool to avoid fragmenting the heap

Strings have arbitrary sizes. How does pooling them together reduce fragmentation? Do they always come and go in groups?


I think it reduces fragmentation in the other pools, not the string pool.


Bingo. This helps restrict the fragmentation from dynamic strings to one pool of memory, allowing the others to remain nicely packed, aligned, etc (whatever your ideal for them is).


> Generally speaking many C++ game engines avoid the STL stuff and reimplement their own more predictable containers

This seems a little like cargo cultism. I wonder if any of these shops regularly measure the performance of their custom containers and compare with the standard library on a modern optimizing compiler and make a reasoned judgment that it's still currently worth the trade-offs to stick with their own stuff.


IME most of it is due to how bad C++'s built-in allocator support is, though. C++11 helps a little, but a lot of stuff you might want to do seemed to be impossible last time I looked. This is awful since games will often use pools and arenas, often heavily.

Part of it is also for cross-platform consistency. No chance for bugs caused by using a different stdlib, etc. This is worth it when a lot of the toolchains for consoles are arcane and have the chicken/egg-ish problem of often having poor STL implementations because they expect everybody to implement their own.

I've been out of game dev for a while though. These days I'd expect using the C++ stdlib to be more common, and mallocs are faster now (although even if you're bundling e.g. jemalloc, I imagine you still get a substantial benefit from using pools or arenas in many cases).


Older implementations of the STL had a lot of issues. EA wrote their own version way back when to address them, with a list of whys here: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n227...


Right, but that was a long long time ago, 64 bits long. Even calling it "STL" nowadays kind of dates someone. I wonder how regularly EA currently does bake-offs between their stuff and the standard library. Not that it would matter at this point--they probably have so much legacy code depending on it that it would be painful to switch.


You can clone the repo and run the benchmarks yourself. When I last did it it was quite mixed (compared to libc++ on a MBP) - a lot of things were the same or slightly faster in EASTL, occasionally some things were an order of magnitude faster, some things an order of magnitude slower. Those were in what I would consider "edge cases" rather than general usage, for which the two were fairly similar.

As other comments have said though, the main reason it's used (and the reason I was interested even though I don't do games) is because the allocation story in the stdlib sucks.


My recollection (which could be wrong, it's been a long time) is that the white papers Microsoft freely distributed for XBox 360 development said specifically to not use any STL containers except contiguous memory ones and probably not those either without custom allocators. The penalties for cache misses/pipeline flushes were very high and naive STL usage made both those things happen a lot in real games (Microsoft would step in and help developers and would do post mortems on things they did to improve performance).


You might also do this to improve debuggability and unoptimised build performance. The VC++ stdlib is particularly bad, as its authors have gone down the ultra-DRY rabbit hole even for simple stuff - a pain to step through, and it relies entirely on the optimizer doing its thing.

(libstdc++'s vector looks sensible in this respect - a good decision on their part. Haven't looked at any other aspects of it though.)

At one point the contents of vectors were often inconvenient to examine in just about every debugger, because you'd have to type out some infeasibly long expression to get at them, "vec._Mybase._Myval._Myptr[0]", that kind of thing, which you could also fix by writing your own container and simply calling your pointer field something like "p". (Same goes for smart pointers.) Luckily this is much improved in the latest Visual Studio but it may still be an issue elsewhere.


> I wonder if any of these shops regularly measure the performance of their custom containers

Yes, while it wasn't often, we sometimes did benchmark tests to improve performance when bottlenecks were found, especially towards the game's release when we were focusing on optimizations. IIRC we did some minor changes in the dynamic array container and we rewrote the hashmap and hashset implementations. One of the programmers wrote a performance test comparing several algorithms both with synthetic and real data (from the case that created the bottleneck).

> compare with the standard library on a modern optimizing compiler and make a reasoned judgment that it's still currently worth the trade-offs to stick with their own stuff.

There are other reasons to use a custom container than just the pure performance of the container itself. One is using a different allocation scheme, as in the example I gave in the grandparent post; another is to use a friendlier API (see `find` and friends) and add more features. An important one in our engine was support for the custom RTTI that was used for object serialization and the scripting language, which also worked with and exposed those directly - the container, the RTTI implementation and the scripting runtime had to have intimate knowledge of each other to work transparently (especially when the editor entered the picture, where you could create new entries, often objects but also sometimes structs or other data types, by editing the array directly in a property editor).

Of course not all engines do that, and TBH most of the performance and memory related bits are more relevant to consoles than (desktop) PCs (the API friendliness and RTTI stuff are platform agnostic though :-P). At the previous gaming company I worked at, the engine used standard containers. Also AFAIK the engine used by the Two Worlds games also uses standard containers (based on some of their developers' comments).

Personally, when I write C++ I implement my own containers not because of performance but simply because I dislike the STL API - for example I want to have "Find", "IndexOf", "Swap", etc. methods in the container itself :-P. Sadly it seems that I'll also need to do the same if I decide to start working with D seriously, since D's standard library seems to more or less copy the STL API style.


> I wonder if any of these shops regularly measure the performance of their custom containers

You don't know? Maybe you should find out before slinging around accusations of cargo cultism.


Of course I don't know--that's why I said "I wonder". I have, however, worked in non-game-developing companies who eschewed the standard library, and when pressed the argument boiled down to: someone who worked here long ago said the STL was slow and therefore we don't use it. I am _wondering_ if it's the same story at other companies.


I do not use STL because I want consistent implementation across all supported platforms. Also, you have to care about memory management when you need performance or when working with memory constrained devices. The way STL deals with custom allocators makes no sense to me, hence my own implementation.

You can find demos in the `samples` folder. :)


Show me a man who thinks STL allocators make complete sense, and I'll show you a man with some serious cognitive issues.


Would love to hear more about the motivation for this (and other aspects of your C++ usage).

I know EA wrote their own implementation of STL[1] to get around the problems that the standard implementation was causing in their game engines. Doing a 'diff' between that and a standard implementation should highlight some of the potential problems they found.

[1] https://github.com/electronicarts/EASTL


What is "a standard implementation"?


By far the most used implementations are GNU's libstdc++, LLVM's libc++, and Microsoft's CRT.


The one that you get by default with your operating system's most popular C++ toolchain.


Yeah, I'd like to know why you'd still use C++ at all if you find Orthodox C++ appealing. It would seem easier to just write C. Unless I'm missing something (I probably am; don't write enough of either one to have an informed opinion, which is why I'd love to hear more about this).


RAII is incredibly attractive, especially if you disable exceptions.


Why would RAII be especially attractive if disabling exceptions?


Probably because exceptions are the biggest source of pain with RAII.


I'd say that exceptions are the biggest source of pain if you don't have RAII.

Or, rephrased: RAII is incredibly attractive, especially when exceptions are being used.


RAII is necessary for exceptions. Why the hell would you do that to yourself?


How do you deal with constructors that might fail?


You don't have constructors that 'fail'.


So, allocating memory for objects is not part of construction? Again, why not just stick with C?


Memory for objects is allocated structurally as part of the object, not dynamically, wherever possible. And it's usually very possible. If you can't do that, provide an initialization method. (In almost all cases, I would personally prefer hiding that two-phase initialization within a factory function.)

As for "why not stick with C"--all of the other reasons still hold true, from templates on down. The simple existence of dtors with viable scope guards that are guaranteed to fire when exiting scope is reason enough for me to never write C and to look with a default skepticism on any codebase that thinks its developers are perfect enough not to need them.


> Again, why not just stick with C?

The thing that is hard to replicate in C is destructors. Automatic deinitialization when leaving scope is very convenient. It allows you to have multiple exits from the scope without preceding each of them with a prologue of dinit_*() calls or creating a single exit point and jumping to it.

Coupling allocation with initialization is trivial to do without C++ constructors (which I find to be very poorly designed).
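
For illustration, a minimal sketch of what destructors buy you here; every early return runs the cleanup automatically, with no goto-style epilogue:

    #include <cstdio>

    struct FileGuard
    {
        explicit FileGuard(FILE* f) : file(f) {}
        ~FileGuard() { if (file) fclose(file); }
        FILE* file;
    };

    bool process(const char* path)
    {
        FILE* f = fopen(path, "rb");
        if (!f)
            return false;       // early exit, nothing to clean up yet
        FileGuard guard(f);     // from here on fclose() is guaranteed

        char header[16];
        if (fread(header, 1, sizeof(header), f) != sizeof(header))
            return false;       // guard closes the file

        // ... more work ...
        return true;            // guard closes the file here too
    }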


C doesn't let you have destructors that automatically clean up whatever got initialised in the Init method (if it was called), but C++ does.


If you disable exceptions how do you handle failures in constructors?


I mean it's kind of a smartass answer, but—don't fail during constructors. Move all possible code that can fail into an initialize method; check explicitly for allocation failure and/or put things on the stack instead of heap when possible; consider failing hard with a stack trace or core dump over catching and processing exceptions before (likely) failing anyway.


Instead of using a constructor you can use a constructor method e.g.:

    class Foo {
       public:
          static std::optional<Foo> create();

       private:
          Foo();
    };


In general if you're not using exceptions, you're not going to be using features that haven't actually been published in a formal standard (optional).

This now means that you can't use any constructors, so how do you have Containers of foo?


> you're not going to be using features that haven't actually been published in a formal standard (optional).

So you then have things like:

  class Foo
  {
  public:
    static Foo* create();
    ...
  };
  ...
  Foo* foo = Foo::create();
  if ( foo != nullptr ) ...
> so how do you have Containers of foo?

std::vector<Foo*>

Not saying either of those are better than the alternative (I prefer using exceptions and RAII), just pointing out what I've seen in real world projects.


I don't think this is the proposal. The proposal is that the object contains a genuine constructor that only does the bare-bones "safe" stuff, and then it has a separate non-static method that does the might-fail initialization. So:

  class Foo
  {
  public:
    Foo();
    bool initialize();  // returns success
    ...
  };
  ...
  Foo foo;
  if ( !foo.initialize() ) { /* handle error */ }
This also means you can break up your initialization so that you drive the risky pieces from outside the object, rather than monolithically from within.

This has a further benefit for testing, since you can use your major objects without fully initializing the entire world that they depend on.


It was almost the exact proposal specified by my grandparent post except using a pointer rather than an optional.

It's also a technique that is widely used. See for example the cocos2d-x game library.

The benefit of such a technique is that you can then make the constructor private, making it impossible to create an object and not also call the initialize() method.


It is very different from having a static method returning an optional, which guarantees that you can access the optional (with at least an assert in debug mode) only if the object is actually successfully constructed.

Using a separate initialize member function means that you may have objects in a zombie state lying around after a failed construction, which leads to all kinds of initialization order issues (you might get a pointer to the object, but is it initialized?). Also you need to remember to check the return value, which also needs to be meaningful (does it return false on failure? or does it return 0 on success?).

Two phase initialization is a known antipattern which is, unfortunately, widely used and leads to all kinds of pain.

Friends do not let friends use 2PI.

edit: sorry, I misread your comment, you were referring to the static function returning a pointer, which as you note is almost the same as the optional version. It forces heap allocation though, which is bad.


I agree that 2PI leads to all kinds of pain, but what I meant is not 2PI. Here's a more complete example:

  class Foo
  {
  public:
    static Foo* create()
    {
      Foo* result = new Foo();  //exceptions disabled so new can return nullptr
      if ( result )
      {
        //configure result here
      }
      return result;
    }
  private:
    Foo() {}
  };

  ...
  Foo* badFoo = new Foo(); // compiler error because Foo() is private

  Foo* foo = Foo::create();  //all good, no 2PI and can't forget to call initialize code
  
  if ( foo ) //check for non-null, note, if using an optional you'd also need a similar check
  {
    ...
  }
Now the only way to create a Foo object is through the create() function and there is no separate initialize - it all happens in the same place.

This pattern of using a static create method is explicitly designed to avoid 2PI and is very common, especially in codebases that disable exceptions.

Also note that I'm not personally advocating using it, just that it is commonly used to avoid 2PI.


That still means you can't create instances of Foo on the stack though, doesn't it? Or have storage of contiguous Foos (e.g. vector<Foo>)?


Correct. It means you can't create instances of Foo on the stack (outside of the Foo class).

You can have contiguous Foos, but not in a vector. You can either have another static function to return an array of Foos, or more commonly have some sort of pool allocator and have the create function allocate objects from the pool.

Anyway, yes, there are limitations for using this pattern, so like all things it's a matter of weighing up the tradeoffs.


There are lots of cases out there where the so-called "zombie" state is a perfectly valid one, and may for various reasons be preferable to representing that state externally (for example, with a null pointer). Such an object is simply a placeholder instance of the object that is ready to do work, but not yet actually doing anything. If necessary, it can check its internal state and throw exceptions when not-valid-while-uninitialized methods are called.

The examples that come immediately to mind are the Publisher, Subscriber, and Timer classes that are part of the ROS C++ API: http://docs.ros.org/api/roscpp/html/classros_1_1Publisher.ht...

I agree that there are caveats with it, but I get nervous when people toss around a phrase like "known antipattern" with such confidence.


> It forces heap allocation though, which is bad.

It does, and it can be, but in situations where it matters, the static create function typically returns a value from a preallocated pool of memory, so objects are all contiguous and cache friendly.


Even when using a pool allocator, you still have unnecessary indirections, which is expensive. One of the benefits of C++ is the ability to allocate subobjects inline with the containing object or array. By forcing indirection, allocating subobjects requires navigating a potentially deeply nested tree.
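
For illustration, a minimal sketch of the difference (types are made up):

    struct Transform { float position[3]; float rotation[4]; };

    // Subobject stored inline: one allocation, no extra indirection, and the
    // Transforms of adjacent entities stay contiguous in an array.
    struct EntityInline  { Transform transform; };

    // Subobject behind a pointer: every access chases another pointer, and an
    // array of entities scatters its Transforms across the heap.
    struct EntityPointer { Transform* transform; };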


This is true, and like I said above, it's not a method I prefer to use, it's just something that is commonly seen in projects that disable exceptions in order to avoid 2 phase initialization.

There are definitely things to be aware of before adopting such a pattern, or when trying to optimize code that uses it.


My bad, I thought std::optional is part of C++14, it seems to be part of the next standard C++17, but there's still boost::optional.

About the container issue: if you have objects that might fail during the creation it seems like a bad idea to allow things like:

    std::vector<Foo> foos(10);
Having a separate initialization method which might fail - as proposed by others - is another option, but this means your objects need some kind of internal initialization state, and whenever you're handling such an object you can never be absolutely sure that it's in a valid state.

I'm quite a big fan of making invalid state not representable in an object and handling failure cases as early as possible.

What the create method returns depends heavily on your use case. If the returned objects can always be allocated on the heap, then a pointer or unique_ptr can be returned.


How would you handle failure in copy constructors?


Most likely having an explicit copy method instead of a copy constructor.


Well, you still get classes and templates.


Sensible usage of templates, operator overloading and namespaces is ok.


Wow, it seems to contain a full level editor written in Vala, a language I hadn't heard of. It seems to be a high level language similar to C# but with a native compiler and some different semantics (RC instead of a tracing GC, for example). Is Vala widely used?


Vala came out of Gnome/Gtk app development on Linux IIRC. It takes the GObject object layer from Gtk/Glib and promotes it to a first class part of the language: There’s a fairly direct translation from the various parts of the Vala language to C + GObject / Glib / Gtk+.

I don’t think it’s widely used outside of the Gnome desktop world, but there’s quite a few apps written in it out there.


It's really only used for GTK3 apps. I think it was created with that in mind by the GNOME team.


The toolchain was initially written in C#. Vala simplified the switch to GTK+3 a lot. Today I think an IMGUI-like approach is the way to go for game tools. Rewriting the editors should be rather painless due to their engine-decoupled TCP/IP architecture.


Vala has been out for some time. I haven't used it since 2007 though. Vala would always come up in discussions about C# on GNU/Linux, and Stallman in particular wanted no part of C# or Mono in the operating system.


> RC instead of GC

RC is a GC algorithm


I just had to change my comment now, thanks.


The code seems clear enough, though I think the preprocessor macros for wrapping OS specific logic could be at the function level or class level rather than at the statement level.
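
For illustration, a minimal sketch of the two styles (function names are made up):

    // Statement-level: platform #ifdefs sprinkled inside one function body.
    void window_set_title_inline(const char* title)
    {
    #if defined(_WIN32)
        // SetWindowTextA(hwnd, title);
    #else
        // XStoreName(display, window, title);
    #endif
        (void)title;
    }

    // Function-level: each platform supplies its own definition, often in a
    // separate window_win32.cpp / window_x11.cpp file.
    #if defined(_WIN32)
    void window_set_title(const char* title) { /* SetWindowTextA(...) */ (void)title; }
    #else
    void window_set_title(const char* title) { /* XStoreName(...) */ (void)title; }
    #endif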


Why, when the amount of code is small enough?


How long have you been working on this engine for?


I started with game engines well before I put this project on github in 2012.


Can you recommend any resources/books to learn game engine development? I would love to implement a mini one where I can rapidly prototype AR applications :).


You may just want to start by building your AR applications first. After a few of them, you'll have a better understanding of what common functionality is needed.

See also: https://geometrian.com/programming/tutorials/write-games-not...


I love the article! It makes sense that after developing AR applications on a game engine, I will identify the common functionality that would fit better in an "engine" suited to the ideas I want to develop.

I am interested in developing AR applications that are not so much games, so I was worried that a game engine would not fit.

Between learning more about graphics and making my own games, future engine development would become clearer.


Game Engine Architecture by Jason Gregory is a nice one.

Search for "Bitsquid" and "Our Machinery": pure gold IMHO.


Props for the coding discipline. I couldn't even tell how long it has been since I saw C++ as readable as this.


Thanks, I'm glad you like it. :)


Really cool work! I would love to have some time over to try it out.


Hey, thanks. :)


Your use of `require` is incorrect.


Can you elaborate on this?




All resource paths (.lua files are resources like any other) in Crown are unix-style and do not include the extension.

I have a custom loader that deals with it: https://github.com/dbartolini/crown/blob/master/src/lua/lua_...


I would recommend that you extend `package.loaders` instead. You risk breaking expected `require` functionality. Then you get the benefits of both!!

https://www.lua.org/manual/5.1/manual.html#pdf-package.loade...



Great title. C++ is so terrible you have to prefix it with "modern" or "sane" to get people excited about it.


It's a lost cause: there's no such thing as "sane" or "orthodox" C++. Instead, use something actually sane like Go. Or Rust, if you really must (first bad pun is free, then it's 50 cents each).


This comment is so asinine, it's depressing it's at the top of the comment section right now. Why can't you just be nice?

Anyways, C++ is much easier to manage than any GC language for a real time application such as this. And your Rust "pun" is a rhyme.


Fabien Sanglard and John Carmack definitely disagree with you in this regard:

http://fabiensanglard.net/doom3/index.php http://fabiensanglard.net/doom3/interviews.php#qc++


Go is a bad choice for game with its unavoidable GC. And Rust is simply too complicated for a person wanting sane C++.

D or Nim (or even C!) would make more sense.


Is there not a point where GC overhead becomes negligible? There have been production examples of Go maintaining sub-millisecond GC pauses with a multi-GB heap under a server workload. https://twitter.com/brianhatfield/status/804355831080751104

Surely there are better reasons to disregard Go for gamedev by now.


I've dipped my toes into using OpenGL and Go a few times and it seems pretty nice. But I'm at the entry level of graphics programming, so it scares me off from going further (surely someone would have used Go by now in a 3D game; if you can do Minecraft on the JVM, why not Go?).

I keep wondering if it's not possible or if there's just not a lot of overlap between Go programmers and gamedevs? Wish someone knew.


I wouldn't predict a lot of overlap. On the game dev side, Go offers nothing new, and the forced untuneable gc will be an instant turnoff for the usual reasons. On the Go side, my outside perspective is most of its users seem to be focused on web server backends, trying to justify async everywhere, and/or rewriting slow Python or Perl or Bash scripts and assuming that makes it a systems language.

Go users interested in diving in to SDL or OpenGL bindings to make a game shouldn't be discouraged. Lots of games are made in all sorts of high level languages, the heavy lifting is put onto a few native libraries. But if the goal is to make a general engine, I'd question its utility apart from fun/learning. Again there are game engines in high level languages (with dark native-level secrets in any that try to be performant) but they don't seem to get traction. Even an engine in e.g. C++ doesn't necessarily help your performance goals (http://www.yosoygames.com.ar/wp/2013/11/on-mike-actons-revie...) if your plan is to make it general instead of make it just support whatever sorts of games you're making and planning to make.


The vast majority of memory in a modern game stores data for the GPU. On a game console the game manages this data itself. If you ever want to ship on a console you'd better be sure Go can handle the memory visible from the GPU correctly, e.g. keep the necessary alignment, ensure the GPU is not reading/writing the memory it's going to modify, not change the address of anything, preserve the layout of structures, etc. I never used Go so I don't know if it already does this. But I would be considering these things way before I'd even started looking into performance.


> Go is a bad choice for game with its unavoidable GC.

Better let the Unreal guys know about it then.


There's a big difference in engine support for GC of a certain object type with lots of support for tuning (https://wiki.unrealengine.com/Garbage_Collection_%26_Dynamic... and https://docs.unrealengine.com/latest/INT/Programming/UnrealA...) and language-level untuneable GC of everything.


Not all language-level GCs are untuneable.

Also if one doesn't allocate like crazy on the heap, there is no reason the GC needs to work.


I agree, but the GP comment was in the context of Go. Nim is a newer option I'd like to see tried more, its GC is optional and swappable.

If you never run out of memory you'll also never need to GC. ;) Or even free(), just let the program finish and reset the machine. (Actually not too weird in some embedded systems...) Some languages make it easier to not heap allocate than others, or notice when you are heap allocating. I hear Go does better than Java in this regard. But if you're facing a performance issue at the level where you're fighting the GC as the biggest barrier, and the language doesn't give you much assistance (like being able to choose latency/throughput tradeoffs or controls on non-determinism), that's a sign the language isn't that suitable for that performance problem domain. With performance sensitive games, you're already in the corner of having to worry about hardware details, so there's a strong incentive to just start the fight at the beginning without your hands tied by some language's static GC.


Not all games need to be the next Crysis.

In fact, the majority of them get abandoned even before memory pressure starts to be a relevant issue.

Even if Go isn't at the same level of D or Modula-3 in regards to memory management (heap, stack, global), it is already quite usable for many types of games.


Yeah, at some point it doesn't matter what language if the game is just game logic on a standard input and output layer that are already fast. Even high end engines like CryEngine often include a scripting layer in some higher level language like Lua that probably has GC, because it's nice to have for game logic that doesn't have the same constraints as other parts of the game. (http://docs.cryengine.com/display/SDKDOC4/Lua+Scripting)

As I mentioned in another comment a lot of fun games have been made in all sorts of languages. That doesn't really make any of them suitable for games though, and you'll still find far fewer examples of game engines in GC languages.


Actually, I am old enough to have heard the same kind of argumentation against the adoption of C, Turbo Pascal and C++ for game development, depending on when I heard it (80's, 90's, early 2000's), because how we do it today is the only way possible.

Game developers have a tendency to only update their tools when OS or console vendors force them to do so.


Lua has a GC, but it's incremental and tunable. Video game scripting is actually one of the biggest use cases for Lua for that reason. Lua is also one of the fastest scripting languages, especially if you can get away with using LuaJIT.


The important difference is that not everything in Unreal is GC'd, only UObjects. Internally the rendering, animation, networking and other high-performance subsystems don't use GC because they can't afford the overhead. Being able to opt in to GC where it's useful is great, being forced to use it everywhere can be a hassle.


> Being able to opt in to GC where it's useful is great, being forced to use it everywhere can be a hassle.

True, but just because a programming language has language-level GC, it doesn't mean it must be used everywhere.

If one doesn't allocate like crazy on the heap, there is no reason the GC needs to work.

Also using value types is always an option.

One also doesn't call malloc() in such high-performance subsystems.


If you're gonna bother with orthodox C++, or Go (suited to 3d games? nobody seems to have tried or lived to tell the tale), or Rust (no libraries anyway, so you may as well use something which wraps C easily and is higher level), why not try Nim?


I think if you were writing C, then Nim might be a good alternative. If you're writing C++, then Nim offers a lot of similar metaprogramming features; but it lacks RAII, which IMO is a monumental drawback. People may dislike C++ for many reasons, but RAII is a killer feature -- there's no question why languages like D and Rust adopted it.


We're slowly getting there though. I hope to release a blog post soon about what it might look like in Nim.



IMO, Go is not sane.

Rust, however is great.


sane like Go

Yet Go uses the oh-so-error-prone return value error checking that has been so successful in C :/


Disclaimer: Not a go apologist, I admire it, but don't use it.

With Go you get a compiler error if you don't do something with that error. You have to explicitly decide to ignore it with `_`. As far as I remember that's quite different from C where you can get an error code, ignore it, and never realize you've missed it.


Not quite. Try this:

    import "os"

    func main() {
	    os.Open("this file does not exist")
    }
This will compile just fine, producing no compiler error or warnings whatsoever. The error is just silently ignored.

Compare to Rust:

    use std::fs::File;

    fn main() {
        File::open("this file does not exist");
    }
This will produce the following warning:

    warning: unused result which must be used
      --> test.rs:14:5
       |
    14 |     File::open("this file does not exist");
       |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
       |
       = note: #[warn(unused_must_use)] on by default
This example is a bit contrived, because if you open a file you probably want to do something with it. But imagine something where the only return value you care about is the error, like say "txn.commit()"


A slightly related note: you can get similar behavior in C (and C++) with compiler extensions. In GCC and Clang, marking a function with '__attribute__((warn_unused_result))' will produce a warning if the function is called without using the result. The equivalent for MSVC is '_Check_return_'.

Obviously this is not nearly as convenient as your Rust example, but it enables some of its benefits.


C++17 has (will have?) [[nodiscard]] with the same semantics.
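
For illustration, a minimal sketch of both attributes side by side (function names are made up; the GNU attribute needs GCC/Clang, [[nodiscard]] needs C++17):

    __attribute__((warn_unused_result)) int do_work_gnu();
    [[nodiscard]] int do_work_cxx17();

    void caller()
    {
        do_work_gnu();    // warning: ignoring return value
        do_work_cxx17();  // warning: ignoring return value
    }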


Still better than

    try:
        ...
    except:
        pass


I wouldn't say so. This screams "I am an idiot", missing handling can easily pass unnoticed.


Hah! It could be worse... imagine your example, but in Java with checked exceptions ;)


A cross platform project in modern C++ that doesn't use CMake? Unsure if want.


It is not "modern" C++. It uses GENie, a fork of premake.


Huh. Okay, that was not immediately apparent from viewing the project— it looked like a hand-written Makefile. Link for the curious:

https://github.com/bkaradzic/GENie

Appreciating that CMake has its warts, it also has a ton of mindshare and has lots of convenient modules for handling common dependencies. What are the motivations to use a Lua-based scheme instead?


I find GENie/premake way simpler to read and write. Also, you have more flexibility since your build scripts have full-fledged Lua capabilities.


Yeah, similar sentiments in the PPT presentation here: https://onedrive.live.com/view.aspx?cid=171ee76e679935c8&pag...

CMake is "too complicated" and "you need to be an expert". Understandable, I suppose. There are certainly specific things in CMake which are pretty terrible, like the add_custom_command/add_custom_target dance, but from the perspective of someone who has had to become an expert in it (via ROS/catkin), I would be unlikely to give it up. There's just way too much stuff it gives you for free, especially when it comes to things like packaging, testing, etc.


Okay, the other thing I would say having examined this a bit is that GENie seems to be much more "project" oriented than CMake. CMake has a concept of projects, but its usual model centers around targets and directories as the main unit to reason about. That is, CMake's most native output format is a Makefile, with adapters to generate IDE projects.

GENie seems to be focused first and foremost on a project/solution-oriented IDE workflow, with the Makefile generator as the one that's tacked on. So I can definitely appreciate that if you're working on a project where everyone's in an IDE anyway, it would make sense to use a generator that has the IDE's concepts as a first class citizen.


Great learning project, but for production just use Unity unless you absolutely, positively (triple-check) cannot. You will be massively more productive.


Umm, no. Unity has lots of problems that make it not ideal for games that have fast action or are developed by a larger team.

Like the fact that it uses an old version of Mono on non-Windows platforms, which uses a mark-and-sweep garbage collector. You end up with frequent stop-the-world garbage collection pauses that freeze the screen for seconds at a time. Play any Unity game on the PS4 and you'll see it frequently.

I've also heard that Unity's project asset management doesn't really work for teams that have more than 10 people working together, but that's something I don't have direct knowledge of.


> You end up with frequent stop-the-world garbage collection pauses that freeze the screen for seconds at a time. Play any Unity game on the PS4 and you'll see it frequently.

This is a coding problem, not an engine problem. A game should have almost no dynamic resource allocation.


Then what about Unity leads everyone who uses it to allocate dynamically, somehow, on every platform but Windows?

If everybody makes the same coding problem, and the common denominator is one of their dependencies, it makes you wonder what's wrong with the dependency.


If you're genuinely interested, the developers of Inside have a great presentation on how they hit a stutter-free 60fps on all platforms, PS4 included.

https://www.youtube.com/watch?v=mQ2KTRn4BMI

I don't know what projects you're referring to specifically on the PS4 but I'd suspect the difference comes down to hardware spec on the PC masking the issue, it probably being the primary platform and/or lack of time to work on optimisation for the port.

It's the use of C#, which is generally written in a manner that creates garbage, and a lack of experience with programming games, that causes many people who use it to go wild with allocations. Both intentionally and by accident. These days you can actually get pretty far before it's ever a problem on a gaming PC. Unity is also incredibly accessible, so more people with less technical chops are programming games without even opening the profiler.


There's a lot of FUD surrounding Unity, although I cannot comment on PS4, specifically, as I have not shipped for that platform.

On mobile, and/or desktop there are very few empirical reasons not to use Unity. If you have the skills to develop your own engine and tools, then you certainly have the skills to work around Unity or garbage-collection issues.

When I'm hiring a game developer and they'd rather work on engine or tools than making games, this is a red flag. I've seen projects waste person-decades of development effort all because one or two senior devs wanted to roll their own thing instead of using Unity.

The engine and toolchain that exists always has flaws that the utopian engine and toolchain that could exist doesn't have (yet).


>On mobile, and/or desktop there are very few empirical reasons not to use Unity. If you have the skills to develop your own engine and tools, then you certainly have the skills to work around Unity or garbage-collection issues.

I guess. The question is why should I have to work around Unity?

I mean, everything is a trade-off. I understand that for many teams and games, the hassles of Unity are worth the benefits. But that is not every game and every team.

>The engine and toolchain that exists always has flaws that the utopian engine and toolchain that could exist doesn't have (yet).

We don't have to compare Unity to utopian engines and tool chains when we can compare it to its competitors like Unreal.


> I guess. The question is why should I have to work around Unity?

When shipping a production-quality game there's almost always going to be something you have to customize or work-around with any engine. What often happens with home-grown engines is that the cost of tools friction or engine implementation is not properly accounted for, because it's kind of fun although it's unproductive.

> But that is not every game and every team.

Agree, but I'm fed up with the amount of FUD around Unity. I've watched teams burn money rather than putting up with some annoyances. I'm an older dev (40+), so I've seen many iterations of devs refusing to use existing tool X in favour of supposedly more convenient but less battle-tested tool Y. This is in game development, and software development, more generally.

There are also holistic benefits to Unity, like 1-2 second recompiles. This is a game-changer in terms of debugging and allowing you to try more iterations of things.

> We don't have to compare Unity to utopian engines and tool chains when we can compare it to its competitors like Unreal.

I think Unreal is a great engine, and I'm admittedly less familiar and therefore productive with it than with Unity.

That said, I would say that for mobile or small-footprint games Unity still has an edge. This is based on the experiences of several studios / devs that I've talked to. They make great headway with Unreal, but then the project bogs down when it's time to actually ship. That said, this could be Unreal FUD from developers who are new to that engine.

To get the benefits of Blueprints in Unity, just buy PlayMaker. It's $65 a seat, and you'll never write another Finite State Machine.


Yet another game engine written in an imperative language, seemingly with the focus on writing something in Orthodox C++ and not necessarily writing something to solve a problem. Just as the discussion on this post shows, the more interesting thing here is Orthodox C++, not yet another game engine which doesn't offer any features that modern game engines offer.

If you're expecting people to see this as a useful product, please explain why it should be used instead of the existing game engines. Consider listing its features and comparing them, maybe in a table, to popular game engines. Deferred rendering? Multi-threaded rendering? Entity-component system? etc.

Right now, I can see it has a (nice looking) editor, "physics" (nothing's moving, so it's hard to tell), and "animation" (again, nothing's animating in the image). Alas, one needs to learn to use its entirely fresh standard library reimplementation, and likely its strict C++ subset, if one actually wants to be productive with it. Given apparently no active community, no sample (or actual) games written in it, and no paid support, I don't see how that would be feasible.

If you have no interest in actually making a competitive, novel, or useful (to others, compared to existing solutions) game engine, please just say that. In that case, I'd just say this is a neat side project: well done, but try to focus on building something with it to help sell/prove its features.


It is just a simple general-purpose, data-oriented, data-driven, entity-component based, lua scripted game engine written in sane C++ for my own game development needs.

Nobody's trying to sell anything here.



