Memory for objects is allocated structurally as part of the object, not dynamically, wherever possible. And it's usually very possible. If you can't do that, provide an initialization method. (In almost all cases, I would personally prefer hiding that two-phase initialization within a factory function.)
As for "why not stick with C"--all of the other reasons still hold true, from templates on down. The simple existence of dtors with viable scope guards that are guaranteed to fire when exiting scope is reason enough for me to never write C and to look with a default skepticism on any codebase that thinks its developers are perfect enough not to need them.
The thing that is hard to replicate in C is destructors. Automatic deinitialization when leaving scope is very convenient: it lets you have multiple exits from a scope without preceding each of them with a prologue of deinit_*() calls, or creating a single exit point and jumping to it.
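A minimal sketch of what that buys you (the FileGuard type here is illustrative, not a standard facility): the destructor runs on every path out of the scope, so no exit needs its own cleanup prologue.

```cpp
#include <cstdio>

// Illustrative RAII guard: the destructor runs on every path out of scope.
struct FileGuard {
    FILE* f;
    explicit FileGuard(const char* path) : f(std::fopen(path, "w")) {}
    ~FileGuard() { if (f) std::fclose(f); }  // fires on any scope exit
};

bool writeGreeting(const char* path) {
    FileGuard guard(path);
    if (!guard.f)
        return false;               // early exit: destructor still runs
    std::fputs("hello\n", guard.f);
    return true;                    // normal exit: fclose happens automatically
}
```

In C each return would need its own fclose (or a goto to a shared cleanup label); here the compiler inserts the cleanup at both exits.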
Coupling allocation with initialization is trivial to do without C++ constructors (which I find to be very poorly designed).
I mean it's kind of a smartass answer, but—don't fail during constructors. Move all possible code that can fail into an initialize method; check explicitly for allocation failure and/or put things on the stack instead of heap when possible; consider failing hard with a stack trace or core dump over catching and processing exceptions before (likely) failing anyway.
Not saying either of those are better than the alternative (I prefer using exceptions and RAII), just pointing out what I've seen in real world projects.
I don't think this is the proposal. The proposal is that the object contains a genuine constructor that only does the bare-bones "safe" stuff, and then it has a separate non-static method that does the might-fail initialization. So:
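A minimal sketch of that split (the Connection class and its members are illustrative names, not from the original):

```cpp
#include <string>

class Connection {
public:
    Connection() = default;            // "safe" ctor: no allocation, no I/O

    // All fallible work lives here; the caller drives it and checks the result.
    bool initialize(const std::string& host) {
        if (host.empty())
            return false;              // risky step failed; object stays inert
        host_ = host;
        ready_ = true;
        return true;
    }

    bool isReady() const { return ready_; }

private:
    std::string host_;
    bool ready_ = false;
};
```

The constructor can never fail, and the caller decides when (and whether) to run the risky part.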
This also means you can break up your initialization so that you drive the risky pieces from outside the object, rather than monolithically from within.
This has a further benefit for testing, since you can use your major objects without fully initializing the entire world that they depend on.
It was almost the exact proposal specified by my grandparent post except using a pointer rather than an optional.
It's also a technique that is widely used. See for example the cocos2d-x game library.
The benefit of such a technique is that you can then make the constructor private, making it impossible to create an object and not also call the initialize() method.
It is very different from having a static method returning an optional, which guarantees that you can access the contained object (with at least an assert in debug mode) only if it was actually successfully constructed.
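A sketch of that optional-returning factory, assuming C++17's std::optional is available (the Config class and its members are illustrative):

```cpp
#include <optional>
#include <string>

class Config {
public:
    // Either a fully constructed Config or std::nullopt -- never a zombie.
    static std::optional<Config> create(const std::string& path) {
        if (path.empty())
            return std::nullopt;       // failure reported before any object exists
        return Config(path);
    }

    const std::string& path() const { return path_; }

private:
    explicit Config(std::string path) : path_(std::move(path)) {}
    std::string path_;
};
```

Because the constructor is private and create() is the only way in, any Config you can actually reach through the optional is fully initialized.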
Using a separate initialize member function means that you may have objects in a zombie state lying around after a failed construction, which leads to all kinds of initialization-order issues (you might get a pointer to the object, but is it initialized?). You also need to remember to check the return value, which in turn needs to be meaningful (does it return false on failure, or 0 on success?).
Two-phase initialization is a known antipattern which is, unfortunately, widely used and leads to all kinds of pain.
Friends do not let friends use 2PI.
edit: sorry, I misread your comment, you were referring to the static function returning a pointer, which as you note is almost the same as the optional version. It forces heap allocation though, which is bad.
I agree that 2PI leads to all kinds of pain, but what I meant is not 2PI. Here's a more complete example:
#include <new>  // for std::nothrow

class Foo
{
public:
    static Foo* create()
    {
        // nothrow new returns nullptr on allocation failure, without
        // relying on how the compiler behaves with exceptions disabled
        Foo* result = new (std::nothrow) Foo();
        if ( result )
        {
            //configure result here
        }
        return result;
    }
private:
    Foo() {}
};
...
Foo* badFoo = new Foo(); // compiler error because Foo() is private
Foo* foo = Foo::create(); // all good, no 2PI and can't forget to call initialize code
if ( foo ) // check for non-null; note, if using an optional you'd also need a similar check
{
    ...
}
Now the only way to create a Foo object is through the create() function and there is no separate initialize step - it all happens in one place.
This pattern of using a static create method is explicitly designed to avoid 2PI and is very common, especially in codebases that disable exceptions.
Also note that I'm not personally advocating using it, just that it is commonly used to avoid 2PI.
Correct. It means you can't create instances of Foo on the stack (outside of the Foo class).
You can have contiguous Foos, but not in a vector. You can either have another static function to return an array of Foos, or more commonly have some sort of pool allocator and have the create function allocate objects from the pool.
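A hypothetical sketch of the pool variant (the Particle class, pool size, and names are illustrative): create() hands out slots from a preallocated contiguous buffer via placement new, so there is no per-object heap call.

```cpp
#include <cstddef>
#include <new>

class Particle {
public:
    static Particle* create();
    float x = 0, y = 0;
private:
    Particle() = default;   // still private: create() is the only way in
};

// A fixed-capacity pool: instances live contiguously in one raw buffer.
namespace {
    constexpr std::size_t kPoolSize = 256;
    alignas(Particle) unsigned char g_pool[kPoolSize * sizeof(Particle)];
    std::size_t g_used = 0;
}

Particle* Particle::create() {
    if (g_used == kPoolSize)
        return nullptr;                        // pool exhausted
    void* slot = g_pool + g_used++ * sizeof(Particle);
    return new (slot) Particle();              // placement new: no heap call
}
```

Successive create() calls return adjacent addresses, which is what makes the objects cache friendly; a real pool would also need a matching destroy/recycle path.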
Anyway, yes, there are limitations for using this pattern, so like all things it's a matter of weighing up the tradeoffs.
There are lots of cases out there where the so-called "zombie" state is a perfectly valid one, and may for various reasons be preferable to representing that state externally (for example, with a null pointer). Such an object is simply a placeholder instance of the object that is ready to do work, but not yet actually doing anything. If necessary, it can check its internal state and throw exceptions when not-valid-while-uninitialized methods are called.
It does, and it can be, but in situations where it matters, the static create function typically returns a value from a preallocated pool of memory, so objects are all contiguous and cache friendly.
Even when using a pool allocator, you still have unnecessary indirection, which is expensive. One of the benefits of C++ is the ability to allocate subobjects inline within the containing object or array. By forcing indirection, allocating subobjects requires navigating a potentially deeply nested tree.
This is true, and like I said above, it's not a method I prefer to use, it's just something that is commonly seen in projects that disable exceptions in order to avoid 2 phase initialization.
There are definitely things to be aware of before adopting such a pattern, or when trying to optimize code that uses it.
My bad, I thought std::optional was part of C++14; it seems to be part of the next standard, C++17, but there's still boost::optional.
About the container issue: if you have objects that might fail during the creation it seems like a bad idea to allow things like:
std::vector<Foo> foos(10);
Having a separate initialization method which might fail - as proposed by others - is another option, but this means your objects need some kind of internal initialization state, and whenever you're handling such an object you can never be absolutely sure that it's in a valid state.
I'm quite a big fan of making invalid states unrepresentable in an object and handling failure cases as early as possible.
What the create method returns depends heavily on your use case. If the returned objects can always be allocated on the heap, then a pointer or unique_ptr can be returned.
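For the heap case, the create function might be sketched with std::unique_ptr so ownership is explicit (the Widget class and names are illustrative):

```cpp
#include <memory>
#include <string>

class Widget {
public:
    // Returns an owning pointer, or nullptr if construction is not possible.
    static std::unique_ptr<Widget> create(const std::string& name) {
        if (name.empty())
            return nullptr;            // fail before any object exists
        // can't use std::make_unique here: the constructor is private
        return std::unique_ptr<Widget>(new Widget(name));
    }

    const std::string& name() const { return name_; }

private:
    explicit Widget(std::string name) : name_(std::move(name)) {}
    std::string name_;
};
```

Compared to a raw pointer, the unique_ptr return makes it impossible to forget the delete, while keeping the can't-forget-to-initialize property of the factory.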