C++ Has Become More Pythonic (2014) (preshing.com)
126 points by luu on April 30, 2016 | 121 comments



None of these ideas seem outright specific to Python and I don't think it makes sense to attribute all these changes to Python influence. C# and D, for example, have had most of these features for a few years. One could just as easily say "how C++ is becoming Haskell" or "how C++ is becoming Go" (both of which have many of these features). I think a better title for this would be something along the lines of "Python idioms in C++".


I would say that most of the features in the article (e.g. lambdas) originated in functional languages, not Python. Static type inference, in particular, comes from the functional world. Python's dynamic typing is completely opposite.


>how C++ is becoming Go

Hmm? Go has almost none of these features.


It has tuples, local type inference, for loops on ranges, and anonymous functions/lambdas (which act as closures/"capture" scope). Hardly "almost none of these features".


Go doesn't have tuples (multiple return isn't that). This leaves us with local type inference, for loops on ranges, and anonymous functions/lambdas.

The parent also mentions: binary literals, raw string literals, uniform initialization, standard algorithms like map, filter and any, and parameter packing. All of which Go doesn't have.

Plus, Go range loops are constrained to built-in types (slices, channels, etc.), unlike C++/Python where you can implement begin()/end() or the iterator protocol respectively and get it for any type.
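
For reference, a minimal sketch of the C++ side of this (the Countdown type is made up):

    #include <iostream>

    // Any user-defined type works with range-for: it just needs begin()/end()
    // returning something that supports !=, prefix ++ and unary *.
    struct Countdown {
        int from;
        struct iterator {
            int value;
            bool operator!=(const iterator& o) const { return value != o.value; }
            void operator++() { --value; }
            int operator*() const { return value; }
        };
        iterator begin() const { return {from}; }
        iterator end() const { return {0}; }
    };

    int main() {
        for (int i : Countdown{5})
            std::cout << i << ' ';  // prints: 5 4 3 2 1
    }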

So, from 10 things mentioned, whereas C++ and Python share ALL of them, Go only has 2 of them and sort of has one more (the range-for).

I'd stick with: "almost none of these features".


Go doesn't even have local type inference: http://ideone.com/RQzz7E. Contrast with a language that actually does: http://ideone.com/LtN1vQ. Type inference, even the local kind, means figuring out the types of variables from how they're used, not just from how they're initialized.


That's a completely arbitrary definition, and it is wrong. Type inference limited to the initialization point is still type inference, and actually most languages before Rust and Swift that have local type inference (C#, D, C++11) are limited to type inference at the declaration/initialization point.


> That's a completely arbitrary definition, and it is wrong.

The definition is very simple - the inference engine must use nontrivial inference rules to reconstruct the types. The rule “given P then P” alone doesn't quite cut it, which means that “given that 0 is an int, then something that's initialized to 0 is an int” also doesn't quite cut it. Calling what Go, C# and C++11 have “type inference” is akin to calling Python a statically typed language because it has a trivial type system with exactly one static type.

> actually most languages before Rust and Swift that have local type inference (C#, D, C++11) are limited to type inference at the declaration/initialization point.

Then they just have unidirectional type propagation - which is perfectly fine, just not type inference.


> The definition is very simple - the inference engine must use nontrivial inference rules to reconstruct the types.

Where exactly is this rule coming from? What is the source of your definition? And if it is the authoritative definition, why don't you go rewrite the Wikipedia page, which is then wrong? https://en.wikipedia.org/wiki/Type_inference

> The rule “given P then P” alone doesn't quite cut it

"Doesn't quite cut it" sounds like a very precise and scientific definition ! Also your categorization of C# is wrong, even by your own definition, because of function types inference and of subtyping, which makes the algorithm non trivial.

> Then they just have unidirectional type propagation - which is perfectly fine, just not type inference.

Again:

1. This is wrong, even by your own definition. You can have local type inference limited to the initialization point, and have non-trivial resolution rules. See this paper by Benjamin Pierce for an example: http://www.cis.upenn.edu/~bcpierce/papers/lti-toplas.pdf

2. Where is this definition even coming from? In my book, unidirectional type propagation is a form of type inference, and it quite logically follows: the type is inferred. The fact that you chose to draw a line at, say, flow-sensitive inference (in the case of Rust and Swift) or global unification-style inference (à la ML) is a completely arbitrary choice, and one that I have to this day never encountered. Indeed, I can't find any online resource that agrees with you. Most language documentations, including C#'s, C++'s and Go's, call this type inference. Most researchers call any mechanism where a language infers a type "type inference", even the mechanism that allows you to call a generic without specifying the type of the instantiation, as in this paper: https://www.researchgate.net/profile/Erik_Meijer/publication...

I have absolutely never encountered any definition of type inference which draws this line, and for good reasons, because it doesn't make any sense.


You are right about C#: The type of `x => x + 1` can't be said to be anything other than "inferred". I stand corrected.

But I disagree with the rest of your post. From the point of view of type inference, what matters is the nature of the type constraints that the type checking algorithm generates:

(0) Traditional type checking: All type constraints are of the form “T1 = T2”, where both “T1” and “T2” are closed type expressions. There is nothing to infer.

(1) Type propagation: All type constraints are of the form “X = T”, where “X” is a type variable and “T” is a closed type expression. Again, there is nothing to infer, but we might need to propagate “T” between places. Say, from the RHS to the LHS of a variable initialization.

(2) Type inference: Type constraints are arbitrary type expressions, which must be solved using a unification algorithm.
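
To make the distinction concrete (a toy illustration in C++ terms, not any actual checker's internals):

    // Category (1), type propagation: every constraint is "X = T" with T closed.
    auto x = 0;      // constraint: type(x) = int, read directly off the literal
    auto y = x + 1;  // constraint: type(y) = int, propagated from x
    // Nothing is ever solved; each type is copied from an already-known one.
    // Category (2) would be ML's `fun f x = x + 1`, where the unknown type of
    // x must be unified with int based on how x is used.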

As for why type propagation doesn't count as type inference: http://lambda-the-ultimate.org/node/4771#comment-75771


Go has raw string literals.


Go doesn't have local type inference: http://ideone.com/RQzz7E. This is local type inference: http://ideone.com/LtN1vQ.

And it also doesn't have tuples: you can't store tuples in variables or larger data structures, pass them as function arguments, etc.


Go doesn't have tuples.

Go doesn't have local type inference.

For loops on ranges are limited to Go built-in types; you can't create your own iterators.

So yes, almost none of these features are available in Go.


I'm new to Go, but I don't think it has tuples in the Python sense[1]. Happy to be corrected here if there's a broader definition you're referring to.

I do wish it had support for slice unpacking like it does for return values. The assignment from returns or ranges is one of the things I really enjoy.

Edit: [1] I'll clarify that I define that as an immutable, indexable data structure that you can unpack into multiple assignments.


It doesn't have "true tuples" but you can do multiple returns in a tuple-like way and there is syntax for unpacking it. While a pair can't be stored in a single variable, you can just manually unpack and repack them for a similar effect to what tuples can do. For the record, most languages don't have indexable tuples as far as I'm aware and make you unpack them with some form of pattern matching.


> For the record, most languages don't have indexable tuples as far as I'm aware

Most languages with tuples have indexable tuples either via language syntax or library functions. (For example, OCaml practically does, with fst and snd.)

> It doesn't have "true tuples" but you can do multiple returns in a tuple-like way and there is syntax for unpacking it.

There's a large difference between having multiple return values and having first-class tuple values.
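
Concretely, first-class here means you can do things like the following (a small C++ sketch for contrast):

    #include <tuple>
    #include <vector>

    std::tuple<int, int> divmod(int a, int b) {
        return std::make_tuple(a / b, a % b);  // a tuple is an ordinary value
    }

    int main() {
        auto t = divmod(7, 2);                 // store it in a variable
        std::vector<std::tuple<int, int>> v;
        v.push_back(t);                        // put it in a larger structure
        int q, r;
        std::tie(q, r) = t;                    // or unpack it
    }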


Haskell/OCaml style "fst" and "snd" functions can't index arbitrary tuples (they only work on 2-tuples/pairs): http://i.imgur.com/PbhNxmu.png.


> C# and D, for example, have had most of these features for a few years

Come on, now. D is 15 years old, C# 16.

Python is more than 25 years old.


Most of these features were not present in Python until later (mentioned in the article: "In 2001, Python added statically nested scopes, which allow lambda functions to capture variables defined in enclosing functions"). And if we're discussing the original source of these features, Lisp had every single one of these before Python even existed.


With the advantage of compiling to native code and proper use of all CPU cores!


This is the kind of broad generalization that someone who knows a lot about Python but very little about other languages would make. The truth is most of these features have existed in other languages for many years, before C++ adopted them and, in some cases, before Python did.


Not in "some cases" but in "all cases". Python did not introduce anything new in this list.


auto is type inference. That's not "pythonic", that's something that statically typed languages have had for decades. Indeed, it's nothing like Python, where general practice is to pass around things derived from Object and depend on runtime typing failures.

There's also nothing Pythonic about lambdas. Come on, lisp is ancient and it has lambdas.

This article essentially says C++ got stuff that has been around for decades, and Python has stuff that vaguely resembles them, so C++ is learning from Python.


`auto` is just bottom-up type derivation, not really type inference. Type inference lets you recover the type of a variable from how the variable is used, not just from how it is initialized. Implementation-wise, bottom-up type derivation only requires unidirectional type propagation, whereas type inference requires solving arbitrary systems of type equations. The former is a special case of the latter, where all equations are of the form “X = T”, where “X” is a type variable and “T” is an arbitrary type expression.

No type inference: http://ideone.com/fmm92M, http://ideone.com/HzyM2E

Type inference: http://ideone.com/0Gv8fV, http://ideone.com/r9nHUF


It's clearly not type inference in its full generality (or all that close, really), but it seems a bit odd to say "X is a special case of Y; A is doing X; A is doing no Y".


Type inference is performed by scanning the program, generating a system of type equations and solving it. If all type equations are of the form “X = T”, then there's nothing to solve.


Type inference is inferring a type where none has been specified explicitly.

There are certainly more or less advanced forms of it, but solving systems of type equations is not the definition of type inference.


On what type inference is, from a PL researcher: http://lambda-the-ultimate.org/node/4771#comment-75771


There will always be some who insist that only the most advanced form of X is "really" X.


ML-style type inference is hardly “advanced”. And I'm even willing to count more limited forms of inference as seen in Scala, Rust or Swift - what they have is only local, but it's actual inference.

But what you're claiming is the equivalent of having a “number inference” engine that can conclude that “x = 8” from “x = 4 * 2” - that's not “inference”, it's just evaluating a single expression. Actually, what Go has is even less than that, because Go's type checker doesn't need to reduce anything.


It's more of a vocabulary issue. In PL theory "type inference" is a well-defined concept. It means that you have the ability to reconstruct the types of a program without annotations.

No mainstream (imperative) programming language has type inference in this sense (for good reasons). That's why the term is usually used with a different meaning when talking about mainstream programming languages.


In the case of C++'s auto, the type is specified explicitly - it is the RHS type of the assignment. The type is not deduced here, it is simply propagated.

Auto does not add anything new on top of the existing type checking: where, for a specific LHS type, the checker would normalise both the left- and right-hand types and check for assignability, in the case of auto it assumes LHS = RHS without checking anything.

Therefore auto is a subset of type propagation, not type inference.


auto x = 7;

auto y = f();

Where is the type of x specified explicitly in the local context?


In `7`. It's an `int` literal, so `x` has to be an `int`. C++ won't even do you the favor of accounting for the possibility that `x` might have some other type, say, `double`, to which an `int` can be coerced. As for why you would want C++ to infer a different type, consider `std::size_t n = 0;`.
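
That is (a tiny illustration of the point):

    #include <cstddef>

    std::size_t n1 = 0;  // fine: the int literal 0 converts to std::size_t
    auto n2 = 0;         // n2 is int, period; deduction never looks at how n2
                         // is used later, so there is no way to get a
                         // std::size_t here short of writing std::size_t{0}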


>In `7`. It's an `int` literal, so `x` has to be an `int`

What is that other than inferring the type of x from the type of 7?

Edit: You don't need to convince me that this is extremely primitive type inference. I'm not defending the quality of C++ or Go type inference at all.


The point is that this is not type inference, it is a restricted form of type propagation (i.e., a subset of the pre-auto C++ type propagation, not an extension of it).


So your point is that some people do not consider every form of inferring the type of a variable to be type inference. I get that.


No, I am pointing out that the type is fully explicit here and not "inferred" from anything.


The type of x is a very explicit literal type. The type of y is also fully explicit: it is the 'return type of f()'. Not any different from the type of a sub-expression.


The type of f() is explicitly specified somewhere outside the local context. The type of x is not explicitly specified anywhere. It is inferred from the rhs expression.


The type 'return type of f()' is still a type. It does not matter that it is not normalised; most C++ types are used in a non-normalised form. The type 'struct MyKewlMegaStructure' is not any different and not any more "local".


How does that mean that the type of x is not inferred from something outside the local context?


Ok. Then in 'struct _abc x;' we also have type inference. And in any sub-expression there is also "type inference".

Although I think this kind of twisting of common term definitions is totally pointless. "Type inference" has a very well-defined meaning, which has nothing to do with any kind of type propagation - the latter term also exists for a reason, to designate a certain sort of type system, fundamentally different from the inference-based ones.


Then a lot of what people write on this subject seems to be using incorrect terminology, including tons of stuff issued by computer science departments and wikipedia.

After digging a little deeper into the history of this terminology I have to concede that you and catnaroek are right. There was from the beginning in the 1950s a distinction that I didn't know about. So I was wrong.

Thanks to you both for enlightening me.


The type equations aren't all "X = T", though. Consider inference on casts, such as `{1, 2, 3}` to `std::vector<int>`:

    #include <vector>

    float make(float) { return 0; }
    int make(std::vector<int>) { return 0; }

    int main(int, char **) {
        auto x = {1, 2, 3};
        auto y = make(x);
    }
You've also got return type deduction and overloading,

    template <typename T>
    auto id(T val) { return val; }

    int main(int, char **) {
        auto x = 7;
        auto y = id(x);
    }
and even stupid template tricks

    #include <type_traits>

    template <typename T, typename U,
              typename = std::enable_if_t<std::is_same<T, U>::value>>
    auto add(T lhs, U rhs) { return lhs + rhs; }

    template <typename T, typename U,
              typename = std::enable_if_t<!std::is_same<T, U>::value>>
    float add(T lhs, U rhs) { return lhs + rhs; }

    int main(int, char **) {
        auto x = 7;
        auto y = add(x, x);
    }
Whilst all the type inference is still one-directional and falls out of template expansion, it's still legitimate inference.


This doesn't really address my comment.

You said two things (A, B); I pointed out that they seem incompatible; you doubled down on one of them (A).

Did you intend to give up the other (B), or to refute the notion that they are incompatible?

For clarity, A is "what they are doing is not type inference" and B is "what they are doing is a special case of type inference". Either of these positions seems reasonable to me, but as I noted they seem to conflict.


The lambdas one is the one that got me - the best word I can use for Python's lambda support is "grudgingly."


As a developer who used to build systems in embedded C++ and now spends all of my time building web backends in mostly Python, I have to agree with the premise that C++ is becoming more influenced by Python. That's a good thing, I think. I really think C++ is a wonderful language if properly curated and used by a responsible team of developers.

One tiny thing that would be cool to see built into C++ would be an equivalent to the Python range function. The boost version is nice for now, though.
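
Presumably the Boost version referred to is boost::irange; a quick usage sketch:

    #include <boost/range/irange.hpp>
    #include <iostream>

    int main() {
        for (int i : boost::irange(0, 10))     // like Python's range(0, 10)
            std::cout << i << ' ';
        for (int i : boost::irange(0, 10, 2))  // like range(0, 10, 2)
            std::cout << i << ' ';
    }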

Finally, I also see this as a nice complement to Python and the power that it offers as a language.


I do not think there was a single bit of Python influence in C++'s design. The people behind these changes are all seasoned PL researchers and they have far better sources of inspiration than poor little Python.


"As a developer who used to build systems in embedded C++ and now spends all of my time building web backends"

I'm curious, what motivated your switch from embedded to web?


I loved being an embedded developer. Loved it. I had been out of college for 4 years and thought that the work at startups would be even more exciting. I took a job as a backend developer at my current company because I was really excited about the technology they were working on from a conceptual level, and I still am. I transitioned from working on computer vision/image processing embedded work to natural language generation stuff in AWS. The AI aspect just generally excited me, but so did expanding my skill set in a much wider sense. Today I can talk about embedded concepts all the way up to AWS concepts like lambdas, cloud formation, etc. That's a pretty cool breadth of skill (notice I didn't necessarily say depth :P). I still feel like an embedded developer at heart, though. I'll go back to it someday.


Thank you, interesting. I am curious because I've gone the other direction. I began in the latter 1980s on the IBM PC (literally with IBM). At IBM they gave us a PC, a copy of the Technical Reference Manuals, and a screwdriver. It was really fun to learn the machine literally from the metal upwards. Beyond the screwdriver was Macro Assembler and a C compiler. Avoiding reminiscence, the point is that over the ensuing years my career took me further and further up the stack into software only, from end-user applications to a long time in the server "backend" space. During those years I became ever decreasingly aware of the hardware and internals of the OS. Machines themselves became "virtual instances". Thus is the modern world of highly scalable, distributed 'net and [dread] enterprise computing, and it's all great and exciting in many ways -- but, for me I also felt that my heart was happiest when thinking of things at the lower level. That drove playing with embedded systems, which led ultimately to a complete career shift. All is good. And, yes, we are all so very fortunate to be in an economic segment (technology, hardware and software) that allows us such freedom to choose our paths (and then change them).


I wonder if all these languages features are really necessary. I used C++ during the first ten years of my career and what I missed the most were better standard library features and more third party libraries (we used to write everything ourselves).

Language features are nice to have, but I never really missed any specific feature. Almost every sort of syntactic simplification I wanted to achieve, I could do so with a function, class or a template.


It's not about necessity. It's about convenience. There's really not much you can't do with C++, the only question is how much effort it requires. These changes help reduce effort without sacrificing anything, save for a minuscule amount of compile time. Besides, you can always use the old syntax if you prefer.


Oh, you could totally implement, e.g., the range that fndrplayer13 is talking about as a library function.


See the range-v3 library, which is on its way (albeit slowly) to becoming part of the C++ standard. It provides C++ versions of range, as well as most of the iterator combinators in the itertools library.
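
For reference, the shape this eventually took in the standard (C++20's <ranges>, which grew out of range-v3):

    #include <iostream>
    #include <ranges>

    int main() {
        for (int i : std::views::iota(0, 5))  // Python's range(0, 5)
            std::cout << i << ' ';            // prints: 0 1 2 3 4
    }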


After C++11, the language is no longer the bottleneck on the speed of development in C++. But C++ developers are still dragged down by numerous other problems: std::string is close to useless, there is no userspace, project-scoped package manager, build times are too long, templates are hard to debug, etc.


> too long build times, hard to debug templates, etc.

These are symptoms of language issues.


And hopefully somewhat mitigated by modules once they land and (if it happens) everyone migrates.


Asking out of total ignorance: How exactly do modules interact with templates? Is template specialization restricted to the module where the template is defined? If not, I don't see how modules could help much.


Still ongoing design. This is the Microsoft view of them

https://blogs.msdn.microsoft.com/vcblog/2015/12/03/c-modules...

Regarding templates, the approach is similar to the deprecated export keyword, so partially compiled templates and inline functions are stored in the metadata database used by the compiler for modules.


I love how the second and third paragraphs describe modules as if they were some sort of novel, perhaps even futuristic, language feature that no programmer has ever seen before. “Oh, it's totally not like we're catching up with what other programming languages had for decades.”


The problem is that the C culture practically ignored everything that was done in computing up to the late 70s.

Bjarne was of course aware of modules, having been brought up on Simula and other programmer-friendly languages, but then he had to make C++ code fit into UNIX linkers that only knew about AT&T Assembly and C binary formats.

So no modules there, and over the years you got a culture of developers, especially those who only learn on the job, that never learned about modules and their history.

You see it happen also when cache friendly code and RAII get discussed.

Those things were also possible outside the C family of languages, before they won the market. So most millennials think that they are some special language capabilities only possible in C and C++.


> You see it happen also when cache friendly code and RAII get discussed.

To be honest, I can't talk intelligently about cache-friendliness at all. I know that there exist models of computation (say, for complexity analysis purposes) that explicitly take memory hierarchies into account, but I've never seen them actually used on anything but the simplest data structures and algorithms.

With respect to RAII, I think the main benefit isn't cache-friendliness, but rather deterministic destruction - you can't delay relinquishing a scarce resource that someone else might want to use.

> So most millennials think that they are some special language capabilities only possible in C and C++.

I'm a millennial, and have to admit with shame that I grew up thinking C++ is the best thing ever. But it's possible to recover from it. :-)


> With respect to RAII, I think the main benefit isn't cache-friendliness, but rather deterministic destruction - you can't delay relinquishing a scarce resource that someone else might want to use.

I never said it was related to it. I said it was the other feature many think is C++-specific, or introduced via C++.

Ada also has RAII via controlled types and Object Pascal also had destructors when it was introduced.

Functional programming languages that allow for rich macros, support trailing lambdas or currying also allow for RAII like implementations using those features.

For example, with-open-file in Common Lisp.


> but I've never seen them actually used on anything but the simplest data structures and algorithms

Take a look at typical performance-oriented GPU coding; cache is the major analysis parameter there. Data structures are mostly 2D and 1D arrays, but they may be scrambled in a very complex way, and the algorithms are arbitrarily complex.


Thanks for the pointer. Will have a look.


"I like work; it fascinates me. I can sit and look at it for hours." -- Jerome K. Jerome. Especially when it's the high priests of the C++ orthodoxy reinventing the wheel 30 years late.


Idk dude, after programming in Swift for some time, I feel like C++ can't really become a modern language.


Agreed that Swift (and other languages like Rust, for that matter) offers some very nice advantages over C++. However, C++ is so well established and has a rich collection of libraries and integrations. You can write a library in C++ and ship it on nearly everything. You can also optimize the parts of your system that are well-suited to a systems language like C++ and then easily expose them in other languages like Python, Ruby, Go, Swift, etc. via their C bindings.

We can't say that (yet) about Swift and Rust. In 3-5 years I think this is going to be a very different conversation, though.

In my mind, C++'s flexibility is both its greatest benefit and its greatest danger. You can do almost anything, and there are so many ways to do it. I agree this is a "problem" that's likely not going to be fixed. You can ask your fellow developers to read Scott Meyers's Effective (Modern) C++, you can go to meetups, listen to the wisdom of the steering committee, etc., but at the end of the day it really boils down to the fact that your team needs to be committed to being resilient and responsible. That's true in any language, but much more so in C++.


Rust actually has a really nice embedded library story, and I've already seen a few things in Ruby or Go that have started using Rust libraries for performance. https://github.com/BurntSushi/rure-go for example, just to pick something that's crossed my feeds recently.

Not sure about Swift, I haven't really been paying attention in that space.


That is what eventually made me more focused on JVM and .NET languages, with C++ being used only for lower-level infrastructure code, if at all needed.

Writing proper code in C++ requires, as you say, that "your team needs to be committed to being resilient and responsible", which just doesn't happen in my little corner of the world.

So using it in personal projects, yeah. At work, not really.


At least two production users of rust are explicitly embedding rust in Ruby. The Node bindings (neon) are really great too.

Even then, in C++ you still need to expose a C interface, so it feels at least on even footing with Rust. But of course, I am biased...


Preshing is a smart guy but this seems to miss the point in a way typical of C++ devs. Python—as with other languages—isn't just a list of features. It also has a guiding philosophy (made explicit in Python's case in https://www.python.org/dev/peps/pep-0020/).

C++ doesn't seem to have a guiding philosophy besides "be a Swiss army knife backwards compatible with C." That's OK, but I would argue it prevents C++ from ever being "Pythonic."


There are many cultures inside C++, but they basically split into two.

There is the one that appreciates C++'s abstraction capabilities together with the ability to go low-level when needed, cares about writing safe code, and sees C copy-paste compatibility as a compromise required to get C++ adopted by the industry. Those developers usually appreciate languages like Ada, Modula-3 and Haskell, and would rather kill the pre-processor.

Then there is the culture that somehow had to move from a C compiler to a C++ one, constrains itself to the C subset of C++ while ignoring the standard library ("because bloat"), and writes exploit-sensitive code just as it always did in C.

Many in the second culture tend to do micro-benchmark-driven development. Each line of code is questioned on how much speed and memory it costs, by gut feeling, without any input from profiler tools or the actual needs of the application's users.


If there is one guiding philosophy that C++ has, it's abstraction without runtime cost.

The (implied) corollary is that compile time cost, mental cost and social cost of language features do not matter.

As long as there is at least one person on the planet who really understands the ins and outs of a language feature, that is sufficient proof of the language being simple enough. There is no need for anyone to understand the entire language.


That's fair but Rust has the same primary design goal and yet I find programming in it more ergonomic (despite Rust having a cranky borrow-checker that C++ lacks).

I don't know how much of that is due to C++'s historical baggage and how much is due to a greater concern for usability and elegance on the Rust side, with consequent discretion when adding new features. But I don't think it's entirely the former.


C++ has a much stricter guiding philosophy than the Zen of Python. This is it: "you should never pay for the features you do not use". The entire language is built with this guiding principle in mind.


Right, but pretty often the payment is not obvious. For example, it's often shocking how many non-obvious wasteful copies are made behind the scenes by relatively innocent-looking code. That's of course the "programmer's fault", not denying that.
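
A classic instance of the non-obvious kind, for illustration:

    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        std::map<std::string, std::string> m = {{"a", "x"}, {"b", "y"}};
        for (auto kv : m)            // innocent-looking, but copies every
            std::cout << kv.first;   // key/value pair on each iteration
        for (const auto& kv : m)     // same loop, no copies
            std::cout << kv.first;
    }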

So you end up paying for things you didn't intend to use, but used anyways because of some small detail.

Sometimes I feel like the only way to write good C++ is to keep looking at the disassembler output...

C++ is my dayjob.


> That's OK, but I would argue it prevents C++ from ever being "Pythonic."

This got me thinking, what would one call an idiomatic C++ style (which I suppose is part of what the core guidelines are trying to push)? Cppthonic? Bjarnic? Python is 'Pythonic,' I usually hear idiomatic Ruby described as the 'Ruby Way.' Is idiomatic Rust 'Rustic'?


For Rust at least there's been some bikeshedding about that https://www.reddit.com/r/rust/comments/33auoe/idiomatic_rust....


How about "Herbal" for Herb Sutter's style?

Weed pun intended :)


A lot of your posts show up as [dead]


It means hardly anyone sees them.


I'm sure I'm not the only one who doesn't like how these things are bolted on and shoehorned into the existing constructs, resulting in some ugly-ass syntax. Consider:

  auto triple = std::make_tuple(5, 6, 7);
  std::cout << std::get<0>(triple);
Ugh. And don't even get me started on "myList.push_back(5)". "push" usually means "add to the front"; why not use "append"?
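
For what it's worth, C++11 does at least have std::tie for the unpacking half of the gripe (still not pretty, granted):

    #include <iostream>
    #include <tuple>

    int main() {
        auto triple = std::make_tuple(5, 6, 7);
        int a, b, c;
        std::tie(a, b, c) = triple;  // roughly Python's: a, b, c = triple
        std::cout << a;              // prints: 5
    }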


Sigh - I know why you got downvoted but I came to post the same thing. I don't program in C++ but specifically the

std::get<0>(triple)

It really doesn't look great. It looks like the template syntax using the <>. I'm sure I would get used to it if I were using it every day but from an aesthetic sense it doesn't win me over.


It is the template syntax.


In Perl, "push" means "add to the back."


Also C++ has Python-like formatting: https://github.com/fmtlib/fmt. Disclaimer: I'm the author of this library.


It certainly has for me.

Here is me translating as literally as I could a bit of nontrivial code (which I originally learned from R's C code):

http://inversethought.com/hg/medcouple/file/default/jmedcoup...

http://inversethought.com/hg/medcouple/file/default/medcoupl...

I'm really happy how almost every Python statement has a nearly equivalent C++ statement. The only one I had trouble with was list comprehensions, but they can be very nearly translated with C++ stdlib algorithms and a few lambdas (kind of looks more like map and filter than a list comprehension, though).
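
Roughly how such a translation goes (a generic sketch, not the linked code):

    #include <algorithm>
    #include <iterator>
    #include <vector>

    int main() {
        std::vector<int> xs = {1, 2, 3, 4, 5};
        // Python: ys = [x * x for x in xs if x % 2 == 1]
        std::vector<int> odds, ys;
        std::copy_if(xs.begin(), xs.end(), std::back_inserter(odds),
                     [](int x) { return x % 2 == 1; });  // the filter part
        std::transform(odds.begin(), odds.end(), std::back_inserter(ys),
                       [](int x) { return x * x; });     // the map part
    }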

The best part is that the C++ code runs as fast and sometimes faster than the original C code I grabbed this from! (The translation path was C -> Python -> C++.)


It's nice to have a "do this to all that stuff" FOR statement, at last. Remember what it was like declaring iterators over collections in FOR statements before AUTO? That was the original motivation for AUTO. But it's much more useful than that. It becomes the common means of declaring local variables.
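
For anyone who has forgotten, the before and after:

    #include <map>
    #include <string>

    void count(const std::map<std::string, int>& m) {
        // Before C++11: spell out the iterator type in full.
        for (std::map<std::string, int>::const_iterator it = m.begin();
             it != m.end(); ++it) { /* use it->first, it->second */ }
        // After: auto, or skip the iterator entirely with range-for.
        for (const auto& kv : m) { /* use kv.first, kv.second */ }
    }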


You could always use `std::for_each` and Boost.Lambda, even before C++11.


Props to python.

However, what idea has C++ not accepted? It seems for it to be a "new" language, it should at least say "no" to something.


C++ says "no" to readability. Everything that has been added to C to create C++ was added in order to help out the person writing new code. Nothing that was added to C to make C++ is there to help the person trying to read the code. Though I doubt it was deliberate, much of what makes C++ different than C seems actively hostile to the person trying to read the code.


You really find

   auto x = new int[64];
harder to read than

   int* x = malloc(sizeof(int)*64);
?

If so, I think it's just a matter of habituation and imprinting. Whatever you learned first is easier and everything else is hard.


Both C and C++ have readability issues.

  ((void(*)())exec)();
or

  template <typename T>
  struct value_type {
    typedef typename T::value_type type;
  };
  
  template <typename T>
  struct value_type<T*> {
    typedef T type;
  };
Six of one, half a dozen of the other.


I think there's a gap that could be filled by making a nice front-end to C++ with a nicer, simpler (even Pythonic) syntax.

Drop the semicolons, convert "dict of int to string" to map<int, std::string>, etc.

It would be ok if it only did a subset of C++ at first.


Sounds like http://nim-lang.org/ . Just simplifying syntax is not a great idea for a system programming language though.


It doesn't have yield.

It also doesn't have Concepts yet, as they got pushed back again.


They're working on Concepts, and they're working on yield (search for resumable functions). Maybe they can say "no" to "saying no"? :)


> With range-based for loops, I often find myself wishing C++ had Python’s xrange function built-in.

Built-in to the language would be nice, but C++ is always extremely spartan with its standard library functionality. Thus:

https://gitlab.com/higan/higan/blob/master/nall/range.hpp

37 lines (without the boilerplate header stuff) and you have:

    for(int x : range(20)) ...;  //iterate from 0 ... 19
    for(int y : rrange(myvector)) ...;  //iterate from myvector.size()-1 ... 0
Also supports xrange's offset/stride arguments (it's pretty much exactly Python's xrange object in C++.) And you can add support to do ranges over any class anywhere, eg range(myvector) will pull the length from myvector.size() for you with a simple overloaded function.

Timing tests in extremely critical loops (like convolution kernels eating 100% CPU usage) shows no discernable performance impact at -O2 and above over the traditional C++-style for(int x = 0; x < size; x++) style.


I'm still waiting on Pythonic iterators (e.g. coroutines).



I'll never understand why some people insist on backslashes in paths on Windows.

Windows APIs have always supported using forward slashes, using "C:\\path\\to\\file" is just an ugly mess.
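
e.g. this works fine on Windows (a trivial sketch; the path is made up):

    #include <fstream>

    int main() {
        // Win32 file APIs (and the CRT/iostreams layered on them) accept
        // forward slashes in paths:
        std::ifstream f("C:/path/to/file.txt");
        return f.good() ? 0 : 1;
    }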


It doesn't work everywhere; 3rd-party libraries would have trouble for sure, and I think I had some issues even with Windows itself or some higher-level Microsoft API too.


those are amazing features I didn't expect C++ to have. I wonder if 'auto' is similar to type inference 'var' in C#.


C# does not have type inference: http://ideone.com/fmm92M. Nor does C++ for that matter: http://ideone.com/HzyM2E. This is type inference: http://ideone.com/0Gv8fV, http://ideone.com/r9nHUF. Note how the type checker figures out the list's element type from how the list is used.


You've said this same thing 3 or 4 other times in this thread.

Yes, yes, the formal definition of type inference as supported by languages like Haskell is not fully supported by C++, or Go, or C#, etc. But that's not the definition that anyone else in the comments here seems to be using, so can we drop it?

You seem to be knowledgeable about type theory. Is there a formalized term for the subset of type inference that C++11 supports? If so, can you just assume that others are using that term? Not that the discussion isn't interesting, but you're coming off as needlessly contrarian.


Type propagation.


Note how that's an anti-feature.


Whether a feature is a good or a bad thing to have is each programmer's subjective opinion. (But, of course, mine is diametrically opposed to yours: typeful programming would simply be impractical if you had to babysit the type-checker all the time.) OTOH, whether a language has or doesn't have a feature, is a technical fact.


It's not subjective at all. Features are good or bad for productivity. Maybe you have a different goal or have calculated the expected value differently, but with a well defined purpose it's not a subjective question.


> It's not subjective at all. Features are good or bad for productivity.

While productivity can be more or less objectively measured in the long run, the effects of a language feature on productivity vary from one programmer to another. There's no universal basis for evaluating this.

> Maybe you have a different goal

My primary goal is correctness. I don't really believe in 80% solutions. Having an incorrect program is just as good as having no program.

> have calculated the expected value differently

Very differently. My calculation is based on the following objective, unquestionable, technical facts:

(0) Determining whether the internal structure of a program is consistent (in the formal logic sense) involves lots of long, tedious and mostly unenlightening calculations.

(1) Computers are very good at performing long calculations. On the other hand, humans suck at it.

(2) Type structure, at least in sensibly designed languages, is the most efficient way known so far to communicate the intended logical structure of a program to a computer.

With type inference:

(0) The programmer only has to provide just enough information for the type checker to reconstruct the logical structure of the program.

(1) If the pieces fit together, the type checker will tell you exactly how.

(2) If the pieces don't fit together, the type checker will tell you exactly why.


OK, but you didn't specify a "well defined purpose". So what you said is obviously your opinion.

I find true type inference to be very helpful for readability. For example, you can write "let mut x = None;" where writing a type would be just noise.


Funny how Python narrows the perspective of its practitioners. To brand as "pythonic" all those decades old concepts that had been well known and widespread long before Python is, well, among the most amusing symptoms of a fanboyism.


I think Python's popularity, along with that of other languages mentioned here, is what influenced and partly drove the rapid change of C++. Lisp was around for decades, yet only now, almost right after Python became popular, have these old concepts been incorporated.


There's a comment by Fabio Fracassi ( http://preshing.com/20141202/cpp-has-become-more-pythonic/#I... ), member of the German delegation to the ISO C++ committee, contesting that view. At roughly the same time that Python has been getting these features, many other languages have also been getting them, and usually without crediting Python's influence (but rather other languages').


The problem is that Pythonicity is exclusive. If the features mentioned correspond between the languages, that's merely a common property. C++ couldn't be Pythonic if Pythonicity describes the necessity of using these features.


I would like to borrow some simpler syntax from Python:

"if (a)", "for (...)" -> "if a", "for ..."

"if ((a > b) && (c > d))" -> "if a > b and c > d" (yes, I know C++ supports "and" if you wish)

Or how about accepting a new-line as a ; at the end of a statement (ie: optional ; at line end)

I also love nested tuple unpacking: "for i, (key, value) in enumerate(dict.items())"
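
The closest C++ is getting to that unpacking is structured bindings, at the time of this thread still working through the committee for C++17. A sketch of a rough analog:

    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        std::map<std::string, int> d = {{"a", 1}, {"b", 2}};
        int i = 0;
        // Rough analog of: for i, (key, value) in enumerate(d.items()):
        for (const auto& [key, value] : d)  // C++17 structured bindings
            std::cout << i++ << ' ' << key << ' ' << value << '\n';
    }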


> Or how about accepting a new-line as a ; at the end of a statement (ie: optional ; at line end)

And have to use backslashes or something for multiline statements? Ugh.


Not necessary. Python allows any open bracket to signal that the line is not terminated yet.



