I feel like C++ is the counter-argument to backwards compatibility. Even Java is loosening its obsession with it.

Sometimes you just need to move forward.

Python 3 should be studied for why it didn't work as opposed to a lesson not to do it again.



C++ has already broken backwards compatibility a few times, and the way GCC changed its std::string implementation semantics is one of the reasons why features that require ABI breaks now tend to be ignored by compiler vendors.
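
(To make that ABI break concrete, here's a minimal sketch, assuming libstdc++ from GCC 5 or later; the greet function is just a hypothetical illustration of how the string ABI choice leaks into mangled symbol names.)

    // Minimal sketch, assuming libstdc++ from GCC 5+. Building with
    // -D_GLIBCXX_USE_CXX11_ABI=0 selects the old copy-on-write std::string;
    // the default (=1) selects the new std::__cxx11 string. Object files
    // built with different settings fail to link against each other's
    // std::string interfaces, because the mangled names differ.
    #include <iostream>
    #include <string>

    // Hypothetical library function: its mangled name records which
    // string ABI this translation unit was compiled against.
    std::string greet(const std::string& name) {
        return "hello, " + name;
    }

    int main() {
    #if _GLIBCXX_USE_CXX11_ABI
        std::cout << greet("new (C++11-conforming) string ABI") << '\n';
    #else
        std::cout << greet("old (copy-on-write) string ABI") << '\n';
    #endif
    }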

Backwards compatibility is C++'s greatest asset. I've already taken part in a few rewrites away from C++, exactly because the performance of compiled managed languages was good enough and the whole thing was getting rebooted anyway.


> Python 3 should be studied for why it didn't work as opposed to a lesson not to do it again.

I'm curious about this in particular. It seems like the Python 2 to 3 transition is a case study in why backwards compatibility is important. Why would you say the lesson isn't necessarily that we should never break backwards compatibility? It could well have jeopardized Python's long-term future. From my perspective it held on just long enough to catch a tailwind from the ML boom starting in the mid-2010s.


Because you end up accumulating so much cruft and complexity that the language starts to fold under its own weight.

Often you hear the advice that when using C++ your team should restrict itself to a handful of features. ...and then it turns out that the next library you need requires one of the features you had blacklisted, and now it's part of your codebase.

However, if you don't grow and evolve your language you will be overtaken by languages that do. Your community becomes moribund, and more and more code gets written that pays for the earlier mistakes. So instead of splitting your community, it just starts to atrophy.

Python 2 to 3 was a disaster, so it needs to be studied. I think the lesson was that they waited too long between breaking changes. Perhaps you should never go too many releases without breaking something, so that an expectation of stasis never forms and people are regularly upgrading. It's what web browsers do. Originally people were saying "it takes us 6 months to validate a web browser for our internal webapps, we can't do that!" ...but they managed, and now you don't even notice when you upgrade.


Part of the issue is that you can't just generalize it to "breaking changes." Ruby, a language very similar to Python, underwent a comparable break between Ruby 1.8 and Ruby 1.9, but didn't suffer the same fate.

The specifics really matter in this kind of analysis.



