Update: the cost is a good point, and it wasn't that high. We have updated the article to reflect this: it was two engineers working full-time on it for three weeks.
A few minutes of downtime wouldn't be that bad in itself, but it would harm the company's reputation. Unlike AWS and Netflix, our business is not built on massive consumption but on a few customers, and losing one good customer can make a difference. We believe it was worth the investment of time.
You are right. AWS DMS was our very first choice to try out: it is very easy to use, it is deployed within your VPC, and it already solves most of the problems we mention in the article. Unfortunately, we ran into errors during our tests and the logging was not helpful enough for us to track down the problem and make it work.
That's a good point. We refer to latency as "replication lag" and have devoted a paragraph to the drift that results from it. In our case we measured the lag at under one second, which was totally acceptable. As for performance, Bucardo's triggers can become a problem with enough write traffic; in a different use case, that alone could rule Bucardo out as a potential solution.
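For anyone wondering how a number like that can be obtained, below is a minimal sketch of one generic way to measure replication lag: write a heartbeat timestamp on the source and poll the target until it shows up. This is not necessarily how the article's authors measured it; the heartbeat table and connection strings are hypothetical, and the table would have to be included in the replicated set.

```typescript
// Minimal sketch (not the article's exact method): measure replication lag
// with a heartbeat row. Assumes a hypothetical table on both databases:
//   CREATE TABLE heartbeat (id int PRIMARY KEY, ts timestamptz);
// which is part of the replication configuration.
import { Client } from "pg";

async function measureLagMs(sourceUrl: string, targetUrl: string): Promise<number> {
  const source = new Client({ connectionString: sourceUrl });
  const target = new Client({ connectionString: targetUrl });
  await source.connect();
  await target.connect();
  try {
    const stamp = new Date();
    // Upsert the current time on the source; replication should copy it over.
    await source.query(
      `INSERT INTO heartbeat (id, ts) VALUES (1, $1)
       ON CONFLICT (id) DO UPDATE SET ts = EXCLUDED.ts`,
      [stamp]
    );
    // Poll the target until the new timestamp arrives (no timeout: sketch only).
    for (;;) {
      const res = await target.query("SELECT ts FROM heartbeat WHERE id = 1");
      if (res.rowCount && new Date(res.rows[0].ts) >= stamp) break;
      await new Promise((resolve) => setTimeout(resolve, 50));
    }
    // Elapsed wall-clock time is an upper bound on the replication lag.
    return Date.now() - stamp.getTime();
  } finally {
    await source.end();
    await target.end();
  }
}
```

In practice you would run something like this periodically and alert if the lag grows beyond whatever drift window the cutover plan can tolerate.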
We didn't mean to be arrogant with the "done right" statement. In the background story we explain how we performed the same migration once before and ended up with data loss, so this was the time we actually "did it right". Many of the tutorials on the internet that describe a similar process also lead to data loss.
AWS can upgrade your database automatically, but with some downtime. AWS also provides DMS for migrations, which didn't work well in our case. So what looked like a rather simple problem at first turned out to be a very complex one in the end.
Exactly. With so many people out there advertising themselves as "software engineers" while really being "coders", companies fail to produce maintainable code without a set of strict rules. Coding manifestos are a must these days; otherwise you end up with a codebase that looks like the Tower of Babel.
We are looking for someone who can help us Kubernetize our infrastructure. Prior experience is a big plus, but not strictly required. We are based in Athens, but remote work from a similar timezone is also possible.
It's indeed a good reason for apps to pin their dependencies; those that did so presumably didn't have their builds broken by this issue. I don't feel that pinning dependencies necessarily makes a whole lot of sense for libraries, since it creates inflexibility and dependency duplication for app developers. But opinions vary.
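To make "pinning" concrete for anyone following along, here is a minimal, hypothetical package.json fragment (package names and versions are purely illustrative). lodash is pinned to an exact version, so every install gets exactly that release; express uses a caret range, so npm may install any compatible newer release. An application can use the exact style freely, whereas a library that ships exact versions forces duplication on any app that needs a different compatible release.

```json
{
  "dependencies": {
    "lodash": "4.17.21",
    "express": "^4.18.2"
  }
}
```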
Is this a novel definition of "library"? To me, library code is called by my code while my code is running. So most libraries are dependencies, but lots of dependencies are not libraries. Test, code coverage, or documentation tools might be dependencies, or, as npm would put it, devDependencies, but they are not libraries. Similarly, webpack is what we might call a build tool, which is also not a library.
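For the npm-specific part of that distinction, a hypothetical package.json might be split like the sketch below (names and versions are illustrative): runtime libraries go under dependencies, while build and test tooling such as webpack or a test runner goes under devDependencies and is not installed for consumers of a published package.

```json
{
  "dependencies": {
    "lodash": "^4.17.21"
  },
  "devDependencies": {
    "webpack": "^5.90.0",
    "jest": "^29.7.0"
  }
}
```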
I think it's just a semantic distinction, not a substantive one. The relevant thing is indeed that Webpack is generally used as part of another build process.
OK let's go with your definition. You're saying that "endpoints" may pin their dependencies, but any package that is depended upon by some other package should not in turn pin its own dependencies. ITT, I don't see support for this proposition. 'rigaspapas observed that the problems described in TFA could have been avoided through pinning. Presumably you're thinking of other problems that could be caused by pinning, but why would you think those problems are any worse than causing the program to completely fail to accomplish anything?
Diamond dependencies are more common because Node, unlike many other environments, is completely tolerant of them: npm simply installs multiple copies of a package side by side. Maybe this "wastes" some disk space, but otherwise it is not a problem, and it is orthogonal to actual problems like bugs in particular package versions.