RDS is an example of something that is more expensive but can be better value - because a lot of the managed-service work saves the time of the people looking after it.
15 years ago I worked in an infrastructure department with 50+ employees - these days a lot of the work we used to do back then is taken care of by AWS.
Scaling the infra to meet peak capacity (which might be two days a year) because you can only run on hardware you own, and having data centres and data centre engineers - those are all costs that go away with cloud, even though you pay more for the compute you do use.
Is this similar to what Sidney Dekker says in Drift Into Failure? That any "root cause" is more likely a narrative tool than a reflection of objective reality in a sufficiently complex system.
Does ECS support mounting configuration files (without needing the configuration file to be on the host)?
Being able to mount secrets and configmaps into the container file system (without having to modify the container image to provide an entrypoint) definitely seemed to be one major advantage of kubernetes over ECS a few years back.
You can set up something to mount configuration files stored in, say, S3 into an EFS volume that you attach to all your ECS tasks, for example.
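A minimal sketch of the ECS side of that, assuming the EFS file system already exists and something else (e.g. a sync job running `aws s3 sync`) keeps its contents up to date - the file system ID, image, and paths below are placeholders:

```json
{
  "family": "app",
  "volumes": [
    {
      "name": "config",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-0123456789abcdef0",
        "rootDirectory": "/config",
        "transitEncryption": "ENABLED"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "app",
      "image": "myorg/app:latest",
      "mountPoints": [
        { "sourceVolume": "config", "containerPath": "/etc/app", "readOnly": true }
      ]
    }
  ]
}
```

The container then sees the files under /etc/app without the image needing a custom entrypoint or the files living on the host.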
The problem with ECS is that everything, including service discovery, is an integration with another AWS service, making it even more difficult to ever migrate away - but it does support much of what folks use Kubernetes for.
The comparison doesn’t include the (perhaps confusingly named) openshift library. All the ansible kubernetes modules rely heavily on it because its support for the dynamic client (where you just want to apply a manifest and don’t know in advance that it’s a Deployment, a Service, and a ConfigMap) is first-rate.
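A sketch of what "dynamic" means here, with the cluster interaction reduced to comments so it runs standalone; the manifest documents are shown as already-parsed dicts (in reality they would come from yaml.safe_load_all on a manifest file), and the cluster-facing calls are the openshift library's DynamicClient API:

```python
# Pretend these dicts came from parsing a multi-document manifest file.
docs = [
    {"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"name": "web"}},
    {"apiVersion": "v1", "kind": "Service", "metadata": {"name": "web"}},
    {"apiVersion": "v1", "kind": "ConfigMap", "metadata": {"name": "web-config"}},
]

def plan_apply(docs):
    """Return the (api_version, kind, name) tuples to apply, in order.

    With the openshift dynamic client, each tuple becomes roughly:
        resource = dyn.resources.get(api_version=av, kind=kind)
        resource.apply(body=doc, namespace=ns)
    No kind-specific code is needed - that's the appeal.
    """
    return [
        (d["apiVersion"], d["kind"], d["metadata"]["name"])
        for d in docs
    ]

print(plan_apply(docs))
```

The point is that nothing in the loop knows or cares which kinds the manifest contains - the typed clients, by contrast, need a different method per kind.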
The difference between Datadog and doing it yourself is that Datadog is a well-thought-through product rather than a cobbled-together set of various tools:

- Having a single interface for everything makes life so much easier across a number of different teams.
- Search is fast and easy to use for logs and traces.
- Being able to see what a user actually clicked on in their session is absolutely game-changing for support teams.

I'm not a huge fan of the bill, but it's so much better than anything we could do ourselves without a team of engineers dedicated to observability (which would cost far more than Datadog).
That's for PrivateLink. Direct transfers in the same VPC are still $0.02/GB (in + out combined). PrivateLink also has to be used together with an LB, which is not free (an hourly charge plus a charge per byte processed).
It's almost always worth the lawsuit; it's cheaper for a company to pay an employee off than to fight them in court.
The actual number of people who take abusive companies to court is low - just look at the statistics on companies that constructively dismiss women after pregnancy, compared with the number that actually get sued.
It's something to do carefully, especially in a small industry. Unless the situation is pretty messed up, it's generally better to take a small hit than to become known as someone who litigates against their employer.
On the other hand, full respect to anyone who stands up for themselves when the situation warrants it.
Ansible - I first discovered it from a comment here, thought 'who needs a new config management tool', then realised Michael DeHaan had also written Cobbler, so thought it was worth a crack. Since then my python skills have improved through reading and improving the ansible code base, I've written ansible-lint, ansible-inventory-grapher and ansible-review, and I've been on two long-distance conference trips as a result.
Revert your playbooks and roles to the version of your last good deployment, and redeploy. With good version control, role version management and idempotent library modules, this should be functionally equivalent to a rollback.
There are plenty of caveats to the above (like the fact that the yum module won't downgrade [1], and you'll need reversible DB migrations) but that's basically the procedure.
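Mechanically, the procedure above is just a checkout and a re-run - the tag name and playbook layout here are assumptions, not a universal convention:

```shell
# Hypothetical rollback: return to the last known-good release tag and redeploy.
set -euo pipefail
git checkout v1.4.2                              # tag of the last good deployment
ansible-playbook -i inventory/production site.yml
```

Because the modules are idempotent, re-running the old playbook converges the hosts back toward the old state rather than blindly replaying steps.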
This isn't quite accurate. It won't uninstall or remove things that a previous version put in place unless you explicitly remove them as part of your playbooks/roles.
Exactly. This whole "declare your environment" thing with Ansible doesn't work.
I've had completely mixed experiences with Ansible. Yes, it's easy to get started, but it's certainly annoying having to create playbooks for removing stuff just to get to a clean state.
To be frank, my experience with configuration management has been a mix between "YES! THIS IS WHAT WE NEED!" and "...but it still doesn't adhere to immutable states." That's been true with Chef, Puppet, and Ansible, for me. I haven't experimented with other techs.
Depends how you write your playbooks/roles. You can write a role that will both add and remove depending on the value of a variable in your inventory. Then tweak the inventory and re-run.
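For example (the variable and package names here are hypothetical), a single task can flip between install and removal based on an inventory flag:

```yaml
# Roles can honour a per-host or per-group flag set in inventory.
- name: Ensure myapp matches the desired state
  yum:
    name: myapp
    state: "{{ 'present' if (myapp_enabled | bool) else 'absent' }}"
```

Flipping myapp_enabled to false in inventory and re-running the play then removes the package, so the "remove" path lives in the same role as the "add" path.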