That's cool. When I envision banks, I see stuffy suits and offices, getting stuck waiting months to work on something because of bureaucracy, etc. Maybe I'm listening too much to stereotypes.
Can you give any examples of tech stack decisions? Like rejected x in favor of y? (Within reason, I get that you can't divulge company info)
How about B2 combined with Cloudflare? Egress from B2 to Cloudflare is free, egress from Cloudflare to the internet is free on their free plan, and it's perhaps competitively priced on their paid plans?
Yeah... I'm not really sure where this idea of not being able to use the tools available out of the box to deploy apps and do networking comes from. I deploy YAML. If I feel like my YAML is too big, I DRY it using kustomize. I can use --prune if I'm worried about stuff sticking around in the cluster. For networking, I... don't do anything? We get DNS built in. Just use the service name. What else is there to do?
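To put that in concrete terms, the whole workflow fits in a couple of small files. A minimal kustomize sketch (the file names and the `app=shop` label here are hypothetical, just for illustration):

```yaml
# kustomization.yaml — DRYs up the raw manifests listed below
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
commonLabels:
  app: shop   # shared label, so --prune can garbage-collect dropped objects
```

Then something like `kubectl apply -k . --prune -l app=shop` deploys everything and removes labeled objects that were deleted from the manifests, and other pods reach the service through the built-in DNS at `shop.<namespace>.svc.cluster.local`.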
External DNS, certificate management, and a whole bunch of other stuff if you're not using a cloud provider's managed Kubernetes (e.g., network attached storage, load balancers, ingress controller, etc).
Label selectors are hard. They might require knowledge as advanced as high-school geometry to understand. The ability to draw a Venn diagram isn't free, you know!
That sounds neat. I feel like I'd take that offer: 4x8hr for 80% of my current salary. I guess we'd just have to make sure the rest of the company can stay productive on the 5th work day if their teams aren't doing it. Can't be blocking their work, etc.
Apparently they used to be quite different problems, but with today's focus on performance per watt as the most important metric, they've merged together. GCP's new AMD CPUs are popular because they're x86 but do more work per watt. And all the big cloud platforms are looking into ARM64 chips for the same reason. The Arm architecture is already dominant in phones. Voilà.
If I recall correctly, GAE is an example of something they made specifically to be a cloud product. Products like Compute Engine, GCS, Bigtable, and Pub/Sub were developed internally and then sold publicly once they realized others might find them useful. Perhaps the products built first for internal use simply weren't designed with features like real-time billing measurement in mind.
> Based on this experience, we decided to lower the default value of "max instances" to 100 for future deployments. We believe 100 is a better trade off between allowing customers to scale out and preventing big billing surprises.
This is good to hear. I use Cloud Run a lot for personal projects and I always set concurrency to 80, max instances to 1, memory to 128Mi (unless it's something beefy that needs the memory), and CPU to 1. If I need to scale it up, or I decide to open it up to actual usage, I'll do it when I recognize the need.
> It would mean a catastrophic interruption to the customer's application exactly when it's the most popular/active. And there's no practical way to determine whether the customer is “trying it out” or running a key part of their business on any particular resource.
This could be ameliorated by using namespacing techniques to separate prod from dev resources. For example, GCP uses projects to namespace your resources. And you can delete everything in a project in one operation that can't fail partway through, by just shutting down the project (no "you can't delete x, because y references it" messages).
Aggressive billing alerts and events that delete services when thresholds are met could then be applied only in the development namespace. That way, fun little projects get shut down automatically, while prod traffic is free to use a bit more billing when it needs to.
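As a rough sketch of the idea: GCP budgets can publish notifications to Pub/Sub, and a small handler could decide whether to tear things down based on the payload. This is a minimal, hypothetical sketch — the `dev-` budget-name convention and the `kill_ratio` knob are made up here, and the actual deletion step is left out:

```python
import json

def should_tear_down(message_json: str, kill_ratio: float = 1.5) -> bool:
    """Decide whether a budget alert should trigger automatic teardown.

    Assumes the shape of GCP's budget Pub/Sub notification (costAmount,
    budgetAmount, budgetDisplayName). The "dev-" prefix convention is a
    made-up example of namespacing dev budgets apart from prod ones.
    """
    alert = json.loads(message_json)
    # Only ever auto-delete resources covered by a development budget.
    if not alert.get("budgetDisplayName", "").startswith("dev-"):
        return False
    budget = alert.get("budgetAmount", 0.0)
    cost = alert.get("costAmount", 0.0)
    # Tear down once spend exceeds the budget by the configured ratio.
    return budget > 0 and cost >= kill_ratio * budget

# A dev project 60% over its $100 budget trips the kill switch...
print(should_tear_down(json.dumps(
    {"budgetDisplayName": "dev-toys", "costAmount": 160.0, "budgetAmount": 100.0})))  # True
# ...but a prod budget never does, no matter the overspend.
print(should_tear_down(json.dumps(
    {"budgetDisplayName": "prod-api", "costAmount": 999.0, "budgetAmount": 100.0})))  # False
```

The point is just that the dangerous automation is gated on the namespace, not on guessing the customer's intent.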