This is a very good use case for micro-transactions. If AWS makes $100 off Redis, they should pay back X% to the Redis project, from which the money is distributed to contributors based on how important their contributions were. The Redis project, in turn, is supposed to pay back to the software components and third-party libraries it uses, so the C project gets a fair share of the pie contributed back to it as well.
Wrote something along similar lines[1]. To understand a song completely I needed to dig deep into the artist's life and philosophy, and the same with the lyrics. I stopped trying to relate and match my existing life experience to the artist's, and explicitly tried to understand what the artist was trying to convey, dissolving my own notion of what I thought it should sound like. Never went back to listening to music the old way.
Is that really a problem in cloud environments, where you would typically use a Cluster Autoscaler? GKE has the "optimize-utilization" autoscaling profile, or you could use a descheduler to bin-pack your nodes better.
Wouldn't the network cost be absurd in such a case? Not only would the pod-to-pod communication cost skyrocket; all the heartbeats, health checks, metrics, and DaemonSets pinging each other would probably end up costing more than the CPU and memory.
GKE Autopilot is pretty much useless; there are very few cases where it actually turns out cheaper than simply using the Cluster Autoscaler + node auto-provisioning. Not only is the pricing absolutely absurd, they don't even allow normal K8s bursting behavior (requests need to be equal to limits), which means you not only end up paying more than a regular K8s cluster but now also need to heavily overprovision your pods.
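To make the bursting point concrete, here's a minimal sketch of a pod spec (names and image are hypothetical), assuming Autopilot's documented behavior of resetting limits to match requests:

```yaml
# Hypothetical pod spec. On a standard GKE cluster this container could
# burst from 250m CPU up to 1 full CPU. On Autopilot, limits are reset
# to equal requests, so the only way to keep that headroom is to raise
# requests to 1 CPU and pay for it full-time.
apiVersion: v1
kind: Pod
metadata:
  name: example-app        # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:latest   # hypothetical image
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
        limits:
          cpu: "1"         # on Autopilot this becomes 250m
          memory: 1Gi      # on Autopilot this becomes 256Mi
```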
Datadog allows you to export all of this, but I don't see how that's useful. You can't really port a Datadog dashboard to, say, Grafana easily. The query languages don't have a 1:1 mapping, and the way dashboards are organized and the visualization tools you get aren't the same either.
they surely don't help with the "migration" (application layer) part, but they help you manage things with code, which is better than nothing (you can do some heavy lifting to make it work in another provider afterwards)
that's the reason Terraform isn't really a solution for what I talk about in the blog post.
> unless your a whale they give zero fucks about giving you any flexibility on price.
Even the flexible pricing they offer ends up being a sham. It just seems better on paper, but you end up paying nearly the same because they have a really complicated billing model: they give you free stuff with Infra hosts, and once you switch away from this model you stop getting those freebies, so your Infra hosts might cost less now but everything else is more expensive. The house always wins!
It's 30% less power at the SAME speed; it's not clear what the power comparisons look like with Oryon running at max clock.
Also, Oryon is a SoC that sits between the M2 and M2 Pro. Qualcomm compares it to the M2 Max for certain workloads and to the M2 for others. It should be comparing itself to the M2/M2 Pro only, which means efficiency would look a bit better for Apple Silicon.
Also, it's not clear how they're coming up with the power. Is it the entire laptop? Just the SoC? Package power with RAM? Only the core?
Qualcomm's slides put Oryon chips at 50-55 W, but the M2 Pro/Max runs at 35-40 W max. I think we should take Qualcomm's slides with skepticism until real machines ship.
> We have built Cuber for scaling Pushpad. Cuber has been used in production for over a year and it is stable and reliable. We had 100% uptime and we saved 80% on cloud costs.
Pretty sure the 100% and 80% refer to using Cuber for Pushpad. So it's probably more about using Kubernetes than about Cuber.
To be fair, cloud is (on the whole) about 5x the price of bare metal, depending on several factors that make it really hard to compare (apples to oranges and all that).
For my use case, which does not require HA and is relatively bandwidth-intensive, K8s on bare metal can be 11x cheaper than GCP (with the same amount of hands-off, click-to-deploy, no messing around).
So, fundamentally, there can be enormous gains in price vs. cloud; an 80% cost reduction leaves an infinitesimally small margin, though, if they really are providing all the same benefits as a public cloud.
This is not a bare-metal vs. cloud comparison; the Cuber PaaS is supposed to run on anything, including the cloud, based on their docs. So when they say "save 80% on cloud costs", I assume they mean, e.g., saving 80% by using Cuber on GCP compared to deploying the same workload on GKE.
It would be absurd to claim something like: I was using an i9-13900KS but realized I could run the same workload on my Raspberry Pi; I also happened to use this packaging tool in the process, therefore the packaging tool saved me 80% of my costs.