Hacker News | Dunedan's comments

Depends on various factors and of course the amount of money in question. I've had AWS approve a refund for a rather large sum a few years ago, but that took quite a bit of back and forth with them.

Crucial for the approval was that we had cost alerts already enabled before it happened and were able to show that this didn't help at all, because they triggered way too late. We also had to explain in detail what measures we implemented to ensure that such a situation doesn't happen again.
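The "triggered way too late" problem is easy to quantify: AWS billing data lags real spend by hours, so an alert on reported spend fires long after the real damage started. A toy illustration (hypothetical numbers, not our actual figures):

```python
def overage_at_alert(spend_rate_per_hour: float, billing_lag_hours: float,
                     budget: float) -> float:
    """Estimate real spend at the moment a budget alert actually fires.

    The alert triggers when *reported* spend crosses the budget, but
    reported spend lags real spend by `billing_lag_hours`, so real
    spend is already ahead by rate * lag.
    """
    return budget + spend_rate_per_hour * billing_lag_hours

# A runaway job burning $500/h with an 8h reporting lag blows through a
# $1,000 budget by $4,000 before the alert even fires.
print(overage_at_alert(500, 8, 1000))  # 5000.0
```

The takeaway: the higher your burst spend rate, the less a threshold alert on lagged billing data can protect you.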


Nothing says market power like being able to demand that your paying customers provide proof that they have solutions for the shortcomings of your platform.

Wait, what measures *you* implemented? How about AWS implements a hard cap, like everyone has been asking for forever?

What does a hard cap look like for EBS volumes? Or S3? RDS?

Do you just delete when the limit is hit?


It's a system people opt into. You could do something like blocking ingress/egress, and the user has to pay a service charge (like an overdraft fee) before access is opened up again. If the account stays locked in the overdraft state for more than X days, then yes, delete the data.

I can see the "AWS is holding me ransom" posts on the front page of HN already.

A cap is much less important for fixed costs. Block transfers, block the ability to add any new data, but keep all existing data.

2 caps: 1 for things that are charged for existing (e.g. S3 storage, RDS, EBS, EC2 instances) and 1 for things that are charged when you use them (e.g. bandwidth, lambda, S3 requests). Fail to create new things (e.g. S3 uploads) when the first cap is met.
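A rough sketch of how that two-cap check could work (all names and numbers hypothetical; this is not an AWS API):

```python
from dataclasses import dataclass

@dataclass
class Caps:
    existence_cap: float  # monthly cap for resources billed for existing
    usage_cap: float      # monthly cap for pay-per-use charges

def allow_create(projected_existence_cost: float, caps: Caps,
                 new_resource_cost: float) -> bool:
    """Reject creation of new billed-for-existing resources (S3 objects,
    EBS volumes, ...) once the first cap would be exceeded; existing
    data is never touched."""
    return projected_existence_cost + new_resource_cost <= caps.existence_cap

def allow_usage(spent_usage: float, caps: Caps) -> bool:
    """Block pay-per-use actions (bandwidth, Lambda invocations, S3
    requests) once the second cap is hit."""
    return spent_usage < caps.usage_cap

caps = Caps(existence_cap=100.0, usage_cap=50.0)
print(allow_create(99.0, caps, 2.0))  # False: upload would exceed the cap
print(allow_usage(49.99, caps))       # True: still under the usage cap
```

The point of the split is that hitting either cap only ever blocks *new* spend; nothing that already exists gets deleted.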

Does that mean failing to create RDS backups? And that AWS needs to keep your EC2 instance and RDS instance running while you decide if you really want to pay the bill?

How about something like what RunPod does? Shut down ephemeral resources to ensure there's enough money left to keep data around for some time.

RunPod has its issues, but the way it handles payment is basically my ideal. Nothing brings peace of mind like knowing you won't be billed for more than you've already paid into your wallet. As long as you aren't obliged to fulfil some SLA, I've found that this on-demand scaling compute is really all I need in conjunction with a traditional VPS.

It's great for ML research too, as you can just SSH into a pod with VS Code and drag in your notebooks and whatnot as if it were your own computer, but with a 5090 available to speed up training.


Yes, delete things in reverse order of their creation time until the cap is satisfied (the cap should be a rate, not a total)

I would put $100 on us getting a post on here within 6 months of that, saying someone's startup has gone under because AWS deleted their account when they didn't pay their bill, and they didn't realise their data would be deleted.

> (the cap should be a rate, not a total)

This is _way_ more complicated than a single cap.


> I would put $100 on us getting a post on here within 6 months of that, saying someone's startup has gone under because AWS deleted their account when they didn't pay their bill, and they didn't realise their data would be deleted.

The cap can be opt-in.


> The cap can be opt-in.

People will opt into this cap, and then still be surprised when their site gets shut down.


The measures were related to the specific cause of the unintended charges, not to preventing any unintended charges ever again. I agree AWS needs to provide better tooling to enable its customers to avoid such situations.

>How about AWS implements a hard cap, like everyone has been asking for forever?

s/everyone has/a bunch of very small customers have/


I am never going to use any cloud service which doesn't have a cap on charges. I simply cannot risk waking up and finding a $10000 or whatever charge on my personal credit card.

And for Amazon that's probably fine; people paying with personal credit cards are not bringing in much money.

I'm not sure usability is moving in the right direction with KDE. Over the past few years, more and more applications have started to hide menus by default, sometimes adding hamburger menus instead.

There is also a "new way" (I believe QtQuick-based) for applications to create popups, which results in them no longer being separate windows. System Settings, for example, makes prominent use of them, and those popups behave entirely differently from what one is used to. As far as I know it's not even possible to navigate them with the keyboard.


ASML doesn't sell chips, you're probably thinking about TSMC.


Ah yes, so I am. Thanks for the correction.


FYI: "Signal backup servers" currently seems to mean either Google Cloud Storage or CloudFlare R2 according to https://github.com/signalapp/storage-manager/blob/e45aaf5bd1...


> There are a couple of problems with the existing backup:

>

> 1. It is non-incremental.

I wonder if that's different with the newly announced functionality. Their announcement doesn't sound like it:

> Once you’ve enabled secure backups, your device will automatically create a fresh secure backup archive every day, replacing the previous day’s archive.


@greysonp verified they're indeed incremental for media: https://news.ycombinator.com/item?id=45170515#45175402


I suspect the human worker still had a headset to listen in to the orders at the drive-through and just intervened when she heard that order.


That vastly depends on where you live and what you use electricity for. Most of Europe, for example, uses much less energy [1], although that will probably change as heat pumps become more and more widespread.

[1]: https://en.wikipedia.org/wiki/European_countries_by_electric...


I think this is just consumption divided by population, so very easily influenced by e.g. having little population and many data centers: I doubt the average person in Iceland is spending 10k+ bucks on electricity annually.


> […] so I am surprised he has done that and expected stability at 100C regardless of what Intel claim is okay.

Intel specifies a max operating temperature of 105°C for the 285K [1]. Also, modern CPUs aren't supposed to die when run with inadequate cooling; instead they clock down to stay within their thermal envelope.

[1]: https://www.intel.com/content/www/us/en/products/sku/241060/...
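That clock-down behaviour can be pictured as a simple control loop. This is a toy model with made-up step sizes, not Intel's actual throttling algorithm:

```python
def throttle_step(freq_mhz: float, temp_c: float, t_max: float = 105.0,
                  step_mhz: float = 100.0) -> float:
    """Toy model of thermal throttling: when the die temperature reaches
    the limit, drop the clock one step instead of shutting down; below
    the limit, ramp back up."""
    if temp_c >= t_max:
        return max(freq_mhz - step_mhz, 800.0)  # never below a floor clock
    return freq_mhz + step_mhz

f = throttle_step(5700.0, 106.0)  # over the limit -> clock down
print(f)  # 5600.0
```

The net effect is that an overheating CPU oscillates around its temperature limit at reduced clocks rather than dying, which is why stability problems at 100°C point at something beyond plain overheating.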


I always wonder: how many sensors are registering that temp?

Because CPUs can get much hotter in specific spots, no? Just because you're reading 100°C doesn't mean there aren't spots that are way hotter.

My understanding is that modern Intel CPUs have a temp sensor per core + one at package level, but which one is being reported?
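If the package-level value is simply the maximum across the per-core sensors (an assumption on my part, not something I've verified against Intel's documentation), the reported number would look like this:

```python
def package_temp(core_temps: list[float]) -> float:
    """Assumed behaviour: the package-level reading reports the hottest
    per-core digital thermal sensor, so throttling would key off the
    worst hotspot rather than an average."""
    return max(core_temps)

print(package_temp([72.0, 95.0, 101.0, 88.0]))  # 101.0
```

Even then, each per-core sensor only covers its own die area, so hotspots between sensors could still run hotter than anything reported.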


There's no way on Earth Intel hasn't thought of this. Probably the sensors are in or near the places that get the hottest, or they are aware of the delta and have put in the proper margin, or something like that.


I haven't said they didn't think about it, I'm just asking due to sheer ignorance.


Yes, I have read the article and I agree Intel should be shamed (and even sued) for inaccurate statements. But it doesn't change the fact it has never been a good idea to run desktop processors at their throttling temperature -- it's not good for performance, it's not good for longevity and stability, and it's also terrible for efficiency (performance per watt).

Anyway, OP's cooler should be able to keep a 250W CPU below 100°C. He must have done something wrong for that not to be the case. That's my point: the motherboard likely overclocked the CPU and he failed to properly cool it or set a power limit (PL1/PL2). He could have easily avoided all this trouble.


I raised that exact same issue with AWS in ~2015, and even though we had an Enterprise support plan, AWS's response was basically: well, your problem.

We then ended up deleting the S3 bucket entirely, as that appeared to be the only way to get rid of the charges, only for AWS to come back to us a few weeks later telling us there were charges for an S3 bucket we previously owned. After explaining to them (again) that this was our only option to get rid of the charges, we never heard back.


I'm so angry about that license change, as it makes it impossible to keep using Vagrant. For example, the version of Vagrant in Debian is stuck at the last pre-license-change commit, and Debian doesn't publish Vagrant boxes for new releases anymore. I've yet to find a replacement that works as seamlessly across different operating systems.


Yeah, I was actually participating in a soft-fork for a while but I think the project ran out of steam. My guess is that it's very hard to pay attention to something that you don't use every day, so they let it drift. But if you ever hear of someone starting up again, let me know

I'm aware that I, too, could be that someone, but like I said, it's hard to dedicate all the time and energy when the last time I used Vagrant was years ago.

I also just remembered that I haven't revisited the forks list to see if there's some meaningful activity https://github.com/hashicorp/vagrant/forks?include=active&pa...

