
On one hand you have people setting wildcards in IAM policies during the development phase (and forgetting to close them down afterwards). It’s hard to figure out permissions beforehand - this would help in those cases, but an IAM setup deduced from your traffic still doesn’t mean the setup is secure.

On the other hand you have complex architectures with no real overlap in their authorization patterns. It’s impossible to automate the creation of a secure "sandbox" setup for your specific use case.

You can’t really delegate the security of your architecture to a single service - you need to address it yourself. Security can be implemented only in the service, not as a service.


Ya I've done that hah. I think an ok plan might be to set up infrastructure with open IAM rules and write all of the backend and frontend acceptance tests for an app. Then close all IAM rules and open them one by one until all of the tests pass again.

Maybe AWS could provide a way to track access attempts and then have an interface for the user to grant them one by one. I understand that this might be challenging to design, but I view these sorts of challenges as the "real work" of computer science, otherwise there's just nothing there.

I encounter that a lot: I have a preconceived notion of what the heart of a strategy should be (including the edge cases), only to find that it wasn't addressed - in fact, wasn't even mentioned.

Yes these things are hard, but Amazon has billions and billions of dollars.
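
A rough sketch of that "track what was actually used" idea, using boto3 and CloudTrail's lookup_events (the Username filter and the event-name-to-IAM-action mapping are simplifications, and CloudTrail only records management events by default - a starting point, not a least-privilege generator):

    import boto3

    def used_actions(username):
        """Collect '<service>:<EventName>' pairs seen in CloudTrail."""
        ct = boto3.client('cloudtrail')
        actions = set()
        pages = ct.get_paginator('lookup_events').paginate(
            LookupAttributes=[{'AttributeKey': 'Username',
                               'AttributeValue': username}])
        for page in pages:
            for event in page['Events']:
                service = event['EventSource'].split('.')[0]  # e.g. 'dynamodb'
                actions.add(f'{service}:{event["EventName"]}')
        return sorted(actions)

    # e.g. ['dynamodb:GetItem', 's3:ListBuckets', ...] - grant these one by one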


This is a thriving service market at the moment. DivvyCloud is an example: https://aws.amazon.com/solutionspace/financial-services/solu...


Hard to blame non-technical people when even programmers happen to use MD5.

Anyway, why don’t such documents refer to auxiliary standards with a big-ass year of creation somewhere at the top? That might ring a bell if you’re about to use something from three decades ago and have a hint that it’s security related.


There's a big difference between still using a very common hashing algorithm that happens to no longer be secure, and discussing the use of programs that haven't actually existed in decades.


Oh, I was referring to the other part of the article, about the document recommending the use of algorithms such as 1024-bit RSA and SHA-1.


Fortunately, email was designed back in the days when mail servers could fail. Don’t worry - proper SMTP servers will retry patiently.


> Isn’t Cassandra basically DynamoDB 2, but open source?

Yet you need to deploy it somewhere and maintain it. DynamoDB just works, and that’s why people will happily pay extra - they can put their effort into something more meaningful than maintaining a database.


That isn't relevant to the parent's question. They were complaining about the lack of open source alternatives to DynamoDB. Cassandra qualifies.


Apparently Amazon is offering a hosted version of Cassandra now too.


It’s a real pain to handle subprocesses in Python. If you need to automate certain command-based workflows, Bash scripts are much easier, both to write and to read, until they reach a certain size. At my previous job this was a daily task, and the scripts involved tricky stuff related to Subversion, Git and builds (complex CI/CD, generally).
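
For a flavour of the overhead, here's a minimal subprocess sketch of a one-line shell pipeline (the commands are just illustrative):

    # bash equivalent: git log --oneline | grep -c fix
    import subprocess

    git = subprocess.Popen(['git', 'log', '--oneline'],
                           stdout=subprocess.PIPE)
    count = subprocess.run(['grep', '-c', 'fix'], stdin=git.stdout,
                           capture_output=True, text=True)
    git.stdout.close()  # let git receive SIGPIPE if grep exits early
    git.wait()
    print(count.stdout.strip())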

The problem is that very few people know shell scripting well, but once you get to know it, it’s not that bad - in quite specific cases, at least.

Always use Shellcheck, though.


I have no data supporting this (ironically, just intuition), but the examples you give might be only a small fraction of the attempts - the ones that brought a successful outcome. In other words, when one tries to achieve something, they'll initially fail, and only after some more trial and error do they have enough data to succeed.

I'd agree that ideas start from human intuition, but the above would suggest that intuition is often wrong, so people need to apply it at least a few times in order to get it right. I think that's the point of the article.


Thank you for the suggestion, I have added a simple example.


I have to think that through. Search pages are tricky, as you can't simply jump to a given page. They're identified by tokens, which aren't known a priori - you only get the adjacent ones along with the search results.

A reasonable solution would be the ability to set a max results limit and to disable next/prev. All desired results would then show up in the search directory (if you can assume that results beyond some point are useless).
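
To illustrate (the endpoint shape and field names like nextPageToken are assumptions, not the actual API):

    import requests

    def search(url, query, max_results=50):
        """Follow next-page tokens until max_results items are collected."""
        results, token = [], None
        while len(results) < max_results:
            params = {'q': query}
            if token:
                params['pageToken'] = token
            page = requests.get(url, params=params).json()
            results.extend(page.get('items', []))
            token = page.get('nextPageToken')
            if not token:  # the next token is never known a priori
                break
        return results[:max_results]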


I think the most obvious way to expose that remote behaviour in the filesystem is to have the directory /search/2/ appear only after the directory /search/1/ has been stat'd.
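
A toy fusepy sketch of that behaviour (the names and layout here are assumptions, not the actual project's code):

    import errno, stat, sys
    from fuse import FUSE, FuseOSError, Operations

    DIR = dict(st_mode=stat.S_IFDIR | 0o555, st_nlink=2)

    class LazyPages(Operations):
        def __init__(self):
            self.revealed = 1  # how many /search/N directories are listed

        def getattr(self, path, fh=None):
            if path in ('/', '/search'):
                return dict(DIR)
            head, _, page = path.rpartition('/')
            if head == '/search' and page.isdigit() \
                    and int(page) <= self.revealed:
                # stat'ing page N makes page N+1 appear on the next readdir
                self.revealed = max(self.revealed, int(page) + 1)
                return dict(DIR)
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            if path == '/':
                return ['.', '..', 'search']
            if path == '/search':
                return ['.', '..'] + [str(n)
                                      for n in range(1, self.revealed + 1)]
            raise FuseOSError(errno.ENOENT)

    if __name__ == '__main__':
        FUSE(LazyPages(), sys.argv[1], foreground=True)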


There's no authorisation at the moment. Plain requests.get is used.


Maybe using a Fourier/wavelet/whatever transform would be the way to go, just like in digital watermarking techniques. Both high capacity and robustness would seem easier to achieve.
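
A toy numpy sketch of the FFT variant (the coefficient position and strength are arbitrary picks, and detection here is non-blind, i.e. it needs the original):

    import numpy as np

    POS, STRENGTH = (30, 40), 50.0  # arbitrary mid-frequency coefficient

    def embed_bit(img, bit):
        """Nudge one DFT coefficient up or down to hide a single bit."""
        spec = np.fft.fft2(img.astype(float))
        delta = STRENGTH if bit else -STRENGTH
        spec[POS] += delta
        spec[-POS[0], -POS[1]] += delta  # keep the spectrum conjugate-symmetric
        return np.real(np.fft.ifft2(spec))

    def read_bit(original, marked):
        diff = np.fft.fft2(marked - original.astype(float))
        return diff[POS].real > 0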

