Generally speaking, this isn't something Amazon S3 customers need to worry about - as others have said, S3 will automatically scale index performance over time based on load. The challenge primarily comes when customers need large bursts of requests within a namespace that hasn't had a chance to scale - that's when balancing your workload over randomized prefixes helps.
FWIW, the way we were told to partition our data was this:
`010111/some/file.jpg`
where `010111/` is a random binary string. This plays well with both the automatic partitioning (sustained 503s trigger a partition split) and any manual partitioning you might request from AWS, because with a binary alphabet the number of possible partitions grows much more slowly per character than with alphanumeric prefixes like `az9trm/`.
We were told that the latter style makes manual partitioning a challenge because as soon as you reach two characters you've already created 36x36 = 1,296 possible partitions.
The downside: your keys are no longer meaningful if you're relying on S3 "folders" to group objects by tenant, for example (`customer1/..`).
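For illustration, here's a minimal sketch of the idea (my own code, not from the thread - the helper name, the use of MD5, and the 6-bit prefix width are all assumptions). Deriving the prefix from a hash of the logical key keeps it deterministic, so the full physical key can be recomputed at read time:

```python
import hashlib

def partitioned_key(original_key: str, bits: int = 6) -> str:
    # Hypothetical helper: hash the logical key and take the first
    # `bits` bits of the digest as a binary prefix, e.g. "010111".
    # Deterministic, so readers can recompute the same physical key.
    digest = hashlib.md5(original_key.encode("utf-8")).digest()
    prefix = format(digest[0], "08b")[:bits]
    return f"{prefix}/{original_key}"

print(partitioned_key("some/file.jpg"))  # e.g. "000110/some/file.jpg"
```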
I have had the same experience within the last 18 months. The storage team came back to me and asked me to spread my ultra-high-throughput write workload across 52 (A-Za-z) prefixes, and then they pre-partitioned the bucket for me.
S3 will automatically do this over time now, but I think there still are (or at least were) edge cases. I definitely hit one and experienced throttling at peak load until we made the change.
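As a sketch of that layout (again my own code, not the commenter's - picking the prefix at random is an assumption; a hash of the key works too if reads need to locate objects again):

```python
import random
import string

# 52 single-character prefixes: A-Z followed by a-z.
PREFIXES = string.ascii_uppercase + string.ascii_lowercase

def spread_key(original_key: str) -> str:
    # Assign each write one of the 52 pre-partitioned prefixes at
    # random, so load is spread roughly evenly across all of them.
    return f"{random.choice(PREFIXES)}/{original_key}"
```

Note the trade-off: a random prefix spreads load but means the physical key has to be recorded somewhere, whereas a hash-derived prefix keeps lookups cheap.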
By the way, that happens quite frequently. I regularly ask them about new AWS technologies or recent changes, and most of the time they are not aware. They usually say they will call back later after doing some research.
> If I'm reading this correctly, then AWS Support dropped the ball here but this isn't a bug in lambda. This is the documented behavior of the lambda runtime.
AWS Support is generally ineffective unless you're stuck on something very simple at a higher level of the platform (e.g. misunderstanding an SDK API).
Even with their higher-tier support - where you can summon a subject-matter expert via Chime almost instantly - they're often clueless, and will confidently pass you misleading or incorrect information just to get you off the line. I've successfully used them as a very expensive rubber duck, but that's about it.
I've been working on production-grade AWS deployments for over 10 years, so it's a bit hard to frame my experience/skill set. That said, if you need help with almost anything AWS - architecture, infrastructure, performance, scale - feel free to reach out. Although versed in Well-Architected, multi-region, and multi-account setups, I strive to deliver the simplest solution for your particular scenario.
Location: US EST
Remote: Only
Technologies: IAM, VPC, CloudWatch, EC2/ASG/ELB, Lambda, API Gateway, RDS, Redshift, DynamoDB, CodeDeploy, Cognito, Athena, S3, CloudFront, Kinesis, SQS, SNS, IoT Core, SageMaker, ElastiCache, MediaConvert
Email: hn@cldcntrl.com