libcurl also has AWS auth with --aws-sigv4, which gives you a fully compatible S3 client without installing anything! (You probably already have curl installed.)
Somewhat related, I just came across s5cmd[1], which is mainly focused on performance: fast upload/download and sync of S3 buckets.
> 32x faster than s3cmd and 12x faster than aws-cli. For downloads, s5cmd can saturate a 40Gbps link (~4.3 GB/s), whereas s3cmd and aws-cli can only reach 85 MB/s and 375 MB/s respectively.
I prefer s5cmd as well because it has a better CLI interface than s3cmd, especially if you need to talk to non-AWS S3-compatible servers. It does a few things and does them well, whereas s3cmd is a tool with a billion options, configuration files, badly documented env variables, and a default mode of operation that assumes you are talking to AWS.
This is awesome! Been waiting for something like this to replace the bloated SDK Amazon provides. Important question: is there a pathway to getting signed URLs?
FYI, you can add browser support by using noble-hashes[1] for SHA256/HMAC - it's a well-done library, and gives you performance that is indistinguishable from native crypto on any scale relevant to S3 operations. We use it for our in-house S3 client.
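A minimal sketch of the primitives involved, assuming noble-hashes' documented API (the key-derivation chain follows the SigV4 spec; header assembly and dates are omitted):

```ts
// Sketch: the SigV4 crypto primitives in the browser with @noble/hashes.
// Assumes the v1.x import paths ("@noble/hashes/sha256"); newer releases export sha256 from "@noble/hashes/sha2".
import { hmac } from "@noble/hashes/hmac";
import { sha256 } from "@noble/hashes/sha256";
import { bytesToHex, utf8ToBytes } from "@noble/hashes/utils";

// HMAC-SHA256 over UTF-8 strings / raw bytes.
const hmac256 = (key: Uint8Array | string, msg: string): Uint8Array =>
  hmac(sha256, typeof key === "string" ? utf8ToBytes(key) : key, utf8ToBytes(msg));

// SigV4 signing-key derivation chain, as documented by AWS.
function signingKey(secretKey: string, dateYYYYMMDD: string, region: string, service = "s3"): Uint8Array {
  const kDate = hmac256("AWS4" + secretKey, dateYYYYMMDD);
  const kRegion = hmac256(kDate, region);
  const kService = hmac256(kRegion, service);
  return hmac256(kService, "aws4_request");
}

// Payload hash that goes into the x-amz-content-sha256 header.
const payloadHash = bytesToHex(sha256(utf8ToBytes("...request body...")));
```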
For now, unfortunately, no signed URLs are supported. It wasn't my focus (use case), but if you find a simple/minimalistic way to implement it, I can help you integrate it.
From a bird's-eye perspective, it adds extra complexity and size, so maybe it would be better suited to a separate fork/project?
Signed URLs are great because they let you give third parties access to a file without them having to authenticate against AWS.
Our primary use case is browser-based uploads. You don't want people uploading anything and everything, like the WordPress upload folder. And it's timed, so you don't have to worry about someone recycling the URL.
I use presigned URLs as part of a federation layer on top of an S3 bucket. Users make authenticated requests to my API, which checks their permissions (whether they have access to read/write the specified slice of the S3 bucket); my API then sends back a presigned URL that allows read/write/delete to that specific portion of the bucket.
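Roughly what that endpoint looks like with SDK v3's presigner (a sketch; the Express route and the checkAccess permission check are placeholders for the actual federation logic, while getSignedUrl is the real SDK v3 API):

```ts
// Sketch of a federation endpoint that hands out short-lived presigned URLs.
import express from "express";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });
const app = express();

// Placeholder permission check - in reality this consults the caller's grants.
const checkAccess = (_req: express.Request, _key: string): boolean => true;

app.get("/files/:key/download-url", async (req, res) => {
  if (!checkAccess(req, req.params.key)) return res.status(403).end();
  const url = await getSignedUrl(
    s3,
    new GetObjectCommand({ Bucket: "my-bucket", Key: req.params.key }),
    { expiresIn: 900 } // the URL stops working after 15 minutes
  );
  res.json({ url });
});

app.listen(3000);
```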
> What I would also love to see is a simple, single binary S3 server alternative to Minio
Garage[1] lacks a web UI but I believe it meets your requirements. It's an S3 implementation that compiles to a single static binary, and it's specifically designed for use cases where nodes do not necessarily have identical hardware (i.e. different CPUs, different RAM, different storage sizes, etc.). Overall, Garage is my go-to solution for object storage at "home server scale" and for quickly setting up a real S3 server.
There seems to be an unofficial Web UI[2] for Garage, but you're no longer running a single binary if you use this. Not as convenient as a built-in web UI.
Checksumming does make sense because it ensures that the file you've transferred is complete and what was expected. If the checksum of the file you've downloaded differs from the one the server gave you, you should not process the file further and should throw an error (the worst case would probably be a man-in-the-middle attack; less severe cases being packet loss, I guess).
> Checksumming does make sense because it ensures that the file you've transferred is complete and what was expected.
TCP has a checksum for packet loss, and TLS protects against MITM.
I've always found this aspect of S3's design questionable. Sending both a Content-MD5 AND an x-amz-content-sha256 header and taking up gobs of compute in the process, sheesh...
It's also part of the reason why running MinIO in its single-node, single-drive mode is a resource hog.
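For context, this is what a client ends up computing per upload when both headers are in play - two full passes over the payload (plain node:crypto here; exact headers depend on the client):

```ts
// Two full passes over the payload: one for Content-MD5, one for x-amz-content-sha256.
import { createHash } from "node:crypto";

const body = Buffer.from("...object bytes...");

const contentMd5 = createHash("md5").update(body).digest("base64");    // Content-MD5 is base64
const payloadSha256 = createHash("sha256").update(body).digest("hex"); // x-amz-content-sha256 is lowercase hex

// These then travel as request headers alongside the SigV4 Authorization header:
// { "Content-MD5": contentMd5, "x-amz-content-sha256": payloadSha256 }
```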
Effingo file copy service does application-layer strong checksums and detects about 4.5 corruptions per exabyte transferred (figure 9, section 6.2 in [1]).
This is on top of TCP checksums, transport layer checksums/encryption (gRPC), ECC RAM and other layers along the way.
Many of these could be traced back to a "broken" machine that was eventually taken out.
In my view one reason is to ensure integrity down the line. You want the checksum of a file to still be the same when you download it maybe years later. If it isn't, you get warned about it. Without the checksum, how will you know for sure? Keep your own database of checksums? :)
If we're talking about bitrot protection, I'm pretty sure S3 would use some form of checksum (such as crc32 or xxhash) on each internal block to facilitate the Reed-Solomon process.
If it's about verifying whether it's the same file, you can use the ETag header, which is computed server-side by S3. Although I don't like this design, as it ossifies the checksum algorithm.
This is actually not the case. The TLS stream ensures that the packets transferred between your machine and S3 are not corrupted, but that doesn't protect against bit-flips which could (though, obviously, shouldn't) occur from within S3 itself. The benefit of an end-to-end checksum like this is that the S3 system can store it directly next to the data, validate it when it reads the data back (making sure that nothing has changed since your original PutObject), and then give it back to you on request (so that you can also validate it in your client). It's the only way for your client to have bullet-proof certainty of integrity the entire time that the data is in the system.
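With SDK v3's flexible checksums, that end-to-end round trip looks roughly like this, as far as I understand the API (bucket/key are placeholders):

```ts
// Sketch: end-to-end checksum with S3 "flexible checksums" (SDK v3).
// S3 stores the checksum with the object, re-validates it internally,
// and hands it back on read so the client can verify too.
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// On write: ask the SDK to compute and send a SHA-256 checksum.
await s3.send(new PutObjectCommand({
  Bucket: "my-bucket",
  Key: "report.csv",
  Body: "col1,col2\n1,2\n",
  ChecksumAlgorithm: "SHA256",
}));

// On read: ask S3 to return the stored checksum so the client can re-verify.
const res = await s3.send(new GetObjectCommand({
  Bucket: "my-bucket",
  Key: "report.csv",
  ChecksumMode: "ENABLED",
}));
console.log(res.ChecksumSHA256); // base64 SHA-256 stored alongside the object
```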
That's true, but wouldn't it still be required if you have an internal S3 service that is used by internal services and does not have HTTPS (as it is not exposed to the public)? I get that the best practice would be to use HTTPS there as well, but I'd guess that's not the norm?
Theoretically TCP packets have checksums, but they're fairly weak, so for HTTP, additional checksums make sense. Although I'm not sure if there are any internal AWS S3 deployments working over HTTP, or why they would complicate their protocol for everyone to help such a niche use case.
I'm sure that they have reasons for this whole request signature scheme over traditional "Authorization: Bearer $token" header, but I never understood it.
AWS has a video about it somewhere, but in general, it’s because S3 was designed in a world where not all browsers/clients had HTTPS and it was a reasonably expensive operation to do the encryption (like, IE6 world). SigV4 (and its predecessors) are cheap and easy once you understand the code.
Because a bearer token is a bearer token to do any request, while a pre-signed request allows you to hand out the capability to perform _only that specific request_.
On the other hand, S3 uses checksums only to verify the expected upload (on the write from client -> server) ... and surprisingly you can do that in parallel after the upload, by checking the MD5 hash of the blob against the ETag (*with some caveats).
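A sketch of that after-the-fact check (the main caveats being that multipart uploads and SSE-KMS objects don't produce a plain-MD5 ETag):

```ts
// Sketch: verify a single-part upload after the fact by comparing the local MD5 to the returned ETag.
import { createHash } from "node:crypto";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });
const body = Buffer.from("...object bytes...");

const localMd5 = createHash("md5").update(body).digest("hex");
const { ETag } = await s3.send(new PutObjectCommand({ Bucket: "my-bucket", Key: "blob.bin", Body: body }));

// Caveat: only valid for non-multipart, non-KMS-encrypted objects;
// multipart ETags look like "<md5-of-part-md5s>-<partcount>" instead.
if (ETag?.replace(/"/g, "") !== localMd5) {
  throw new Error("ETag does not match local MD5 - upload may be corrupted");
}
```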
You need the checksum only if the file is big and you're downloading it to disk, or if you're paranoid that some malware with root access might be altering the contents of your memory.
Or you really care about the data and are aware of the statistical inevitability of a bit flip somewhere along the line if you’re operating long enough.
These are nice projects. I had a few rounds with Rust S3 libraries, and a simple low- or no-dependency client is much needed. The problem is that you start to support certain features (async, HTTP/2, etc.) and your nice no-dep project starts to grow.
Good to see this mentioned. We are considering running it for some things internally, along with Harbor. The fact that the resource footprint is advertised as small enough is compelling.
You know what would be really awesome? Making a FUSE-based drop-in replacement for mapping a folder to a bucket, like goofys. Maybe a Node.js process could watch files, for instance, and back them up; or even better, it could back the folder without actually taking up space on the local machine (except for a cache).
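Not FUSE, but the "watch and back up" half is close to trivial with fs.watch plus any S3 client - a rough sketch (recursive watch needs a newer Node on Linux; debouncing and deletions are left out, and the folder/bucket names are placeholders):

```ts
// Rough sketch: mirror new/changed files in a folder up to a bucket.
// Not a FUSE mount - just a one-way "watch and upload" process.
import { watch } from "node:fs";
import { readFile } from "node:fs/promises";
import { join } from "node:path";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });
const folder = "./data"; // folder to mirror (placeholder)

watch(folder, { recursive: true }, async (_event, filename) => {
  if (!filename) return;
  try {
    const body = await readFile(join(folder, filename));
    await s3.send(new PutObjectCommand({ Bucket: "my-backup-bucket", Key: filename, Body: body }));
    console.log(`backed up ${filename}`);
  } catch {
    // file was deleted or is mid-write; a real tool would debounce and handle removals
  }
});
```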
I found the words used to describe this jarring - to me it makes sense to have an S3 client on my computer, but less so client-side in a web app. On further reading, it makes sense, but highlighting what problem this package solves in the first few lines of the README would be valuable, for people like me at least.
I have a strong suspicion this was written with help from an LLM. The heavy use of emojis and the hyper-confident language is the giveaway. Proof: my own repos look like this after they've had the touch of Cursor / Windsurf, etc. Still, that doesn't take away from whether the code is useful or good.
tbh - English is not my mother tongue, so I do help myself with the copy and typos ... but if it feels off, please feel free to open a PR - I want it to be as reasonable as possible.
It's a TypeScript client, it seems. While you can bundle it in a web app, TypeScript applications go beyond just web applications; this is why I was confused.
This is good to have. A few months ago I was testing an S3 alternative but ran into issues getting it to work. It turned out that AWS had made changes to the tool that had the effect of blocking non-first-party clients. Just sheer chance on my end, but I imagine that was infuriating for folks who have to rely on that client. There is an obvious need for a compatible client like this that AWS doesn't manage.
I tried to go this route of using Bun for everything (Bun.serve, Bun.s3, etc.), but was forced to switch back to Node.js proper and Express/aws-sdk due to Bun not fully implementing Node's APIs.
The other day I was toying with the MCP server (https://github.com/modelcontextprotocol/typescript-sdk). I default to Bun these days, and the HTTP-based server simply did not register in Claude or any other client. No error logs, nothing.
After fiddling with my code I simply tried Node and it just worked.
But even in the case of the official aws-sdk, they recently deprecated v2. I now need to update all my not-so-old Node projects to work with the newer version. That probably wouldn't have happened if I had used Bun's S3 client.
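For what it's worth, the Bun built-in is pleasantly small. As I read Bun's docs (>= 1.2, so the API may still shift), the basic flow is roughly:

```ts
// Rough sketch of Bun's built-in S3 client, as I read the docs (Bun >= 1.2; API may differ by version).
// The default client picks up credentials/bucket from S3_* / AWS_* environment variables.
import { s3 } from "bun";

const file = s3.file("reports/2024.json");       // lazy reference, no network call yet
await file.write(JSON.stringify({ ok: true }));  // upload
const data = await file.json();                  // download + parse
const url = file.presign({ expiresIn: 3600 });   // presigned URL without pulling in the aws-sdk
console.log(data, url);
```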