I made a site to use LLMs to help me with reverse engineering. The output is surprisingly readable, even with C++ classes. Let me know any feedback you might have: https://decompiler.zeroday.engineering/
This is great! With Ghidra I had to look for the corresponding libs of a very specific RISC-V vendor, but your SRE figured it out by itself. You should have your own HN thread on the front page!
Up to you, but currently your polling endpoint just returns a boolean, which is likely super easy to cook up on the server side but also leaves the user wondering "uh, is this thing on?" in ways that any kind of percentage might not. IOW, how long, exactly, should any sane person wait for it to be {"status":true}?
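For example, a response shaped something like this (field names invented here, not your actual API) would let the page show real progress instead of an indefinite spinner:

    {"status": "processing", "progress": 0.42, "stage": "decompiling functions"}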
Also, you have your ELB misconfigured, because trying to upload a binary that takes more than 30 seconds to upload causes the actual POST to puke. I'm sure that's great for hello-world.exe but is absolutely hilarious for any real binary.
And that's what I'm getting at, and where I'd like the community to improve in discussions. In what context do you need it, and how much, and what would your alternatives be?
Because the contexts Linux is used in, and the threat levels those contexts face, are vastly different.
For example, I'm aware that the industrial and embedded world does wild things at times, because it's hard to establish redundancy and replaceability there, and because the system is attached to a $750k lathe. However, that thing is not networked, and physical access is controlled by people with guns. Do whatever you need to keep this thing running, as horrid as it may be.
On the other hand, I have a fleet of load balancers whose job is to accept traffic from all the criminals in this world, and then some legitimate users as well. I can reset them to base Linux and have them back operational in 10 minutes or so. Anything modifying loaded code in memory on these systems, outside of some very specific situations like service startup, is terrifying and entirely unnecessary.
So I would be very happy with a switch to turn that off, even though some other use cases wouldn't need it or wouldn't be able to use it at all.
I've used wormhole once to move a 70 GB file. Couldn't possibly do that before. And yes, I know I used the bandwidth of the relay server, I donated to Debian immediately afterwards (they run the relay for the version in the apt package).
I run the relay server, but the Debian maintainer agreed to bake an alternate hostname into the packaged versions (a CNAME for the same address that the upstream git code uses), so we could change it easily if the cost ever got to be a burden. It hasn't been a problem so far, it moves 10-15 TB per month, but shares a bandwidth pool with other servers I'm renting anyways, so I've only ever had to pay an overage charge once. And TBH if someone made a donation to me, I'd just send it off to Debian anyways.
Every once in a while, somebody moves half a terabyte through it, and then I think I should either move to a slower-but-flat-rate provider, or implement some better rate-limiting code, or finally implement the protocol extension where clients state up front how much data they're going to transfer, and the server can say no. But so far it's never climbed the priority ranking high enough to take action on.
Hetzner.de has 1 Gbps unlimited, or 10 Gbps with a 20 TB limit, on their bare metal servers. And those can be bought very cheaply if you don't need any special hardware.
It's unlikely they would let you run it full tilt the entire month. I'm not aware of any VPS providers that have a true unlimited data plan. Would love to be proven wrong.
I'm suffering from fatigue from all the political commercials in which every single Democrat apparently single-handedly reduced the price of insulin. As if government-mandated pricing were a good thing.
If something is overpriced, somebody should jump in and take advantage of a business opportunity. If nobody is jumping in, perhaps the item is not overpriced. Or perhaps there is some systemic issue preventing willing competitors from jumping in. Imagine if somebody tackled the real issue and it unclogged the plumbing for producers of all sorts of medicine besides insulin at the same time.
If a government mandates the sale of an item below the cost of production, they drive out all producers and that product disappears from the market. That is, unless they create some government subsidy or other graft to compensate the government-appointed winners. Any way you slice it, it is a recipe for disaster.
If parties are allowed to compete fairly with each other, somebody will offer a cheaper price. This is already the case with AWS. Consumers may decide that the cheaper product is somehow inferior, but that is not a problem that lawmakers should interfere in.
Interesting you should choose insulin, as it's made by ~3 companies, and from 2002-2013 the price went up 6x while the price of the inputs dropped. ISTR that right after that it went up another 3x to over $300/vial. Thankfully, I only needed a vial once every few months; it was for my cat.
"Evergreening", a process where the drug manufacturers slightly change the formula or delivery when one patent is running out, to gain a new patent, then stop manufacturing the old formula.
Not saying I want to see AWS bandwidth prices regulated (though I think they could come down and still make a massive profit). But in the case of insulin, the industry has left little choice but government intervention.
Except in insulin’s case all they did was cap out-of-pocket costs, meaning insurance takes up the rest of the bill…which means the rest of us pay for it - and worse yet, it effectively stops any pressure on those companies to lower prices. That’s both political pressure and market pressure. Why the hell would anyone care or use cheaper insulin now?
> If something is overpriced, somebody should jump in and take advantage of a business opportunity
Insulin is off patent. Anyone can in theory manufacture it, but the ROI is just not worth it even at the current prices. Manufacturing it is not easy, there are humongous amounts of regulation, and you will probably need to do a couple of clinical trials too... so you end up with an oligopoly of incumbents that nobody wants to challenge, and prices that are all aligned.
You disliked my idle thought so much that you needed to reply twice? :)
Between the various factors causing strong lock-in effects, their market dominance, and the insanely high price of moving data out of AWS, I wouldn't be surprised if they got their antitrust moment within a few years.
Sorry. It wasn't personal. I just thought you deserved more than my initial terse response and some explanation of what bothered me: Layers of stupid laws on top of stupid laws that impede rational behavior instead of encouraging it.
>I'm beginning to think that the only feasible solution is changing the law.
Do you also think we should legislate the price of BMWs? You're not forced to buy AWS, there are plenty of alternatives, and the prices that AWS charges are well known. I'm not sure why the government should be involved other than a vague sense of "I want cheap stuff".
> You’ll get at least 20 TB of inclusive traffic for cloud servers at EU and US locations and 1 TB in Singapore. For each additional TB, we charge € 1.00 in the EU and US, and € 7.40 in Singapore. (Prices excl. VAT)
I also used to use Time4VPS, however they have gradually been raising prices, and the traffic I'd get before being throttled would be less than that of Contabo.
Not yet. The "Dilation" protocol (which is about 80% implemented) is intended to support WebRTC as a transport layer. IIRC it requires a public server to tell you about your external IP address, but magic-wormhole already has a server that could play that role. Once a side learns its own address, it can send it to the peer (via the encrypted tunnel, through the relay server), and then the WebRTC hole-punching protocol tries to make connections to the peer's public address. When both sides do the same thing at the same time, sometimes you can get a direct connection through the NAT boxes.
We don't have that yet, but the two sides attempt direct connections first (to all the private addresses they can find, which will include a public address if they aren't behind NAT). They both wait a couple of seconds before trying the relay, and the first successful negotiation wins, so in most cases it will use a direct connection if at all possible.
Do you do NAT hole punching, and/or port mapping like UPnP or NAT-PMP? I think for all but the most hostile networks the use of the relay server can almost always be avoided.
Yes, it relies on two servers, both of which I run. All connections use the "mailbox server", to exchange short messages, which are used to do the cryptographic negotiation, and then trade instructions like "I want to send you a file, please tell me what IP addresses to try".
Then, to send the bulk data, if the two sides can't establish a direct connection, they fall back to the "transit relay helper" server. You only need that one if both sides are behind NAT.
The client has addresses for both servers baked in, so everything works out-of-the-box, but you can override either one with CLI args or environment variables.
Both sides must use the same mailbox server. But they can use different transit relay helpers, since the helper's address just gets included in the "I want to send you a file" conversation. If I use `--transit-helper tcp:helperA.example.com:1234` and you use `--transit-helper tcp:helperB.example.com:1234`, then we'll both try all of:
* my public IP addresses
* your public IP addresses
* helperA (after a short delay)
* helperB (after a short delay)
and the first one to negotiate successfully will get used.
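A rough Python sketch of that racing logic (purely illustrative, not wormhole's actual transit code, which also performs a handshake on each candidate before declaring a winner):

    import socket, threading, queue, time

    def connect_first(direct_hints, relay_hints, relay_delay=2.0, timeout=10.0):
        winners = queue.Queue()

        def attempt(addr, delay):
            time.sleep(delay)
            try:
                winners.put((addr, socket.create_connection(addr, timeout=timeout)))
            except OSError:
                pass

        # Direct hints start immediately; relay hints start after a short delay,
        # so a working direct path wins even if the relay is quicker to accept.
        for addr in direct_hints:
            threading.Thread(target=attempt, args=(addr, 0.0), daemon=True).start()
        for addr in relay_hints:
            threading.Thread(target=attempt, args=(addr, relay_delay), daemon=True).start()

        # First successful connection wins; the losers are simply abandoned here.
        return winners.get(timeout=timeout + relay_delay)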
> since otherwise you just scp or rsync or sftp if you don't have the dual barrier
True, but wormhole also means you don't have to set up pubkeys ahead of time.
Can you turn magic wormhole into an API for receiving a JSON payload directly into your magic wormhole, on top of whatever you're running in FastAPI, to route that incoming wormhole listener?
There's a `wormhole send --text BLOB`, which doesn't bother with a bulk-data "transit" connection, and just drops a chunk of text on the receiving side's stdout.
You can also import the wormhole library directly and use its API to run whatever protocol you want. That mode uses the same kinds of codes as the file-sending tool, but with a different "application ID" so they aren't competing for the same short code numbers. https://github.com/magic-wormhole/magic-wormhole/blob/master... has details.
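If it helps, library-level usage looks roughly like this (written from memory of the documented API, so treat the details as approximate; the appid string is whatever you pick for your application, and the receiving side calls `w.set_code(code)` with the sender's code instead of `allocate_code()`):

    import wormhole
    from twisted.internet.defer import inlineCallbacks
    from twisted.internet.task import react

    APPID = "example.com/my-json-protocol"          # your own application ID
    RELAY = "ws://relay.magic-wormhole.io:4000/v1"  # the default mailbox server

    @inlineCallbacks
    def sender(reactor):
        w = wormhole.create(APPID, RELAY, reactor)
        w.allocate_code()
        code = yield w.get_code()
        print("tell the other side:", code)
        w.send_message(b'{"hello": "world"}')  # any bytes, e.g. a JSON payload
        reply = yield w.get_message()
        print("peer replied:", reply)
        yield w.close()

    react(sender)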
A technique like this is used to do "invites" in Magic Folder, and also in Tahoe-LAFS. That is, they speak a custom protocol over just the Mailbox server in order to do some secrets-exchanging. They never set up a "bulk transport" link.
There is also a Haskell implementation, if that's of interest.
I love to learn about "non-file-transfer" use-cases for Magic Wormhole, so please connect via GitHub (or https://meejah.ca/contact)
All of them require an account on the other machine and aren't really suitable for a quick one-off file transfer from one computer to another that you don't own.
If I have a direct network connection I tend to go with:
python3 -m http.server
or
tar ...| nc
Neither of which is great, but at least you'll find them on many machines already preinstalled.
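If nc isn't on the receiving box either, a few lines of Python can stand in for the receive side (a quick-and-dirty sketch with no error handling; the port and filename are arbitrary):

    # Listen once, dump whatever arrives into a file, then exit.
    import socket

    srv = socket.socket()
    srv.bind(("0.0.0.0", 9000))
    srv.listen(1)
    conn, peer = srv.accept()
    with open("received.tar", "wb") as f:
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            f.write(chunk)
    conn.close()
    srv.close()

The sending side is then just `tar c somedir | nc <host> 9000`.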
Not really. The closest approximation would be if both sides set their `--transit-helper` to an unusable port like `tcp:localhost:9`. That would effectively remove the relay helpers from the negotiation list, leaving just the direct connection hints.
But you can't currently force that from one side: if you do that, but the other side doesn't override it too, then you'll both include their relay hint in the list.
Note that using the relay doesn't affect the security of the transfer: there's nothing the relay can do to violate your confidentiality (learn what you're sending) or integrity (cause you to receive something other than what the sender intended). The worst the relay can do is to prevent your transfer from happening entirely, or make it go slowly.
Today appears to be the day you can run an LLM that is competitive with GPT-4o at home with the right hardware. Incredible for progress and advancement of the technology.
Where the right hardware is 10x 4090s, even at 4-bit quantization. I'm hoping we'll see these models get smaller, but the GPT-4-competitive one isn't really accessible for home use yet.
Still amazing that it's available at all, of course!
It's hardly cheap, starting at about $10k of hardware, but another potential option appears to be using Exo to spread the model across a few MBPs or Mac Studios: https://x.com/exolabs_/status/1814913116704288870