

I made a site to use LLMs to help me with reverse engineering. The output is surprisingly readable, even with C++ classes. Let me know any feedback you might have: https://decompiler.zeroday.engineering/


This is great! With Ghidra I had to look for the corresponding libs of a very specific RISC-V vendor; your SRE did it by itself. You should have your own HN thread on the front page!


What kind of file should be uploaded?


The allowed types are a bit misleading. Any binary is accepted, any architecture. You can upload shared objects, ELF executables, PE binaries, etc.

I like to give it bomb executables (reverse engineering challenges) to test it.


> Any binary is accepted, any architecture.

One should be careful tossing around the word "any" in relation to executable formats, for there are seemingly an unbounded number of them: https://github.com/1Password/onepassword-sdk-go/blob/v0.1.5/...

Up to you, but currently your polling endpoint just returns a boolean, which is likely super easy to cook up on the server side but also leaves the user wondering "uh, is this thing on?" in ways that any kind of percentage might not. IOW, how long, exactly, should any sane person wait for it to be {"status":true}?
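Even something minimal in this direction would help (field names here are purely a hypothetical illustration, not your actual API):

    {"status": "processing", "progress": 0.42, "eta_seconds": 90}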

Also, you have your ELB misconfigured, because trying to upload a binary that takes more than 30 seconds to upload causes the actual POST to puke. I'm sure that's great for hello-world.exe but is absolutely hilarious for any real binary.


I can answer the writing to /proc one. It is sometimes useful to hotpatch running programs with /proc/pid/mem.
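A minimal sketch of the idea in Python (the PID, address, and patch bytes are all made up for illustration; you need ptrace permission over the target, and patching a live text page is obviously at-your-own-risk):

    pid = 1234                  # hypothetical target PID
    addr = 0x401000             # hypothetical address of the code to patch
    patch = b"\x90\x90"         # e.g. overwrite with two x86 NOPs

    # /proc/<pid>/mem is seekable: offsets are virtual addresses in the
    # target, and writes go straight into the running process's memory.
    with open(f"/proc/{pid}/mem", "r+b") as mem:
        mem.seek(addr)
        mem.write(patch)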


And that's what I'm getting at, and where I'd like the community to improve in discussions. In what context do you need it, and how much, and what would your alternatives be?

Because the contexts Linux is being used in, and their threat levels, are vastly different.

For example, I'm aware that the industrial and embedded world does wild things at times, because it's hard to establish redundancy and replaceability there when the system is attached to a $750k lathe. However, that thing is not networked, and physical access is controlled by people with guns. Do whatever you need to keep this thing running, as horrid as it may be.

On the other hand, I have a fleet of load balancers whose job is to accept traffic from all the criminals in this world, and then some legitimate users as well. I can reset them to base Linux and have them back operational in 10 minutes or so. On these systems, anything modifying loaded code in memory outside of some very specific situations, like service startup, is terrifying and entirely unnecessary.

So I would be very happy with a switch to turn that off, even though some other use cases wouldn't need it or wouldn't be able to use it at all.


Or the LilyGO T-Embed CC1101.


CC1101 boards (at least the cheapest ones) have problems with the shared SPI bus (SD card and sub-GHz module).



I recently learned this too, just a few months ago. Ended up making a frontend so I could do it automatically: https://decompiler.zeroday.engineering/


I've used wormhole once to move a 70 GB file. Couldn't possibly do that before. And yes, I know I used the bandwidth of the relay server, I donated to Debian immediately afterwards (they run the relay for the version in the apt package).


(magic-wormhole author here)

Thanks for making a donation!

I run the relay server, but the Debian maintainer agreed to bake an alternate hostname into the packaged versions (a CNAME for the same address that the upstream git code uses), so we could change it easily if the cost ever got to be a burden. It hasn't been a problem so far: it moves 10-15 TB per month, but shares a bandwidth pool with other servers I'm renting anyway, so I've only ever had to pay an overage charge once. And TBH if someone made a donation to me, I'd just send it off to Debian anyways.

Every once in a while, somebody moves half a terabyte through it, and then I think I should either move to a slower-but-flat-rate provider, or implement some better rate-limiting code, or finally implement the protocol extension where clients state up front how much data they're going to transfer, and the server can say no. But so far it's never climbed the priority ranking high enough to take action on.

Thanks for using magic wormhole!


> move to a slower-but-flat-rate provider

As I'm sure you're aware: https://www.scaleway.com/en/stardust-instances/ "up to 100Mbps" for $4/month


Hetzner.de has 1 Gbps unlimited or 10 Gbps with a 20 TB limit on their bare metal servers. And those can be bought very cheap if you don't need any special hardware.


32.4 TB for $4, or approximately 700 times cheaper than AWS. Neat.
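Back-of-the-envelope, assuming AWS egress at roughly $0.09/GB:

    seconds = 30 * 24 * 3600           # one month = 2,592,000 s
    tb = 100e6 / 8 * seconds / 1e12    # sustained 100 Mbps -> ~32.4 TB
    aws_usd = 32_400 * 0.09            # ~32,400 GB at ~$0.09/GB -> ~$2,916
    ratio = aws_usd / 4                # ~729x the $4/month instance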


It's unlikely they would let you run it full tilt the entire month. I'm not aware of any VPS providers that have a true unlimited data plan. Would love to be proven wrong.


Bare metal love


It's clearly a VPS, not bare metal.


I wonder what it would take for AWS to lower their outbound BW pricing to something that's not insane.

I'm beginning to think that the only feasible solution is changing the law.


I'm suffering from fatigue from all the political commercials in which every single Democrat apparently single-handedly reduced the price of insulin. As if government-mandated pricing were a good thing.

If something is overpriced, somebody should jump in and take advantage of a business opportunity. If nobody is jumping in, perhaps the item is not overpriced. Or perhaps there is some systemic issue preventing willing competitors from jumping in. Imagine if somebody tackled the real issue and it unclogged the plumbing for producers of all sorts of medicine besides insulin at the same time.

If a government mandates the sale of an item below the cost of production, they drive out all producers and that product disappears from the market. That is, unless they create some government subsidy or other graft to compensate the government-appointed winners. Any way you slice it, it is a recipe for disaster.

If parties are allowed to compete fairly with each other, somebody will offer a cheaper price. This is already the case with AWS. Consumers may decide that the cheaper product is somehow inferior, but that is not a problem that lawmakers should interfere in.


Interesting you should choose insulin, as it's made by ~3 companies, and from 2002 to 2013 the price went up 6x while the price of the inputs dropped. ISTR that right after that it went up another 3x, to over $300/vial. Thankfully, I only needed a vial once every few months; it was for my cat.

"Evergreening", a process where the drug manufacturers slightly change the formula or delivery when one patent is running out, to gain a new patent, then stop manufacturing the old formula.

Not saying I want to see AWS bandwidth prices regulated (though I think they could come down and still make a massive profit). But in the case of insulin, the industry has left little choice but government intervention.


Except in insulin's case all they did was cap out-of-pocket costs, meaning insurance takes up the rest of the bill, which means the rest of us pay for it. And worse yet, it effectively stops any pressure on those companies to lower prices, both political pressure and market pressure. Why the hell would anyone care or use cheaper insulin now?


So all the politicians pat themselves on the back without fixing the real problem. Instead, they just add one problem on top of another.


> But in the case of insulin, the industry has left little choice but government intervention.

Drugs are not made without government approval. The FDA tells you what you can or cannot do.


I think you’re forgetting that it is regulatory capture that has made medicine cost so much in the US in the first place.


> If something is overpriced, somebody should jump in and take advantage of a business opportunity

Insulin is off patent. Anyone can in theory manufacture it, but the ROI is just not worth it even at the current prices. Manufacturing it is not easy, there are humongous amounts of regulations, and you will probably need to do a couple of clinical trials too... so you end up with an oligopoly of incumbents that nobody wants to challenge, and prices that are all aligned.


Is it, though? Even a poor country like Brazil can afford to give out insulin for free.


Interesting. So the solution is to pile on more regulation, yes? If a little is garbage and destroys the system, then more must be better, right?


Please don't suggest more laws like this. If you don't like AWS pricing, use something else. That's the only real way to develop alternatives.


You disliked my idle thought so much that you needed to reply twice? :)

Given the various factors causing strong lock-in effects, their dominance, and the insanely high pricing of moving data out of AWS, I wouldn't be surprised if they got their antitrust moment within a few years.


Sorry. It wasn't personal. I just thought you deserved more than my initial terse response and some explanation of what bothered me: Layers of stupid laws on top of stupid laws that impede rational behavior instead of encouraging it.


>I'm beginning to think that the only feasible solution is changing the law.

Do you also think we should legislate the price of BMWs? You're not forced to buy AWS, there are plenty of alternatives, and the prices that AWS charges are well known. I'm not sure why the government should be involved, other than a vague sense of "I want cheap stuff".


Contabo also might be an option: https://contabo.com/en/vps/

Throttling after 32 TB: https://help.contabo.com/en/support/solutions/articles/10300...

Some commentary: https://hostingrevelations.com/contabo-bandwidth-limit/

I wouldn't say that they're super dependable or that the VPSes are very performant, but for the most part they work and are affordable.

Alternatively, there's also Hetzner, as sibling comments mentioned: https://www.hetzner.com/cloud/

They do have additional fees, though:

> You’ll get at least 20 TB of inclusive traffic for cloud servers at EU and US locations and 1 TB in Singapore. For each additional TB, we charge € 1.00 in the EU and US, and € 7.40 in Singapore. (Prices excl. VAT)

I also used to use Time4VPS; however, they have gradually been raising prices, and the traffic I'd get before being throttled would be less than that of Contabo.


I remember at one point reading about WebRTC and some kind of "introducer" server that would start the peer-to-peer connections between clients.

Does wormhole try something like that before acting as a relay?


Not yet. The "Dilation" protocol (which is about 80% implemented) is intended to support WebRTC as a transport layer. IIRC it requires a public server to tell you about your external IP address, but magic-wormhole already has a server that could play that role. Once a side learns its own address, it can send it to the peer (via the encrypted tunnel, through the relay server), and then the WebRTC hole-punching protocol tries to make connections to the peer's public address. When both sides do the same thing at the same time, sometimes you can get a direct connection through the NAT boxes.

We don't have that yet, but the two sides attempt direct connections first (to all the private addresses they can find, which will include a public address if they aren't behind NAT). They both wait a couple of seconds before trying the relay, and the first successful negotiation wins, so in most cases it will use a direct connection if at all possible.
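Conceptually the negotiation race looks something like this (a simplified sketch, not the actual wormhole code; error handling and the real transit handshake are glossed over):

    import asyncio

    async def attempt(host, port, delay=0.0):
        await asyncio.sleep(delay)      # relay hints wait a couple of seconds
        reader, writer = await asyncio.open_connection(host, port)
        # the real code performs the transit handshake here before declaring victory
        return writer

    async def first_success(direct_hints, relay_hints, relay_delay=2.0):
        # direct candidates start immediately, relay candidates after a delay
        tasks = [asyncio.ensure_future(attempt(h, p)) for h, p in direct_hints]
        tasks += [asyncio.ensure_future(attempt(h, p, relay_delay)) for h, p in relay_hints]
        done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        for task in pending:
            task.cancel()               # first successful negotiation wins
        return done.pop().result()      # NB: a failed attempt is not retried here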


Seems like the only way to ensure wormhole scales is to use the relay server only to set up direct connections.

I know this requires one of the ends to be able to open ports or whatever, but that should be baked into the wormhole setup.


Maybe hole punching or similar might be worth examining?


Do you do NAT hole punching, and/or port mapping like UPnP or NAT-PMP? I think for all but the most hostile networks the use of the relay server can almost always be avoided.


It took scrolling this far down in the comments to get to some inkling of the meat of this.

It relies on some singular or small set of donated servers?

NAT <-> NAT traversal is obviously the biggest motivator, since otherwise you just scp or rsync or sftp if you don't have the dual barrier.

Is the relay server configurable? It seemed to be implied that it's somewhat hardcoded.


Yes, it relies on two servers, both of which I run. All connections use the "mailbox server", to exchange short messages, which are used to do the cryptographic negotiation, and then trade instructions like "I want to send you a file, please tell me what IP addresses to try".

Then, to send the bulk data, if the two sides can't establish a direct connection, they fall back to the "transit relay helper" server. You only need that one if both sides are behind NAT.

The client has addresses for both servers baked in, so everything works out-of-the-box, but you can override either one with CLI args or environment variables.

Both sides must use the same mailbox server. But they can use different transit relay helpers, since the helper's address just gets included in the "I want to send you a file" conversation. If I use `--transit-helper tcp:helperA.example.com:1234` and you use `--transit-helper tcp:helperB.example.com:1234`, then we'll both try all of:

* my public IP addresses

* your public IP addresses

* helperA (after a short delay)

* helperB (after a short delay)

and the first one to negotiate successfully will get used.

> since otherwise you just scp or rsync or sftp if you don't have the dual barrier

True, but wormhole also means you don't have to set up pubkeys ahead of time.


Both sides behind NAT is surely the most common use case by a mile? Do you keep stats?

I would have thought NAT hole punching was a basic requirement for something like this...


Can you turn magic wormhole into an API for receiving a JSON payload directly, on top of whatever you're running in FastAPI, to route that incoming wormhole listener?


There's a `wormhole send --text BLOB`, which doesn't bother with a bulk-data "transit" connection, and just drops a chunk of text on the receiving side's stdout.

You can also import the wormhole library directly and use its API to run whatever protocol you want. That mode uses the same kinds of codes as the file-sending tool, but with a different "application ID" so they aren't competing for the same short code numbers. https://github.com/magic-wormhole/magic-wormhole/blob/master... has details.
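For the CLI route, usage looks roughly like this (the code shown is made up; each send mints a fresh one):

    $ wormhole send --text "hello"
    Wormhole code is: 7-crossover-clockwork

    # on the other machine:
    $ wormhole receive 7-crossover-clockwork
    hello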


Yes.

A technique like this is used to do "invites" in Magic Folder, and also in Tahoe-LAFS. That is, they speak a custom protocol over just the Mailbox server in order to do some secrets-exchanging. They never set up a "bulk transport" link.

There is also a Haskell implementation, if that's of interest.

I love to learn about "non-file-transfer" use-cases for Magic Wormhole, so please connect via GitHub (or https://meejah.ca/contact)


> scp or rsync or sftp

All of them require an account on the other machine and aren't really suitable for a quick one-off file transfer from one computer to another that you don't own.

If I have a direct network connection I tend to go with:

    python3 -m http.server
or

    tar ... | nc
Neither of which is great, but at least you'll find them on many machines already preinstalled.
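For what it's worth, the nc route concretely looks something like this (hostname, directory, and port are made up, and the listen-flag syntax varies between netcat implementations):

    # on the receiver (traditional netcat; BSD nc wants `nc -l 9000`)
    nc -l -p 9000 | tar xzf -

    # on the sender
    tar czf - mydir | nc receiver.local 9000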


The wormhole transit protocol will attempt to arrange a direct connection and avoid transferring data through the relay.


Is there a switch to fail rather than fall back on relay?


Not really. The closest approximation would be if both sides set their `--transit-helper` to an unusable port like `tcp:localhost:9`. That would effectively remove the relay helpers from the negotiation list, leaving just the direct connection hints.

But you can't currently force that from one side: if you do that, but the other side doesn't override it too, then you'll both include their relay hint in the list.

Note that using the relay doesn't affect the security of the transfer: there's nothing the relay can do to violate your confidentiality (learn what you're sending) or integrity (cause you to receive something other than what the sender intended). The worst the relay can do is to prevent your transfer from happening entirely, or make it go slowly.


Not yet, I'm writing it. Will be available by the end of the month, feel free to follow the PR: https://github.com/flipperdevices/flipperzero-firmware/pull/...


How? They are prohibited from using it in the license.


Today appears to be the day you can run an LLM that is competitive with GPT-4o at home with the right hardware. Incredible for progress and advancement of the technology.

Statement from Mark: https://about.fb.com/news/2024/07/open-source-ai-is-the-path...


> at home with the right hardware

Where the right hardware is 10x 4090s, even at 4-bit quantization. I'm hoping we'll see these models get smaller, but the GPT-4-competitive one isn't really accessible for home use yet.

Still amazing that it's available at all, of course!
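The rough math on the memory footprint (405B parameters, weights only, ignoring KV cache and activation overhead):

    params = 405e9
    weights_gb = params * 0.5 / 1e9   # 4-bit = 0.5 bytes/param -> ~202.5 GB
    cards = weights_gb / 24           # ~8.4x 24 GB 4090s just for weights,
                                      # hence ~10 once overhead is included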


It's hardly cheap, starting at about $10k of hardware, but another potential option appears to be using Exo to spread the model across a few MBPs or Mac Studios: https://x.com/exolabs_/status/1814913116704288870


Or maybe using Distributed Llama? https://github.com/b4rtaz/distributed-llama


It's not really competitive though, is it? I tested it and 4o is just better.


Disclaimer: I tested llama3-8B; 3.1 might be better even as a small model, but so far I have not seen a single small model approach 4o, ime.

