WireGuardNT, a high-performance WireGuard implementation for the Windows kernel (zx2c4.com)
622 points by zx2c4 on Aug 2, 2021 | 182 comments


For reference, I've never seen the built-in Windows VPN protocols exceed ~70 Mbps in any scenario. Maybe it's possible with a crossover cable between two Mellanox 100 Gbps NICs, using water-cooled and overclocked CPUs, but not over ordinary networks with ordinary servers.

I have gigabit wired Internet to a site with gigabit Internet. Typical performance of SSTP or IKEv2 is 15-30 Mbps. That's 1.5% to 3% max utilisation of the available bandwidth, which is just... sad.

It's not the specific site either, other vendor VPNs can easily achieve > 300 Mbps over the same path.

It's a year and a half into the pandemic, there are record numbers of people working from home, and Microsoft is the world's second biggest company right now.

Meanwhile, volunteers put together a protocol in their spare time that is not only more secure but can also easily do 7.5 Gbps!

That needs to be repeated: At least ONE HUNDRED TIMES faster than the "best" Microsoft can offer to their hundreds of millions of enterprise customers that are working from home.

Someone from Microsoft's networking team needs to read this, and then watch Casey Muratori's rant about Microsoft's poor track record with performance: https://www.youtube.com/watch?v=99dKzubvpKE


Not surprising at all; it is just not worthwhile doing from a project management perspective, regardless of what a bunch of people on the Internet think about it.


Or Microsoft doesn't always make perfectly ideal project management decisions.


Or Microsoft doesn't have a VPN team any more, and hence no project managers to make management decisions for them.

I'm not even kidding that much, the DirectAccess team appears to have been disbanded and all of the open issues were unofficially put in the "will not fix" bucket. I suspect the Always On VPN team is one guy, but probably not working on it full-time.


True, however we can only evaluate that when having full knowledge of the decision process, development costs and business value.


I regularly saturate a gig internet connection to my Colo a few states over using the built in windows IPsec client just using a standard laptop.

Not that it's a particularly amazing VPN stack, but 15-30 Mbps says you just ran into a corner-case issue, regardless of which VPN stack it is.


"... with a crossover cable..."

Many years ago, I once brought a crossover cable from home to the office to do some data transfer from a workstation to a company-issued laptop. The IT department issuing the laptop, being lovers of all things Microsoft, claimed crossover cable was "obsolete" due to auto-sensing used by Windows.

I am just another dumb end user, I do not work in IT, but I still get faster data transfer between two computers with crossover cable than by going through a third computer, or God forbid, over Wifi.

Sounds like crossover cable is not "obsolete" after all. Who would have thought.

Microsoft's customers, e.g., IT departments, are arguably complicit in the sad "state-of-the-art" you describe. The best software I have ever used was written by volunteers. Money can't buy everything. As Microsoft has shown, it can certainly buy customers.


As a sibling comment alluded to: the _crossover_ cable was obsolete, not the Ethernet cable. You can usually use a straight Ethernet cable with modern devices; you don't need a crossover cable. The auto-sensing they were talking about is what's built into the NIC, and it detects how the pairs of pins in your cable are being used.


Have you tried connecting two computers with just a patch cable? With the auto-sensing Ethernet ports, it works as if the cable were a crossover cable.


I believe this is only true for gigabit - though almost any device today should be gigabit?


Auto-MDIX was starting to become the norm on nicer hardware when GbE started gaining adoption, and the MDI layer of GbE effectively obsoletes the concept of MDIX by specifying that pairs must always be probed. This was more or less required because GbE uses four pairs while Fast Ethernet required two; a GbE interface is expected to encounter improper cables and needs to detect that in order to degrade to Fast Ethernet.

So for GbE it's all but guaranteed, for Fast Ethernet it depends on how much money the device vendor was willing to spend on the interface, basically. Later laptops should be pretty reliable.

Of course none of this has anything to do with Windows; it all happens at a hardware level, which can sometimes make investigating problems a bit painful.


Not all auto-MDIX ports are gigabit, but almost all gigabit ports have auto-MDIX.


I'll admit that I don't know if it would have worked then. And it has only been recently that I have got two computers which both have gigabit ports. I don't remember ever using a crossover cable as I always had a switch. I do remember having to manually assign IP addresses in that configuration as it didn't have a DHCP server to assign them.


Seems like folks replying and voting may have assumed I was always using recently purchased hardware. That would be an incorrect assumption. Sure, there's auto-sensing in some newer hardware and Windows may support it, but that does not mean crossover cable does not work, too. They both work. Neither is obsolete, but only one works with older hardware.

Wonder why the parent comment I was replying to mentioned crossover cable in particular. If it's obsolete why mention it.


You are confusing two different conversations. Your IT department are the ones that used the language "obsolete", because they likely knew that the laptop (which they provided) supported auto-sensing and therefore there was no need for them to provide you with a special cable to achieve a direct PC-to-PC ethernet connection.

Whereas the parent comment probably only used the language "crossover" because they were trying to be explicit about the fact that they are talking about a direct PC-to-PC ethernet connection. Not because crossover wiring is actually necessary to make that configuration work.

Furthermore, support for auto-sensing has nothing to do with the OS, or Microsoft.


First, I provided the cable. They were commenting on the idea of using a crossover cable, not a request for one.

Second, you are guessing what the commenter meant by crossover cable. I think he meant crossover cable. There is nothing to suggest otherwise.

Third, I never said auto-sensing had anything to do with the OS or Microsoft. I said the IT department loved Microsoft. You got confused and made a connection between the two.

The thing with Microsoft Windows is that it encourages the user to upgrade their hardware. Whereas I prefer NetBSD as a personal OS, and it does no such thing. Not every computer I own has auto-sensing or a particularly fast NIC.

The questions I raised are 1. whether crossover cable still works (with both older and newer hardware) and 2. whether it is faster than alternatives.

Is it slower? IME, no.


I am just trying to explain why your comment is grey. To be clear, there is no speed increase from using a crossover cable instead of a straight-through cable together with auto sensing.


But is it slower? I never said it was faster than using auto-sensing. I said it was faster (for me) than using a third computer or using Wifi.

Plus you are (again) ignoring the situations where it's an older computer that does not have auto-sensing.

True or false: Crossover cable is more versatile for direct data transfers and is not any slower than using auto-sensing.

AFAICT, there is nothing wrong with crossover cable. If there was, methinks the parent commenter wouldn't be mentioning it on HN.

I do not see grey because I use a text-only browser. It's all the same color (except italics), just how I like it. :)


Very impressive performance:

> While performance is quite good right now (~7.5Gbps TX on my small test box), not a lot of effort has yet been spent on optimizing it

> Jonathan Tooker reported to me that, on his system with an Intel AC9560 WiFi card, he gets ~600Mbps without WireGuard, ~600Mbps with wireguard-go/Wintun over Ethernet, ~95Mbps with wireguard-go/Wintun over WiFi, and ~600Mbps with WireGuardNT over WiFi.

Congratulations to Simon and Jason! Very happy WireGuard user here.


People always compare bandwidth which is important.

Has anyone done any comparisons how latency is affected between various VPN implementations?


Wireguard adds fixed latency as it doesn't do any buffering except for the one packet it is currently working on.

The only exception is the initial handshake, which adds 1 RTT for the first packet being sent.


> adds fixed latency

I don't know much about WireGuard, but generally speaking large fixed latency during one step can lead to huge variance in end to end latency due to queuing.


That one piece is for initialization. I'm not sure about wireguard, but I've typically seen that abstractions of ethernet stacks like VPNs will either throw away packets to be sent before initialization is complete, or will structurally simply not allow anything to be queued at that point in the first place (so like if it's a user space implementation, use type safety somehow so you can't even call send_packet() until it's done with init). In those modes the fixed latency of init doesn't cause buffer bloat issues.
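The typestate idea described above can be sketched in a few lines (all names here are hypothetical, not from any real VPN implementation): before the handshake completes there is simply no send method to call, so nothing can queue up behind initialization.

```python
# Sketch of "use type safety so you can't even call send_packet() until
# init is done". All class and method names are hypothetical.

class PendingTunnel:
    """Tunnel before the handshake; deliberately has no send_packet()."""

    def complete_handshake(self) -> "ReadyTunnel":
        # ...the real 1-RTT handshake would happen here...
        return ReadyTunnel()


class ReadyTunnel:
    """Only this type exposes send_packet(), so pre-init queuing is impossible."""

    def send_packet(self, payload: bytes) -> int:
        # Stand-in for encrypt-and-transmit; returns bytes "sent".
        return len(payload)


tunnel = PendingTunnel().complete_handshake()
sent = tunnel.send_packet(b"hello")
```

Because the pending type has no send method at all, misuse is a compile-time/attribute error rather than a hidden queue that bloats latency.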


Add jitter to the list. Consistency is probably more important for end-user perception than average latency.


Yes, I am gonna reinstall wireguard on my raspberry pi again. This is amazing news. And I will try and getting my windows server ryzen pc to be a router so I can benchmark all four configs.


If you're running a recent enough kernel, it's technically already there, maybe without the userspace tools.


Have you checked out some of the other options for remote access to your Raspberry Pi, like Tailscale (no affiliation) and inlets https://johansiebens.dev/posts/2020/11/quake-iii-arena-k3s-a...?


The Wireguard team are simply brilliant. It's incredible how they have developed low-level, cross-platform solutions across Linux, OpenBSD, FreeBSD and now Windows.

I think they are truly exceptional programmers. It's hard to think of people who have come anywhere close to such an achievement.


This is exciting to me. I have tripped over every VPN technology listed on Wikipedia at one point or another during my career. Always open to something better.

I think IPSec or OpenVPN are probably the opposite of what WG is offering here... Microsoft's SSTP offering is actually not causing me any major frustration at the moment. I almost like using it. But, seeing these other comments telling tales of 600 megabit VPN wifi experiences... I'll check it out for sure.


I had an SSTP tunnel refuse to establish a few weeks ago. WireGuard was fine. Turns out the provider was MITMing tcp/443 traffic.


It's interesting that such a "provider" (I assume a corporate network, rather than a consumer ISP) allows traffic on WireGuard's UDP port.


It was a U.K. university building

They asked what ports we wanted (sigh), we said all. I suspect the ports were opened but the MITM wasn't disabled. Oddly, it either passed traffic through unmolested or it broke connections completely without inserting a fake cert.

Even more oddly GitHub was blocked but stackoverflow was allowed.

As we turned up in the vans on the Saturday morning and the recce team hadn't clocked this (to be fair, most https sites worked fine), we couldn't get it changed - or the process for changing it was far too much given the other stuff we had to set up and the ease of the workaround.


Wouldn't they need a cert/custom CA on your box to do that?


They need their custom CA on your box for your machine to accept the traffic by default, otherwise you'll get a big ugly untrusted cert error on every https/ssl connection, but some apps will let you ignore those (eg: curl --insecure)

I worked at a company that did this and it was a massive headache; every time I wanted to set up a new VM things would fail until I remembered I had to install their CA. I was an intern at the time, and they gave me some work that required an app that I couldn't configure to use their CA for the life of me. After a lot of failed troubleshooting I ended up just running an SSH server on my home PC and creating a SOCKS proxy through that.
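For tools that respect a CA bundle, the fix described above amounts to loading the corporate CA into the TLS context. A minimal sketch using Python's stdlib `ssl` module (the CA file path is hypothetical):

```python
import ssl

# A default context trusts only the system store, so certificates re-signed
# by a corporate MITM proxy fail verification with an "unknown issuer" error.
ctx = ssl.create_default_context()

# To make a client accept the proxy, the company CA must be loaded
# explicitly (path below is a made-up example):
# ctx.load_verify_locations(cafile="/usr/local/share/corp-mitm-ca.pem")
```

The headache in practice is that every tool (curl, pip, Java keystores, etc.) has its own notion of where that bundle lives.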


Yes. My university was using Fortigate back in the day and it had 3 behaviours

- Allow with no MITM (trusted sites)

- Block with no way around it (all residential IPs, porn sites)

- Allow but MITM the connection. The browser would present the classic ERR_UNKNOWN_ISSUER warning that most people would ignore. I couldn't figure out what criteria decided that a certain site needed the MITM treatment.


Ah yes, exactly the kind of intellectual freedom to explore and tinker you'd want to flourish at a university. Better block it! Wild guess: US?


>Wild guess: US?

Hungary.

My theory is that it was installed to curb filesharing and then it snowballed into generic blocking of various things on the university network.

>intellectual freedom to explore and tinker you'd want to flourish at a university.

Ah that sounds sweet. Reminds me of the anecdotes I read from the pioneer age of computing that people tell here sometimes. Well, the place I studied at was nothing like that. >_>


Eh, if you care about privacy you shouldn't be browsing porn without a VPN or tunnel to a trusted server, "intellectual freedom" or not


In theory, if the TLS connection wasn't tampered with, i.e. the cert issuer is a party you trust, it shouldn't be a problem apart from the DNS query.


I was referring to residential IPs.


Yep, by far that was the biggest PITA, because I was hosting a lot of things at home.


They could just block all tcp/443 traffic that they couldn't MITM, that's not uncommon in those kinds of setups.


WireGuard is so good, sometimes I forget I am on a VPN and only realize it when downloading a large file and finding my speed capped by my home connection's speed.


I am curious. Which VPN do you use for daily activities?


Mullvad is a popular WireGuard provider and well respected by HN users. I usually get about 700-800 Mbps on my home gigabit connection when using WireGuard and Mullvad.

There are at least several other providers out there. Ideally you can find one where their nearby servers use the same IX as your ISP and/or peer with your ISP.


"NT" suffix for WinNT port looks somewhat classical, I like it.


On one hand I'm super excited for the performance and convenience of in-kernel WireGuard (huray!)

On the other I'm sad that once it's accepted into kernel, it won't be possible to add interesting changes (e.g. obfuscation, forward erasure correction, etc).

I'm torn apart :P


In some networks, I only have outgoing tcp ports 80 and 443.

Does anyone have experience with udp2raw or udptunnel?


This is just insane.

Everything except WWW is blocked, so everything must pretend to be WWW???!!!

So can anyone explain the purpose of the "source port" and "destination port" fields in the TCP header? :-)


Seems you missed the "big web revolution" between 2000 and now. Corporate/school/uni firewall madness filters everything but HTTP(S), so each and every protocol has to somehow become HTTP(S). That is why almost all email providers are webmail providers now. Why stuff like videoconferencing must be HTTP-based. Why we do everything by emulating better protocols via polling a webserver for XML or JSON responses.

Yes, it is insane.


By that logic someone has to explain why WireGuard is over UDP and the purpose of the "protocol" field of IP header :-)

It's called ossification. Same reason why TLS 1.3 has to pretend it's TLS 1.2, and QUIC must be over UDP.


It is insane. But probably some ICMP is allowed too - otherwise TCP tends to break in subtle ways... it is possible to whitelist some/necessary ICMP traffic though.



That is a first step, but what you really want to do (and what most commercial VPN solutions are doing) is tunnel the VPN via HTTPS. Preferably with a real-looking web server at one end, so that "firewall solutions" have something to look at and scan for evil keywords. Oh, and you might have to disguise the traffic pattern as something that looks like HTTP, so frequent reconnects. And the payload has to be web-like, so that a MITM proxy sees something like HTTP: one direction is a POST, the other is the respective response, and if you cannot do persistent connections because the proxy kills them, you'll have to poll at least in one direction.
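The "POST in one direction, response in the other" trick boils down to framing tunnel datagrams inside ordinary-looking HTTP bodies. A toy sketch (all function names hypothetical, no real tunneling library implied):

```python
import base64

# Toy framing: each POST body carries a batch of tunnel datagrams,
# base64-encoded one per line, so a scanning proxy sees plain text.

def datagrams_to_body(datagrams):
    """Pack raw datagrams into an HTTP-friendly request/response body."""
    return "\n".join(base64.b64encode(d).decode("ascii") for d in datagrams)

def body_to_datagrams(body):
    """Unpack a body back into the original datagrams."""
    return [base64.b64decode(line) for line in body.splitlines() if line]

packets = [b"\x01\x02\x03", b"hello world"]
body = datagrams_to_body(packets)
assert body_to_datagrams(body) == packets
```

A real implementation would layer this under TLS and add the polling loop; the sketch only shows why the result looks like unremarkable web traffic to a middlebox.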

Yes, this is supremely ugly, but unfortunately entrenched and expected by now. I've been to customers with intrusive filtering/scanning setups like that who then recommended their favourite commercial VPN vendor to get around that filtering...


Will it be possible to fall back to the userspace implementation to use obfuscation software like shadowsocks? Or will it be deprecated?

Unfortunately the recent popularity means that almost all DPI software recognize the wireguard handshake.


Jason previously stated in mailing list (couldn't find the thread now) that obfuscation is not a goal of WireGuard.


I'm aware of that. However, there is obfuscation software, e.g. Shadowsocks, that wraps WireGuard or any other connection.


To be honest, WireGuard over Shadowsocks is neither common nor recommended, because essentially it's TCP-over-TCP, which will wreak havoc on TCP congestion control.

Unless you mean WireGuard over Shadowsocks UDP transport, but that is even less common.

(Disclaimer: I wrote the official Go port of Shadowsocks https://github.com/shadowsocks/go-shadowsocks2)


Thank you for your work :D

Sometimes a bad solution is better than no solution at all.


Actually I'd rather WireGuard adopt some obfuscation mechanisms, but as Jason stated it's a non-goal and the focus is to get WireGuard into mainstream OS kernels. So the only hope is to provide some obfuscation transport underneath.

Tunneling WG over SS would be inefficient for obvious reasons. Maybe there should be a more lightweight solution.


> almost all DPI software recognize the wireguard handshake.

Why should that matter? How does the DPI software get your keys? Isn't WireGuard data flow completely opaque to anyone or anything between endpoints?

If the DPI software blocks WireGuard packets, that's an entirely different discussion. It gets into the area of "technical solutions" to fight "administrative policy".


> It gets into the area of "technical solutions" to fight "administrative policy".

Yes, that's exactly the point. Sometimes that's the best course of action available to you. If the userspace implementation were to be deprecated that could pose difficulties.


Why would it be an issue? Can't you specify localhost as the endpoint and use the proxy to send it where it needs to go? What is the difference between the implementations?


I like to visualize networks as a series of tubes in my head. Maybe I'm misunderstanding something but I'm imagining a kernel driver that acts as a separate network interface proxying to localhost as a klein bottle[0] esque object

[0]: https://en.wikipedia.org/wiki/Klein_bottle


Performance will take a hit then, which is unfortunate.


Can anyone elaborate on how this is implemented? Are they using WFP in some way?


Anyone know the WSL story here? Will WSL hook into WireGuardNT?


I would like to see 2FA (app or security key) support built into WireGuard. Otherwise, it is perfect as compared to the OpenVPN mess.


WireGuard itself doesn't even handle its existing authentication fully -- you are expected to exchange peer public keys out of band. There are several projects that try to tackle this public key exchange. I think what you're asking for, indirectly, is support for certificate authority style authentication similar to how SSH CAs work, so that wireguard could authenticate tunnels using certificates with signed pubkeys instead of statically configured pubkeys themselves for each peer.

If the wireguard core included any kind of timed partial delegation of authority through key signatures (similar to what SSH allows now with cert-authority/CertificateFile), that'd be enough to build SMS/HOTP/TOTP 2FA, security keys, and much more on top of it.
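The "timed partial delegation" idea above can be sketched concretely. Loud caveat: nothing like this exists in WireGuard itself, all names are hypothetical, and HMAC is used here purely as a stand-in for a real asymmetric CA signature (a real design would use public-key signatures, as SSH CAs do):

```python
import hmac
import hashlib
import struct

# Stand-in for the CA's private signing key. In a real asymmetric design
# verifiers would hold only the CA *public* key; HMAC is a sketch-only shortcut.
CA_KEY = b"ca-secret"

def issue_cert(peer_pubkey, expires_at):
    """CA side: bind a peer public key to an expiry timestamp and 'sign' it."""
    msg = peer_pubkey + struct.pack(">Q", expires_at)
    return msg + hmac.new(CA_KEY, msg, hashlib.sha256).digest()

def verify_cert(cert, now):
    """Verifier side: return the peer pubkey if the cert is genuine and unexpired."""
    msg, tag = cert[:-32], cert[-32:]
    if not hmac.compare_digest(tag, hmac.new(CA_KEY, msg, hashlib.sha256).digest()):
        return None  # tampered or not issued by this CA
    pubkey = msg[:-8]
    expires_at = struct.unpack(">Q", msg[-8:])[0]
    return pubkey if now < expires_at else None
```

With a primitive like this in the core, short-lived certs become the hook for 2FA: the CA only signs after the second factor succeeds, and access lapses when the cert expires.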


Think of wireguard as the plumbing. There will be a plethora of things available on top of wireguard that will enable all sorts of easy authentication options. (For example, TailScale.)


How is this plumbing expected to be implemented? For example, Cloudflare Warp uses WireGuard for its VPN solution, but all key exchanges and other stuff happen via HTTPS REST calls. Is it expected that any non-trivial implementation builds a different "control" protocol? To me that sounds like a dangerous approach. While the WireGuard protocol will be safe and audited, those additional proprietary protocols will hinder cross-platform usage (for example, you had to use a reverse-engineered Cloudflare Warp implementation for Linux until recently, and I guess the BSDs will use it forever) and might expose security vulnerabilities of their own.


Yes, you are exactly right. Wireguard is a typical example of a thing I'd call myopic-cryptographer-protocol. Solve one problem in the minimal fashion that can be called proof-of-concept, do it in a maybe-more-secure way and call it done. Everything else, like proper key distribution and user management, which you need for a real-world deployment that isn't just a personal toy, is left as an exercise to the reader. All the readers will screw up in different ways, after which the myopic cryptographer will explain that his protocol was of course perfect and there is nothing wrong with it, you are just holding it wrong...


That's one way to look at it.

Another would be "Do one thing and do it well", which is Unix philosophy.

https://en.m.wikipedia.org/wiki/Unix_philosophy


Another might be that existing VPN solutions will gut the bottom of their stack and use WireGuard for that, thereby reducing complexity and increasing performance.


I think there's already some that do that iirc.


I think this was the point of WG. OpenVPN/IPSec/etc are beasts since they include all of the above. OpenVPN is TLS based! That makes adding it into the kernel difficult, and you need kernel access or exotic stuff like VPP/DPDK for speed.

I do agree with your overall sentiment, though. The next step is to make a higher level protocol on top.


Not all parts of a system are going to age equally well.

You are going to want to try to avoid that problem, while making your tool still useful. Wireguard hits the sweet spot particularly well.


If people want Wireguard to be a complete multiplatform audited free enterprise VPN solution they need to donate more. A lot more.


Nobody really needs enterprise VPN crap, that stuff does far too much weird stuff that is totally unrelated to what a VPN should do, like patch management, malware scanning, firewalling, and other useless box-ticking.

What we do need is a proper replacement for roughly the things OpenVPN plus PAM can do. A VPN plus some user and key management.


It is a completely reasonable requirement of a large organization to ensure an endpoint meets certain criteria before being allowed access to an internal network.

I'm not arguing that Wireguard has an obligation to tackle that problem themselves. I'm arguing against your assertion that VPN access should be completely decoupled from ensuring endpoint security.


Let's be honest, any malicious endpoint can easily bypass those 'endpoint security' checks.

All they're good for is checking that unpatched (but not yet exploited/evil) endpoints can't connect to the network, which is of marginal benefit compared to allowing them to connect but requiring they patch before accessing risky resources (like the internet or email).


Not all compromised machines are the same. Is the user of the machine an insider threat or not? Does the login user have admin rights to the machine or not? And what you said in the last sentence is exactly how some VPN solutions can be configured: Limited access to network resources for updates and management, and only when fully matching version/anti-malware/etc. requirements can you connect to all resources.

Anyway. Like I said, I think Wireguard is amazing - I used PiVPN (which can be installed on any .deb distro) to set up a simple gateway for my laptop and phone to be always connected for DNS and local-network access. I'm very grateful for its architecture and simplicity in that regard.


I think there's an external issue here though. There might be regulation or some sort of liability concern regarding known unpatched clients connecting to the network. In that context simple checks make a lot of sense.


Even if they donate more that won't happen. It's antithetical to the premise of the whole project. WireGuard is fine where it is, it's amazing.


Related: Does anyone know of a PKI-on-WireGuard implementation? Specifically I'm looking for a system that lets clients join the WireGuard network by presenting a CA-signed certificate.


I mean, re-inventing IPsec is probably inevitable anyway.


Sadly I haven't found anything. All there is are those Curve25519 key pairs, last I looked, which are a real pain to manage at any scale, can't be set up with a TTL, etc. That's probably the main value proposition of products like Tailscale, tbh.


And will, themselves, lead to a new kind of incompatibility mess...

"Oh, I'm afraid the EasySecureAuth wireguard server doesn't support the AndroidWireClient client when using a Yubikey version 1. Either use version 2 or switch to iOS."


Tailscale solves all these problems, including SSO.

Can you tell I’m a very happy customer?


Do you have any Windows systems in your network? I am looking to restrict RDP sessions to a closed Wireguard network.


We did for a couple new people and then replaced them with Macs (for irrelevant reasons).

But while they were on Windows, Tailscale worked perfectly on their machines too.


Adding features like this that should be implemented on a different layer is the perfect way to turn it to the OpenVPN mess


Isn’t that just a roundabout way of asking for PSK support (which it already has)?


I think you'll have to use other options for that. I don't see them ever implementing 2FA, as that is outside the goals of the project. They want to keep it as slim, performant, and on-target as possible.


WireGuard is not MFA, but the user's private key could probably be stored in a smart-card instead of on disk. Software changes would need to be made so the key is read from the card instead of specified in the wgx.conf file.

To achieve true MFA, it would need either a password, TOTP, or SMS in addition to the stored keys.


Nope, can't. Storing a Wireguard key on a Smartcard isn't possible, because current cards do not support the key format and algorithms Wireguard uses. Only RSA and ECDSA on NIST curves are available on Smartcards. And "reading" the key from the card would make the card useless, the important feature of a smartcard is that it doesn't ever make the key available for reading. Instead, the key is used for signing or decrypting _on the card_ only. If you can really read a key off of a smartcard, sue the manufacturer and never use the key or card again.


Wireguard uses perfect forward secrecy, so wouldn't signing the ephemeral session once with the hardware key do the job? Or do they need some more advanced operations that the devices don't expose?


No, you just need a signature. But an Ed25519 signature, which current commercially available smartcards just cannot do.

You could be hacking something together with a Nitrokey or maybe Yubikey, those can do Ed25519 signatures. But generally, you would need to fiddle a lot with the implementation, because currently signatures are done in the kernel module, and you'd need to get that into the USB-device for signing and back again. Not impossible, but not implemented yet.

Another way would (theoretically) be to implement different signature algorithms for the WireGuard key exchange, ideally some that common smartcards do support. But WireGuard's author left out cryptographic agility on purpose, so any work in that direction will be incompatible with the original implementation, or at least a very ugly kludge.


WireGuard does not use Ed25519. Indeed, it does not use any public-key signature algorithms at all. The long-lived static key (the peer's public key, their identity) is a Curve25519 ECDH key.


Of course, there are smartcards that could do this, you're just not allowed to have them. Plenty of smartcards nowadays are just flash and an ARM core which theoretically could be programmed arbitrarily. These tend to be used for credit cards, etc. Of course they might have acceleration units for specific algorithms like NIST ECDSA but I'd be surprised if Ed25519 couldn't be accommodated.

Unfortunately they're all NDAware, so they may as well not exist. ...But of course I've written about my extensive issues with the smartcard industry before.


You'll have to write some glue code, but if all you need is standard Ed25519 signatures, current-gen Yubikeys can do this. Somebody's implemented a python library that does that here https://github.com/tschudin/sc25519


> Storing a Wireguard key on a Smartcard isn't possible, because current cards do not support the key format and algorithms Wireguard uses. Only RSA and ECDSA on NIST curves are available on Smartcards.

There are programmable smartcards on which you can implement your own algorithm. ZeitControl sells cards you can program in a BASIC dialect: https://www.zeitcontrol.de/de/produkte/basiccard/basiccard-p...


Technically if the card could sign fast enough, you could sign packets on the card.


WireGuard seems to use symmetric key crypto for packet encdec. The card would need to sign only the handshake, which occurs "every few minutes"[1] and "is done based on time, and not based on the contents of prior packets"[1].

1: https://www.wireguard.com/protocol/
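The time-based re-handshake rule quoted above is simple to sketch. Assumption flagged: the 120-second figure below is the REKEY_AFTER_TIME constant from the linked WireGuard protocol page; the function name is hypothetical.

```python
# Sketch of WireGuard's time-based rekey rule: a session initiates a new
# handshake after a fixed interval, independent of prior packet contents.

REKEY_AFTER_TIME = 120  # seconds, per the WireGuard protocol page

def needs_rekey(session_started, now):
    """True once the current session keys are old enough to be replaced."""
    return now - session_started >= REKEY_AFTER_TIME

assert not needs_rekey(1000.0, 1060.0)  # 60 s in: keep current keys
assert needs_rekey(1000.0, 1120.0)      # 120 s in: initiate new handshake
```

This is why a smartcard would only be hit once every few minutes: everything between handshakes is symmetric crypto with the derived session keys.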


You only need to sign the packets in the key exchange on the card. The normal payload packets are protected by symmetric algorithms based on the ephemeral symmetric key generated in the key exchange, no need (and no use) to involve the smartcard there.


Pritunl has wireguard support and works well for this


What is WireGuard, is it a new protocol? Or a new algorithm for implementing an existing thing? (Or something else)


I think you could reasonably look at WireGuard as a repudiation of previous VPN protocols, almost from root to branch.

For instance, WireGuard reconsiders what the role of a VPN "protocol" actually is, and in WireGuard the protocol itself delivers a point-to-point secure tunnel and nothing else, so that the system is composable with multiple different upper-level designs (for instance, how you mesh up with multiple endpoints, or how you authenticate).

Another reasonable way to look at WireGuard is that it's the Signal Protocol-era VPN protocol (WireGuard is derived from Trevor Perrin's Noise protocol framework).

Notably: WireGuard doesn't attempt to negotiate cryptographic parameters. Instead, they've selected a good set of base primitives (Curve25519, Blake2, ChaPoly) and that's that; if those primitives ever change, they'll version the whole protocol.

If you haven't played with it, WireGuard is approximately as hard to set up as an SSH connection. It is really a breath of fresh air.


Wireguard isn't so different from previous protocols establishing encrypted tunnels. Functionally it's IPsec tunnel mode with all the complexities of IPsec removed, with a bit of multipoint goodness (à la DMVPN) sprinkled in.

The reason why it's hyped is because it's a non-encumbered, gratis, libre, fast replacement for OpenVPN.

Yes, it doesn't handle algorithm negotiation. So if there's something wrong with the algorithms it's chosen, then we'll need a Wireguard 2. That's a design choice that trades off one thing (protocol independence and resilience) for another (simplicity and ease of implementation).


No, I think this is essentially wrong. It's hyped because it:

(a) Doesn't have selectable or negotiable algorithms and constructions.

(b) Exclusively uses modern constructions everybody trusts.

(c) Has a minuscule implementation footprint, designed in part to avoid dynamic allocation altogether, that is straightforward to audit.

(d) As a result of all of this, it is very fast.

(e) As a result of all of this, software security and cryptography engineers generally trust it more than any alternative protocol.

(f) As a result of all of this, it is absurdly simple to configure and get running.

Yes, IPSEC does a bunch of stuff WireGuard doesn't do. Yes, that's the tradeoff WireGuard made. Making that tradeoff is (a) the point of WireGuard and (b) the reason people like it so much.


I am not challenging that Wireguard is a great technology, but I disagree that it is faster than IPsec: it is fast compared to slow IPsec implementations such as the one you have in Linux.

However, AES is hw-accelerated on most systems these days and as a result, using IPsec with AES-256-GCM is usually much faster than Wireguard [1]. Note that if Wireguard were using AES instead of ChaCha20-Poly1305 I am sure it would be on par, plus I am confident we'll see hw acceleration for ChaCha20-Poly1305 in the future too.

So I'd say right now if you need absolute max performance, a good IPsec implementation is much faster than Wireguard.

[1] for example, just running 'openssl speed -evp chacha20-poly1305' vs 'openssl speed -evp aes-256-gcm' on my laptop gives a ~2x speed advantage to AES.


This is a big reason why ZeroTier is moving to AES for its symmetric crypto. It's not only a lot faster but much more power efficient. The blazing speeds with ARX ciphers are only achievable using vector or other parallel constructions that light up the whole ALU, using many times more power than AES hardware.

Using AES with GMAC I can clock from 2-4GiB/sec/core on typical laptops and over 1GiB/sec on phones. The Apple M1 does almost 5GiB/sec/core. Gen10 and newer Intel CPUs with VAES have produced benchmarks in excess of 10GiB/sec/core, which means a single core could theoretically saturate 100gig fiber if it were just doing crypto.

Of course nothing stops CPU makers from adding ARX accelerator instructions, but I have yet to see any proposed. If constructions like ChaCha and BLAKE2/BLAKE3 get popular enough I could see this happening.
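A quick sanity check on that last claim, assuming "GiB" here means 2^30 bytes:

```python
# Does ~10 GiB/s/core of crypto throughput really approach a 100 Gbit/s link?
# (GiB = 2**30 bytes, 8 bits per byte, "Gbit" on the wire = 1e9 bits.)
gib_per_s = 10
gbit_per_s = gib_per_s * 2**30 * 8 / 1e9
print(f"{gbit_per_s:.1f} Gbit/s")  # -> 85.9 Gbit/s
```

So 10 GiB/s is about 86 Gbit/s of payload, close to line rate before framing and packet-header overhead.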


Post numbers on a good IPsec implementation? We have numbers for WireGuard. Of course, it's easy to do that because there's just a couple primary implementations, not 100 terrible ones like in IPsec. So, pick the best one.

We don't have to derive the answer to this question from first principles. It's an empirical question.


Here is an example for VPP, ~8Gbps/core of IPsec forwarding with AES-256-GCM and IMIX traffic on Skylake @2.3GHz: https://docs.fd.io/csit/master/report/vpp_performance_tests/...

Note that thanks to AES-NI vectorization (an example of the hw acceleration I was referring to) it reaches more than 16Gbps/core on the same test on Icelake.

Those numbers can grow by up to 50% for big packets (1500 bytes and larger).

With a high performance stack, IPsec (and Wireguard for that matter) workloads are limited by crypto performance, not packet processing performance, and the perf difference between IPsec with AES-256-GCM and Wireguard is basically the perf difference of AES-256-GCM vs Chacha20-Poly1305 of your platform.


This is DPDK-style user-mode direct/raw networking, isn't it?


Yes. But VPP also supports Wireguard, and when doing apples-to-apples comparisons, the performance difference between Wireguard and IPsec AES-256-GCM is close to 2x. See https://fosdem.org/2021/schedule/event/sdn_calicovpp/attachm... slides 23 and 26: 5Gbps of Wireguard vs 9.5Gbps of IPsec.

And the main reason is the cipher: one is hw-assisted (AES-NI on x86), the other is not.

Again, I do think Wireguard is nice because it is a clean-sheet design with good choices and it "just works". However, when I hear "Wireguard is faster than IPsec" it is not true in my experience, and the gap is easily explained by the cipher choice.


>I am confident we'll see hw acceleration for Chacha20-Poly1305 in the future too.

The speed gains wouldn't be as significant. AES uses S-Box computations that do well when hardware accelerated, whereas ChaCha/Salsa20 are designed to use more typical CPU instructions for bitwise operations.


Speed gains maybe not, however on current x86 platform there is a 2x perf difference between AES-256-GCM and Chacha20-Poly1305, so even if we get "only" 2x I'd be delighted.


> (d) As a result of all of this, it is very fast.

No, it's very fast because the ChaCha/Salsa20 stream cipher uses common CPU instructions and runs fast in pure software, whereas AES requires things like S-Box computations, which are slow in software but fast when implemented as accelerated instructions in hardware. There are IPSEC software stacks using AES acceleration that run just as fast, not to mention IPSEC hardware offload.

OpenVPN is slow due to architectural constraints, but IPSEC doesn't suffer from that at all. IPSEC tunnels with a PSK are also absurdly easy to configure, either on Linux or on a router; what IPSEC doesn't offer is native NAT traversal.


You are the first person who has ever told me that IPSEC was absurdly easy to configure. Share a configuration that illustrates the point?


What is commonly called IPsec is actually two separate protocols: IPsec itself and ISAKMP/IKE for key management.

IPSec is somewhat similar to how wireguard works, actually: it relies on IPs and static encryption keys. Not too hard to configure; see for example the manual keying documentation from Slackware: https://book.huihoo.com/slackware-linux-basics/html/ipsec.ht...

ISAKMP/IKE is then used on top to manage the IPsec keys and parameters. This is where a lot of the complexity comes in, tons of parameters, modes, etc. etc.

So if all you want is to secure communication between two IPs and you can securely exchange key material out of band, manually keyed IPsec is not very complicated.
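For the curious, here's roughly what manual keying looks like on Linux with `ip xfrm`. A sketch only: the addresses, subnets, SPIs, and key are made up, and note that manual keying gives you no rekeying and no forward secrecy.

```shell
# Manually keyed ESP tunnel between 192.0.2.1 and 198.51.100.1 (example
# addresses). The key is a placeholder -- generate a real 36-byte one with
# e.g. `echo 0x$(head -c 36 /dev/urandom | xxd -p -c 72)`.
# Run the mirror image of this on the other gateway.
KEY=0x0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef01234567

# One SA per direction
ip xfrm state add src 192.0.2.1 dst 198.51.100.1 proto esp spi 0x1000 \
    mode tunnel aead 'rfc4106(gcm(aes))' $KEY 128
ip xfrm state add src 198.51.100.1 dst 192.0.2.1 proto esp spi 0x1001 \
    mode tunnel aead 'rfc4106(gcm(aes))' $KEY 128

# Policies selecting which traffic enters the tunnel
ip xfrm policy add src 10.0.0.0/24 dst 10.0.1.0/24 dir out \
    tmpl src 192.0.2.1 dst 198.51.100.1 proto esp mode tunnel
ip xfrm policy add src 10.0.1.0/24 dst 10.0.0.0/24 dir in \
    tmpl src 198.51.100.1 dst 192.0.2.1 proto esp mode tunnel
```

It works, but compare the number of concepts involved (SAs, SPIs, policies, directions, selectors) with a wg0.conf.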


IPSEC without IKE is not "similar to how wireguard works actually." Wireguard does actual key exchange and has security properties such as Forward Secrecy that you don't get using a hardcoded IPSEC symmetric key.

Also, even the IPSEC config without IKE is way more complicated than a Wireguard config, with seriously sharp edges. Just look at that config you linked to. No one should ever need to know what AH and ESP are, but if you don't, you can very easily configure IPSEC in an insecure manner.


Is that your absurdly simple configuration? Can I assume the contents of that web page are where you rest your case on WireGuard's `wg.conf` vs. IPsec?


> Doesn't have selectable or negotiable algorithms and constructions.

NSA likes that.

https://blog.cryptographyengineering.com/2015/10/22/a-riddle...


Does it use key management like SSH or more like certificates with TLS?


It's like SSH, with no Trust-on-First-Use option. Unlike more complicated pre-existing protocols, how you handle key distribution is explicitly out of scope for the protocol.


No. It dumps that on you to figure out for yourself.


Which is great. It decouples responsibilities. WG gives you a secure, well-performing tunnel. Key management is outside its scope. There are many solutions to that problem, no point in forcing a "WG sanctioned" one on people.
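Concretely, the integration surface an external key-distribution system needs is tiny. Something like this, where the peer key, IP, and endpoint are invented for illustration:

```shell
# A control plane that has learned a new peer's public key and address
# only needs to run something like this on each existing member
# (all values below are made up for illustration):
wg set wg0 \
    peer 'xTIBA5rboUvnH4htodjb6e697QjLERt1NAB4mZqp8Dg=' \
    allowed-ips 10.0.0.7/32 \
    endpoint 203.0.113.9:51820

wg show wg0 peers   # confirm the peer was added
```

Everything above that line (who is allowed in, how keys are distributed, revocation) is the external system's problem, by design.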


And yet the crowd that pushes WireGuard is the same crowd that pushes the idea that you can't give people a toolkit of crypto stuff, that you have to give them a turnkey end-to-end system; indeed, the fact that there's e.g. no algorithm selection in WireGuard is touted as a selling point. But how is making the user do their own key exchange/management any different? If you break the key management, you break the cryptosystem.


The end-user doesn’t need to do their own key exchange/management.

Case in point: https://tailscale.com/

Why should WireGuard bake all that stuff into the core protocol and at the same time make it overly complicated? Donenfeld knows zero about your organization and he doesn’t pretend to either. Are you an enthusiast home user, a startup of six people in a garage, or IBM with over 300,000 employees? All of those can use WireGuard but will have wildly different needs when it comes to authentication and deployment. There’s no sane one-size-fits-all solution for all kinds of organizations and use-cases.


> Why should WireGuard bake all that stuff into the core protocol

Because all that stuff is security-critical. All that stuff needs to be incorporated into any audit of the system. Indeed it's probably where the vulnerabilities are going to be.

> Are you an entusiast home user, a startup of six persons in a garage or are you IBM with over 300000 employees? All of those can use WireGuard but will have wildly different needs when it comes to authentication and deployment. There’s no sane one-size-fits-all solution for all kinds of organizations and use-cases.

There needs to be a system that can scale there, especially if the intent is to replace OpenVPN which slotted neatly into standard PKI. If this system pushes more users onto a handful of centralised providers, which seems like what's implicitly being encouraged, then that's not going to end up being good for security.


This argument doesn't make any sense. The more you couple to the underlying protocol, the harder it is to audit. Having a well-defined and predictable boundary between concerns in the system makes each component, and the system as a whole, easier to assess.


> Having a well-defined and predictable boundary between concerns in the system makes each component, and the system as a whole, easier to assess.

Then why are modular cryptosystems (where e.g. the symmetric cipher algorithm is pluggable) a bad idea?


Having a predictable boundary and having modular crypto systems are two things that can sound similar, but are really pretty opposite.

To take the popular example: OpenSSL has a plethora of extensions. If there’s a thing you want to do, odds are the spec has been extended to cover that use case.

One result of that is that many code paths exist that aren’t part of everyday usage for most users, and so those code paths get less love (and more bugs): this makes things like Heartbleed radically more likely.

Another result is that parties using the system to communicate need to agree which modules/extensions they’re going to use. This kind of negotiation has been a punching bag for vuln after vuln, because it turns out some options are going to end up having weaknesses, and thus attackers can make their lives easier if they focus on tricking parties into downgrading to weaker modules.

By contrast, having Wireguard exclusively handle point-to-point tunnel behavior, without any negotiation of modules or extensions or similar, both simplifies the code paths and avoids runtime negotiation. Wireguard provides a boundary beyond that: it does not handle things like IPAM or a central authentication story, leaving those for another system to own. That system is then free to likewise provide a simple interface for whatever it’s doing, gaining all the same benefits.


> Wireguard provides a boundary beyond that: it does not handle things like IPAM or a central authentication story, leaving those for another system to own. That system is then free to likewise provide a simple interface for whatever it’s doing, and gleaning all the same benefits.

Right, but that system actually needs to be implemented, and the two need to be integrated together, and that part is where I suspect the vulnerabilities are likely to be, because the interface between two systems developed separately is always the most likely point for bugs and misunderstandings to creep in.

People talk about WireGuard having fewer vulnerabilities than OpenVPN and that may be true as far as it goes, but it's missing the fact that you can't simply replace OpenVPN with WireGuard - you would have to replace it with WireGuard plus some certificate management system plus some integration between them. And if everyone builds the last part themselves, it will almost certainly have security vulnerabilities.


It's more like client certs with TLS that are signed by the server's key, I believe.


Nope. You have a private and public key per connection.


Generally, you have a keypair per host, not per connection.


That's not honest. Wireguard on Linux is really hard to install. I managed to install the "server" side after a lot of huffing and puffing, but I've given up on the "client" side. On Windows, the "client" side really is as easy as you say, though. It's LOVELY


You'll have to say more about the challenges you found with it, because we do a _lot_ of WireGuard here, in a bunch of different ways, and given a valid `wg0.conf` file (which is just the keys and the addresses for the tunnel), I've never had to do much more than `wg-quick up wg0.conf` to make it work, on our servers, on my NUC, on Amazon Linux EC2 instances, and on VMs.


> and given a valid `wg0.conf`

And there it is. Idk about OP but this is what tripped me up. There are many guides online, but the native documentation assumes you have a pretty deep understanding of the network stack, authentication, and VPNs.

The online guides all kinda make assumptions about your network setup, and if it’s different in any way your attempt will fail and you won’t know why, as the error codes are kinda generic and somewhat meaningless to someone without in-depth networking experience.


But the `wg0.conf` thingy has to be set up on Windows too, right? So it's not clear why it would be easier on Windows than on Linux.

As far as I'm concerned, the most difficult thing I've encountered with Wireguard wasn't related to WG itself but to the fact that I'd set it up on three different systems, each with its own configuration style for bringing up the network.

Ubuntu server - netplan

Arch server - systemd-networkd (directly)

Arch "desktop" - NetworkManager


The client on Windows might be easier to use, again, I’m not the OP. I had trouble with it in general both on Windows and Linux.


The networking part especially may require some work.

There can be many environments and situations, and sometimes a component will block packets from being forwarded from one network or user to another.

You need to know a bit about networking to debug it.


Wireguard is a UDP-based VPN protocol that focuses on simplicity and security. Its Linux implementation is a mere 4000 LOC and the protocol has been formally verified. OpenVPN is over 100,000 lines of code PLUS OpenSSL.

https://www.wireguard.com/talks/lpc2018-wireguard-slides.pdf


It's point-to-point.


wireguard is a VPN technology that is now integrated into the Linux kernel, and is available on all major platforms.

It distinguishes itself from other VPNs by not having knobs to twiddle. Should a security issue arise, it will be necessary to replace it with a wireguard2 or such. This also means that it's very hard to get it wrong in config; either it works or it doesn't, and if it doesn't, you haven't got it working yet.

It's very fast and very nice to work with.


Wireguard is pretty much half of what you'd expect from a VPN. It does the low-level part (encryption, packetization, session setup, NAT traversal, etc. -- the “actual VPN”) brilliantly, but everything around key distribution is left to external systems. (Tailscale is a popular choice, but by no means the only one.) E.g., you can't connect to vpn.example.com with user foo and password bar and that's it; there needs to be a Curve25519 public/private keypair set up on both sides, an IP address range (essentially a routing table), and so on.

Of course, if you want to connect two static networks, wg-quick is all you need. But for the typical “remote worker VPN”, it's pretty much a (great) building block.


> but everything around key distribution is left to external systems.

That's what I'd like, since authentication is usually a pain to set up and with Wireguard, there's none to be done. This also means it's totally stateless, which is great for mobile devices where a connection might be broken and created again frequently.


I’ve been begrudgingly using Tailscale because it’s so damn simple, but hate that I have to authenticate through Google. I recently noticed they’ve added a “sign in with GitHub option,” but I don’t see any easy way to migrate my account (and nodes). Many of the clients are PiHoles I’ve sent off to my family as gifts, so physical access is a PITA. The only way I’ve found to reliably clear the Tailscale settings is to `apt purge Tailscale`, which would cause me to lose Tailscale SSH access. Looking at the hassle of the remote reinstall- I’m thinking to SSH in with Tailscale, then establish a reverse SSH tunnel to maintain remote access - I think I may finally give Innernet [0] a go.

[0] https://github.com/tonarino/innernet


You can also use Microsoft now (both "personal" accounts like those used for Xbox/Outlook and "Enterprise" accounts like Microsoft365 and other AAD-based accounts)

I'm sure if you asked them about switching auth methods they would help with that.


That’s a good point. I’ve sent them an email and I’ll see what they say.

EDIT: since I’m still in the edit window, here’s what Tailscale came back with (great response time!).

>We can fairly easily switch between auth providers where the usernames are an email address, like Microsoft or Google or Okta.

>For GitHub the username is different, GitHub uses your Profile name. Any email addresses associated with your GitHub profile are not available.

>Unfortunately there isn't a straightforward way to migrate an existing Tailnet with its devices from Google to GitHub. We generally recommend making a new Tailnet with GitHub and re-authenticate devices using GitHub one at a time. For remote devices, this is more challenging.

>If you want to try it, a suggestion:

>1. You can create a Reusable authkey at https://login.tailscale.com/admin/settings/authkeys for the new GitHub Tailnet.

>2. Over ssh to a node currently on the Google Tailnet, you can: `tailscale up --force-reauth --authkey=tskey-0123456789abcdef`

>3. You'll lose the SSH session. The device will make a new Node key and be issued a new IP address on the new GitHub Tailnet.

>4. You can look up its new IP address on https://login.tailscale.com/admin/machines of the GitHub Tailnet, and should be able to ssh to the new address.


That sounds heavenly. I had never really thought about that immutability-like concept; it makes a lot of sense for security-oriented software.


Except FreeBSD, which is used for pfSense, a popular firewall. They're working on it though. There was a bit of drama about it a few months ago when a shoddy implementation was merged.


FreeBSD can use the userspace implementation just fine; what almost made it out but was caught before it could actually be released was an in-kernel module for it.


>> What is WireGuard?

According to Linus: "...compared to the horrors that are OpenVPN and IPSec, it’s a work of art."


It's a VPN protocol whose USP is being dramatically simpler than OpenVPN, which should mean that it is both easier to use and more secure (and consensus seems to be that it generally delivers on both of those fronts).


> It's a VPN protocol whose USP is being dramatically simpler than OpenSSL

What? WireGuard is a VPN protocol (and implementation), while OpenSSL is an implementation of TLS. They're not competing with each other, and you can't compare them.


Yep, got the wrong Open* software. Not sure how I managed that.


GP probably meant OpenVPN


i think it was meant to be openvpn instead on openssl


dramatically simpler than IPsec.

IPSec is an Internet-layer protocol, while TLS/SSL (what OpenSSL implements) sits at the application layer


the main reason IPsec is more complex is that it has more features, like multiple CHILD_SAs under the same tunnel, each with different transforms and traffic selectors, and also many more authentication choices


I honestly think "ipsec is too complex" is overdone. Yes, you need to know your networking basics and understand routing, but that's probably a good thing when setting up a VPN. Then you pick your crypto primitives from e.g. https://www.keylength.com/en/compare/ and you are basically done.

But no, it's the typical groupthink of 'old is bad', so instead of reading two pages of documentation and getting native support across all major platforms, people would rather re-invent the wheel.
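For example, a site-to-site tunnel in strongSwan's classic ipsec.conf format is only a few lines once you've picked your proposals. A sketch: the addresses and subnets are placeholders, and the matching PSK would go in ipsec.secrets.

```shell
# Append a site-to-site IKEv2 connection to strongSwan's legacy
# ipsec.conf (modern strongSwan prefers swanctl.conf, same idea).
# All addresses/subnets below are placeholders.
cat >> /etc/ipsec.conf <<'EOF'
conn site-to-site
    left=192.0.2.1
    leftsubnet=10.0.0.0/24
    right=198.51.100.1
    rightsubnet=10.0.1.0/24
    authby=secret
    ike=aes256gcm16-prfsha384-ecp384!
    esp=aes256gcm16-ecp384!
    auto=start
EOF

ipsec reload   # pick up the new connection
```

The trailing `!` pins the proposals so nothing weaker can be negotiated, which addresses part of the usual "too many knobs" complaint.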


>IPSec is Internet Layer

Technically not, since IPSec can also be tunneled over UDP which then turns it into an application layer protocol.


I don't think it works like that. Vxlan can tunnel ethernet frames over UDP, but that doesn't make Ethernet an application layer protocol.


These semantic issues are why "layers" are a terrible way of classifying network protocols


The wikipedia article for it is actually quite informative and not hard to read for the layperson. https://lmwtfy.joe.gl/?q=wireguard

Also Ars had a great article on it as well if you want a readable but more in depth version https://arstechnica.com/gadgets/2018/08/wireguard-vpn-review...


Any thoughts on whether Windows will embed this natively, similar to how Linux pulled WireGuard into the kernel?



Licensing issues aside, do we really want to rely on Microsoft to keep it up to date? I can imagine it becoming quickly outdated, particularly in enterprise SKUs.

I think it's best left to the Wireguard team and not Redmond.


There is no "outdated", wireguard has no extensibility on purpose. You might just have to wait for wireguard2. And security patches will be delivered in the usual Microsoft fashion, 8 tuesdays after the exploit started circulating.


In Jason's post, he says:

>While performance is quite good right now [...] not a lot of effort has yet been spent on optimizing it, and there's still a lot more performance to eke out of it, I suspect, especially as we learn more about NT's scheduler and threading model particulars. [emphasis added]

Are you suggesting that these performance improvements will be contained in 'wireguard2'? Surely there will be improvements to the codebase, even if they don't involve fixing defects that undermine fundamental security assumptions.


No, I think not. I guess that is an area where one would miss out without updates, but on the other hand, performance is already "good enough" for most endpoints. Of course, for operating a VPN concentrator you always want more performance, but then again, you won't do that on windows I guess.


Windows can load stuff like this dynamically and doesn't require everything to be compiled into the kernel.


fancy! i'll ask my friends on windows to test this


I've had good experiences using Tunsafe compared to the official client. Get full gigabit speeds from a decent VPN provider.


Still patiently waiting for a WireGuard implementation to appear on Asuswrt-Merlin.net


While the driver can be licensed under GPLv2 (all kernel drivers need to be signed by Microsoft*, and VirtIO is a precedent¤ that you can do it), I'm not sure the header should be licensed under GPLv2, mainly because it would stifle Wireguard adoption.

* In ordinary conditions. Test-sign mode does exist.

¤ ... for example, these Red Hat versions: https://www.catalog.update.microsoft.com/Search.aspx?q=Red%2...


The header is dual-licensed under GPLv2 and MIT.


You can get them here: https://fedorapeople.org/groups/virt/virtio-win/direct-downl... packaged in a nice iso, ready to use for iso store of your hypervisor.

(It might also be slightly newer; v204 is 100.85.104.20400.)


VirtIO changed license from GPL to BSD so that it could be signed by Microsoft. See here: https://github.com/virtio-win/kvm-guest-drivers-windows/comm...



