SHA-384 has many of the same properties and is less ambiguous if you care about interoperability. Most libraries don't expose SHA-512/256 directly, so you have to truncate SHA-512 output to 256 bits yourself (not difficult, but still work).
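For concreteness, a minimal sketch of the do-it-yourself truncation (assuming Python's hashlib; note the caveat in the comments that plain truncation is not bit-for-bit identical to SHA-512/256):

    import hashlib

    msg = b"attack at dawn"

    # Hand-rolled truncation: hash with SHA-512, keep the first 32 bytes.
    # This is NOT the same output as SHA-512/256 (which uses different initial
    # values), but it shares the property that matters here: the attacker never
    # sees the full internal state, so length extension fails.
    truncated = hashlib.sha512(msg).digest()[:32]

    # If your OpenSSL build exposes it, the real SHA-512/256 may be available:
    # real = hashlib.new("sha512_256", msg).digest()

    # SHA-384 needs no truncation at all (48-byte output):
    sha384 = hashlib.sha384(msg).digest()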
I have to say it feels a bit weird to deduct points (so to speak) from a highly regarded cryptographic hash function because it doesn't outright prevent one particular, broken MAC generation scheme, but I guess the argument has some merit.
While I think it's harmless to say that SHA-512/256 is stronger than SHA-256 (as they otherwise provide the same theoretical level of security), I still think it's wrong to claim that SHA-512/256 is also stronger than SHA-512, which has a vastly greater theoretical security margin.
Susceptibility to length extension would also have disqualified SHA2-512 from the SHA-3 competition, where resistance to it was a requirement, so it seems like the cryptographic community has come to a conclusion about this.
The "security margin" of a full SHA2-512 digest, over its truncated SHA2-512/256 alternative, is not meaningful in practice.
If you want to use full-width SHA2-512, go ahead. SHA2-512/256 is safer.
Devil's advocate: 10 years from now if SHA-3 is dominant and HMAC has faded into obscurity, how hard will it be to get programmers to understand the difference between hash function and MAC? Keeping in mind that they barely understand today.
> Password handling (Was: scrypt or PBKDF2): In order of preference, use scrypt, bcrypt, and then if nothing else is available PBKDF2.
What's the reason to prefer scrypt over bcrypt? And, what's the reason to prefer both over PBKDF2? (Asking because I see quite a few bits of software that use PBKDF2.)
> Asymmetric signatures (Was: Use RSASSA-PSS with SHA256 then MGF1+SHA256 yabble babble): Use Nacl, Ed25519, or RFC6979.
Could you make a recommendation for or against using GPG, since that's by far the most common approach for asymmetric signatures? (Obviously such a recommendation would need to point at specific key/algorithm choices to use or avoid.)
> Client-server application security (Was: ship RSA keys and do custom RSA protocol) Use TLS.
scrypt is asymptotically much more expensive to crack.
> And, what's the reason to prefer both over PBKDF2?
scrypt is asymptotically much more expensive to crack.
bcrypt is asymptotically marginally more expensive to crack than PBKDF2, but not enough to matter; I'm guessing tptacek's point here is that bcrypt has more library support available (despite PBKDF2 being the de jure standard). I wouldn't say there's a strong argument in either direction.
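For anyone who just wants to see what using scrypt looks like, a minimal sketch with Python's hashlib.scrypt (the cost parameters are illustrative, not a tuned recommendation):

    import hashlib, hmac, os

    password = b"correct horse battery staple"
    salt = os.urandom(16)

    # Commonly cited interactive-login parameters (tune for your own hardware).
    # Memory use is roughly 128 * r * N bytes, so N=2**14, r=8 costs about
    # 16 MiB per guess -- that per-guess memory cost is what GPU/ASIC crackers
    # have trouble amortizing.
    key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                         maxmem=2**26, dklen=32)

    # Store salt + parameters + key; verify with a constant-time compare.
    def verify(candidate, salt, stored):
        derived = hashlib.scrypt(candidate, salt=salt, n=2**14, r=8, p=1,
                                 maxmem=2**26, dklen=32)
        return hmac.compare_digest(derived, stored)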
> Could you make a recommendation for or against using GPG, since that's by far the most common approach for asymmetric signatures?
Avoid if possible. The code was written by a colony of drunk monkeys, in an era before anyone understood the basics of modern cryptography; I'm really not sure which is worse between gnupg and OpenSSL. Of course, GPG is the standard for encrypted email, just like SSL/TLS is the standard for web sites, so you may have no choice...
> > Could you make a recommendation for or against using GPG, since that's by far the most common approach for asymmetric signatures?
> Avoid if possible. The code was written by a colony of drunk monkeys, in an era before anyone understood the basics of modern cryptography; I'm really not sure which is worse between gnupg and OpenSSL. Of course, GPG is the standard for encrypted email, just like SSL/TLS is the standard for web sites, so you may have no choice...
Do you have any recommendations for a high-level alternative to GPG, i.e. something that can easily secure data at rest? Is there something that wraps e.g. NaCl with file I/O in a nice command-line utility?
At work we occasionally use 7zip's encryption feature, but somehow I don't have much confidence in it. At least the UI is simple enough.
> Avoid if possible. The code was written by a colony of drunk monkeys, in an era before anyone understood the basics of modern cryptography; I'm really not sure which is worse between gnupg and OpenSSL. Of course, GPG is the standard for encrypted email, just like SSL/TLS is the standard for web sites, so you may have no choice...
Are there any viable FOSS implementations of the OpenPGP standard other than GPG? Detached GPG signatures seem to be the most common mechanism to validate software distribution and similar.
Go's crypto/openpgp package (http://golang.org/x/crypto/openpgp) is probably one of the closest "full" implementations, although it's a library and not a CLI binary.
> scrypt is asymptotically much more expensive to crack.
How so? The cost of password cracking generally scales completely linearly, is there an attack on bcrypt that makes the cost sublinear? I would agree that scrypt probably has a better constant factor on typical hardware.
scrypt is deliberately designed to use a large amount of memory when calculating hashes, which makes it much more difficult to parallelize on a GPU (which has relatively little fast memory available per parallel instance), hence making it much, much slower to attack.
scrypt cracking is still embarrassingly parallel even if it's hard to run on a GPU. I understand the term "asymptotically more" to refer to big O notation, where constant factors like that are ignored.
Well, it's hard to say exactly what n is here, but with scrypt you'd be scaling the computation time and the memory requirement together, so each guess keeps costing roughly a fixed fraction of an entire machine even as Moore's law marches on and the factors increase, whereas PBKDF2 just parallelizes more and more.
scrypt has nice properties that bcrypt doesn't, and gets those properties by design; it turns out that in practice right now bcrypt has some nice properties too, though they seem accidental. We're using scrypt at Starfighter, even though we have to go through a (very minor) bit of trouble to get it. They're all fine though.
If using GPG means you can delegate away all your crypto design, use GPG. What you should not do is roll your own by co-opting all of GPG's design decisions, some of which are not great.
You should use BoringSSL, LibreSSL, Go crypto/tls, or OpenSSL, in roughly that order.
> scrypt has nice properties that bcrypt doesn't, and gets those properties by design; it turns out that in practice right now bcrypt has some nice properties too, though they seem accidental.
Can you elaborate on what you mean by "nice properties"?
> If using GPG means you can delegate away all your crypto design, use GPG.
Using which key types, ciphers, etc? The most common recommendation seems to be for 4096R keys and SHA2 hashes; the latter is consistent with the recommendations posted here, but the former seems to disagree with the comment to not use RSA for asymmetric crypto.
Fair point, but if I were to use RSA (sure, better to avoid it altogether), I'd still go with a 4096-bit key, hoping it outlives attacks on the algorithm long enough for a safe migration.
Anyway, I don't get your (as in most security experts') aversion to long keys and multiple algorithms. As an engineer, I see cryptography taking a very small share of the resources but carrying a huge share of the risk in any security application. My gut always tells me to move more resources into crypto.
Optimized ASM means code that only very few people are able to review, and only with considerable time and effort.
If security is the primary concern, I would argue that optimized ASM becomes a liability.
I had to read parts of OpenSSL to figure out how some of the utilities worked. Let me say that it's wonderful that people are trying to write a more readable version and leave it at that.
If speed was a motivating factor, DNSSEC would be using fast curves instead of archaic RSA. The reality of DNSSEC is that it's built around the performance concerns of 1997.
One reason nobody has mentioned yet is that bcrypt truncates long passwords: implementations only use roughly the first 72 bytes (the underlying Blowfish key schedule is specified for at most 56). Not many people have passwords that long, but it's still relevant.
As I understand it, this depends on what the attacker has. If they have an FPGA or GPU, you need many rounds of PBKDF2. If you use scrypt, which cperciva designed to negate some of the advantages of FPGAs and GPUs, you don't need as many rounds to keep it hard to crack. Use https://hashcat.net/oclhashcat/ to benchmark.
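If you do end up on PBKDF2, a minimal sketch with Python's hashlib (the iteration count is illustrative; benchmark on your own hardware as suggested above):

    import hashlib, os

    password = b"hunter2"
    salt = os.urandom(16)

    # PBKDF2-HMAC-SHA256 with a deliberately large iteration count.
    # Unlike scrypt, the attacker's cost here is pure computation, which GPUs
    # and FPGAs parallelize very well -- hence the need for a high count.
    dk = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)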
Considering the recommendations for NaCl, what is its current status? There is NaCl proper, whose webpage links to a 2011 version. Then there is TweetNaCl, which seems more recent with a 2014 release. And finally there is libsodium, which is not from DJB. What is the recommended version to use? I'd guess TweetNaCl because it is the most recent, but idk.
On a slightly related note, I just noticed that there is also µNaCl for embedded use, which seems really cool.
The current state of it is that Nacl (pronounced: "turnips") circa 2011 is just fine, Tweetnacl is just fine, and if you have packaging concerns, you can use libsodium --- but stick to the constructions that are also in Nacl/Tweetnacl, because libsodium took things a little further than I think they should have.
Anything that libsodium does, or allows clients to do, that Nacl doesn't allow you to do.
Nacl isn't an open source project or helpmate for application programmers; it's an academic effort to design the best misuse-resistant crypto interface for programmers. I like libsodium, but it is not that.
Curve25519 != Ed25519. Curve25519 is only really useful for straightforward DH. IIRC, point addition isn't defined for the x-coordinate-only representation Curve25519 uses, meaning it's not usable for creating signatures.
> If your threat model is criminals, prefer DH-1024 to sketchy curve libraries. If your threat model is governments, prefer sketchy curve libraries to DH-1024. But come on, find a way to one of the previous recommendations.
I realize that the library is probably available via my package manager, but it'd be nice if the install page (http://nacl.cr.yp.to/install.html) linked to an archive over HTTPS and had some signatures to compare hosted elsewhere.
AES-GCM allows the caller to supply additional authenticated data (AAD) -- data that is only authenticated but not encrypted. However NaCl's authenticated encryption mode doesn't seem to provide anything like this: http://nacl.cr.yp.to/secretbox.html
So when I have AAD, what should I do when using NaCl? Add it as part of the message to crypto_secretbox(), or should I authenticate this data separately?
Unfortunately that interface is rather dangerous because of the 64-bit nonces - it is essentially only useful for encrypting multiple messages over a single connection.
The lack of justifications makes this as useful as anybody else out there claiming "use X. Don't use Y".
Eg:
> Avoid: AES-CBC, AES-CTR by itself, block ciphers with 64-bit blocks --- most especially Blowfish, which is inexplicably popular, OFB mode. Don't ever use RC4, which is comically broken.
Why not 64-bit blocks? What's wrong with them? How do they affect us?
Mind you, I'm not saying the statement is incorrect, but with no justification for it, I'm not convinced why I should avoid them.
I mean this sincerely and not as snark: if this is a question you have to ask, just use Nacl; don't design with ciphers yourself. Since there is a "right answer" to this question and a "wrong one", "convincing" doesn't seem like a good use of anyone's time.
The right way to learn about cryptography is to start by learning how to break it. If that's something you're willing to sink time into, try this thing we set up:
Hey Thomas, does anyone at Matasano still review submissions if someone wants to submit them for a particular programming language? I'm looking to establish myself as the luminary crypto nerd of the furry fandom :3
I don't know why you got downvoted. Maybe it's the furry thing. Cryptopals is still ongoing (there's a set 8 in the works, all elliptic curve attacks). As for posting the solutions: we're doing that, too, in the abstract, but we're all busy and every time we bring it up a bunch of people say "noooo don't post solutions".
We are still working on new sets, though obviously the rate of new sets is pretty low. The mailing list is basically unmonitored at this point, but everything we've got is on the site. (This is a vast improvement on the previous state, where we regularly failed to send out challenges to people who emailed us, due to overload.)
"Because that is what people who know more about this than all of us combined have come to that conclusion"? Sounds snark but I mean that's what it boils down to anyway.
You can cite meta-analysis, like in medicine. "The people who studied this concluded that using this method has a higher chance of bad side effects and lethal complications than using the suggested method".
Cryptographic constructions using block ciphers generally rely on the block cipher never having the same input twice with the same key in order to satisfy security models.
If you're feeding effectively random data into the block cipher (like if you're using CBC), then because of the birthday paradox, you get at most about 2^32 blocks (far fewer in practice at a good security level) per key if you have 64-bit blocks. This is low enough to be annoying for designers or problematic for suites that don't rekey correctly.
However, because CTR (or GCM) mode uses sequential inputs to the cipher, I think that a 64-bit block size would not be a problem there. At that point, the reason not to use 64-bit block ciphers is because they're all older, weaker, and less-supported than AES-128.
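A quick back-of-the-envelope check of that 2^32 figure, using the standard birthday-bound approximation (a sketch in Python):

    # Collision probability after q random blocks through a b-bit block cipher
    # is roughly q*(q-1) / 2**(b+1).
    def collision_prob(q, block_bits):
        return q * (q - 1) / 2 ** (block_bits + 1)

    print(collision_prob(2**32, 64))   # ~0.5    -> 64-bit blocks in trouble after ~32 GiB
    print(collision_prob(2**32, 128))  # ~2.7e-20 -> 128-bit blocks nowhere near the bound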
Since this is heading towards the top of HN, I figure it's worth responding to the specifics here:
> AES-GCM
As tptacek says, this has pitfalls on some platforms. I also dislike exposing AES cores to malicious data, which is my primary reason for preferring a hash-based MAC construction.
> Avoid: key sizes under 128 bits.
My recommendation for 256-bit symmetric keys isn't because I think AES-128 can be broken mathematically; rather, it's because AES implementations have a history of leaking some of their key bits via side channels. This is less of an issue now than it was five years ago (implementors have found and closed some side channels, and hardware AES implementations theoretically shouldn't have any) but given the history of leaking key bits I'd prefer to have a few to spare.
> Avoid: userspace random number generators
Thomas and I have argued about this at length; suffice it to say that, as someone who has seen interesting misbehaviours from kernel RNGs, I'd prefer to use them for seeding and then generate further bits from a userspace RNG. (Thomas's counterargument, which has some validity, is that he has seen interesting misbehaviours from userspace RNGs. This largely comes down to a question of whether you think the person writing your userland crypto code is more or less prone to making mistakes than the average kernel developer.)
> avoid RSA
Thomas is correct to imply that a random RSA implementation is more likely to be broken than an average elliptic curve implementation. This is true for the same reason as a random program written in python is more likely to have bugs than a random program written in Brainfuck: Inexperienced developers usually don't even try hard problems. On the other hand, for any particular developer, an RSA implementation they write is more likely to be correct than an elliptic curve implementation they write.
I also continue to be wary of mathematical breakthroughs concerning elliptic curves. Depending on the amount of new research we see in the next few years I might be comfortable recommending ECC some time between 2020 and 2025.
> use NaCl
This is not entirely a bad idea. The question of "implement yourself or use existing libraries" comes down to the availability of libraries and whether the authors of the library are more or less prone to making errors than you; "random developer vs. NaCl developers" is straightforward and doesn't have the same answer as "random developer vs. OpenSSL developers".
> you discover that you made a mistake and your protocol had virtually no security. That happened to Colin
Just to clarify this, the (very embarrassing) bug Thomas is referring to was in the at-rest crypto, not the encrypted client-server transport layer.
> Online backups (Was: Use Tarsnap): Remains Tarsnap. What can I say? This recommendation stood the test of time.
* If you're concerned about attacker data hitting the AES core, Salsa20+Poly1305 doesn't have that problem, and is generally preferable to AES-GCM in every scenario anyways. There is no scenario I can think of where you can do CTR+HMAC and can't do Salsa20+Poly1305. If you have to stick with standards-grade crypto, GCM is your best bet.
* The track record of userspace RNGs vs. kernel RNGs speaks pretty loudly. In any case, we should be clear that you're advocating for "bootstrap with /dev/urandom and then expand in-process", not, like, haveged or dakarand. We're closer on this than people think.
* I'm not even talking about people writing their own RSA. Do I need to say that? If so, recommendation #1: don't write your own RSA. I'm saying that all else equal, if you're using good libraries, still avoid RSA, for the reasons I listed.
* In fairness, the CTR problem you had is also a threat to GCM. This used to be why I recommended CBC a few years ago: because we kept finding gameover CTR bugs in client code, and not so often CBC bugs. My opinion on this has changed completely in the last year or so.
> If you're concerned about attacker data hitting the AES core, Salsa20+Poly1305 doesn't have that problem, and is generally preferable to AES-GCM in every scenario anyways.
Right. And I'm optimistic about Salsa20 and Poly1305, but I'd like to see a few more years of people attacking them before I would be willing to recommend them.
> we should be clear that you're advocating for "bootstrap with /dev/urandom and then expand in-process"
Right. Or to be even more precise: Use HMAC_DRBG with entropy_input coming from /dev/urandom.
Also: For $DEITY's sake, if you can't read /dev/urandom, exit with an error message. Don't try to fall back to reading garbage from the stack, hashing the time and pid, or any other not-even-remotely-secure tricks. Denial of service is strictly superior to falsely pretending to be secure in almost all conceivable scenarios.
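A minimal fail-closed sketch of that last point (Python; it only covers the "exit instead of falling back" part, not the HMAC_DRBG expansion):

    import sys

    def read_urandom(n):
        # Fail closed: if the kernel RNG isn't available, stop -- do not fall
        # back to timestamps, PIDs, or uninitialized memory.
        try:
            with open("/dev/urandom", "rb") as f:
                data = f.read(n)
        except OSError as e:
            sys.exit("cannot read /dev/urandom: %s" % e)
        if len(data) != n:
            sys.exit("short read from /dev/urandom")
        return data

    seed = read_urandom(32)  # e.g. entropy_input for an HMAC_DRBG, per the comment above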
One problem I have with most cryptographic libraries, like OpenSSL and the NaCl recommended here, is their extensive use of globally mutable variables. I can't understand how that seems like a good idea in 2015.
I've reviewed the code of several of these libraries (I won't say which ones I have which levels of confidence in), and, short summary: if you want to be the site that reincarnates 1990s RSA bugs or 2000s-era curve bugs, go ahead and use a TLS library nobody else uses.
PolarSSL and MatrixSSL definitely seem far off the beaten path, but many projects use GnuTLS (both as one of the more well-known non-OpenSSL codebases and because it has a GPL-compatible license). I'd be interested to know if you're concerned about it in particular.
There was a GnuTLS vulnerability, introduced in 2000, that was discovered in 2014 thanks to an audit. To summarize: a refactoring with no accompanying test coverage had the effect of inverting a check.
Bugs happen to everyone, but the process that led to this one is really concerning. (OpenSSL certainly has bad process too but as the GP mentions, more people are hammering on it.)
This blog post has more (including an LWN article about it):
Every security library has had vulnerabilities, and I'd be more concerned about libraries that don't (since it implies nobody is looking). Does GnuTLS seem significantly more prone to vulnerabilities than other implementations?
Another option is wolfSSL (https://wolfssl.com/wolfSSL/Home.html) which is GPL-compatible, but also has a commercial license option. They have an OpenSSL compatibility layer, but are not a derivative of OpenSSL.
My experience with their software has been very positive, and they have avoided the majority of the recent vulnerabilities. Plus they have great support for anyone working on open source projects.
I used to stick up for NSS, whose code I find much more intelligible, but people who are much better acquainted with NSS than I am strongly disagree with me on it, and recommend instead working on OpenSSL.
> Avoid: constructions with huge keys, cipher "cascades"
Can anyone please explain what's wrong with e.g. 4096-bit keys (instead of 1024-bit) and stacking 2-3 encryption passes with the same or different ciphers? The performance implications are obvious; what are the security implications?
This is in the context of symmetric keys, so I'm guessing "huge keys" is a reference to the fact that "448-bit crypto" is a giant red flag because it screams "we're using blowfish".
See I just write 1/5th of a recommendation and leave it open-ended so Colin or 'pbsd can make it look like I was smart to begin with. Yeah... Blowfish... that's what I meant... :)
Well, in the more general case "huge symmetric keys" is a flag for "doesn't understand crypto", but 448-bit blowfish keys are the most common place I see this happening.
What is your opinion on Threefish then? Is there something fundamentally wrong with bigger keys/blocks, or is it just that known big key/block schemes are not useful?
Mostly it's just an indicator that the person doesn't understand the security concepts. If you believe a 4096 bit AES key will do you any good, there's probably other fundamental issues that you've misunderstood.
> There is a class of crypto implementation bugs that arises from how you feed data to your MAC, so, if you're designing a new system from scratch, Google "crypto canonicalization bugs".
I get a whole bunch of links about javax.xml.crypto.dsig throwing exceptions, which wasn't terribly illuminating.
Make sure the data fed to your MAC is unambiguous. Or rather, make sure the data is encoded in such a way that two different messages can never produce the same byte stream for the MAC.
For instance, say you sort and concatenate your options without a delimiter. Then ["ab", "cd"] will have the same MAC as ["a", "bcd"], as in both cases the actual data fed to the MAC will be "abcd". This is a very bad thing.
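The usual fix is to make the encoding unambiguous, e.g. by length-prefixing every field before it hits the MAC. A sketch (Python, HMAC-SHA256; the all-zero key is a placeholder for illustration):

    import hashlib, hmac, struct

    def mac_fields(key, fields):
        # Length-prefix every field so the byte stream fed to the MAC is an
        # unambiguous encoding of the field list.
        m = hmac.new(key, digestmod=hashlib.sha256)
        for f in fields:
            m.update(struct.pack(">I", len(f)))
            m.update(f)
        return m.digest()

    key = b"\x00" * 32  # placeholder key, illustration only
    # Naive concatenation would make these collide; length prefixes keep them apart.
    assert mac_fields(key, [b"ab", b"cd"]) != mac_fields(key, [b"a", b"bcd"])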
What if I need to send up encrypted logs from a number of clients? I tried to use nacl for this, but in its opinionated style, it holds that I have to have a sender private key to authenticate my logs, and it won't decrypt unless I provide the corresponding public key on the other end.
I don't want authentication here - there's no way for me to manage these keys; I just want to prevent someone from reading my logs off the disk...
Do you want symmetric encryption? NaCl does that too; it's just a section below the asymmetric ones in its documentation.
But I'm not sure you've completely thought this out. If somebody can read your disk, and if that includes software configuration, the only way to make it impossible for people to read your logs is by using asymmetric crypto. And yes, that'll require using different keys on the writing and reading software.
"Asymmetric encryption (Was: Use RSAES-OAEP with SHA256 and MGF1+SHA256 bzzrt pop ffssssssst exponent 65537): Use Nacl.
You care about this if: you need to encrypt the same kind of message to many different people, some of them strangers, and they need to be able to accept the message asynchronously, like it was store-and-forward email, and then decrypt it offline. It's a pretty narrow use case."
For each key you use, pick 1 format of messages for it to authenticate. Document that format. Version-control that documentation along with the code that uses it. If the format changes in a non-backwards-compatible way, pick a new key (so try to use a backwards-compatible format). Ensure the documented messages make sense (try not to have a "fire this person" message without knowing who that person is) - timestamps and/or nonces can really help here.
If you can't pick just 1 format, you can, say, have the first 16 bytes of the message be a UUID, and document each UUID's format (with the same documentation rules as if you were not using a UUID).
Seriously, that and "don't mix secret and unauthenticated things" together covers 90% of all vulnerabilities.
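A sketch of what such a documented, tagged format might look like in practice (Python; the UUID value, field layout, and helper names are made up for illustration):

    import hashlib, hmac, struct, time, uuid

    # Hypothetical 16-byte format identifier for "fire-this-person v1" messages;
    # the UUID value is invented for this example.
    FIRE_V1 = uuid.UUID("0a6c5c3e-9f1d-4f7a-8d2b-1c2d3e4f5a6b").bytes

    def encode_fire_v1(employee_id, issued_at):
        # Format tag + timestamp + length-prefixed subject, exactly as documented.
        return (FIRE_V1
                + struct.pack(">Q", issued_at)
                + struct.pack(">I", len(employee_id))
                + employee_id)

    def sign(key, message):
        return hmac.new(key, message, hashlib.sha256).digest()

    msg = encode_fire_v1(b"emp-1234", int(time.time()))
    tag = sign(b"\x00" * 32, msg)  # placeholder key, illustration only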
The message here is to avoid low-level crypto - if you find yourself having to mess around with IVs or choose modes and padding, then you are far more likely to screw something up.
NaCl/libsodium provide higher-level interfaces that keep the underlying primitives away from the developer, which makes it much more difficult to implement bad crypto (at least as far as the individual constructs go... protocol design may still get you).
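For what it's worth, this is roughly what that higher-level interface looks like in practice (a sketch assuming PyNaCl, the libsodium binding):

    import nacl.secret
    import nacl.utils

    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)  # 32 random bytes
    box = nacl.secret.SecretBox(key)

    # encrypt() picks a random nonce and appends the Poly1305 tag for you --
    # no modes, IVs, or padding to get wrong.
    ciphertext = box.encrypt(b"hello")
    plaintext = box.decrypt(ciphertext)  # raises CryptoError if tampered with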
Nope. This is literally just something I was going to twerp-storm, and then I thought, "I don't want to be that guy on Twitter" (any more than I already am), and so I found the least official place I could to put it.
And what about zero-knowledge password proofs in general? (I tend to agree that PAKE is bad idea, but I'm not sure if my reasons are same as yours)
In my opinion, one should create an encrypted channel essentially without any authentication and then do authentication inside that channel. ZKPPs are one of the interesting ways to do that ("plug the password into scrypt and use the result as an EdDSA secret key" being a particularly straightforward solution). This obviously assumes a threat model where exposing the password to the server is a meaningful security concern (usually it is not).
I've seen many systems where ZKPP is the right thing to do (such systems usually involve offline operation with multiple users sharing the same device), but their authors came up with some weird-ass construction out of a bunch of symmetric primitives that is anything but secure.
The same thing that happens to 2048-bit RSA users if yet another weakness is found in it. Or the same thing that happens to users of the NIST curves if some weakness is found (or disclosed).
This article does plenty right but gets a few things wrong and overlooks a few others. I'm going to hit on a few of these in the order I see them.
"Avoid cipher cascades." I've pushed and successfully used cascades in highly assured work for years. Cryptographers talk down about it but "meet in the middle" is best attack they can cite. So, they're full of it & anyone who cascaded might have avoided many algorithm/mode breaks. My polymorphic cipher works as follows: three strong algorithms applied out of almost 10 potentials; algorithms are randomly selected with exception that each pass is a new algorithm; separate keys; separate, initial, counter values; process driven by a large, shared secret. Breaking it without the secret requires breaking all three and no cryptographer has proven otherwise despite massive speculation.
I'll briefly mention scrypt because it's ironically great advice. I asked cryptographers for over a decade to deliver a slow-by-design hash function that couldn't be sped up. They, for years on end, criticized me (see Schneier's blog comments) saying it was foolish and we just need to iterate a fast one. I expected problems and hackers delivered them. I had to homebrew a cryptosystem that fed a regular HMAC scheme into another scheme: (a) generated a large, random array in memory, (b) did impossible-to-speed-up operations on random parts of it, (c) iterated that excessively, and (d) finished with a proper HMAC. The array size was always higher than GPU or FPGA onboard memory in case opponents used them. Eventually, in a discussion, a Schneier commenter told me about scrypt and I finally got to ditch the inefficient homebrew. A true outlier in the crypto field.
Avoid RSA: bad advice for commercial use if the NSA is the opponent. All his risks are true. NaCl is great and my default recommendation. Yet, he doesn't mention that the NSA has another reason for pushing ECC: they own 26 patents on it that they license conditionally on the implementation details, along with the ability to restrict export. We know what the NSA's goal for crypto is, and therefore I avoid ECC commercially like the plague. I just used RSA implementations and constructions pre-made by experts with review by experts. Especially GPG, as the NSA hasn't even broken it. They use it internally, actually.
For asymmetric signatures, see above. All points apply. I'll just add that, for post-quantum, there's been tremendous progress in Merkle signatures, with things such as unlimited signatures. Their security depends only on a hash function, there are no known quantum attacks on them, and they're doing pretty well against classical attacks, too. So, I'm following and doing private R&D on standardizing Merkle signatures, plus hardware to accelerate them on either end.
He says use OpenSSL and avoid MatrixSSL, PolarSSL, etc. He said some vague stuff about their quality. Problem: anyone following the commit messages of the OpenBSD team that tore through OpenSSL knows that IT WAS S*. It was about the worst-quality code they've run into, with so much complexity and potential to be exploited that the NSA would be proud of it. I'd be surprised if Matrix, Polar, etc. are worse and less structured than that. If OpenSSL is really the best, then we're in a bad situation and need to fund a clean-slate design by experts like Galois and Altran-Praxis.
Although I'm focused on problematic points, his last piece of advice deserves special mention: use TLS. These protocols have proven difficult to implement properly. TLS and its ilk have had many problems, along with massive efforts to smash them. Against that backdrop, it's actually done pretty well, and using it like he suggests is the best option for COTS security. Medium- to high-assurance systems can always use variants custom-designed for that level. Most don't need that, though.
The oddball TLS libraries do not have poorer "code quality" than OpenSSL, though they are not perfect and have received far, far less scrutiny than OpenSSL, so if you have to bet on which is going to have memory corruption, OpenSSL isn't a sure bet.
But my concerns aren't about code quality. They're cryptographic.
> Random IDs (Was: Use 256-bit random numbers): Remains: use 256-bit random numbers.
256-bit random identifiers are overkill. 122 random bits (as in a GUID) should still be more than sufficient. Size is important for IDs because people whine about the storage overhead. A 256-bit identifier requirement may unfortunately convince some people that it's better to use much smaller, non-random identifiers, and that'd be a shame.
The 256 bit advice is golden if only to encourage people to not use GUIDs in these scenarios.
GUIDs are unique--not necessarily unguessable. Any given implementation may be using a CSPRNG, but in general you shouldn't rely on that (unless it's your implementation and it's a documented behaviour).
Honestly I've found this (perhaps pedantic) mistake to be highly correlated with other badness/sloppiness.
GUIDs are awesome, and can be used in plenty of places near crypto, like OAuth 1.0-style nonces, IDs for public keys... just don't use them for their "randomness".
Of course you have to be aware of your implementation. On Windows, UuidCreate returns unguessable GUIDs. (COM security depends on this property.) libuuid provides similar guarantees if /dev/urandom is available.
But anyway, my point wasn't that you should necessarily use GUIDs for unguessable IDs (although that's fine if you're using real randomness), but that 256 bits is overkill and that 128-ish is good enough.
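If you do want IDs that are explicitly random rather than "a GUID that happens to be random", something like Python's secrets module makes the intent obvious (a sketch; pick whichever size you've settled on):

    import secrets

    token_256 = secrets.token_hex(32)      # 256 bits of CSPRNG output, hex-encoded
    token_128 = secrets.token_hex(16)      # 128 bits, the "good enough" size argued above
    token_url = secrets.token_urlsafe(32)  # same idea, URL-friendly encoding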
Note that "SHA-512/256" is a separate algorithm, not to be confused with "SHA-512 or SHA-256" which are two other less secure algorithms.