Yes. And with suitable hardware BitLocker actually uses the SSD's built-in encryption, which is enabled anyway, so there's zero performance hit. The drawback is that not all modern hardware supports it [1,2]. For example, the Samsung 960 series does not support this. Since Apple controls the whole stack, I would assume they are also doing this, but I'm not sure.
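If you're curious which mode you actually ended up with, you can check; a minimal sketch using Windows' built-in manage-bde tool (the exact output string is from memory, so treat it as an assumption):

```python
# Minimal sketch, assuming Windows' stock manage-bde CLI: its status
# output names the encryption method, so "Hardware Encryption" means
# BitLocker delegated to the SSD's own controller (eDrive), while an
# AES method means software encryption with the usual CPU cost.
import subprocess

status = subprocess.run(
    ["manage-bde", "-status", "C:"],
    capture_output=True, text=True,
).stdout

print("hardware (eDrive)" if "Hardware Encryption" in status
      else "software (or not encrypted)")
```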
> BitLocker.... In what way does it come up short?
By forcing an escrow key linked to Microsoft and whoever owns the TPM on your computer.
By definition, backdoor keys and hidden users who can access encrypted content are just absolutely, horribly wrong. And there's no way to turn it off... Well, I'm sure someone will say there are 10 registry keys to change that might fix it on a specific version.
That still does nothing about the "trust" issue with the TPM.
They actually don't; not for Macs anyway - although that might be different for the models with TouchID. FileVault on the Mac is entirely software-based.
The TPM is pwned, by default. It's closed, secret, and as the AMT issues showed, has a lot of software running in it with questionable security.
That's why the whole exercise is meaningless if you leave the keys on the device, and why you should put them on external hardware TPMs or key vaults. Even a YubiKey is better.
Now you just need a system that supports reading keys from such a device during boot.
False. You are given a non-default option to upload a backup of your BitLocker key to OneDrive. What evidence do you have that Microsoft gets to decrypt the drive if this option isn't selected?
TPM and AMT are two entirely different technologies with entirely different classes of security concerns. The Intel Management Engine (which runs the AMT software) is effectively a separate CPU that runs full programs and has direct memory/hardware access, while the TPM is not.
The TPM is a PKI device, nothing more. It cannot take over your computer.
>MS encrypts your stuff with your AND Microsoft's key. Look for this "feature" if you forget your password and log into OneDrive.
This statement is misleading. When setting up BitLocker, you have the option of saving your recovery key to OneDrive. It's not mandatory or even the default choice.
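For the skeptical, you can list exactly which key protectors exist on a volume; a small sketch using the stock manage-bde tool (the recovery key shows up as a "Numerical Password" protector, and nothing there points at OneDrive unless you chose that backup):

```python
# Minimal sketch: enumerate a volume's BitLocker key protectors via
# Windows' built-in manage-bde CLI. Where (or whether) you backed up
# the recovery key -- file, printout, OneDrive -- was your choice at
# setup time.
import subprocess

print(subprocess.run(
    ["manage-bde", "-protectors", "-get", "C:"],
    capture_output=True, text=True,
).stdout)
```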
Not to get into a discussion regarding what constitutes a "Release" for any specific project, whether it's tagging, pushing, announcing[0], updating documentation, creating release notes, publishing a release blog post and so on.
A final build of 1.3 was tagged with an accompanying changelog and announcement post. I found it weird that it had no more ceremony, nor any prior submission on HN, and since the announcement had been out on the kubernetes-announce mailing list for 17 hours, I figured its existence would be interesting to the community, so I submitted it in good faith.
In any case, kudos to everybody working on it and congratulations on the release, whether it's this week or the next.
My understanding is that with the timing of the US holiday, it made more sense to hold off on the official announcement for a few days. That's why there aren't more announcements / release notes etc., and likely there won't be as many people around the community channels to help with any 1.3 questions this (long) weekend.
You should expect the normal release procedure next week! And if you want to try it out you can, but most of the aspects of a release other than publishing the binaries are coming soon.
While interesting in and of itself, it seems to have seen no changes since mid-2013, judging by its GitHub repository (https://github.com/greedy/scala).
> This branch is 211 commits ahead, 10086 commits behind scala:2.12.x.
Is there any new context / development that makes this extra interesting right now?
> if you find something on SO that does exactly what you need then yes, just paste it.
I think you should be very careful about doing this without an explicitly stated license with reasonable proof of author copyright for the code snippet. It's one of the things that can easily compromise the legal security of a codebase.
I think this is helpful mostly in giving future developers reading the code more context about what the code is, why it's there, and where it came from, rather than anything to do with ownership claims.
Oftentimes code taken from SO solves the problem but is noticeably structured differently from the rest of the codebase--often with good reason--and this helps the new dev go look back at the SO conversation and see what problems this is solving / if it can be removed.
I think there is a huge difference in what would be considered "fair use" in a commercial, value-generating software product. Fair use as I understand it usually applies to entirely personal use, or quotes used in an editorial, analysis, or critique.
If something I wrote, even just a few lines, is used to generate profit for someone else, and I did not permit that or place it in the public domain, I'm not sure a fair use claim would hold up. IANAL.
Fair use can apply to commercial or for-profit enterprises. You'll find that commercial enterprises are often disadvantaged by some of the factors.
For example, the first factor (purpose and character, including the transformative nature of the use) often cuts against commercial settings, where uses tend to be less transformative. The fourth factor (the effect of the use upon the potential market) is also often more challenging for commercial enterprises to get past.
However, there are plenty of commercial enterprises that rely on fair use regularly. News media is a very common commercial product that relies on fair use.
Copyright and fair use law is pretty much the same for commercial and personal products. If you can make a compelling case to a judge around those four factors, it can be fair use.
However, when taking clippings from SO and using them directly in software, you're going to have a very hard time making a fair use claim, whether it's for a personal or commercial project.
It certainly raises some good questions, and the Printrbot is a great machine at the price point (I have a Printrbot Simple RepRap clone myself). His looks pretty well calibrated; I've seen a lot of Z-wobble artefacts on prints from several Printrbots (Simple Makers, not necessarily the metal one).
Worth noting is that things like the Z-scar being on the side of the cup instead of under the handle is something that's decided by the slicer (the software that translates the 3D model into printer commands).
Different open source slicers that are regularly used produce different locations for these scars -- and I'm not sure if it's actually possible to position/hint this Z-scar manually in any of them. They often do try to be "smart" about it, but the end result may vary.
So I just took the coffee cup STL and loaded it into the latest Cura myself, rotated 90 degrees so it was standing upright. Without changing anything else, I ended up getting the layer changes in a few different places over the print (I'm seeing this just by analyzing the G-code using http://gcode.ws), depending on the layer height. It starts to the right of the handle and after a while seems to sort of stabilize almost opposite the handle, then on the right side of the actual handle, before the handle separates from the cup body.
All in all, it would likely have come out decent, but certainly not under the handle like yours. This, I guess, is because Cura will by default start the new layer close to where it finished the previous layer. In other words: it depends on infill percent/pattern and model location on the build platform.
I figure this is because you've added support. This causes the print head to get a "closest" start of the new layer exactly under the handle (because that's where the support material is). So I wouldn't go as far as calling it "lucky". The support placement actually helps Cura put the Z-scars in a good place on this model. On another model, it might be the exact opposite.
(PS: I use Cura for all my slicing and I like it a lot -- but I don't feel like the software actually gives me much control over Z-scar placement)
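For anyone who wants to run the same layer-change check locally instead of via gcode.ws, here's a rough sketch (assuming Cura-style ";LAYER:" comments; other slicers mark layer changes differently):

```python
# Rough sketch: print where each layer starts in Cura-sliced G-code.
# The last XY seen before a ";LAYER:n" marker is roughly where the
# previous layer ended and the new one begins -- i.e. the Z-scar.
import re
import sys

x = y = None
for raw in open(sys.argv[1]):
    if raw.startswith(";LAYER:"):
        print(f"{raw.strip()} begins near X={x} Y={y}")
        continue
    line = raw.split(";")[0]          # strip trailing comments
    if re.match(r"G[01]\b", line):    # only travel/extrusion moves
        for axis, val in re.findall(r"([XY])(-?\d+(?:\.\d+)?)", line):
            if axis == "X":
                x = float(val)
            else:
                y = float(val)
```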
> Printrbot (Simple Makers, not necessarily the metal one).
I've heard from a couple of owners that the Metal ones are much better than the Simple Makers. Here is a quote:
"but Printrbot Simple Metal is very reliable, I had one I’ve been carrying around in my car unpacked (I just dump it behind the seat, same with the tablet) and I just put it on the desk and it prints. Every time. Auto-bed leveling does wonders."
And another one from the same guy:
"Just to be clear, I’m talking about Printrbot Simple Metal (not the previous wood version – that one is realty bad: it de-calibrates from day to day simply because of humidity "
You cannot just "buy a second 2xlarge" and then recombine them into a 4xlarge. The 2xlarge reservations (and pay attention here) need to match on their purchase date and hour, i.e. they have to be bought within the same clock hour on the same day.
If you reserved a 2xlarge at 2014-12-01T22:59:59Z, you couldn't even combine it with one bought at 2014-12-01T23:00:01Z. Yes, Amazon is that picky about it. While you might be able to muscle that through a sales contact, it's not even available to customers on business-level support, so if the reservations are anything more than a day apart you're likely out of luck in any case.
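For reference, the merge itself (when the purchase hours do line up) goes through the ModifyReservedInstances API; a hedged boto3 sketch with made-up IDs and instance family:

```python
# Hedged sketch of merging two eligible 2xlarge reservations into one
# 4xlarge via ModifyReservedInstances (boto3). The reservation IDs,
# AZ, and c4 family are hypothetical; AWS rejects the call unless the
# source reservations actually match on the attributes above.
import boto3

ec2 = boto3.client("ec2")
ec2.modify_reserved_instances(
    ReservedInstancesIds=["ri-1111aaaa", "ri-2222bbbb"],  # hypothetical IDs
    TargetConfigurations=[{
        "AvailabilityZone": "us-east-1a",   # assumption
        "InstanceType": "c4.4xlarge",       # assumption: c4 family
        "InstanceCount": 1,
        "Platform": "EC2-VPC",
    }],
)
```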
I've never ever received a message from AWS when they've had outages that have affected us significantly. On the contrary, there have been multiple cases where we've experienced issues, contacted them, and it's taken a few hours before they realized they were actually having infrastructure problems. Many of these don't even get an entry on their service status pages. So there's still a lot of room for improvement on AWS's side of things as well.
This likely has a lot to do with how they do detection.
Swift uses the .swift extension, which is pretty much unique to it, so detection is as simple as checking for the file extension, with a very low risk of misclassification.
Some of the older, more popular languages use more common file suffixes that may be shared between multiple programming languages, which hurts classification accuracy.
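As a toy illustration (not the site's actual detection code): some extensions identify a language outright, while shared ones leave room for misclassification.

```python
# Toy illustration: unambiguous suffixes map straight to a language;
# shared suffixes would need content analysis to resolve.
EXTENSIONS = {
    ".swift": ["Swift"],                 # effectively unambiguous
    ".rs": ["Rust"],
    ".h": ["C", "C++", "Objective-C"],   # shared between languages
    ".m": ["Objective-C", "MATLAB"],
}

def classify(filename: str) -> str:
    for ext, candidates in EXTENSIONS.items():
        if filename.endswith(ext):
            if len(candidates) == 1:
                return candidates[0]     # cheap and reliable
            return "ambiguous: " + "/".join(candidates)
    return "unknown"

print(classify("Sources/main.swift"))  # -> Swift
print(classify("matrix.h"))            # -> ambiguous: C/C++/Objective-C
```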
The power button is what I call a "hard" allow-all, i.e. it turns off matrix filtering completely.
The "all" cell (top left corner of the matrix) is a "soft" allow all, i.e. it allows everything except those hostnames and types of request which are not specifically blacklisted.
From an end-user perspective it seems to behave very transparently as well. In what way does it come up short?