Hacker News | nkvoll's comments

Isn't BitLocker (https://en.wikipedia.org/wiki/BitLocker) how Windows supports full disk encryption by default (shipped with Vista and later)?

From an end-user perspective it seems to behave very transparently as well. In what way does it come up short?


Yes. And with suitable hardware, BitLocker actually uses the SSD's built-in encryption, which is enabled anyway, so there's zero performance hit. The drawback is that not all modern hardware supports it [1, 2]. For example, the Samsung 960 series does not. Since Apple controls the whole stack, I would assume they are doing this as well, but I'm not sure.

[1] https://technet.microsoft.com/en-us/library/hh831627(v=ws.11...

[2] https://helgeklein.com/blog/2015/01/how-to-enable-bitlocker-...


> BitLocker.... In what way does it come up short?

By forcing an escrow key linked to Microsoft and whoever owns the TPM on your computer.

By definition, backdoor keys and hidden users who can access encrypted content are just absolutely, horribly wrong. And there's no way to turn it off... Well, I'm sure someone will say there are 10 registry keys to change that might fix it on a specific version.

Still does nothing regarding the "trust" with the TPM.


Apple uses TPM-like hardware for their encryption as well - and that's what the OP was asking about. The same solution.


They actually don't; not for Macs anyway - although that might be different for the models with TouchID. FileVault on the Mac is entirely software-based.


Yeah, on Touch Bar MBPs they store keys in the TouchID coprocessor, which does the same job as the TPM on other machines.


Do they escrow across reboots? I know they don't on iOS.


What do you mean, linked to Microsoft? And if your TPM, the hardware device that stores keys, is pwned, the whole exercise is meaningless anyway.


The TPM is pwned by default. It's closed, secret, and as the AMT issues showed, has a lot of software running in it with questionable security.

That's why the whole exercise is meaningless if you leave the keys on the device, and why you should put them on external hardware TPMs or key vaults. Even a YubiKey is better.

Now you just need a system that supports reading keys from such a device during boot.


You can use YubiKey to store BitLocker decryption key.


And? That still allows MS to decrypt the drive.


False. You are given a non-default option to upload a backup of your Bitlocker key to Onedrive. By what evidence are you claiming Microsoft gets to decrypt the drive if this option isn't selected?


TPM and AMT are two entirely different technologies with entirely different classes of security concerns. The Intel management engine (which runs the AMT software) is effectively a separate CPU that runs full programs and has direct memory/hardware access, while the TPM is not.

The TPM is a PKI device, nothing more. It cannot take over your computer.


MS encrypts your stuff with your key AND Microsoft's. Look for this "feature": if you forget your password, log into OneDrive.

Their encryption by definition, is already backdoored to MS. Game over.

And that's nothing about the stupidity of the TPM itself.


> MS encrypts your stuff with your key AND Microsoft's. Look for this "feature": if you forget your password, log into OneDrive.

This statement is misleading. When setting up BitLocker, you have the option of saving your recovery key to OneDrive. It's not mandatory, or even the default choice.


It's not available on Windows Home, for one.


FWIW, HiDPI support does not work on Linux either. All icons, the color palette, and custom cursors are about half the size they should be on my 4K monitor.

Scrolling is pretty choppy and it doesn't support smooth/inertial scrolling.

So 3/5 of his listed issues exist on the Linux version.


Is this Wayland or X? What desktop environment? What GPU? What driver?


Same experience in both Wayland and X11. Intel built-in graphics, i915 driver.


So is your GPU just too slow for 4K?


Not to get into a discussion regarding what constitutes a "release" for any specific project, whether that's tagging, pushing, announcing[0], updating documentation, creating release notes, publishing a release blog post, and so on.

A final build of 1.3 was tagged with an accompanying changelog and announcement post. I found it weird that it had no more ceremony, nor any prior submission on HN, and as it had been announced through the kubernetes-announce mailing list 17 hours earlier, I figured its existence would be interesting to the community, so I submitted it in good faith.

In any case, kudos to everybody working on it and congratulations on the release, whether it's this week or the next.

[0]: https://groups.google.com/forum/#!topic/kubernetes-announce/...


I'm glad you posted it - thanks!

My understanding is that with the timing of the US holiday, it made more sense to hold off on the official announcement for a few days. So that's why there aren't more announcements / release notes etc; and likely there won't be as many people around the community channels to help with any 1.3 questions this (long) weekend.

You should expect the normal release procedure next week! And if you want to try it out you can; the binaries are published, but most of the other aspects of a release are coming soon.


While interesting in and of itself, it seems to have seen no changes since mid-2013, judging by its GitHub repository (https://github.com/greedy/scala):

    > This branch is 211 commits ahead, 10086 commits behind scala:2.12.x.
Is there any new context / development that makes this extra interesting right now?


> if you find something on SO that does exactly what you need then yes, just paste it.

I think you should be very careful about doing this without an explicitly stated license with reasonable proof of author copyright for the code snippet. It's one of the things that can easily compromise the legal security of a codebase.


StackExchange content is licensed under Creative Commons.

http://blog.stackoverflow.com/2009/06/stack-overflow-creativ...


Is there not a blanket/implicit release of claim that goes along with posting code on SO?

When I copy code verbatim from SO or another "help" site, I generally put a comment right above that chunk of code, something like:

  /* See: http://stackoverflow.com/a/xxxxxxxxxxx */
Don't know if that's enough to protect against any ownership claims though.
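For what it's worth, Stack Exchange's CC BY-SA license asks for a bit more than a bare link: attribution should identify the author and indicate the license. A fuller header might look something like this (the URL placeholder, author name, and the function itself are my own illustration, not from this thread):

```python
def chunk(seq, size):
    """Split seq into lists of at most `size` elements.

    Adapted from a Stack Overflow answer:
    See: https://stackoverflow.com/a/xxxxxxxxxxx by user "jdoe"
    Licensed under CC BY-SA, per https://stackoverflow.com/help/licensing
    """
    return [seq[i:i + size] for i in range(0, len(seq), size)]

print(chunk([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

Whether that satisfies a court is a separate question, but it at least satisfies the stated attribution terms better than a URL alone.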


I think this is helpful more in giving future developers reading the code more context about what the code is, why it's there, and where it came from, rather than anything to do with ownership claims.

Oftentimes code taken from SO solves the problem but is noticeably structured differently from the rest of the codebase, often with good reason, and this helps the new dev go back to the SO conversation and see what problems it solves / whether it can be removed.


Code posted on SO doesn't need to be written by the person posting the code.


Agreed if it's anything of significant size. If it's 2-4 lines though, it's almost certainly fair use. Can a lawyer comment?


IANAL

Fair use is a pretty subjective thing that's up to a judge to determine. It's a four-factor test, and you can read more here: http://fairuse.stanford.edu/overview/fair-use/four-factors/

* the purpose and character of your use

* the nature of the copyrighted work

* the amount and substantiality of the portion taken, and

* the effect of the use upon the potential market.

The fact that it's 2-4 lines may come in as part of factor 3, but the other factors also need to be taken into account.


I think there is a huge difference in what would be considered "fair use" in a commercial, value-generating software product. Fair use, as I understand it, usually applies to entirely personal use, or to quotes used in an editorial, analysis, or critique.

If something I wrote, even just a few lines, is used to generate profit for someone else, and I did not permit that or place it in the public domain, I'm not sure a fair use claim would hold up. IANAL.


Fair use can apply to commercial or for-profit enterprises. You'll find that commercial enterprises are often disadvantaged by some of the factors.

For example, the first factor (purpose and character; transformative nature of the work) is often less transformative in commercial settings. The fourth factor (the effect of the use upon the potential market) is also often more challenging for commercial enterprises to get past.

However, there are plenty of commercial enterprises that rely on fair use regularly. News media is a very common commercial product that relies on fair use.

Copyright and fair use law is pretty much the same for commercial and personal products. If you can make a compelling case to a judge around those four-factors, it can be fair use.

However, when taking clippings from SO and using them directly in software, you're going to have a very hard time making a fair use claim, whether it's for a personal or commercial project.


It certainly raises some good questions, and the Printrbot is a great machine at the price point (I have a Printrbot Simple RepRap clone myself). His looks pretty well calibrated; I've seen a lot of Z-wobble artefacts on prints from several Printrbots (Simple Makers, not necessarily the metal one).

Worth noting is that things like the Z-scar being on the side of the cup instead of under the handle is something that's decided by the slicer (the software that translates the 3D model into printer commands).

Different open source slicers that are regularly used produce different locations for these scars -- and I'm not sure if it's actually possible to position/hint this Z-scar manually in any of them. They often do try to be "smart" about it, but the end result may vary.


Do you think I just got lucky with the Z-scar and Cura?


So I just took the coffee cup STL and loaded it into the latest Cura myself, rotated it 90 degrees so it was standing upright. Without changing anything else, I ended up getting the layer changes in a few different places over the print (I'm seeing this just by analyzing the G-code using http://gcode.ws), depending on the layer height. It starts to the right of the handle and after a while seems to sort of stabilize almost on the opposite side of the handle, and on the right side of the actual handle, before the handle separates from the cup body.

All in all, it would likely have come out decent, but certainly not under the handle as yours did. This, I guess, is because Cura will by default start the new layer close to where it finished the previous layer. In other words: it depends on the infill percentage/pattern and the model's location on the build platform.

I figure this is because you've added support. This causes the print head to get a "closest" start of the new layer exactly under the handle (because that's where the support material is). So I wouldn't go as far as calling it "lucky": the support placement actually helps Cura put the Z-scars in a good place on this model. On another model, it might be the exact opposite.

(PS: I use Cura for all my slicing and I like it a lot, but I don't feel the software actually gives me much control over Z-scar placement.)
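The kind of G-code inspection described above can be approximated in a few lines of Python: scan for moves that set a new Z (a layer change) and record the X/Y of the first extruding move that follows, which is roughly where the Z-scar lands. This is my own rough sketch, not what gcode.ws or Cura actually do; it ignores Z-hop, relative positioning, and arc moves:

```python
def layer_seam_positions(gcode):
    """Return (z, x, y) of the first extruding move after each layer change."""
    seams = []
    current_z, pending = None, False
    x = y = 0.0
    for line in gcode.splitlines():
        line = line.split(";")[0].strip()          # drop comments
        if not line.startswith(("G0 ", "G1 ")):    # only linear moves
            continue
        words = {w[0]: float(w[1:]) for w in line.split()[1:]}
        x, y = words.get("X", x), words.get("Y", y)
        if "Z" in words and words["Z"] != current_z:
            current_z, pending = words["Z"], True  # a new layer begins here
        elif pending and words.get("E", 0) > 0:    # first extrusion on layer
            seams.append((current_z, x, y))
            pending = False
    return seams

demo = """G1 Z0.2
G1 X10 Y10 E1
G1 X20 Y10 E2
G1 Z0.4
G1 X15 Y12 E3"""
print(layer_seam_positions(demo))  # [(0.2, 10.0, 10.0), (0.4, 15.0, 12.0)]
```

Running something like this over the same model at different layer heights is enough to see the seam wander around the perimeter the way described above.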


Fascinating. Thanks for this. I was thinking it was either the support or running it through NetFabb first.


You might try Slic3r. I did some tuning with the retract-on-layer-change settings and managed to eliminate the Z-scar stuff on my prints.


> Printrbot (Simple Makers, not necessarily the metal one).

I've heard from a couple of owners that the Metal ones are much better than the Simple Makers. Here is a quote: "but Printrbot Simple Metal is very reliable, I had one I’ve been carrying around in my car unpacked (I just dump it behind the seat, same with the tablet) and I just put it on the desk and it prints. Every time. Auto-bed leveling does wonders."

And another one from the same guy

"Just to be clear, I’m talking about Printrbot Simple Metal (not the previous wood version – that one is really bad: it de-calibrates from day to day simply because of humidity)."


What you're saying is not really correct.

You cannot just "buy a second 2xlarge" and then recombine them into a 4xlarge. The 2xlarge reservations (and pay attention here) need to match on their purchase date and hour, i.e. they have to be bought within the same clock hour on the same day.

If you reserved a 2xlarge at 2014-12-01T22:59:59Z, you couldn't even combine it with one bought at 2014-12-01T23:00:01Z. Yes, Amazon is that picky about it. While you might be able to muscle that through a sales contact, it's not even available to customers on business-level support, so if they're more than a day apart you're likely out of luck in any case.

(See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modify..., particularly bullet point #6)
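The same-clock-hour rule is easy to express in code. Something like this sketch (my own illustration, not an AWS API) captures it:

```python
from datetime import datetime, timezone

def mergeable_purchase_times(a, b):
    """True if two reservation purchase times fall within the same clock hour
    (same day, same hour), which the modification rules require before two
    reservations can be merged into a larger instance type."""
    def hour(t):
        # truncate to the top of the hour, in UTC
        return t.astimezone(timezone.utc).replace(minute=0, second=0, microsecond=0)
    return hour(a) == hour(b)

a = datetime(2014, 12, 1, 22, 59, 59, tzinfo=timezone.utc)
b = datetime(2014, 12, 1, 23, 0, 1, tzinfo=timezone.utc)
print(mergeable_purchase_times(a, b))  # False: two seconds apart, different clock hours
```

Note that the rule is about the clock hour, not elapsed time: two purchases 59 minutes apart within the same hour would pass, while these two, seconds apart, do not.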


Ah, I didn't realize this. Wow, that's...restrictive. Thanks for the info.


I've never once received a message from AWS when they've had outages that have affected us significantly. On the contrary, there have been multiple cases where we've experienced issues, contacted them, and it's taken a few hours before they realized they were actually having infrastructure problems. Many of these don't even get an entry on their service status pages. So there's still a lot of room for improvement on AWS's side of things as well.


I can confirm this. I remember once when half of the Internet was down and the status reported for EC2 was yellow: experiencing some minor issues :-)

And I found out about it by yelling at Heroku; they told me that Amazon was having issues before Amazon's status turned yellow.


Usually when AWS has an outage they show a nice green circle, but with a small blue "i" next to it that you need a loupe to see. Extremely dishonest.


This likely has a lot to do with how they do detection.

Swift uses the .swift extension, which is pretty much unique, so detection is as simple as checking the file extension, with a very low risk of misclassification.

Some of the older, more popular languages use more common file suffixes, which may be shared between multiple programming languages, and that has a negative impact on classification accuracy.
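As an illustration of the difference (the table below is made up for the example, not GitHub's actual classifier data): an unambiguous suffix maps straight to one language, while a shared suffix only narrows things down to a candidate set that then needs content-based heuristics.

```python
import os.path

# Illustrative table: unambiguous suffixes map to a single language,
# shared suffixes to several candidates.
EXT_LANGS = {
    ".swift": ["Swift"],
    ".go": ["Go"],
    ".m": ["Objective-C", "MATLAB", "Mercury"],
    ".h": ["C", "C++", "Objective-C"],
}

def candidate_languages(filename):
    """Return candidate languages for a file based on its extension alone.
    One entry means a confident match; several mean further, content-based
    classification is needed, with a higher risk of getting it wrong."""
    ext = os.path.splitext(filename)[1].lower()
    return EXT_LANGS.get(ext, [])

print(candidate_languages("Grid.swift"))  # ['Swift'] -- unambiguous
print(candidate_languages("matrix.m"))    # three candidates, ambiguous
```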


Yes. In support of this theory: GitHub does not syntax-highlight Swift files yet (e.g. https://github.com/austinzheng/swift-2048/blob/master/swift-...)


I've been clicking around its user interface for a few minutes now, but I can't figure out how to do this.

Is it adding "whitelist *" somewhere or something similar?


Seems like it was the power-button-looking icon found in the upper-left part of the extension dropdown menu when on an actual page.

I expected this to be a part of the extension settings / options, but ah well.


The power button is what I call a "hard" allow-all, i.e. it turns off matrix filtering completely.

The "all" cell (top-left corner of the matrix) is a "soft" allow-all, i.e. it allows everything except those hostnames and types of request that are specifically blacklisted.

