gshayban's comments | Hacker News

What about libhydrogen's NORX construction with the Gimli permutation for AEAD and hashing? It seems to check a lot of boxes (from a layman's perspective).


Indeed; there is also www.discocrypto.com


std::map and red-black trees are associative & sorted (by key); HAMTs are just plain associative. HAMTs generally have excellent performance: typical implementations use 32-way branching, which keeps the tree shallow and makes the top levels likely to stay in cache.

The best thing about HAMTs is passing immutable ones around: updates are effectively free and the structures are thread-safe. Clojure uses these to represent information that flows through a program. (By "free" update costs I mean free enough to never rise to the programmer's attention; obviously the operations still have costs.)
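
To make the structural sharing concrete, here's a minimal sketch of my own (not from any particular library's docs) using Scala's immutable.HashMap, which in recent Scala versions is a HAMT variant (CHAMP) under the hood:

    import scala.collection.immutable.HashMap

    object HamtSketch {
      def main(args: Array[String]): Unit = {
        val v1 = HashMap("a" -> 1, "b" -> 2)

        // "Updating" allocates only the path from the root to the changed
        // leaf (~log32 n nodes); everything else is shared with v1.
        val v2 = v1.updated("b", 20)

        println(v1("b")) // 2  -- v1 is untouched, so handing it to other
        println(v2("b")) // 20    threads needs no locks or defensive copies
      }
    }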


The self-rewriting interpreter model is so, so interesting, and has already proven itself on things like peak performance and cross-language interop. However, it can have the disadvantage of blowing up warmup time. Are there Truffle language implementations that persist their specialized interpreter nodes across executions?



Future predictions are hard, and they're especially hard to make for such a large company. I agree with you about some of the missteps, but there are a lot of fundamental improvements happening too. The secure enclave technology (SGX) and its consumers, like the Sawtooth blockchain [1], are truly novel.

[1] https://intelledger.github.io/introduction.html


SGX is an interesting technology from a security perspective, but its use for DRM going forward really worries me. I'm tired of not owning my content, and of hoping that companies don't go out of business and that the money I've spent doesn't go up in smoke.


Agreed. Thus far it appears that Intel itself has been vacillating with regard to launch enclave policy. As a reminder, all technology can be used for good or evil, whatever your definitions of those may be. The issue is one of control, but you can defer worrying about DRM until trusted I/O gets implemented.


SGX already works with PAVP, doesn't it? I can't imagine Netflix would be using SGX-based DRM for 4K content unless it actually provided a real benefit, especially since, without trusted I/O, it's much more obvious where to grab the decrypted data: just look for SGX-related instructions.


I looked into SGX recently and have a question related to your comment. It seems an enclave doesn't make system calls, so I was wondering how any I/O gets done. You mention untrusted ... is that the only way? TrustZone seems to do it better, then?


An enclave does not make any system calls directly the way the rest of the process would, but a system call can definitely be made through a shim layer. In SGX parlance, calls to the outside of the enclave are known as OCALLs. The danger with relying on values returned by a syscall is that the OS could be lying; as an exercise, you could implement a simple "hello, world" filesystem driver that hides the presence of certain files.

So, as long as the enclave has no trusted path to I/O, it must rely on the operating system, which is assumed compromised. If the enclave decrypts protected content only for those bytes to be written to the display by the OS, then you can see that the contents are not secure. SGX support for PAVP means that the chipset is involved in shuttling the data into and out of the enclave, with no one able to intercept it. Not sure TrustZone solves this.

Just came across this interesting article: https://arxiv.org/pdf/1701.01061


I'd love that too, but there is startup-time overhead with the Clojure runtime. Given the moves Google has made recently with respect to moving away from JIT compilation for startup-time reasons, this would be challenging.


Agreed that this is a big step, but what is being included in core is very distinct from Prismatic Schema. For example, the way keysets vs. values are handled is intentionally different from Schema's.

The rationale is a good read.


Yes, I've read it and see the difference from Schema. The huge advantage is that, even though the approach is different, it supersedes Schema's abilities and will be a require away for every library and application author.


> The rationale is a good read.

Hickey's design choices consistently appear to be exceptionally well reasoned.


It's because of the hammock time.


Many people blindly point to the docs and say "don't use groupBy, prefer reduce because it's faster..." Are there better examples that illustrate the fundamental differences between the two operations? Surely there is still a need for both.


Reduce can perform reductions locally on each machine before shuffling the data, which decreases memory as well as network overhead. If you need all the elements for a given key (e.g. to display them to a user or save them to a DB), then groupBy is appropriate. If you're going to perform some form of reduce after that, though, it's likely sub-optimal.
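
As a rough sketch of the difference, here's a word count in Scala (the input path and app name are made up, and this is just one way to write it):

    import org.apache.spark.sql.SparkSession

    object ShuffleSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("sketch").getOrCreate()
        val pairs = spark.sparkContext
          .textFile("input.txt")          // hypothetical input path
          .flatMap(_.split("\\s+"))
          .map(word => (word, 1))

        // Sub-optimal: every (word, 1) pair crosses the network,
        // and only then do the values get summed.
        val viaGroup = pairs.groupByKey().mapValues(_.sum)

        // Better: partial sums are computed on each machine first,
        // so far less data is shuffled.
        val viaReduce = pairs.reduceByKey(_ + _)

        viaReduce.take(5).foreach(println)
        spark.stop()
      }
    }

groupByKey is still the right tool when you need the values themselves, e.g. collecting all of a user's events to write out as one record.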


Databricks has a page that describes the pitfalls: https://databricks.gitbooks.io/databricks-spark-knowledge-ba...

I don't know if the OutOfMemory exception can still occur in recent versions of Spark, but the performance impact of groupByKey is very real.


Exciting, inevitable news; healthcare needs a shakeup. How does drchrono interoperate with medical systems other than SureScripts? Does it support regional labs, or just LabCorp/Quest?

Disclaimer: I am writing a modern interface engine.


As a health care IT pro, I think you're right about the conclusions, but your reasoning is off.

Google is not pulling the plug because of a technical problem with the product (security/HIPAA), but because of a social issue. You can have the most beautiful API, repository, and website in the world, but it's worthless without any data. Ostensibly, Google Health and Microsoft HealthVault are "personal health repositories" (PHRs), and they live and die based on the content, not the platform.

Health care data has high complexity, lives in heterogeneous silos (most of them obsolete), and has huge variance in quality. Doctors don't understand it (and shouldn't have to), and patients certainly don't. It is entirely reasonable for patients to expect their health info to be accessible to them online. With all their prowess, Google could have helped people expose their data. I do this every day with providers.

Google could have built more alliances with care providers (both large and small) and helped get an initial seed of users. They announced some, but not enough. Basically, the only things the mainstream media has covered about PHRs so far are their birth and now their death. Nil penetration.

The health care industry needs technical help beyond Obama's stimulus; the existing products are so weak and suffer from so much technical debt...

I actually think Google could have done well; they just didn't follow through. Now Dr. Chrono and Practice Fusion are far better positioned.


With OLED/IPS etc. taking over in the next few years, will this really support >8bpp or HDR?

Interesting move, Google


WebP uses the same color model as WebM, which "works exclusively with an 8-bit YUV 4:2:0 image format", so it seems WebP will not be HDR-capable, which is a pity.


Considering it's basically one frame of WebM, it isn't exactly a surprise either.

