Hacker News | new | past | comments | ask | show | jobs | submit | mdxn's comments

> "unaware that in nearby Ohio," Clearly this was written by a coastal or someone who needs a lesson in geography. Vulcan, Michigan, and for that matter most of the Upper Peninsula is closer to 3 other states including Iowa than it is to ANY part of Ohio.

Can you explain why this is relevant? It seems of little consequence to me in the context that it is written.

As someone who lived in Michigan for 20+ years (I am aware where Vulcan, MI is), Ohio seems "nearby" enough given the reason the author would use that language in the article.


Because it undermines the legitimacy of the author, who may be writing about an area of the country he has little direct knowledge of.


The author grew up in Michigan (although somewhat closer to southeast MI).


Thanks, I did some quick research but couldn’t find that info so just went with the “may” qualifier.


Lattice-based cryptography can actually be made pretty accessible and can be taught to high schoolers and fresh undergraduates (I have done this with students at my job) as long as you give them a couple of math tools they may not have: matrix transforms, the Gaussian distribution, the vector dot product, and the one-time pad. While this is a longish list, none of the topics are particularly hard. I'll take a crack at an explanation (with schoolbook Regev-like LWE) and you can let me know how I did. It is better with pictures and can probably be refined over the years, but I do think that people can adapt this explanation into something as easy as, if not easier than, RSA.

1. Start with a random (secret) n-dimensional vector, s, that lives on a lattice (a square grid like pattern). This is your (small) secret key.

2. Hit that vector with a randomly generated matrix A (written As for multiplying matrix A with vector s), which basically just scales, skews, rotates, and projects that lattice into m-dimensional space (we want m to be much bigger than n). For those in the know, the new vector, As, is an almost "pseudorandom", "random looking", or hard-to-predict value. This is a way of taking your small key and turning it into a large key.

Intuition: Note that each component of As is basically an individual dot product. On a simple level, you can think of the dot product as a generalization of XOR, parity, or something that creates a checkerboard pattern using that lattice. This basically creates a high dimensional grid (lattice) with many colors that alternate. The end goal is to get some samples of these colors and reconstruct what the grid pattern looks like. If you have tried to learn a checkerboard (XOR, parity) pattern with a linear classifier, you know that this is hard. Here, we are generalizing this concept into something even harder.

3. Add (discrete) Gaussian noise to As to get As + e. This just adds "small" random noise to each of the components. You do this to turn the almost "random looking" large key into an actually "random looking" large key.

Intuition: The reason for doing this is that if you have A and y where As = y, you can solve for s using Gaussian elimination. This is equivalent to the process that grade schoolers use to solve systems of equations with multiple variables (where they subtract one row from another, or solve for one variable and substitute).

It turns out that Gaussian elimination and other algorithms suck badly at getting the right answer if you have noise or small sources of error. The process gets thrown off wildly and gets nowhere close to s. The situation is so bad that even quantum computers are believed to be unable to do it. This is what makes your large key resistant to being reverse engineered by powerful computers.

4. Use your As + e as a one time pad. To encrypt a message m, your ciphertext is the pair A and As + e + m. To decrypt, someone takes the key, s, computes As, then subtracts that off. This leaves you with m + e.

5. Decryption in (4) is bad because the small error, e, is leftover. This makes the process noisy and makes you lose a little information when you encrypt or decrypt. You fix this by using a noise correction encoding that makes the e go away.

Your new ciphertext is (A, As + e + encode(m))

EDIT: Note that to make this actually secure you need to pick good parameters for your lattices and your noise distributions. If your error is too small, it won't fool attacks and is easy to solve. If your error is too large, you can't decrypt. The problem is easy unless you are in high enough dimensions (for instance, LWE is easy in 2 dimensions, which is what we use to visualize things).
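To make steps 1-5 concrete, here is a minimal Python sketch. The parameters (q, n, m) and the uniform noise range are toy values I picked for illustration, not secure choices, and this is the symmetric-key "one time pad" view from step 4 rather than full public-key Regev encryption:

```python
import random

q = 3329          # modulus (toy choice; real schemes pick q with care)
n = 8             # dimension of the secret vector s (step 1)
m = 16            # number of samples, m > n (step 2)

def sample_error():
    # "small" noise standing in for a discrete Gaussian (step 3)
    return random.randint(-2, 2)

def keygen():
    s = [random.randrange(q) for _ in range(n)]                      # step 1
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]  # step 2
    return s, A

def encode(bit):
    # step 5: place the bit far from zero so the small error e rounds away
    return bit * (q // 2)

def encrypt(A, s, bits):
    assert len(bits) == m
    # each component is one noisy dot product: (As)_i + e_i + encode(bit_i)
    return [(sum(a * x for a, x in zip(row, s)) + sample_error() + encode(b)) % q
            for row, b in zip(A, bits)]

def decrypt(A, s, ct):
    out = []
    for row, c in zip(A, ct):
        d = (c - sum(a * x for a, x in zip(row, s))) % q  # leaves encode(bit) + e
        out.append(1 if q // 4 < d < 3 * q // 4 else 0)   # round the noise away
    return out

bits = [random.randint(0, 1) for _ in range(m)]
s, A = keygen()
assert decrypt(A, s, encrypt(A, s, bits)) == bits
```

Decryption works because each recovered component is encode(bit) + e, and the noise range (here at most 2 in absolute value) is far smaller than q/4, so rounding recovers the bit exactly.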


Do you have any YouTube videos of your lectures? If not, consider making one and uploading it. As lattice-based cryptography matures, ELI15 explanations for us non-mathematicians on the ground floor will be essential.


Watch YouTube videos of Dr. Peikert presenting his paper "Lattice Cryptography for the Internet":

https://www.youtube.com/results?search_query=lattice+based+c...


Encryption keys do not have to give "all or nothing" access to encrypted material. With homomorphic encryption and related primitives, you can shape things in a much finer-grained manner and only expose carefully crafted partial information. This can accomplish what you are hoping for with your threshold cryptography example without relying on some sort of combined trust built from multiple authorized parties.

For instance, with functional encryption, you are able to distribute or derive keys that have a very specific functional purpose and leak no other information. Basically, it allows you to derive a "function key" that computes f(x) for you given an encryption of x. As long as the crypto is strong, no other information about x is exposed (only info about f(x) is revealed).

With the right system architecture, this would allow the government to perform a very specific, pre-defined query to check for illegal content without exposing any additional information about the encrypted data.

Another approach you could use is based on zero-knowledge proofs and verifiable computation. Essentially a government could come along and ask you to provide a proof that your encrypted data does not contain malicious content. Given a program that can check for what they are looking for, you can provide them a zero-knowledge proof that convinces them that you correctly ran their provided algorithm on the suspect data and that the algorithm did not identify any malicious content. In this process, no other bits of information are exposed or handed to the government other than the data is not malicious.


All of this presupposes you can create an algorithm that can "check" if something is illegal. That seems impossible if you have to define the algorithm before the crime is committed, and honestly still pretty difficult even if you know specifically what you are looking for.


It's why the antivirus industry exists despite continuously failing. They're pretty much fighting that impossible battle every day.


> This is not a peer-reviewed research paper. It seems to be a project report likely done by undergraduates. The paper is full of typos and it is not clear what the specific novelty of the proposal is (if there is any).

This is not a constructive comment.

The URL suggests this is a final(?) project paper for MIT's 6.857: Computer and Network Security course. The paper seems appropriate in style and length for this kind of setting.


When I clicked the link, I figured this would be a research paper from MIT. It took me a bit of reading to figure out it was just a student project report. That's why I posted the GP.

I stand by my comment that there doesn't appear to be anything particularly novel about this work. Especially with crypto it is important to stick with well-studied and well-understood primitives. There's a danger that someone looks at this and assumes it is being endorsed by MIT and implements it in their system.


This is a misapplication of that principle and runs the risk of turning into the toxic gatekeeping that I know firsthand has kept many talented people out of the academic cryptography community. The authors of this work implement their system using existing proof-of-concept zkSNARK libraries that have been developed by researchers studying this area for roughly a decade or more. They are not rolling their own crypto and are guided by top-notch cryptography researchers at MIT (albeit in a course setting).

Even supposing that there is a dangerous time to experiment with applications of proof-of-concept work, this really isn't the case for zero-knowledge proof (or more specifically, zk-SNARK) systems right now. I have been in this field for many years, and the chief complaint I have heard from many researchers is that the tooling exists, yet people are not making heavy use of it outside of ZCash. This, thankfully, has become less true within the last year or two. This work, even if it is written by students, is nice to see given this reality and can give researchers trying to optimize these development tools necessary feedback to improve them.

To put this into a larger research context, keep in mind that various funding agencies have probably spent at least $1 billion on FHE, ZKP, and similar novel crypto research in the last decade. Off the top of my head, this includes DARPA, ONR, NSF, and many more. The biggest barrier this area currently faces is fine tuning these tools to be performant and accessible to non-research level security engineers and developers. This includes determining which applications this technology may be useful for and where, if applied, it is surprisingly not useful. This is why there are continuing (multi-million $) programs to directly target these problems (DARPA SIEVE and IARPA HECTOR for example). This work by MIT students is, at least on the surface, a potentially useful datapoint that we can use to inform our decisions going forward (I personally would like to know more about their development experience using the tools).


Your point about the need to "popularize" zk-snarks etc. is well-taken and I upvoted you for it.

But that said, we seem to be talking past each other at this point. I don't see why my criticism of the OP was unconstructive. This is not actually a research paper, it has not undergone any sort of academic peer review, and if the goal is to present it as one example of the kinds of things that are possible to build with zkSNARKs, then it ought to be presented in that context.

I do want to note that it is possible to compose zkSNARKs in an application in ways that result in the composition not being zero-knowledge. Not saying that has happened here, but implying there's no danger with incorrect usage of zkSNARKs seems sketchy to me.


There is definitely a difference. The problem with the vast majority of "Turing-completeness in X" claims on HN is that they are just logically complete Boolean circuits that are overgeneralized (infinitely tessellated or scaled) for free and cannot handle arbitrarily large inputs. They can only handle constant sized input lengths. To handle an arbitrary input, the circuit has to be resynthesized and created to handle that particular case length. This ends up being a massive leap in computational complexity from constant-time to decidable.

Some of these claims are even wrong, since the posters don't bother analyzing whether their suggested generalization technique (e.g. infinite tiling) actually does let you topologically embed every gate and batch of wires into any Boolean circuit you want. This is quite common in 2D side-scrollers where you have constant y-height and infinite x-length. It just is not possible to communicate all the information you need for a computation from one side of the map to the other. This was the case with Minecraft prior to bidirectional flying machines. You could actually prove that any finite-sized "machine" could not communicate an arbitrary distance away (think arbitrarily long Turing tapes) without losing some part of it that goes off into the distance forever. Hence it was not "Turing-equivalent" unless you used command blocks or gave it some overly general infinite tiling capability.

Actual descriptions of Turing machines are finite in size/length. These "Turing-equivalent" candidates that keep popping up are not finite when laid out. This makes it pretty easy to run into uncomputability situations when you try to feed such a machine a description of itself and compute some property (like a number = 2x its length). A typical Turing machine would have no problem handling this, though a Boolean circuit would.
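To make the resynthesis point concrete, here is a toy Python sketch (the function names are mine, just for illustration): a parity ("XOR of all bits") circuit built for one input length simply cannot accept inputs of another length, so handling arbitrary inputs means building a new circuit each time.

```python
def synthesize_parity_circuit(n):
    """Build a fixed gate list computing the XOR of exactly n wires.

    The circuit is a constant-sized object: it only accepts inputs of
    length n. Any other length requires synthesizing a new circuit.
    """
    gates = [("XOR", i) for i in range(1, n)]  # fold XOR across the wires

    def circuit(bits):
        assert len(bits) == n, "wrong input length: must resynthesize"
        acc = bits[0]
        for _, i in gates:
            acc ^= bits[i]
        return acc

    return circuit

c4 = synthesize_parity_circuit(4)
assert c4([1, 0, 1, 1]) == 1
# A length-5 input needs a *new* circuit -- this is the per-length
# resynthesis step that a single Turing machine never needs.
c5 = synthesize_parity_circuit(5)
assert c5([1, 0, 1, 1, 0]) == 1
```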


The average GPA of millionaires alone is obviously not enough information to support the conclusions this video is making. An average GPA of 2.9 is not far off from the average college GPA of regular students anyway (for, let's say, the ones that graduate). For all we know (from just this statistic), college GPA could be completely independent of who is a millionaire and who is not. We really need information about the shape of the distributions in order to compare these two groups. For instance, we would at least need to know that valedictorians (for the sake of simplicity, just people with 4.0s) have a lower likelihood of becoming millionaires than other groups. The average alone does not help us enough here.

I have a few questions for the people who are really more knowledgeable about the topic:

Is there really a lower incidence of high GPA graduates (3.8-4.0s) becoming millionaires than those closer to the average? Is this distribution tighter around the mean or are there clusters in certain bands of GPAs?
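To illustrate why the average alone can't settle this, here is a toy Python sketch with made-up numbers (not real GPA data): two very different millionaire-GPA distributions share the same 2.9 mean, yet tell opposite stories about valedictorians.

```python
import statistics

# Two hypothetical millionaire-GPA distributions with the SAME mean of 2.9:
# (a) tightly clustered around the mean,
# (b) bimodal, with clusters near 2.0 and 3.8 and almost nobody in between.
tight   = [2.8, 2.9, 2.9, 3.0, 2.9, 2.85, 2.95, 2.9]
bimodal = [2.0, 2.0, 2.0, 2.0, 3.8, 3.8, 3.8, 3.8]

assert round(statistics.mean(tight), 2) == 2.9
assert round(statistics.mean(bimodal), 2) == 2.9
# Same mean, very different shapes -- only the spread distinguishes them:
assert statistics.stdev(bimodal) > statistics.stdev(tight)
```

The summary statistic "average 2.9" is consistent with both worlds, which is exactly why the distribution shape is the missing piece of information.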


Here's a study you might be interested in:

http://time.com/money/4779223/valedictorian-success-research...


This study, at least as Time summarizes it, seems to be quite badly formed. 81 valedictorians is far too small a sample size to determine who goes on to be a visionary, given that the baseline odds of being a visionary (whatever that even is) are probably less than 1 in 1000.

The fact that you can randomly sample thought leaders and see a reasonably high number of valedictorians (Peter Thiel, Ben Bernanke) suggests there's a positive correlation.


Why must there ALWAYS be an auto-playing video at the top of every article that does nothing but play music and show text?


Probably appeals to people with different learning styles than you. Doesn't take much effort for you to stop the video from playing.


It doesn't take much effort to start a video should you want it. This method has the added benefit of not annoying folks like me who seriously do not appreciate the noise pollution.


I think what you are claiming here is misleading.

Some pre-processing SNARK constructions (particularly ones used by some ZCash scientists) are based on multi-round interactive proof systems which reduce to one round (depending on how you count) when you relax some requirements. Such relaxations include weakening the adversary from being computationally unbounded to polynomial time bounded, forcing the prover and verifier to use a specific set of functions, or restricting what kinds of statements can be proven. Oded's work on efficient interactive proofs contributed to this effort. It is partially this efficiency that helps SNARKs actually be "succinct" and quick to verify.

You should check out some of the citations to Oded's work in https://eprint.iacr.org/2012/718.pdf (which is co-authored by Alessandro Chiesa of ZCash) and see for yourself.


One can construct SNARKs from succinct MIPs, but these are again very different from deIPs. Sure, both SNARKs and deIPs involve highly efficient verifiers, and there is some overlap between the techniques used for succinct MIPs and deIPs, but that's pretty much where the similarities end. Furthermore, MIP based SNARKs are way too inefficient, and definitely not the ones used in Zerocash.

There are no constructions of SNARKs from deIPs, nor vice versa. In particular, one cannot construct laconic IPs at all for NP-like languages.

P.S.: I'm Alessandro's student ;-)


Withdrawn already

"The paper has been withdrawn due to a mistake in the last line of the proof--it does not hold for n=0. Thanks to Terry Tao for pointing out this crucial gap"


The hackathon that they mention in the post (Mhacks) put up an application process and enforced a 50/50 gender ratio. In the weeks leading up to the event, I overheard many defeated conversations from fellow CS students over anxiety of getting rejected from this hackathon because they were male. The females didn't share the same anxiety, but it definitely made them feel down that they wouldn't be able to participate with their male friends who were rejected/waitlisted. Overall, the fact that things ended up this way made everyone feel very awkward, guilty, and discouraged. Hopefully its success makes up for the hidden fractures that will be felt by our student body for quite a while after.

Mhacks lists the 50/50 gender divide on their site followed by the reasoning: "because it’s about time for a little change in the tech world"

Disclosure: I am a CS-Eng student at this university. I did not plan to attend or care about this hackathon.


Just exploit the identity-political zeitgeist that's overtaken universities lately: claim yourself genderqueer, non-binary presenting as male. With the right incantation, the organizers will fold like a house of cards, lest they suffer the blowback from excluding a "righteous" participant.


It actually turns out that cargo culting the language of social justice works about as well as the sovereign citizen idiots that cargo cult legal jargon -- both of these phenomena exist in a broad, clear social context and it's laughably obvious when someone who doesn't know what they're talking about, like you, attempts an "incantation" with the idea that it's magic words because they don't or can't understand what's actually happening.


> it's laughably obvious when someone who doesn't know what they're talking about, like you

You can tell from a single comment?


I know what's up. That's why I made the comment. It's more about how it's said than the actual content of the statement itself.

If I did the above, and someone made a stink, I'd hit 'em with "Are you denying my lived experience?" and then reaffirm the validity of my self identity (which is what matters) over what I actually am to observers (which is immaterial and doesn't matter).


I think you overestimate the intelligence and free-thinking ability of bureaucrats and other political-correctness devotees. When a group decrees an arbitrary gender-balancing rule like 50-50, in a field that's 90-10 male, cynical types can absolutely manipulate them to their advantage. In my opinion it's just more proof that the only fair system is a blind meritocracy (or random lottery).


> In my opinion it's just more proof that the only fair system is a blind meritocracy

Blind meritocracies are rarely blind enough, and end up being an old boys club.


I love collecting geek t-shirts. Once I asked for one but they said:

- "Well, but they are for girls only."

- "Do you have something against cross-dressing?"

- "Nope. Sure, take one!"


Hacking the hackers


I hear this is done already with the race thing; I've heard of the "check Hispanic" trick a while back to make it easier to get into college.


To be fair the American racial categories seem pretty inane to my non-American eyes to begin with.

I understand that there is a lot of cultural baggage to the various categories but I can't count how many times I've been genuinely surprised by Americans identifying as black or Latina/o. The most striking ones I think were Halle Berry and Obama -- it took me a while to process that their recognition was genuinely noteworthy in the US because of their ethnicity. You could have told me Obama were Moroccan or Halle Berry were Indian and I wouldn't have been any more surprised.

For context: I'm German, grew up in the 90s in a city with ~1M residents and am literally colorblind (partially green blind). I'm not sure what part of that makes me unable to properly racially categorize people but I'm not convinced that's a bad thing.


I grew up in Europe and then moved to the US. In the US, at least in some contexts (college admissions, for example), there is a thing called affirmative action. It seems that in order to right a wrong from the past, or to promote diversity, they would sometimes have quotas on how many students from each background to pick. So depending just on race, one could have an easier or much harder path to getting accepted. Therefore people would play that card in order to game the system.


I understand that the legislation/culture around race is in part used to correct historical racism (and the present day consequences of it) but I do wonder whether this obsession with racially categorizing oneself is a necessary evil or just evil.


This is insanely problematic. Exploiting marginalized identities is not only rude, but the blowback of pulling a Soul Man is not good for the future prospects of the person pulling the con.


This is insanely absurd. There is no blowback, because the ability to change your mind about gender at the drop of a hat (if inclined) is embraced.


Pro tip: It is often a good heuristic to identify a sexist by their use of 'male' and 'female' for humans instead of 'men' and 'women', because it sounds dehumanizing. Call them 'women', never females, etc, etc.


What on earth are you talking about? I hope this is a joke and not a serious "tip".


I've heard this said in a serious context before, so I don't think it's a joke. It only really gets brought up when someone says "female", though.


Thanks for mansplaining that.


It's about time we synthetically attempted to skew the gender ratios in an entire industry! Fucktards.


That's the dumbest barrier I've heard in a long time to enter a hackathon. Your gender shouldn't prevent you from getting into an event to get work done, it should be based on the merit of what you want to do. Did anyone ask if their rejection would be reconsidered if they chopped their penis off?


We are sexist in this industry apparently, so the way to solve it is to choose people based on their sex....


Here's a link to Zcash's first blog post on their official site:

https://z.cash/blog/helloworld.html

