Hacker News | tawaycozstoned's comments

Sure it is. NNs can behave non-deterministically (i.e. they produce different outputs for the same input, because AFAIK some operations on GPUs are non-deterministic), and it makes sense to invest in software systems that take a closer look at software correctness.

One way to get more correct code is clean code layout (I have heard good things about the LLVM code base), code that is simple to read, and compilers that exploit the theoretical knowledge we have gained over years of compiler and type-theory research. I think Chris Lattner has expertise in all of these (or at least knows about their importance, contributions and drawbacks). If you want to build a full-blown self-driving car, it is important to have no non-determinism in your car, and advanced languages and compilers help to guarantee specific statements about your code.

So it absolutely makes sense to invest in a compiler research team for safety, as our security, reliability and correctness expectations will rise (which is good), in particular for self-driving cars (they are not just DNNs).


It ultimately depends on how much of the car's behaviour is driven by DNNs. A misclassification in the CNN can easily lead to a car crashing on a highway and causing a pileup. DNNs are so complex that you can't really write unit tests for them the way you typically do for deterministic code.

That's why some of the big banks flat out refuse to implement any form of deep learning for risk analytics. They rely instead on simpler ML models like random forests and logistic regression, which are easier for model-governance teams to analyse and diagnose.
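To illustrate why logistic regression is easier to audit than a DNN: each fitted coefficient maps directly to an odds ratio a governance team can reason about. A minimal sketch with stdlib Python only; the feature names and coefficient values here are made up for illustration, not from any real risk model:

```python
import math

# Hypothetical fitted logistic-regression coefficients for a credit-risk model
# (feature names and values invented for illustration).
coefficients = {
    "debt_to_income": 0.7,
    "years_employed": -0.3,
}

for feature, beta in coefficients.items():
    # exp(beta) is the multiplicative change in the odds of default
    # for a one-unit increase in the feature.
    odds_ratio = math.exp(beta)
    print(f"{feature}: one-unit increase multiplies odds of default by {odds_ratio:.2f}")
```

A reviewer can sanity-check every coefficient against domain knowledge this way; there is no analogous per-parameter reading for millions of DNN weights.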


Yeah, and it will happen. I am all for self-driving cars, simply because they increase the quality of life of our civilization (more independence, cheaper transportation, fewer fatalities), but there will be accidents, and then you have to analyze them (which is good).

In the end you want to know what went wrong (the public will demand it, and they are right), and it might be a misclassification.

But that is not enough. You want to know: why did it misclassify situation X? The answer is that the inference network (which was created via the training network) computed its weights the way it did because the input was Y. Now you might throw your hands in the air and say "oh, it's complicated, the network is non-deterministic, blame NVidia", but you can also go further and make your networks deterministic (which is possible and, AFAIK, carries no performance penalty). Compiler research helps at least in guaranteeing that certain parts of the code are deterministic, which makes it easier to debug and perhaps to avoid complex NN misclassification scenarios. But the way to do it doesn't have much to do with NNs themselves; it is more about language design and (real-time, in particular deterministic) OS research.
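The core of "build your networks deterministic" is pinning every source of randomness in the training stack. A toy sketch of the principle using only stdlib `random` (a real framework additionally requires framework-specific flags to disable non-deterministic GPU kernels, which this toy does not model):

```python
import random

def train_toy_model(seed):
    # Stand-in for a training run: with a fixed seed, every source of
    # randomness (weight init, shuffling, noise) replays identically.
    rng = random.Random(seed)
    weights = [rng.gauss(0, 1) for _ in range(4)]
    for _ in range(100):                      # fake "training" updates
        i = rng.randrange(len(weights))
        weights[i] += rng.uniform(-0.01, 0.01)
    return weights

run_a = train_toy_model(seed=42)
run_b = train_toy_model(seed=42)
assert run_a == run_b  # same seed => bit-for-bit identical weights
```

Once every run is a pure function of (code, data, seed), "why did we misclassify" becomes a question you can replay and bisect rather than shrug at.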

So the statement "oh, that's so complex, we do not know why we misclassified" is no excuse; we can do better.

For starters we have to publish NN papers with implementations that describe how to build a particular NN from given training data (and provide that data too). We already publish the code and network structure (see Caffe, etc.), but often with pre-trained models that have been built on a cluster, with many forms of training data fed through the network structure, etc.

At the moment you read a paper, head to the published code (often available, which again is a desirable property of the ML community) and try to reproduce some examples by training on the data.

However, it is hard to say in the end whether your network is really as good as the published one, as a simple

  $ diff my-net-binary-blob.dat tesla-net-binary-blob.dat
might fail if the stack (training etc.) used to build the network is non-deterministic. However, if you have a good (i.e. deterministic) stack, you might be able to reproduce NNs bit-for-bit, which makes it simpler to answer the question "why did we misclassify".
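A more robust version of the `diff` check above is to compare cryptographic digests of the weight blobs. A small stdlib sketch; the blob contents here are fake placeholders, and the file names in the comment mirror the hypothetical ones from the `diff` example:

```python
import hashlib

def blob_digest(data: bytes) -> str:
    """SHA-256 hex digest of a serialized weight blob."""
    return hashlib.sha256(data).hexdigest()

# With a deterministic training stack, two independent runs should
# serialize to identical bytes (placeholder blobs for illustration):
run_a = b"\x00\x01\x02fake-weights"
run_b = b"\x00\x01\x02fake-weights"
assert blob_digest(run_a) == blob_digest(run_b)

# In practice you would hash the saved files instead, e.g.
#   sha256sum my-net-binary-blob.dat tesla-net-binary-blob.dat
```

Publishing the expected digest alongside the paper would let anyone verify a bit-for-bit reproduction without shipping the blob itself.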


> However if you have a good (i.e. deterministic) stack you might be able to reproduce NNs bit-for-bit, which makes it simpler to answer the question "why did we misclassify".

It does make it simpler, but surely not usually simple enough to answer the question "why did we misclassify". It's like saying we will finally understand consciousness once we simulate the quantum mechanics of a certain cubic metre of space with perfect accuracy - which need not be true even if that cubic metre happens to contain a functioning brain.


Second that. I get the different opinions, but why is it funny? (The downvoting implies that it was funny.)

I can assure you 100% that I only nitpick because I want to understand the logical reasoning for the down-voting, and I am worried that I am too high to understand the meaning of the phrase "It's funny you mention that". I am not a native speaker, but I know all the words, have heard the phrase in English, and there exists a translation into my language whose interpretation is the same in both, so long story short ... why is it funny?



So there are two meanings, and I read it as "funny peculiar". But why is it "funny peculiar"? I think it is right to say Elon Musk is the new Steve Jobs (as in "most popular tech/computer person for the public"), so working for Tesla has a certain "coolness" factor, and they have good marketing and might beat Google's self-driving endeavors simply via time-to-market (similar to the iPhone).

But why is that funny (haha) or funny (strange)? If they can build a self-driving car for the masses, Elon Musk will be the uber-tech guy for a whole generation, and Tesla will be seen as one of the good guys with cool tech.


I think you are over-thinking and misreading this. One person says something to the effect of "I think Musk is the new Jobs" another replies "I think Apple will die without a cultlike leader". The thing at the beginning of the reply is just a mostly content-free throwaway phrase not intended for the Talmudic analysis you're giving it.


You are right, I probably read too much into it. It's because I am high, and when I am high I get extremely interested in languages (both natural and computer languages), and as a non-native speaker but fluent reader I often wonder about phrases ... for too long.

