Hacker News | gdahl's comments

Check out the Flax ResNet50 example: https://github.com/google/flax/tree/master/examples/imagenet

It runs about as fast as any of the other popular machine learning frameworks, occasionally faster.
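For readers unfamiliar with why JAX can match other frameworks on speed, here is a minimal sketch (assuming `jax` is installed; this is not code from the Flax example, just an illustration of the `jit` compilation JAX relies on):

```python
# Illustration only: JAX traces a Python function once and compiles it
# with XLA, so subsequent calls run as optimized device code.
import jax
import jax.numpy as jnp

@jax.jit  # compile for CPU/GPU/TPU via XLA
def predict(w, b, x):
    """A toy linear model: dot(x, w) + b."""
    return jnp.dot(x, w) + b

w = jnp.ones((3,))
b = 0.5
x = jnp.arange(3.0)
y = predict(w, b, x)  # first call traces and compiles; later calls are fast
```

The same pattern scales up to the ResNet-50 training loop in the linked example, where the whole train step is jit-compiled.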

Disclaimer: I work for Google and use JAX, although I'm not on the JAX team.


To clarify, we found the range shrinks if one trains for a fixed number of epochs and expands if one trains for a fixed number of steps. So tuning can be easier or harder depending on the budget.
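One reason the two budgets behave differently is simple arithmetic: with a fixed dataset size, an epoch budget implies a step count that varies with batch size, while a step budget does not. A hypothetical sketch (the numbers are made up, not from the work being discussed):

```python
# Hypothetical illustration: a fixed-epoch budget implies very different
# numbers of optimizer steps at different batch sizes, whereas a
# fixed-step budget holds the step count constant.
dataset_size = 50_000  # assumed, e.g. CIFAR-10-sized

def steps_for_epoch_budget(epochs, batch_size):
    """Optimizer steps implied by an epoch budget at a given batch size."""
    return epochs * (dataset_size // batch_size)

small_batch_steps = steps_for_epoch_budget(10, batch_size=64)    # 7810 steps
large_batch_steps = steps_for_epoch_budget(10, batch_size=1024)  # 480 steps
```

Because the effective step budget moves with batch size under an epoch budget, the set of hyperparameter values that work well can shift with it.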


I work as a researcher on the Brain team.

An experienced machine learning researcher like Andrew Ng would probably not join the team as a Brain resident. We hire experienced machine learning researchers and engineers all the time (see https://careers.google.com/jobs#t=sq&q=j&li=20&l=false&jlo=e... ) and the residency program is probably not appropriate for people who are already experts. It is a program designed to help people become experts in machine learning.

For residents we look for some programming ability, mathematical ability, and machine learning knowledge. If an applicant knows absolutely nothing about machine learning, it would be strange (why apply?). We accept people who are not machine learning experts, but we want to be sure that people know enough about machine learning to be making an informed choice about trying to become machine learning researchers. Applicants need to have enough exposure to the field to have some idea of what they are getting into and have the necessary self-knowledge to be passionate about machine learning research.

You can see profiles of a few of the first cohort of residents here: https://research.google.com/teams/brain/residency/

See the old job posting which should hopefully explain the qualifications: https://careers.google.com/jobs#!t=jo&jid=/google/google-bra...


Thank you so much! I may well apply in a year or two, after doing more ML work and talking my way into sabbatical time. :-)


We don't have a simple separation of concerns like that. Brain and DeepMind share a common vision around advancing the state of the art in machine learning in order to have a positive impact on the world. Because machine intelligence is such a huge area, it is useful to have multiple large teams doing research in this area (unlike two product teams making the same product, two research teams in the same area just produce more good research). We follow each other's work and collaborate on a number of projects, although timezone differences sometimes make this hard. I am personally collaborating on a project with a colleague at DeepMind that is a lot of fun to work on.

Disclosure: I work for Google on the Brain team.


The Google Brain (g.co/brain) team has people in SF. We are part of the same company as DeepMind so maybe this doesn't quite answer your question. ^_^

We are mostly in SF and Mountain View, but we also have people in a few other locations. Right now, SF and Mountain View are the largest.

Disclosure: I work for Google on the Brain team.


Here is a better article with an interesting interview with Arthur Chu: http://mentalfloss.com/article/54853/our-interview-jeopardy-...


Yes! This is exactly what I was hoping for in the comments. I found the article intriguing but could see that there was a lot more to say.

Having read the interview, what I really take away from this is a rationality success story. Chu kept a simple focus on what he was trying to achieve, and worked to maximise his chances.


There were a lot of pros at DeepMind. For example: Volodymyr Mnih, Andriy Mnih, Alex Graves, Koray Kavukcuoglu, Karol Gregor, Guillaume Desjardins, David Silver, and a bunch more I am forgetting.


It has content and communicates an approach to machine learning distinct from other approaches. It isn't like "big data", which is truly meaningless. However, deep learning is also not a single method or algorithm.

I would have described the library in question as a "GPU-Accelerated Neural Network library" since that is more descriptive.


Will high quality textbooks and the printing press make the medieval university go extinct?


Basically none. Numenta has yet to do anything that has impressed any researchers I know. Perhaps someday they will, but I am not counting on it.


NuPIC was released as free software, in part, because there has been a strong interest in the academic research community. I encourage you to check out the mailing list (http://lists.numenta.org/mailman/listinfo/nupic_lists.nument...) for some recent examples.


Generally, the most relevant academic community would be the NIPS community, and I have not noticed any Numenta papers at NIPS, but please point me toward any I have missed. I expect a lot of people have an opinion along these lines: http://developers.slashdot.org/comments.pl?sid=225476&cid=18... Don't get me wrong, I would love to see Numenta produce something of value for the ML community, but it doesn't look good so far.


Have those researchers released any source code?


Yes, plenty. Releasing source code is quite common in the ML community.

