Hacker News

[Disclaimer: I work for Siri; discount my enthusiasm accordingly]

For me, using Siri while holding my phone is the least compelling use case. I do like using it for alarms and timers, because recognition in that domain is quite reliable, and one instruction saves multiple taps.

Since touch navigation on a watch is less convenient than on a phone, some use cases become more convenient with voice than with touch, e.g. asking "Is it going to rain today?" before stepping out the door.

On HomePod, music is the obvious use case, which is unfortunately a rather difficult domain because of the wide variety of media names. I use commands like "Play some John Coltrane" and "Shuffle playlist Aggro", but also "Who's playing piano on this track?" (availability is a bit variable) or "What song is this?". Home control is also convenient.

With AirPods, music is again the obvious use case, but I also like using them for walking directions (because you can walk without constantly glancing at your phone). They also serve as a "poor man's CarPlay" in cars not equipped with a suitable media system (with transparency mode on AirPods Pro, I feel they are not an undue safety risk).

CarPlay is one of my favorite use cases, because I can navigate, listen to music, and listen to and respond to messages without taking my eyes off the road. When stuck in traffic, I also like asking what my ETA is.



Good to know. Since you work on Siri (Apple), I'd like to learn about and contribute to NLP as well.

1. Can you share your journey, if possible?

2. Is a PhD or master's necessary? How does the work at the company compare to the research one does as part of a PhD? (I have heard that one doesn't get the autonomy to research one's own subjects under a supervisor, but has to do what the supervisor says.)

3. Can you share any resources you use to learn, and anything about Siri's internal workings?

4. How do deadlines work in research? I currently work as a front-end developer, where we can roughly judge deadlines from an ETA, but how does it work in a research-oriented field, where one is unsure whether things can be delivered as per the requirements?

5. Where do you see the future of NLP going?

Thanks


1. I have pretty much a pure programming background. No previous experience with speech or machine learning when I was hired (but that was long ago).

2. The software engineers on the team have a variety of backgrounds. I think I may be the only PhD, and my subject was not relevant to the job. There are plenty of people with bachelor's degrees (for visa reasons, non-US employees tend to have higher degrees). Machine learning knowledge helps, but is not strictly required. For the data scientists, on the other hand, advanced degrees are a definite plus, and so is specialization in a relevant subject.

3. The most important thing to know about Siri's internal working is that we don't talk about Siri's internal working…

If you would like to learn something to improve your chances of working at Siri, a machine learning class (e.g. Ng's and/or Hinton's Coursera classes) would definitely help.

4. There is still a lot of engineering involved, and often by the time a formal schedule is worked out, the scientific discovery part has largely been solved. Sometimes features end up not working out and have to be pushed back. What helps for Siri is that a lot of the complex functionality is server side or in updatable assets, so iteration is a possibility.

5. That's above my pay grade, really. What I learned the past few years is never to bet against deep learning being able to tackle a particular problem, but I can't shake the feeling that we'll discover limits some day.


Thanks for the answer :)



