
A. Most people still think Google search is good. B. Unless you work at Google, specifically on that search team, I'm going to say you don't know what you're talking about. So we can safely throw that point away.

I've implemented a natural language search using bleeding-edge work, and I can assure you the results are impressive.
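
For the curious, here's a minimal sketch of what one version of this can look like (embedding-based retrieval); the model and documents below are illustrative assumptions on my part, not the actual system:

    import numpy as np
    from sentence_transformers import SentenceTransformer

    # Embed documents once; embed each query at search time and rank by
    # cosine similarity. The model choice is just a common default.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    docs = [
        "How to reset a forgotten password",
        "Quarterly revenue report for Q3",
        "Team offsite travel itinerary",
    ]
    doc_vecs = model.encode(docs, normalize_embeddings=True)

    def search(query, k=2):
        q = model.encode([query], normalize_embeddings=True)[0]
        scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
        top = np.argsort(-scores)[:k]
        return [(docs[i], float(scores[i])) for i in top]

    print(search("I can't log in to my account"))

The idea is that the query matches on meaning rather than keywords: "can't log in" should retrieve the password-reset document despite sharing almost no words with it.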

Everything from route planning to spam filtering has seen major upgrades thanks to ML in the last 8 years. Someone mentioned the Zoom backgrounds; beyond that there's image generation and the field of image processing in general, document classification, translation, recommendations, malware detection, code completion. I could go on.

No one promised me AGI, so I don't know what you're on about; that certainly wasn't the promise billed to me when things thawed out this time. But the results have pretty undeniably changed a lot of the tech we use.




Why would you discount someone who has been measuring the relevance of search results, and only accept information from a group of people who don't use the system? You are making the mistake of identifying the wrong group as experts.

You may have implemented something that impressed you, but when you moved that solution into real use, were others as impressed?

That's what is probably happening with the Google search team: a lot of impressive demos, pats on the back, metrics being met, but it falls apart in production.

Most people don't think Google's search is good. Most people on Google's team probably think it's better than ever. Those are two different groups.

Spam filtering may have had upgrades, but it is not really better for them, and in many cases it is worse.


Maybe because a single anecdote isn't really useful to represent billions of users? They have access to much more information.

I did put it into real use, and the answer was still a hard yes.


One of DeepMind's goals is AGI, so it is tempting to evaluate their publications for progress towards AGI. Problem is, how do you evaluate progress towards AGI?

https://deepmind.com/about

"Our long term aim is to solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence (AGI)."


AGI is a real problem, but the proposed pace is marketing fluff; on the ground they're just doing good work and moving our baselines incrementally. If a new technique for, say, document translation is 20% cheaper and easier to build and 15% more effective, that is a breakthrough. It is not a glamorous, world-redefining breakthrough, but progress is more often than not incremental; I'd say more so than the big eureka moments.

Dipping into my own speculation on your point about how to measure: between our (humanity's) superiority complex and the way we keep moving the baselines, I don't know if people will acknowledge AGI unless and until it's far superior to us. Even if an average adult-level intelligence is produced, I can see a bunch of people just treating it poorly and telling the researchers it's not good enough.

Edit: And maybe I should amend my original statement to say I've never heard a researcher promise me AGI. That said, that statement from DeepMind doesn't really promise anything beyond the fact that they're working towards it.


Shane Legg is a cofounder of DeepMind and an AI researcher. He was pretty casual about predicting human-level AGI in 2028.

https://www.vetta.org/2011/12/goodbye-2011-hello-2012/

He doesn't say so publicly anymore, but I think that is due to people's negative reactions. I don't think he changed his opinion about AGI.



