Why I am working on Artificial General Intelligence (medium.com/career-pathing)
17 points by chegra on Oct 1, 2013 | 10 comments


My sister and I are working on a problem like that: https://medium.com/p/3fdccaab0956

I believe the term is Artificial Genuine Intelligence, meaning an AI that can pass a Turing test. Not too long ago I made an alpha test in PHP/MySQL using a thesaurus and released the code on GitHub.

We are working on a specific plan for how to solve this problem. It is modeled after how the human brain should work, versus how the human brain fails to achieve genuine intelligence in most people and ends up with general intelligence instead. It is important to note the difference.

For example, we are trying to set up a conscience (ignore my sister's poor spelling in her Medium post) https://en.wikipedia.org/wiki/Conscience that filters out bad thoughts that may harm someone. We are trying to use logic, reason, and critical thinking, but haven't yet devised any data structures or algorithms for those.

We take the term Artificial Genuine Intelligence from here: http://www.sciencedirect.com/science/article/pii/00016918699...

Artificial General Intelligence is the sort of AI that won't pass a Turing test. We take "general intelligence" and "genuine intelligence" from neuroscience and psychology. Most people don't know the difference.


> It is a technology risk. This means that if I solve it, I wouldn’t have to worry about finding customers. VCs like technology risk.

This is wrong (with all due respect to Steve Blank in his video; I don't really disagree with that video in context). Technology VCs hate technology risk. Biotech can get funded with 10-15 year lag times because the market risk is so lopsidedly low, with patents, very concrete information on disease incidence rates, and solid metrics on what likely drug prices can be, among other factors. Technology VCs would call his AGI a "science project" and dismiss it. The only reason he can sort of make this statement is that he's assuming the technology risk is gone when he says "if I solve it," but that doesn't mean VCs don't hate technology risk. Even if he were successful in building an AGI, there would still be market risk: if he can get it done, what's the probability (a VC might ask) that Google or IBM won't have beaten him to the punch?

The confusion arises because "technology risk" and "market risk" are not anchored to any particular value; they are relative to the speaker's norms, and so each of these parties is anchoring them differently. When Steve Blank talks about technology risk, he's not talking about high technology risk as in building-an-AGI hard; he's talking about "can we build enterprise software that automatically handles some highly complex business process that we aren't QUITE sure is automatable."


AGI is a very weird field, in that approximately half the practitioners are getting experimental results, half are getting formal mathematical results, and half are complete crackpots outside their own field of Narrow AI expertise. These attributes can even appear in the same person, though Ray Kurzweil and Google are famous for seeming more crackpot-y than many for essentially claiming that a sufficiently large deep-learning or neural-network algorithm will at some point develop sapience ;-).

Which is obviously wrong. Everyone knows it will develop sapience and then develop a fetish for small office implements and destroy humanity.

Ok, to be serious, the guys who are actually doing Real Research into this sort of thing are, IMHO, Juergen Schmidhuber and Marcus Hutter. They are getting formal results in what they call the "new formal science" of "Universal Artificial Intelligence", and their insights into UAI are then leading them back to insights into Narrow AI and Machine Learning. Notice how they keep producing publications in reputable journals, keep getting awards for their papers, and keep getting major research grants? That's called results, and it's what shows they're onto something.
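
For a flavor of what those formal results look like: the centerpiece of Hutter's theory is the AIXI agent, which (writing this from memory, so my notation may differ from his) picks each action by Solomonoff-weighting every computable environment consistent with the history so far:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here U is a universal Turing machine, q ranges over environment programs, \ell(q) is a program's length in bits, and a, o, r are actions, observations, and rewards; the 2^{-\ell(q)} factor is a Solomonoff-style prior that favors simpler environments. AIXI itself is incomputable, which is exactly why the interesting work is in its computable approximations.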

They actually wrote a textbook on the subject, but it is, unfortunately, well beyond my level of background knowledge right now. I would recommend it, however, to anyone who thinks that AGI is going to be as cheap and easy as throwing lots and lots of machines into a single gigantic machine-learning cluster.

On the upside, their algorithms can play a game of Pac-Man. Someone ought to enter them in a Procedural Super Mario competition. But overall, the old dream of "Strong AI" is not a matter of just coming up with the One True Algorithm and crossing the finishing line to victory. Even for the researchers smart enough to see the eventual shape of the finished product, there are lots of intermediate steps still to be solved -- even though we now have a better idea of what they are than before.


Thank you eli_gottlieb. Do you have a link to where I can buy their book, and a link to their website?

My AGI project is stalled and this level of AI is beyond my understanding right now, and I have to find a better source to read. I've been using an old AI book from Radio Shack that my father-in-law gave me just before he died in 2002.

I think it was designed for the Tandy 1000 series he used to have, with examples in BASIC or Prolog, but I have been trying to convert them to different programming languages. The problem is my wife cleaned up my stuff and I cannot find it anymore. I think this was it: http://www.amazon.com/Understanding-Artificial-Intelligence-... but I am not 100% sure, so I have been guessing.

I got sick and became disabled in 2002, and then my father-in-law died of cancer while I was in a hospital almost dying myself. I wanted to finish the AI project he wanted me to do for him, but I've been sick and in way over my head.


My disclaimer is: I am NOT an AI researcher. I don't have the mathematical background yet.

As to Schmidhuber and Hutter, Google them. This is their book: http://www.amazon.com/Universal-Artificial-Intelligence-Algo...

If you can't understand the math in that book, then you are basically not going to do better in the formal UAI field than the crackpots have done for decades. I mean no disrespect, but nobody has actually discovered an easier-to-understand theory of UAI that gets equivalently good results.

A book from 1986 is definitely obsolete, and it definitely covers Narrow AI rather than AGI. The General/Universal AI field in its modern form dates to roughly the early-to-mid 2000s (2003-ish is when AIXI was published in the Journal of Machine Learning, and they got their own conference in 2005... which was kinda crackpotish).

On the other hand, to be encouraging rather than discouraging, one of the things about the AI/Machine Learning field is that you can discover far less than "ahaha, talking robots now!" and still have a useful discovery. A* Heuristic Search was a useful discovery that powers a huge fraction of modern video-game AI, even though it will never take over the world.
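
To make that concrete, here is a minimal A* sketch in Python on a 4-connected grid (the function names and toy example are mine, just for illustration):

    import heapq

    def a_star(start, goal, neighbors):
        """A* search: best-first search ordered by cost-so-far plus a
        heuristic estimate of remaining cost (Manhattan distance here)."""
        def h(p):
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
        seen = set()
        while frontier:
            _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            for nxt in neighbors(node):
                if nxt not in seen:
                    # f = g + h: actual cost so far plus estimated cost to go
                    heapq.heappush(frontier,
                                   (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
        return None

    # Toy usage: an open 4-connected grid (walls omitted for brevity).
    def neighbors(p):
        x, y = p
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    print(a_star((0, 0), (3, 2), neighbors))

The heuristic is what makes it "A*" rather than plain Dijkstra: as long as it never overestimates the remaining cost, the first path popped at the goal is optimal.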

For instance, I read a blog post yesterday about writing improved "rock paper scissors" bots and came up with a nice little model of strategic "I know you know I know" Sicilian Reasoning that I scrawled out into a Reddit post.
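
The gist of that model, as a rough Python sketch of my own (not the actual Reddit post's code): a level-k player assumes its opponent reasons at level k-1 and picks the counter-move, so each extra level adds one more "I know you know":

    import random

    BEATS = {'rock': 'scissors', 'paper': 'rock', 'scissors': 'paper'}
    COUNTER = {v: k for k, v in BEATS.items()}  # move -> what beats it

    def level_k_move(opponent_history, k):
        """Level 0 predicts the opponent repeats their last move; each
        higher level assumes the opponent reasons one level below it."""
        if not opponent_history:
            return random.choice(list(BEATS))
        prediction = opponent_history[-1]      # level-0 prediction
        for _ in range(k):
            prediction = COUNTER[prediction]   # "I know you know..."
        return COUNTER[prediction]

    # A level-2 player responding to an opponent who just threw rock:
    print(level_k_move(['rock'], 2))

Note the cycle: because rock-paper-scissors counters form a 3-cycle, level k+3 plays identically to level k, which is why the "I know you know" regress bottoms out in practice.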


Have you developed a specific plan to tackle this problem? ;)


For how many generations have we said that Artificial General Intelligence will be "solved in our lifetime"? It seems like it is always a couple of decades away.

But if the definition is narrowed, there might be parts of the field that can be solved.


> For how many generations have we said that Artificial General Intelligence will be "solved in our lifetime"?

Ah, but it isn't the same "we".

60 years ago, I guess your basic computing researcher thought human-level AI was just around the corner. And why not? Suddenly there was this wonderful machine that could do amazing feats of reasoning and computation in an eyeblink. And computers were constantly being improved; who knew what they would be capable of in a few more years?

Nowadays, it's mainly Ray Kurzweil and a few others like him. And ... well, they're basically paid to say it. I could get a group of 1000 people together and give them a speech about how 2040 will be much like today, except for cooler phones, more expensive gas, and faster pizza delivery, and they'll all be bored and go home disappointed. Get Ray K. in front of the same group telling them the future is going to be indescribably different, and they're interested. Some people do write-ups on the speech. It gets discussed. It's something you hear about. And Ray K. is the one who gets invitations to other speaking engagements.

In short, when you hear that super-AI is on the horizon, remember that, however far-fetched it might be, it is an interesting thought. The idea that it's a long way off is not nearly so catchy. (See also memes, etc.)


It is a matter of building data structures and algorithms to teach a computer the meaning of words, and then using logic about how those words fit together, along with parts of speech. Then make ones for logic, reason, and critical thinking, and then a conscience to screen out any bad 'thoughts'. (Yes, you have to make a data structure for thoughts, and algorithms for them as well, and then a conscience function to determine if they are good or evil: 'slice loaf of bread' is good, 'slice finger off person' is evil.)
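
As a rough illustration of what I mean, here is a minimal Python sketch (the Thought structure, verb list, and harm list are all invented for illustration, not a finished design):

    from dataclasses import dataclass

    HARMABLE = {'person', 'finger', 'animal'}   # invented harm list

    @dataclass
    class Thought:
        verb: str    # e.g. 'slice'
        obj: str     # e.g. 'loaf of bread'

    def conscience(thought: Thought) -> bool:
        """Return True if the thought passes the filter (is 'good')."""
        destructive = thought.verb in {'slice', 'cut', 'destroy'}
        targets_harmable = any(w in HARMABLE for w in thought.obj.split())
        return not (destructive and targets_harmable)

    print(conscience(Thought('slice', 'loaf of bread')))      # True  (good)
    print(conscience(Thought('slice', 'finger off person')))  # False (evil)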

All I've been able to do is use a thesaurus to paraphrase words at random. This Artificial Genuine Intelligence will need a thesaurus database to keep track of words and the words that are like them, for fuzzy-logic comparisons. https://github.com/orionblastar/blastarparaphrase

My PHP/MySQL code is just an alpha-test proof of concept, and I need help with it, as it just does random replacements.
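
The random-replacement approach is roughly this (a minimal Python sketch of my own, with a toy in-memory dictionary standing in for the MySQL thesaurus table; it is not the actual repo code):

    import random

    # Toy thesaurus; the real project keeps this in a MySQL table.
    THESAURUS = {
        'happy': ['glad', 'cheerful', 'content'],
        'big':   ['large', 'huge', 'sizable'],
    }

    def paraphrase(sentence):
        """Replace each word with a random synonym when one is known."""
        words = sentence.split()
        return ' '.join(random.choice(THESAURUS.get(w, [w])) for w in words)

    print(paraphrase('the big dog is happy'))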

Someone I know has made some AI chatbots with Ruby, Python, etc., over here: http://subbot.org/

They too need work, but the source code is open and you can look at it and contact the developer.


> It is a matter of building data structures and algorithms to teach a computer the meaning of words, and then using logic about how those words fit together, along with parts of speech. Then make ones for logic, reason, and critical thinking, and then a conscience to screen out any bad 'thoughts'.

This describes something an AGI would be able to do. This is nowhere near an accurate definition of an AGI.

I'm about to write a top-level comment on this subject pointing to the actual current science on the subject, so go read ;-).



