As someone who has also hand-coded a neural network implementation (forward and backprop, plus an RNN) in C, yeah, "raw numpy" is a joke.
Something I've always hated about people who say "why do I have to write backprop when TF does it for me?"
Here's why: you go to a company, and they want you to incorporate machine learning into their C++ engine. Have fun using numpy there. You said you knew machine learning, right? Implement backprop for me; you can do that, right?
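For anyone who hasn't done it, "implement backprop" really is just the chain rule written out by hand. Here's a minimal sketch of a two-layer network trained on XOR with hand-derived gradients in numpy; the architecture, data, and learning rate are all made up purely for illustration:

```python
import numpy as np

# Toy data: learn XOR with a tiny two-layer network (made-up sizes/hyperparameters).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Mean squared error loss, kept simple on purpose.
    loss = np.mean((out - y) ** 2)

    # Backward pass: the chain rule, written out by hand.
    d_out = 2 * (out - y) / len(X)          # dL/d(out)
    d_z2 = d_out * out * (1 - out)          # through the sigmoid
    dW2 = h.T @ d_z2
    db2 = d_z2.sum(axis=0, keepdims=True)
    d_h = d_z2 @ W2.T
    d_z1 = d_h * (1 - h ** 2)               # through the tanh
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0, keepdims=True)

    # Plain gradient descent.
    lr = 0.5
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(loss, out.ravel())
```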
I had the same outlook, except in my case it was for a game I'm currently developing.
And for those reading this who are interested in game AI and in integrating real machine learning into their characters, I hope this helps keep you inspired.
So, like you, I had grandiose ideas and a vision of an adaptive, machine-learning approach to controlling a character in a platform-based fighting game. The characters would learn from previous mistakes and improve themselves, creating a truly interactive AI, one that could challenge the player with more than a memorized, pre-programmed state machine or inhuman reaction times. Then I ran into the same issues: the game has to run at 60 fps, and constant online learning just doesn't fit in that budget. So I implemented a basic scripted AI instead, but I know every way it will act; nothing about it is surprising.
Then one day I went back and looked at my AI approach and realized I could still use machine learning, just with a smaller neural network and a different learning algorithm. So I implemented a combination of reinforcement learning and evolutionary learning and let the agents train for a day. Then something amazing happened, and it's what I imagine a parent feels when their kid learns to do something: it saved itself. Early on, the AI would just spam buttons and usually end up jumping off the platform and killing itself, or it would stray away from the edge and never touch the control stick. But this time it got knocked off and it recovered back onto the stage.
It was an amazing feeling. I never taught it to do that, but I gave it the ability to learn to do that, and that was extremely liberating.
So I encourage people not to give up on ML-based AI for video games. DeepMind recently teamed up with Blizzard to work on a StarCraft II AI, and that looks awesome.
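The comment above doesn't give implementation details, but a setup along those lines might look roughly like the sketch below: a tiny fixed-topology policy network that's cheap enough to evaluate once per frame, with weights improved between matches by a simple (1+1) evolutionary hill climb. Everything here (the observation/action sizes, the dummy fitness function, the mutation scale) is a hypothetical stand-in, not the author's actual code:

```python
import numpy as np

rng = np.random.default_rng(42)

OBS_DIM, HIDDEN, ACT_DIM = 8, 16, 4     # hypothetical state/action sizes

# Fixed batch of "game situations" standing in for actual matches.
test_obs = rng.normal(size=(32, OBS_DIM))
target_actions = rng.integers(0, ACT_DIM, size=32)  # pretend "correct" moves

def init_policy():
    return {
        "W1": rng.normal(scale=0.3, size=(OBS_DIM, HIDDEN)),
        "W2": rng.normal(scale=0.3, size=(HIDDEN, ACT_DIM)),
    }

def act(policy, obs):
    """Cheap enough to call once per frame at 60 fps."""
    h = np.tanh(obs @ policy["W1"])
    return int(np.argmax(h @ policy["W2"]))

def fitness(policy):
    """Stand-in for playing matches; in the real game this would be
    damage dealt, stocks taken, survival time, and so on."""
    return sum(act(policy, o) == t for o, t in zip(test_obs, target_actions))

def mutate(policy, scale=0.05):
    return {k: v + rng.normal(scale=scale, size=v.shape) for k, v in policy.items()}

# Simple (1+1) evolutionary hill climb, run offline between matches.
best = init_policy()
best_score = fitness(best)
for generation in range(300):
    child = mutate(best)
    score = fitness(child)
    if score >= best_score:
        best, best_score = child, score

print("best fitness:", best_score, "/", len(test_obs))
```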
Humans are not as magically creative as they'd like to think.
If I ask you to think of a random number, you don't just pull it out of thin air. It can be based on tens to hundreds of things:
-Should I pick a really low or a really high number?
-People always use round numbers that end in 0 or 5; maybe I shouldn't do that, or maybe I should, to make it seem more genuine.
-What other large "random" numbers have I heard?
-I remember seeing a number recently; maybe try a modification of that.
-You used {x} as a random number last time; go with something similar?
All of this adds up in the split-second thought you have when I ask you to think of a random number. The very same thing goes into all creative work: the output is a function of the input.
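To make "the output is a function of the input" concrete, here's a deliberately silly, entirely made-up sketch of a "human-style" random number picker: a few biases applied deterministically to whatever context happens to be at hand, so the same inputs always produce the same "random" pick:

```python
def human_random_number(recent_numbers, last_pick, lo=1, hi=100):
    """Toy model: a 'random' pick as a deterministic function of context.
    The specific biases here are invented purely for illustration."""
    # Start from something recently seen, if anything.
    candidate = recent_numbers[-1] if recent_numbers else (lo + hi) // 2

    # Nudge it so it doesn't look like a straight copy.
    candidate += 7

    # Avoid round numbers ending in 0 or 5; they "feel" less random.
    if candidate % 5 == 0:
        candidate += 2

    # Avoid repeating (or sitting too close to) the previous pick.
    if last_pick is not None and abs(candidate - last_pick) < 3:
        candidate += 11

    # Keep it in range.
    return lo + (candidate - lo) % (hi - lo + 1)

# Same inputs -> same "random" number, every time.
print(human_random_number(recent_numbers=[42], last_pick=37))
```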
Yes, but this function you defined, which applies tens of judgements to select a "random" number, is itself based on a random input. The random part is just the seed; it then passes through various neural nets that expand on it and turn it into a plausible answer.
Randomness is injected into all brain processes because biological neurons are stochastic, so there is some amount of randomness mixed into everything the brain does.
Some neural nets (VAEs and normalizing flows, for example) can map real images into a Gaussian distribution and back. That means they disentangle the image into a set of roughly independent factors, each mapped onto a standard normal variable. Any set of random numbers can then be converted back into an image by running the process in reverse.
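As a very rough linear analogue of that idea (a real VAE or normalizing flow is learned and nonlinear; this is just PCA whitening on toy data), here's a sketch of mapping "images" into roughly standard-normal coordinates and mapping arbitrary random numbers back the other way:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 500 samples of 16 correlated pixels (no real image data here).
latent = rng.normal(size=(500, 4))
mixing = rng.normal(size=(4, 16))
images = latent @ mixing + 0.05 * rng.normal(size=(500, 16))

# Forward direction: center and whiten with PCA, so each coordinate is
# approximately independent with unit variance (i.e. roughly standard normal).
mean = images.mean(axis=0)
centered = images - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scale = S / np.sqrt(len(images) - 1)
whitened = centered @ Vt.T / scale

print(whitened.std(axis=0)[:4])   # ~1.0 in each coordinate

# Reverse direction: any point in the Gaussian-like space maps back to "image" space.
z = rng.normal(size=16)            # arbitrary random numbers
fake_image = z * scale @ Vt + mean
print(fake_image.shape)            # (16,)
```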
> Yes, but this function you defined, which applies tens of judgements to select a "random" number, is itself based on a random input. The random part is just the seed; it then passes through various neural nets that expand on it and turn it into a plausible answer.
On what basis do you make this claim? Humans are empirically terrible random number generators. If you ask someone to pick a random number, the result is far from random. Our biases are large and obvious, so it seems faulty to claim that our "seed" number is in any way truly random.
> Randomness is injected into all brain processes because biological neurons are stochastic, so there is some amount of randomness mixed into everything the brain does.
There's also some amount of randomness in what happens when you drop a rock, but the net result is largely the same: it falls down. The fact that there is some randomness in a process does not mean that the randomness is actually driving the process.
> Some neural nets (VAEs and normalizing flows, for example) can map real images into a Gaussian distribution and back. That means they disentangle the image into a set of roughly independent factors, each mapped onto a standard normal variable. Any set of random numbers can then be converted back into an image by running the process in reverse.
You're acting like the brain is some kind of simple algorithm; more goes into a painting or a composition than a bit of simple logic. A composer is not sitting at her piano at 2 a.m. going, "Hm, I like round numbers, so I might make this note an F because it's the fourth note in the C major scale."
I might be wrong, but from my understanding we don't even really understand how neural nets are able to make certain decisions or generate certain pictures yet, correct?
> You're acting like the brain is some kind of binary computer
It's not binary but it is a computer. The alternative is to believe in magic.
> we don't even really understand how a computer is able to make certain decisions or generate certain pictures yet, correct?
I don't think that's correct, no. We understand how the process works. We may not understand the weights a specific neural net ends up with, but that's just a matter of having too much data to deal with. Similarly, we don't "understand" how a web page ends up with a specific PageRank. We understand the process, but we can't manually reproduce the result because it's just too much data.
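To illustrate the PageRank analogy: the process itself fits in a few lines. This is a generic power-iteration sketch on a tiny made-up link graph, not Google's production system; the part we can't "understand" is the resulting scores over billions of pages, not the procedure.

```python
import numpy as np

# Tiny made-up link graph: links[j] is the list of pages that page j links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = len(links)
damping = 0.85

# Column-stochastic transition matrix: column j spreads page j's rank to its outlinks.
M = np.zeros((n, n))
for j, outlinks in links.items():
    for i in outlinks:
        M[i, j] = 1.0 / len(outlinks)

# Power iteration: repeatedly apply the PageRank update until it converges.
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * (M @ rank)

print(rank / rank.sum())  # every step here is understandable; the web-scale result isn't
```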