The comments were from genuine people answering the survey questions.
The YC Startup School videos had a concept called the 'hair-on-fire' problem. There are many lonely people in the world whose hair-on-fire problem is loneliness, and they are willing to try things to make it go away.
I think my app managed to tap into their needs, and honestly I am more glad that they have found it somewhat useful than that they shared their locations with my app.
Hmm...
Such a strong password requirement for a casual app, and no Google sign-up integration. I click on the user and see the submission "Check username availability ..." Okay. Thanks.
This seems very desirable. Though by the time self-attention first became popular, it was already available in PyTorch. Python seems to have the edge simply because lots of people use it; maybe it is just a matter of time and users. I will probably wait until the ecosystem gets larger and then switch to it. (Yes, I am too lazy to implement a transformer from scratch.)
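To be concrete about what I mean by "already available": self-attention is essentially a one-liner in PyTorch (a minimal sketch; the dimensions here are just illustrative):

```python
import torch

# Self-attention is built in; no need to implement it from scratch.
attn = torch.nn.MultiheadAttention(embed_dim=64, num_heads=8, batch_first=True)

x = torch.randn(2, 10, 64)    # (batch, sequence length, embedding dim)
out, weights = attn(x, x, x)  # query = key = value, i.e. self-attention
print(out.shape)              # torch.Size([2, 10, 64])
```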
It is quite interesting that LeCun is very critical when it comes to GPT from OpenAI. The same arguments could be made about the current deep learning paradigm and convolutional nets, but you don't see any criticism from him on that front. Look at his arguments when he is tweet-debating with Gary Marcus.
A very small fraction of experimental science is about just doing something and seeing what happens without a strong prior. You are expected to do your homework first. As a scientist, your role is typically to read and understand all the relevant prior work in the area, and then use that theory to derive a hypothesis for the outcome of your experiment. If you are lucky, the way the experiment unfolds might fall outside your understanding; provided your grasp of the prior theory was correct, you have made a new scientific discovery, which will in turn be used to create new theory.
The element of uncertainty in this is typically very small, and the process is very far from guessing your way through hoping to find something new (whatever you find by such a method will most likely be either already known or incorrect).
Yes, I agree with you; next to your explanation, my comment looks dull. Of course, I don't advocate doing something without any hypothesis either. Still, I think hands-on practice with a subject can help you learn it better. For example, I find it very useful for linear algebra (maybe this doesn't apply to other subjects): I can analyze my flawed intuitions.
When I first heard about the Monty Hall problem, I didn't understand it, so I tried it myself. It was much easier for me to understand the flawed intuition by analyzing each line of the simulation than by reading, say, Judea Pearl's explanation (which is also good).
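The kind of simulation I mean looks roughly like this (a sketch of the idea, not my original code):

```python
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car, pick = random.choice(doors), random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print(sum(play(False) for _ in range(trials)) / trials)  # ~1/3 if you stick
print(sum(play(True) for _ in range(trials)) / trials)   # ~2/3 if you switch
```

Stepping through the host's constrained choice is what finally exposed my flawed intuition.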
What I wanted to emphasize is that it is not bad to guess your way through a problem you don't understand. But yes, of course, you should have some background knowledge.
I wish I could say I have been doing science already; not yet (hopefully someday). I was referring to programming problems, like a recent one where I could see that it boiled down to asking whether a second vector could be written as a linear combination of an input set of vectors. I was able to spot the connection, but I'm not convinced I would have managed it without the mathematics under my belt to spot the pattern.
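Concretely, that check reduces to comparing matrix ranks (a sketch with made-up vectors, not the actual problem I was solving):

```python
import numpy as np

vectors = np.array([[1.0, 0.0, 2.0],
                    [0.0, 1.0, 1.0]])  # rows are the input set of vectors
target = np.array([2.0, 3.0, 7.0])     # here 2*v1 + 3*v2, so it is in the span

# target is a linear combination of the rows iff adding it leaves the rank unchanged
in_span = np.linalg.matrix_rank(np.vstack([vectors, target])) == \
          np.linalg.matrix_rank(vectors)
print(in_span)  # True
```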
Regardless, I plan to keep learning until the day I drop, science or not.
Mathematics is the exploration of the a priori. Historically, some axioms have been deemed more real than others; I think Gauss rejected non-Euclidean geometry, for instance. But this point of view has changed. With abstract algebra, and I believe also computer science, modern mathematics is about exploring connections between structures as they emerge from stipulated axioms and rules of inference. It is science in the sense that ideas and hypotheses can be tested experimentally, but a proof requires more than non-falsification. Then there is the complication of potentially irreducible computational problems, where essentially a kind of mining of the computational space is the only way forward. This is the new kind of science Stephen Wolfram speaks of.
If you look at the network structure, it acts as one agent, not five, so coordination comes for free. (See: https://t.co/GPKHPsIu1C)
In my opinion, what I see is a very good player that knows how to chain stuns precisely, without any strategic depth. If you claim to have built an AI system that you ultimately want to evolve into AGI, you would at least expect some sort of strategic decision-making at the macro level. Still, since it has almost perfect micro, it can easily outplay most teams. So yeah, with that expectation, I see this as a joke too.
P.S. The model is trained with 128k CPUs and 256 GPUs, and it is able to play 180 years' worth of games in a day. Think about it.
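To put that in perspective (my arithmetic, not from the article): 180 years per day is roughly 180 × 365 ≈ 65,700 times real time, i.e. about 65,700 hours of game experience for every hour of wall-clock training.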
It's five independent agents. The article on OpenAI's website and the network structure both say this. I'll zoom in since it's a complicated structure.
They use a hyperparameter called team spirit to cooperate. I don't think the goal of this is AGI at all, so I don't see why people are making that leap. But sure, for the geniuses of HN this must clearly be trivial.
They're not independent agents. The neural networks have the same input, share weights, and also share some activations. With that much sharing, it's better to think of it as one neural network with output heads for all five players, so coordination comes for free. In fact, "coordination" here makes about as much sense as saying that the neurons in a network are cooperating, or that the two legs of a humanoid cooperate to walk. Further, there is no game being played between heroes of the same team; they literally have the same objective. The "coordination" buzzword is just another attempt by OpenAI to confuse and mislead readers and give a false sense of their progress.
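To illustrate the reading I have in mind (a toy sketch, emphatically not OpenAI's actual architecture; the sizes and names are made up):

```python
import torch
import torch.nn as nn

class SharedTeamNet(nn.Module):
    """One trunk, five output heads: the 'single agent' view of the team."""
    def __init__(self, obs_dim: int, action_dim: int, n_players: int = 5):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(128, action_dim) for _ in range(n_players)
        )

    def forward(self, obs):
        h = self.trunk(obs)                      # shared weights and activations
        return [head(h) for head in self.heads]  # one action head per hero

net = SharedTeamNet(obs_dim=32, action_dim=4)
actions = net(torch.randn(1, 32))
print(len(actions))  # 5 action outputs from a single forward pass
```

Asking whether the five heads "cooperate" is like asking whether the layers of the trunk cooperate: the question dissolves once you see it as one network.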
They cannot share the same inputs unless the team spirit hyperparameter is exactly 1, which it is not. You are partially correct in that each agent consumes the parameters of the four other agents, but they are weighted differently according to the team spirit parameter.
The team spirit hyperparameter is a crutch they introduced themselves. Ideally it should be 1: in Dota there is only one objective for the entire team, and that is to win the game. The fact that they shape rewards is an implementation detail and doesn't change the fact that Dota 2 does not require cooperation, because there is no cooperation game being played. It's a purely zero-sum adversarial game between two teams.
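For reference, team spirit as OpenAI describes it just linearly blends each hero's reward with the team average, so at 1 the per-hero distinction disappears entirely (a sketch with made-up numbers):

```python
def blend_rewards(rewards, team_spirit):
    # team_spirit = 0: fully selfish; team_spirit = 1: everyone gets the team mean
    team_mean = sum(rewards) / len(rewards)
    return [(1 - team_spirit) * r + team_spirit * team_mean for r in rewards]

raw = [1.0, 0.0, 0.0, 0.0, 0.0]  # only one hero got the kill
print(blend_rewards(raw, 0.0))   # [1.0, 0.0, 0.0, 0.0, 0.0]
print(blend_rewards(raw, 1.0))   # [0.2, 0.2, 0.2, 0.2, 0.2]
```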