No, again, what we've discovered is that it only does well in constrained environments with well-defined rules and a lot of data, or in subjective tasks where it can afford a lot of inaccuracy without looking totally stupid. It literally can't generalise to some basic logic problems, just as AlphaZero is not going to be cooking your meals anytime soon.
If that's fine by you, fantastic. But if you compare what we've got so far to something intelligent, rather than seeing it as more of a calculator, then you are totally misrepresenting it.