I'm very familiar with how MCTS is used in AlphaGo and MuZero.
I'm not sure how you can say it's hidden in the details: the name of the paper is "Mastering the game of Go with deep neural networks and tree search."
It's also not an oversell on the deep learning component. Per the ablations in the AlphaGo paper, the no-MCTS Elo is over 2000, while the MCTS-only Elo is a bit under 1500. Combining the two yields an Elo of nearly 3000. So the deep learning system outperforms the MCTS-only system, and gets a significant further boost from using MCTS.
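To make the "combining the two" concrete: in AlphaGo-style systems, the network's policy prior and value estimate steer which branches MCTS expands, via a PUCT-style selection rule. The sketch below is illustrative only; the function and field names are my own, not from any paper's actual code, and the constants are arbitrary.

```python
import math

def puct_score(parent_visits, child_visits, child_value, prior, c_puct=1.5):
    """PUCT-style score: exploit the network's value estimate, and explore
    in proportion to the network's policy prior, discounted by visit count."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return child_value + exploration

def select_child(children, parent_visits):
    """During tree descent, pick the child that maximizes the PUCT score."""
    return max(
        children,
        key=lambda c: puct_score(
            parent_visits,
            c["visits"],
            c["value_sum"] / max(1, c["visits"]),  # mean value estimate
            c["prior"],
        ),
    )

# Toy usage: a well-visited decent move vs. an unexplored move the policy
# network likes. The unexplored, high-prior move wins the selection.
children = [
    {"move": "A", "visits": 10, "value_sum": 6.0, "prior": 0.3},
    {"move": "B", "visits": 0, "value_sum": 0.0, "prior": 0.6},
]
best = select_child(children, parent_visits=10)
```

The point of the rule is exactly the ablation result above: the search is only as good as the priors and values the network supplies, and the network alone plays worse than the two combined.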
The MuZero paper also does not hide the tree search; it is prominent in the figures and mentioned in captions, for example. It is not the main focus of the paper, though, so perhaps it isn't discussed as much as in the AlphaGo paper.
Well, I haven't read those papers since they came out, so I will defer to your evidently better recollection. It seems I formed an impression from what was going around on HN and in the media at the time, and I misremembered the content of the papers.
(Weirdly axe-grindy comment...)