Unless I'm misreading something, the linked paper uses one-hot encoding to represent each card, not any learned embedding -- unless I'm misunderstanding what you mean by "representation learning"?
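
For anyone skimming, the distinction is roughly this (a minimal PyTorch sketch; the set size and embedding dim are made up):

```python
import torch
import torch.nn.functional as F

NUM_CARDS = 300  # hypothetical set size

card_id = torch.tensor([42])  # some card's index in the set

# One-hot: a fixed, sparse indicator vector; no notion of card
# similarity exists or gets learned.
one_hot = F.one_hot(card_id, num_classes=NUM_CARDS).float()  # shape (1, 300)

# Learned embedding: a trainable dense vector; after training,
# similar cards can end up close together in embedding space.
embedding = torch.nn.Embedding(NUM_CARDS, 64)  # 64-dim, hypothetical
dense = embedding(card_id)  # shape (1, 64), updated by backprop
```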
I hadn't seen this -- this is awesome! You'd think, given the volume of data available, that this type of method would outperform an LLM. Cool results.
Still, there are some fun things about LLM representations -- you can give the bots preferences / a personality in a system prompt, which is entertaining!
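
E.g. something like this (a sketch against the OpenAI Python client; the model name and persona are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The system prompt is where the persona lives; the user message is
# whatever draft state you'd feed the bot each pick.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a stubborn mono-red drafter. "
         "You force aggro every draft and mock slow decks."},
        {"role": "user", "content": "Pack 1, pick 3. The pack contains: ..."},
    ],
)
print(response.choices[0].message.content)
```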
The best-performing draft AIs I've seen leverage representation learning in some form.
See: https://arxiv.org/pdf/2107.04438.pdf
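
To make "representation learning" concrete: a common pattern (not necessarily the linked paper's exact setup) is to learn card embeddings as a side effect of training a pick predictor on human draft logs. A rough sketch, with all names and sizes hypothetical:

```python
import torch
import torch.nn as nn

class DraftPickModel(nn.Module):
    """Score each card in the pack given the cards already picked."""

    def __init__(self, num_cards: int, dim: int = 64):
        super().__init__()
        # These embeddings are the learned representation: training on
        # human picks pushes synergistic cards toward similar vectors.
        self.card_emb = nn.Embedding(num_cards, dim)

    def forward(self, pool: torch.Tensor, pack: torch.Tensor) -> torch.Tensor:
        # pool: (batch, n_picked) card ids; pack: (batch, n_in_pack) card ids
        pool_vec = self.card_emb(pool).mean(dim=1)   # (batch, dim)
        pack_vecs = self.card_emb(pack)              # (batch, n_in_pack, dim)
        # Score = compatibility of each pack card with the current pool.
        return torch.einsum("bd,bnd->bn", pool_vec, pack_vecs)

# Trained with cross-entropy against the human's actual pick index,
# e.g. on public draft logs like those from 17Lands.
```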