Reinforcement Learning is basically carrots and sticks, and the hard problem is credit assignment. Did I get hit with the stick because I said 5 plus 3 is 8? Or because I wrote my answers in green ink? Or... That used to be what RL was. Sutton & Barto talk about "modern reinforcement learning" and introduce "Temporal Difference Learning", but imo the book is a bit of a rummage through GOFAI. Is the recent innovation with LLMs perhaps to use feedback to generate prompts? Talking about RL in this context does seem like an attempt to freshen up interest: "Look! LLMs version 4.0! Now with added Science!"
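For anyone who hasn't seen it, the TD idea is small enough to fit in a few lines. A minimal sketch of TD(0) on the classic 5-state random walk from Sutton & Barto (variable names and constants are my own choices, not from the book):

```python
import random

# TD(0) on a 5-state random walk: start at state 2, step left/right
# uniformly at random; state 4 is terminal with reward 1, state 0 with 0.
random.seed(0)
alpha, gamma = 0.1, 1.0
V = [0.0] * 5  # value estimates, one number per state

for episode in range(1000):
    s = 2
    while s not in (0, 4):
        s_next = s + random.choice([-1, 1])
        r = 1.0 if s_next == 4 else 0.0
        target = r if s_next in (0, 4) else r + gamma * V[s_next]
        # Credit assignment happens here: the TD error nudges V[s]
        # toward a bootstrapped estimate of what follows it.
        V[s] += alpha * (target - V[s])
        s = s_next

print([round(v, 2) for v in V])  # interior values drift toward 0.25, 0.5, 0.75
```

The point is that blame/credit propagates backwards one step at a time, without ever waiting for the full episode's outcome.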
Haha, that's funny. I'm so used to reading RL papers that when the blog linked to a textbook about RL, I just filled in Sutton & Barto without clicking the link or thinking any further about it.
The other criticism I have is that the historical importance of RLHF to ChatGPT is sort of sidelined: at the beginning, the author pinpoints something like the rise of agents as the start of RL's influence on language modelling. In fact, the first LLM to attain widespread success was ChatGPT, and the secret sauce was RLHF. There's no need to start the story as late as 2023-2024.
I would encourage everyone to read Sutton and Barto directly. It's the best technical book I've read in the past year. Though if you're trying to minimize the math, the first edition is significantly simpler.
It was a good read on the concept, but I'm left unsatisfied by the hand-waving over the details. Like how, physically, is the reinforcement actually saved? Is it a number in a file? What is the math behind the reward mechanism? What variables are changed and saved? What is the literal deliverable when you serve this to a client?
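To partly answer my own question: in the simplest tabular case, it really does come down to numbers in a file. A toy sketch (everything here is my own made-up example; real deep-RL systems store neural-network weights instead of a table, but the principle is the same):

```python
import json
import random

# Tabular learning on a toy 2-armed bandit: the "reinforcement" is
# literally a table of floats, and the deliverable is that table on disk.
random.seed(0)
alpha = 0.1
Q = {"arm_a": 0.0, "arm_b": 0.0}  # the entire learned state: two numbers

def pull(arm):
    # Assumed environment: arm_b pays off more often than arm_a.
    p = 0.8 if arm == "arm_b" else 0.3
    return 1.0 if random.random() < p else 0.0

for _ in range(2000):
    arm = random.choice(list(Q))     # explore uniformly
    r = pull(arm)                    # scalar reward from the environment
    Q[arm] += alpha * (r - Q[arm])   # "the reward mechanism": update a number

with open("q_table.json", "w") as f:
    json.dump(Q, f)                  # "saved" = serialized floats

print(Q)
```

So the variables that change are the entries of Q (or, at scale, millions of network weights), and what you serve to a client is the saved table/weights plus the code that reads them to pick actions.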