Interesting article but, IMHO, completely impractical. Teaching the model specific content is exactly what you should not do. What you should do is teach the model how to retrieve the information effectively, retrying if the first attempt fails.
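Roughly what I have in mind, as a toy sketch. Everything here is a hypothetical stand-in: in practice the corpus would be a vector index, and the reformulate step would be an LLM rewriting the failed query.

    CORPUS = {
        "refunds": "Refunds are issued within 14 days of a return request.",
        "shipping": "Standard shipping takes 3-5 business days.",
    }

    def search(query: str) -> str | None:
        """Toy keyword lookup standing in for a real vector search."""
        for key, doc in CORPUS.items():
            if key in query.lower():
                return doc
        return None

    def reformulate(query: str) -> str:
        """Stand-in for asking the LLM to rewrite a failed query."""
        synonyms = {"money back": "refunds", "delivery": "shipping"}
        for phrase, term in synonyms.items():
            query = query.lower().replace(phrase, term)
        return query

    def retrieve(query: str, max_tries: int = 3) -> str | None:
        for _ in range(max_tries):
            hit = search(query)
            if hit is not None:
                return hit
            query = reformulate(query)  # retry with a rewritten query
        return None

    print(retrieve("how do I get my money back?"))

The point is the loop: a failed lookup triggers a query rewrite instead of giving up, which is a skill you can teach, unlike memorized content.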
We are finding that fine-tuning is very good at setting the style and tone of responses. A potential use case we are thinking about: what if your star salesperson leaves the company? Could you fine-tune an LLM on their conversations with customers and then run inference so it writes text in the style of that salesperson?
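A hedged sketch of the data prep, assuming the OpenAI-style chat fine-tuning format (one JSON object per line with a "messages" list); the conversation export and its field names are made up for illustration:

    import json

    conversations = [  # hypothetical export of the rep's chat logs
        {"customer": "Do you offer volume discounts?",
         "rep": "Great question! For orders over 50 units we can..."},
    ]

    with open("style_finetune.jsonl", "w") as f:
        for turn in conversations:
            example = {"messages": [
                {"role": "system", "content": "You write like our top sales rep."},
                {"role": "user", "content": turn["customer"]},
                {"role": "assistant", "content": turn["rep"]},
            ]}
            f.write(json.dumps(example) + "\n")

Each example pairs a real customer message with the rep's actual reply, so the tuned model picks up phrasing and tone rather than facts.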
We are also adding function calling, so the model knows to reach out to an external API and fetch data before generating a response.
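Something like this, as a sketch. The JSON-schema tool definition follows the common OpenAI-style shape, but get_order_status and the faked tool call below are stand-ins; in real use the model emits the call and you feed the result back before it writes its reply.

    import json

    tools = [{
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Look up the shipping status of an order.",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string", "description": "Order number"},
                },
                "required": ["order_id"],
            },
        },
    }]

    def get_order_status(order_id: str) -> dict:
        """Stand-in for the external API the model reaches out to."""
        return {"order_id": order_id, "status": "shipped"}

    # In real use the model emits this tool call; we fake one here.
    tool_call = {"name": "get_order_status",
                 "arguments": json.dumps({"order_id": "A123"})}

    dispatch = {"get_order_status": get_order_status}
    result = dispatch[tool_call["name"]](**json.loads(tool_call["arguments"]))
    print(result)  # goes back to the model before it generates its answer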
I think fine-tuning makes sense when you need some domain-specific knowledge to properly read, analyze, and interpret the information you're passing in. But it's not an information store itself.
The most valuable things an LLM can have are good reasoning skills and a knowledge base broad enough to understand what you give it. From there you can pass it the important bits it needs in the prompt.