The thing I really want to get working is retrieval-augmented generation: answering questions based on a blob of context that I pass in, and doing good-enough summarization.
I haven't quite proved this to myself yet but I think it's going to work pretty well.
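The core of that pattern is mostly prompt assembly: stuff the retrieved context into the prompt alongside the question. A minimal sketch (the function name and prompt wording are my own, not any particular library's API):

```python
def build_rag_prompt(chunks: list[str], question: str) -> str:
    """Assemble a RAG prompt: retrieved context chunks followed by the question."""
    context = "\n\n".join(f"[{i + 1}] {chunk}" for i, chunk in enumerate(chunks))
    return (
        "Answer the question using only the context below. "
        "If the answer isn't in the context, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# The assembled prompt would then go to whatever completion API you use,
# e.g. answer = complete(build_rag_prompt(chunks, question))  # hypothetical call
```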
Do a search, then re-order the results based on some criterion. Easy when the criterion is easy to code, less so when it isn't. But it turns out LLMs are pretty good at interpreting re-ranking instructions.
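The search-then-re-rank pattern can be sketched like this. Everything here is a stand-in: in practice the scorer would be an LLM call that rates each result against the instruction; a trivial keyword scorer is substituted so the sketch actually runs:

```python
from typing import Callable

def rerank(results: list[str], instruction: str,
           score: Callable[[str, str], float]) -> list[str]:
    """Re-order search results by score, highest first. For fuzzy criteria,
    the scorer would be an LLM prompted with the re-ranking instruction."""
    return sorted(results, key=lambda r: score(instruction, r), reverse=True)

# Stand-in scorer: counts instruction words that appear in the result.
# A real implementation would ask an LLM to rate relevance instead.
def keyword_score(instruction: str, result: str) -> float:
    return sum(w in result.lower() for w in instruction.lower().split())

results = ["a post about dogs", "recipes for pasta", "quick pasta recipes"]
top = rerank(results, "pasta recipes", keyword_score)
```

The point is that `rerank` doesn't care where the scores come from, so swapping the keyword scorer for an LLM call changes nothing structurally.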