Love to hear it. It's true that for some searches you might not notice a difference, but for complex code examples, reasoning, and debugging Expert mode does seem to be much better. We quietly launched Expert mode a few days ago on our Discord but are now telling the broader HN community about it.
We're working on making all of our searches the same quality as Expert mode while being much faster.
I'm definitely giving this a try sometime soon. Back when GPT-3 was the only game in town, I had the idea of using LLM-generated embeddings as part of a search ranking function. I'm betting that's roughly how Expert mode works, right?
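To make the idea concrete, here's a toy sketch of what I mean: blend embedding similarity into an existing ranking score. The embeddings, keyword scores, and the `alpha` weight are all made-up stand-ins, not anything I know about how Expert mode actually works.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(query_emb, docs, alpha=0.7):
    # docs: list of (doc_id, keyword_score, doc_embedding).
    # Final score is a weighted blend of embedding similarity and
    # whatever the classic keyword ranker produced.
    scored = [
        (doc_id, alpha * cosine(query_emb, emb) + (1 - alpha) * kw)
        for doc_id, kw, emb in docs
    ]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

With `alpha` near 1 you trust the embeddings; near 0 you fall back to plain keyword ranking.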
Edit: Just had another thought. You could use the output of a normal search algorithm to feed the LLM targeted context, which it could then use to come up with a better answer than it would without the extra background. Yeah, I like that.
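Something like this, say: stitch the top search hits into a grounding prompt before handing the question to the model. The result format and field names here are assumptions for illustration, and the actual LLM call is elided.

```python
def build_prompt(query, results, max_docs=3):
    """Stitch top search results into a grounding prompt for an LLM."""
    context = "\n\n".join(
        f"[{i + 1}] {r['title']}\n{r['snippet']}"
        for i, r in enumerate(results[:max_docs])
    )
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
```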
Although, I will say I asked it about writing a lisp interpreter in Python, because I was just tooling around with such a thing a little while ago for funsies. It essentially pointed me to Peter Norvig's two articles on the subject, which, unfortunately, both feature code that either doesn't run at all or doesn't run correctly. I was disappointed.
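For what it's worth, the core of the thing is small. Here's a minimal sketch in the spirit of those articles, just enough to evaluate arithmetic s-expressions; it's nowhere near a full interpreter (no define, lambda, conditionals, etc.):

```python
import operator

# Global environment: symbol -> builtin function.
ENV = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def tokenize(src):
    # Pad parens with spaces so split() separates them from atoms.
    return src.replace('(', ' ( ').replace(')', ' ) ').split()

def parse(tokens):
    # Consume one expression from the token list (mutates it).
    tok = tokens.pop(0)
    if tok == '(':
        expr = []
        while tokens[0] != ')':
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ')'
        return expr
    try:
        return float(tok)   # number atom
    except ValueError:
        return tok          # symbol atom

def evaluate(x, env=ENV):
    if isinstance(x, str):        # symbol: look it up
        return env[x]
    if not isinstance(x, list):   # number: self-evaluating
        return x
    fn = evaluate(x[0], env)      # list: apply head to evaluated args
    args = [evaluate(a, env) for a in x[1:]]
    return fn(*args)
```

So `evaluate(parse(tokenize("(+ 1 (* 2 3))")))` gives `7.0`.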