Those are great, I would watch any one of those movies. Maybe even the "Across the Indiana-Verse" one where they are all pulled into a single dimension.
"You will need git, XCode tools, CMake and libomp. Git, CMake and libomp can be installed via Homebrew"
That really doesn't seem like much. Was there more to it than this?
Edit: I tried it myself and the CMake configure step failed until I ran `brew link --force libomp`, after which the build started, but then failed again at:
It’s pretty astounding to me that this aspect of MCP is not mentioned more. You’re putting a LOT of trust in both the model and the system prompt when you start attaching MCPs that provide unfettered access to your file system, or connect up to your REST API’s POST endpoints.
(That being said, I have to admit I’ve been writing my own powerful but extremely dangerous tools as an experiment (e.g. run arbitrary Python code on my machine, unsandboxed) and the results have been incredibly compelling.)
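For concreteness, here's a deliberately minimal sketch of the kind of tool I mean: it executes whatever Python the model hands it, with no sandbox at all. This is a hypothetical shape, not any particular MCP server's API, and it is exactly as dangerous as it looks:

```python
import io
import contextlib

def run_python(code: str) -> str:
    """Execute model-supplied code and capture whatever it prints.

    No sandbox, no allowlist, no resource limits: the model gets the
    same access to this machine that you have.
    """
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {"__name__": "__tool__"})
    return buf.getvalue()
```

Wire that up as a tool and the model can read your files, hit your network, anything. Which is why the results are so compelling, and also the point of the trust concern above.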
You have to read it with some context in mind, though. The Audi RS3 only scores that high because they had a factory option to ship it with 60 Treadwear track day tires.
On more standard performance tires it dropped to around the 150 mark on the chart.
Fun car, but I wouldn't put it in the same league for track performance. Put those same 60 Treadwear track day tires on the GT-R, Mustang, or Supra and they'd all jump up the list too.
Pyodide is far from a perfect CPython, and even the packages it includes often have limitations you won't find when running natively. But there's definitely enough here to be interesting and even somewhat useful. Here's an interactive app written with Pyodide that uses astropy, numpy, and matplotlib: https://shinylive.io/py/examples/#orbit-simulation
I agree that retrieval can take many forms besides vector search, but do we really want to call it RAG if the model is directing the search using a tool call? That seems like an important distinction to me, and the name "agentic search" makes a lot more sense IMHO.
Yes, I think that's RAG. It's Retrieval Augmented Generation - you're retrieving content to augment the generation.
Who cares if you used vector search for the retrieval?
The best vector retrieval implementations are already switching to a hybrid between vector and FTS, because it turns out BM25 etc. is still a better algorithm for a lot of use-cases.
"Agentic search" makes much less sense to me because the term "agentic" is so incredibly vague.
I think it depends who "you" is. In classic RAG the search mechanism is preordained: the search is done up front and the results are handed to the model pre-baked. I'd interpret "agentic search" as anything where the model has potentially a collection of search tools that it can decide how to use best for a given query, so the search algorithm, the query, and the number of searches are all under its own control.
Exactly. Was the extra information pushed to the model as part of the query? It’s RAG. Did the model pull the extra information in via a tool call? Agentic search.
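The push/pull distinction fits in a few lines of Python. This is an illustrative sketch only: `search()`, `llm()`, and `plan_searches()` are hypothetical stand-ins for the retrieval step, the model call, and the model's tool-call decisions, not any real framework's API:

```python
def search(query: str) -> str:
    """Stand-in retrieval step (vector search, BM25, hybrid, whatever)."""
    return f"<documents matching {query!r}>"

def llm(prompt: str) -> str:
    """Stand-in model call."""
    return f"<answer based on: {prompt[:60]}...>"

def plan_searches(question: str) -> list[str]:
    """Stand-in for the model choosing its own queries via tool calls."""
    return [question, question + " (reformulated)"]

def classic_rag(question: str) -> str:
    # Push: the search is preordained and done up front; the results
    # are baked into the prompt before the model sees anything.
    context = search(question)
    return llm(f"Context:\n{context}\n\nQuestion: {question}")

def agentic_search(question: str) -> str:
    # Pull: the model decides which searches to run, and how many,
    # through tool calls, then answers from the transcript.
    transcript = [f"Question: {question}"]
    for query in plan_searches(question):
        transcript.append(search(query))
    return llm("\n".join(transcript))
```

Same retrieval primitive in both cases; the difference is purely who controls when and how it runs.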
I prefer Anthropic's models but ChatGPT (the web interface) is far superior to Claude IMHO. Web search, long-term memory, and chat history sharing are hard to give up.
There are several high-level web application frameworks for Python that are based around this concept. It's particularly useful for data-oriented apps that let the user tweak parameters and update (arbitrarily complex) calculations and visualizations in response. I personally work on one called Shiny (https://shiny.posit.co/py/) but there are others including Reflex.dev and Solara.dev.
(I haven't looked at Reaktiv beyond the README, but it's clearly based on the same concepts, though it provides "only" the reactive primitives and doesn't provide the rest of the stack like the frameworks I mentioned do.)
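To show what I mean by "reactive primitives", here's a toy signal/computed pair in the spirit of these frameworks. To be clear, this is my own minimal sketch, not Reaktiv's or Shiny's actual API:

```python
_active_computed = []  # stack of Computed values currently evaluating

class Signal:
    """A mutable value that tracks which computed values read it."""
    def __init__(self, value):
        self._value = value
        self._dependents = set()

    def get(self):
        # Reading inside a Computed registers that Computed as a dependent.
        if _active_computed:
            self._dependents.add(_active_computed[-1])
        return self._value

    def set(self, value):
        self._value = value
        for dep in self._dependents:
            dep.invalidate()

class Computed:
    """A derived value, lazily recomputed when a dependency changes."""
    def __init__(self, fn):
        self._fn = fn
        self._cached = None
        self._stale = True

    def invalidate(self):
        self._stale = True

    def get(self):
        if self._stale:
            _active_computed.append(self)
            try:
                self._cached = self._fn()
            finally:
                _active_computed.pop()
            self._stale = False
        return self._cached

price = Signal(10)
qty = Signal(3)
total = Computed(lambda: price.get() * qty.get())
total.get()  # 30; reading registers the dependencies
qty.set(4)   # invalidates total
total.get()  # recomputed lazily: 40
```

The frameworks layer the rest on top: effects that push recomputed values out to a UI, scheduling, and so on. The appeal for those parameter-tweaking data apps is that you declare the calculation graph once and invalidation propagates for you.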