
Those are great, I would watch any one of those movies. Maybe even the "Across the Indiana-Verse" one where they are all pulled into a single dimension.


I'm sympathetic in general, but in this case:

"You will need git, XCode tools, CMake and libomp. Git, CMake and libomp can be installed via Homebrew"

That really doesn't seem like much. Was there more to it than this?

Edit: I tried it myself and the cmake configure failed until I ran `brew link --force libomp`, after which it could start to build, but then failed again at:

    [ 55%] Building CXX object src/CMakeFiles/solvespace-core.dir/bsp.cpp.o
    c++: error: unknown argument: '-Xclang -fopenmp'


It’s pretty astounding to me that this aspect of MCP is not mentioned more. You’re putting a LOT of trust in both the model and the system prompt when you start attaching MCPs that provide unfettered access to your file system, or connect up to your REST APIs’ POST endpoints.

(That being said, I have to admit I’ve been writing my own powerful but extremely dangerous tools as an experiment (e.g. run arbitrary Python code on my machine, unsandboxed), and the results have been incredibly compelling.)
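For the curious, the tool handler itself is embarrassingly small. Something like this sketch (the names are mine, and I've left out the MCP/function-calling plumbing entirely):

    import subprocess
    import sys
    import tempfile

    def run_python(code: str, timeout: int = 30) -> str:
        """Hypothetical tool handler: run model-supplied Python,
        unsandboxed, and return whatever it printed. Exactly as
        dangerous as it sounds."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout + result.stderr

The model decides what goes in `code`; nothing stands between that and your machine.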


https://www.caranddriver.com/features/a23319884/lightning-la...

The GT-R appears three times, at 75, 76, and 93. The Mustang Dark Horse, Audi RS3, and Supra are all about $60K.


Lightning Lap is great and I'm glad C&D does it.

You have to read it with some context in mind, though. The Audi RS3 only scores that high because they had a factory option to ship it with 60 Treadwear track day tires.

On more standard performance tires it dropped to around the 150 mark on the chart.

Fun car, but I wouldn't put it in the same league for track performance. Put those same 60 Treadwear track day tires on the GT-R, Mustang, or Supra and they'd all jump up the list too.


That makes sense, I was surprised to see the RS3 that high on the list.


Pyodide has numpy, scipy, matplotlib, pandas, and as of last month, even polars. https://github.com/pyodide/pyodide/pull/5282

Pyodide is far from a perfect stand-in for CPython, and even the packages it includes often have limitations you won't hit when running natively. But there's definitely enough here to be interesting and even somewhat useful. Here's an interactive app written on Pyodide that uses astropy, numpy, and matplotlib: https://shinylive.io/py/examples/#orbit-simulation
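If you just want to kick the tires without an app framework, the Pyodide console at pyodide.org is enough. Roughly this works entirely in the browser (micropip is Pyodide's mechanism for loading wheels at runtime, and the packages here are ones I happen to know it ships):

    # In the Pyodide console, top-level await is available.
    import micropip
    await micropip.install(["numpy", "astropy"])  # loaded as wasm wheels

    import numpy as np
    from astropy import units as u

    print((1.3 * u.pc).to(u.lyr))   # ~4.24 lyr, computed in the browser
    print(np.fft.rfft(np.sin(np.linspace(0, 20, 256))).shape)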


I agree that retrieval can take many forms besides vector search, but do we really want to call it RAG if the model is directing the search using a tool call? That seems like an important distinction to me, and the name "agentic search" makes a lot more sense IMHO.


Yes, I think that's RAG. It's Retrieval Augmented Generation - you're retrieving content to augment the generation.

Who cares if you used vector search for the retrieval?

The best vector retrieval implementations are already switching to a hybrid between vector and FTS, because it turns out BM25 etc. is still a better algorithm for a lot of use cases.

"Agentic search" makes much less sense to me because the term "agentic" is so incredibly vague.


I think it depends who "you" is. In classic RAG the search mechanism is preordained: the search is done up front and the results are handed to the model pre-baked. I'd interpret "agentic search" as anything where the model potentially has a collection of search tools that it can decide how best to use for a given query, so the search algorithm, the query, and the number of searches are all under its own control.


Exactly. Was the extra information pushed to the model as part of the query? It’s RAG. Did the model pull the extra information in via a tool call? Agentic search.
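In code the distinction is just who calls the search function. A sketch, where `generate` / `generate_with_tools` stand in for whatever model client you're using:

    from typing import Callable

    def search(query: str, top_k: int = 5) -> list[str]:
        """Stand-in retrieval function (BM25, vector, hybrid, whatever)."""
        return [f"doc {i} about {query}" for i in range(top_k)]

    def classic_rag(question: str, generate: Callable[[str], str]) -> str:
        # Push: we run the search up front and bake the results into the prompt.
        context = "\n".join(search(question))
        return generate(f"Context:\n{context}\n\nQuestion: {question}")

    def agentic_search(question: str, generate_with_tools) -> str:
        # Pull: the model gets `search` as a tool and decides whether,
        # how, and how many times to call it.
        return generate_with_tools(question, tools={"search": search})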


That's far clearer. Yes.


This is a really useful definition of "agentic search", thanks.


Care to elaborate? I’ve never used it but have heard good things from colleagues who have.


Lock in.


What lock in?

I use their AI SDK, but never touch Vercel servers. It's just a unified interface.


The SDK is the lock in.


Same as any other open source framework or library.

Calling that "lock in" is a stretch, but you're free to write everything from scratch if that's the way you roll.



I prefer Anthropic's models but ChatGPT (the web interface) is far superior to Claude IMHO. Web search, long-term memory, and chat history sharing are hard to give up.


There are several high-level web application frameworks for Python that are based around this concept. It's particularly useful for data-oriented apps that let the user tweak parameters and update (arbitrarily complex) calculations and visualizations in response. I personally work on one called Shiny (https://shiny.posit.co/py/) but there are others including Reflex.dev and Solara.dev.

(I haven't looked at Reaktiv beyond the readme, but it's clearly based on the same concepts; it's "only" the reactive primitives, though, and doesn't provide the rest of the stack the way the frameworks I mentioned do.)
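If the pattern is unfamiliar: the shared primitive is a value container that records which computations read it and re-runs them when it changes. A bare-bones sketch of the idea, not Shiny's or Reaktiv's actual API:

    class Signal:
        """Minimal reactive value: tracks which effects read it and
        re-runs them when it is set."""
        _active_effect = None

        def __init__(self, value):
            self._value = value
            self._subscribers = set()

        def get(self):
            if Signal._active_effect is not None:
                self._subscribers.add(Signal._active_effect)
            return self._value

        def set(self, value):
            self._value = value
            for fn in list(self._subscribers):
                fn()

    def effect(fn):
        """Run fn once, recording every Signal it reads; re-run on changes."""
        def run():
            Signal._active_effect = run
            try:
                fn()
            finally:
                Signal._active_effect = None
        run()
        return run

    # The print re-runs whenever the input changes.
    radius = Signal(1.0)
    effect(lambda: print("area =", 3.14159 * radius.get() ** 2))
    radius.set(2.0)  # automatically prints the recomputed area

Frameworks like Shiny layer rendering and widgets on top of that core, so changing a slider invalidates exactly the calculations and plots that depended on it.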

