Camera Search (camerasearch.ai) is my iOS app for tradespeople and DIY users. It combines voice, video, image understanding, and chat, backed by a tuned LLM API, to help diagnose issues and guide builds and repairs in real time.
LOL, nice try sacks. Also wonderful to see our president doesn't let his ego get in the way of an action plan by plastering his name all over something he has nothing to do with.
My takeaway after wasting minutes of my life reading this: remove the meat slightly sooner and rest it for 5-6 minutes. That way the juice is retained and the internal temperature is on target when sliced.
I've been building a voice+vision AI assistant for tradesmen, industrial services, and DIY consumers for the last 8 months.
It streams vision, voice, text, and spatial data to a multimodal LLM to help users solve problems on the job. Basically a "Cursor" for tradesmen. Most of my work now has been figuring out what to put in the context window, when, and how, to get the most deterministic response.
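As a rough illustration of that context-window problem, here's a minimal sketch of greedily packing the most relevant multimodal items into a fixed token budget. Every name here is hypothetical; the app's actual internals aren't public.

```python
# Hypothetical sketch: assembling a context window for a multimodal assistant.
# Items (video frames, transcript snippets, spatial readings) carry a relevance
# score and a token cost; we pack the highest-relevance items under a budget.

def assemble_context(items, budget_tokens):
    """Greedily pick items by descending relevance until the budget is spent."""
    packed, used = [], 0
    for item in sorted(items, key=lambda i: i["relevance"], reverse=True):
        if used + item["tokens"] <= budget_tokens:
            packed.append(item)
            used += item["tokens"]
    return packed

items = [
    {"kind": "frame",      "relevance": 0.9, "tokens": 600},
    {"kind": "transcript", "relevance": 0.8, "tokens": 150},
    {"kind": "spatial",    "relevance": 0.4, "tokens": 900},
]
print([i["kind"] for i in assemble_context(items, budget_tokens=800)])
# → ['frame', 'transcript']  (the spatial item doesn't fit the budget)
```

Real systems would of course score relevance with embeddings or recency rather than fixed numbers, but the shape of the problem (which modality, when, at what cost) is the same.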
This assistant is a bridge between where we are now (little to no tech for these guys) and where the future will be (robotic automation). I think the in-between stage will last significantly longer than people realize, and I hope my app can provide value to these guys while they work. www.camerasearch.ai
Maybe it was missed, but in the beginning he said he was told the audience is mostly students about to enter the industry, so I feel like a lot of the talk is just establishing vocabulary, basic information about what LLMs are, analogies to help people wrap their heads around where in the workflow they can fit, and so on.
So while most of it seems obvious or relatively abstract, I think that's because of the target audience of the talk. I had that lens while watching, and while I can't say my worldview has changed much because of it, I understand it could be valuable to newer members of the ecosystem.
> Yeah, not sure I ever saw anything similar on HN before, feels very odd.
What exactly have you been seeing here on HN? I've been reading through most of the comments in this submission, since it was submitted yesterday, and none of it seems to be "fanboying" (maybe I misunderstand the term?) but discussions about where LLMs fit in the software development workflow.
Some people find some parts interesting, others obvious, others think he's selling something, others find the analogies lacking, but I've seen no "fanboy" comments like what parent seemed to exclusively see here.
It's been a multi-day conversation where multiple people are trying to obtain the transcripts, publish the text as gospel, and now the video. Like, yes, thank you, but, holy shit.
TL;DR:
Karpathy says we’re in Software 3.0: big language models act like programmable building blocks where natural language is the new code. Don’t jump straight to fully autonomous “agents”—ship human-in-the-loop tools with an “autonomy slider,” tight generate → verify loops, and clear GUIs. Cloud LLMs still win on cost, but on-device is coming. To future-proof, expose clean APIs and docs so these models (and coming agents) can safely read, write, and act inside your product.
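The "autonomy slider" plus generate → verify loop in the summary can be sketched in a few lines. This is just one possible reading of the idea; the function names and the threshold semantics are illustrative, not from the talk.

```python
# Hypothetical sketch of an "autonomy slider" wrapped around a
# generate -> verify loop. At low autonomy every draft goes to a human;
# at full autonomy, drafts that pass automated verification ship directly.

def run_step(generate, verify, autonomy):
    """Produce one draft and decide whether it ships or needs human review.

    autonomy: 0.0 (human reviews everything) .. 1.0 (fully autonomous
    when verification passes). Anything in between still routes to a human
    in this simplified model.
    """
    draft = generate()
    passed = verify(draft)
    if autonomy >= 1.0 and passed:
        return ("shipped", draft)
    # Keep the human in the loop: failed verification or partial autonomy.
    return ("needs_review", draft)

status, draft = run_step(
    generate=lambda: "patch-123",
    verify=lambda d: d.startswith("patch"),
    autonomy=0.3,
)
print(status)  # → needs_review
```

A real slider would be more granular (e.g. auto-applying only low-risk edits), but the core loop is the same: generate, verify cheaply, and let the human gate everything the verifier or the autonomy setting doesn't clear.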