
I think you're right: Agents are, at first, just an LLM wrapper (an app; it could even be a spreadsheet).

For me, the question of which protocols are used to communicate with the environment (MCP et al.) is just one question, and not even the most interesting one. Other questions better reveal why agents are "a different kind of software" and why they might vastly change how we think of and use software:

- Stochastic, not deterministic: evals/tests are crucial, and creating reliable systems is much harder (see the eval sketch after this list).

- Conversational, not form-based: this changes how we write software and what we need to know for great UX.

- Modalities, especially voice, change how we use computers (screens may become less important).

- Batch with occasional real-time, vs. interactive, might change how we feel that software works "for us".

- Different unit economics (inference costs are significant compared to traditional run-time costs) change how software can be marketed.

- Data-driven capabilities at every level may change value chains and dictate how agents can work (if data is the moat, will agents need to "go to" the data owner and be closely guarded in what they can extract/use? Much more than just traditional AAA).

- Agents can be implemented so that they get better "by themselves", because LLMs can be trained with data. Will model providers capture most of the value of specialized vertical solutions? Is code less valuable than data/LLMs in the end?

- Human-agent relationship: by definition, agents act on someone's behalf. This may again change how we interact with services/websites/content. Currently our personal systems are just like terminals. Will our interactions with services, websites, etc. be mediated by "our" personal agents, or will we continue to use the different services (and their agents) directly? Depending on that, the internet as we know it might change dramatically: services would deal more with agents than directly with humans.
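
A minimal sketch of what an eval for a stochastic system could look like, in Python; call_agent() and the 90% threshold are hypothetical placeholders, not a real API:

    import re

    def call_agent(prompt: str) -> str:
        """Hypothetical wrapper around an LLM/agent call."""
        raise NotImplementedError

    def eval_pass_rate(prompt: str, pattern: str, runs: int = 10) -> float:
        # Outputs vary from run to run, so sample several runs and
        # measure a pass rate instead of asserting exact equality once.
        hits = sum(bool(re.search(pattern, call_agent(prompt))) for _ in range(runs))
        return hits / runs

    # Example gate: require >= 90% of runs to contain the expected answer, e.g.
    # eval_pass_rate("What is 2+2?", r"\b4\b") >= 0.9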

Bottom line: Agents are just an LLM wrapper, but they have the potential to dramatically change a lot of things around software. That's what's interesting about them, in my view.


AFAIK, vLLM is for concurrent serving with batched inference for higher throughput, not single-user inference. I doubt its inference throughput on a single prompt at a time is higher than Ollama's. Update: this is a good intro to continuous batching in LLM inference: https://www.anyscale.com/blog/continuous-batching-llm-infere...
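
For illustration, a minimal sketch of batched offline generation with vLLM's Python API (the model choice and sampling settings are placeholders):

    from vllm import LLM, SamplingParams

    prompts = [
        "Explain continuous batching in one sentence.",
        "Why does batching raise GPU throughput?",
    ]
    sampling = SamplingParams(temperature=0.8, max_tokens=64)

    llm = LLM(model="facebook/opt-125m")  # placeholder model
    # The engine schedules all prompts together and batches them
    # continuously; that is where the throughput gains come from.
    outputs = llm.generate(prompts, sampling)
    for out in outputs:
        print(out.outputs[0].text)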


It is much faster on single prompts than Ollama; 3x is not unheard of.


The main costs are definitely not hosting, and they can be quite significant. MITRE had $2.37B revenue in 2023, most of it contributions. I don't know how much of that can be attributed to the CVE program, but I assume it's not an insignificant part: https://projects.propublica.org/nonprofits/organizations/422...


The superficial view: "they hallucinate"

The underlying cause: 3rd order ignorance:

3rd Order Ignorance (3OI)—Lack of Process. I have 3OI when I don't know a suitably efficient way to find out I don't know that I don't know something. This is lack of process, and it presents me with a major problem: If I have 3OI, I don't know of a way to find out there are things I don't know that I don't know.

-- not from an LLM

My process: use LLMs and see what I can do with them while taking their output with a grain of salt.


But the issue of the structural fault remains. Stating the phenomenon (hallucination) is not "superficial", since naming the root cause adds no value in this context.

Symptom: "Response was, 'Use the `solvetheproblem` command'". // Cause: "It has no method to know that there is no `solvetheproblem` command". // Alarm: "It is suggested that it is trying to guess a plausible world through lacking wisdom and data". // Fault: "It should have a database of what seems to be states of facts, and it should have built the ability to predict the world more faithfully to facts".


It's the age of thinking instead of doing. Thinking doesn't solve doing problems, but we can think and talk them away, or at least outsource the doing. -- Hmm, what an interesting thought. Let's think about it some more.


It's a matter of the relative value of types of production factors. Will AI increase or decrease the relative value of human labor compared to machinery, raw materials, and land? Beyond Adam Smith: what about social, cultural, and symbolic capital (Pierre Bourdieu)? My gut feeling: the median relative value of labor goes down, especially for knowledge workers; other factors go up, including social, symbolic, and cultural capital. Being in an in-group protects, e.g. in regulated professions. I expect regulation and group-thinking to go up as a protective measure.


K3s (a lightweight Kubernetes) has an embedded registry mirror (https://docs.k3s.io/installation/registry-mirror)
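
A minimal config sketch based on the linked docs (the mirrored registry names are examples; check the docs for details):

    # /etc/rancher/k3s/config.yaml -- enable the embedded registry mirror
    embedded-registry: true

    # /etc/rancher/k3s/registries.yaml -- registries to mirror between nodes
    mirrors:
      docker.io:
      registry.k8s.io: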


Can recommend hot water bottles and a hairdryer for occasional on-demand bed warming.


Yes, and before that, reading HN on a screen, which is my habit, unfortunately.


Just need to redshift the screen. But the intensity will not be enough, unfortunately.


Satellite images of Wuhan may suggest coronavirus was spreading as early as August 2019:

https://www.bbc.com/news/world-us-canada-52975934.amp

https://edition.cnn.com/2020/06/08/health/satellite-pics-cor...


Would that mean it was active earlier but a critical mass was needed to cause a pandemic? Or that it evolved in humans while circulating in Wuhan?

