Hacker News | babyshake's comments

This reminds me of a thought I have had that the future of education may involve a type of "school for robots" where the human students are the teachers. I am sure Neal Stephenson and others thought of this same thing decades ago, but it seems closer to becoming a reality.


Would you say the same for Mastra? If so, what would you say indicates a high quality candidate when they are discussing agent harnessing and orchestration?


As a LangChain hater + Mastra lover with 20+ years of coding experience and coding awards to my name (which I don't care about, I only mention it for context), I somewhat take issue.

LangChain is `left-pad` -- a big waste of your time, and Mastra is Next.js -- mostly saving you infrastructure boilerplate if you use it right.

But I think the primary difference is that Python is a very bad language for agent/LLM stuff. What you want is a static type system, streaming, isomorphic code, and a strong package-management ecosystem, and Python is bad at all of these. And if for some ungodly reason you had to do it in Python, you'd avoid LangChain anyway so you could bolt on strong shim layers to fix Python's shortcomings in a way that won't break when you upgrade packages.
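To make the typing point concrete, here's a minimal sketch. The `StreamEvent` union and `streamCompletion` are invented for illustration, not from Mastra or any real SDK; it's just the kind of thing the compiler gives you for free in TypeScript:

```typescript
// Illustrative only: a hypothetical typed stream of LLM events.
type StreamEvent =
  | { type: "text_delta"; text: string }
  | { type: "tool_call"; name: string; args: unknown }
  | { type: "done"; stopReason: "end_turn" | "max_tokens" };

// Stand-in for a real SDK call; yields events as they arrive.
async function* streamCompletion(prompt: string): AsyncGenerator<StreamEvent> {
  yield { type: "text_delta", text: `echo: ${prompt}` };
  yield { type: "done", stopReason: "end_turn" };
}

async function run(): Promise<void> {
  for await (const event of streamCompletion("hi")) {
    switch (event.type) {
      case "text_delta":
        process.stdout.write(event.text); // narrowed: `text` is known to exist
        break;
      case "tool_call":
        console.log("would dispatch tool", event.name, event.args);
        break;
      case "done":
        console.log("\nstop reason:", event.stopReason);
        break;
    }
  }
}

run();
```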

Yes, I know there's LangChain.js. But at that point you might as well use something that isn't a port from Python.

> what would you say indicates a high quality candidate when they are discussing agent harnessing and orchestration?

Anything that shows they understand exactly how data flows through the system (because at some point you're gonna be debugging it). You can even do that with LangChain, but then all you'd be doing is complaining about LangChain.


> And if for some ungodly reason you had to do it in Python

I literally invoke sglang and vllm in Python. Unless you're using them over the network, you're supposed to drive the two fastest inference engines there are from Python.


Python being a very bad language for LLM stuff is a hot take I haven’t heard before. Your arguments sound mostly like personal preferences that apply to any problem, not just agentic / LLM.

If we're going to throw experience around: after 30+ years of coding, I really don't care too much anymore, as long as it gets the job done and doesn't get in the way.

LangChain is ok, LangGraph et al I try to avoid like the plague as it’s too “framework”-ish and doesn’t compose well with other things.


I used to write web apps in C++, so I totally understand not caring if it gets the job done.

I guess where I draw the line is that LLMs are inherently random I/O, so you have to treat them like UI or the network: you really have no idea what garbage is going to come in, and you have to be defensive if you're going to build something complex. Otherwise you as a programmer will not be able to understand or trust it, and you will get hit by Murphy's law when you take off your blinders. (If it's simple, or a prototype nobody is counting on, obviously none of this matters.)

To me, insisting that stochastic inputs be handled in a framework that provides strong typing guarantees is not too different from insisting your untrusted sandbox be written in a memory-safe language.
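Concretely, I mean something like this minimal sketch (assuming zod; the `Ticket` schema and helper are made up, not Mastra or LangChain APIs). It's the same posture you'd take with an untrusted request body:

```typescript
import { z } from "zod";

// Hypothetical schema for whatever structure you asked the model to produce.
const TicketSchema = z.object({
  title: z.string().min(1),
  priority: z.enum(["low", "medium", "high"]),
  labels: z.array(z.string()).default([]),
});

type Ticket = z.infer<typeof TicketSchema>;

// Treat the raw model output like any other untrusted input.
function parseModelOutput(raw: string): Ticket {
  let json: unknown;
  try {
    json = JSON.parse(raw); // the model may not even return valid JSON
  } catch {
    throw new Error("model returned non-JSON output");
  }
  const result = TicketSchema.safeParse(json);
  if (!result.success) {
    // Reject and retry/log, exactly as you would for a bad request body.
    throw new Error(`model output failed validation: ${result.error.message}`);
  }
  return result.data; // from here on, the compiler knows the shape
}
```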


What do static type systems provide you with that, say, structured input/output using pydantic doesn't?

I just don’t follow your logic of “LLMs are inherently random IO” (ok, I can somehow get behind that, but structured output is a thing) -> “you have to treat them like UI / network” (ok, yes, it’s untrusted) -> static typing solves everything (how exactly?)

This just seems like another “static typing is better than dynamic typing” debate which really doesn’t have a lot to do with LLMs.


He says it's bad for agents, not 'LLM stuff'. Python is fine for throwing tasks at the GPU; it is absolutely dreadful at any real programming. So if you want to write an agent that _uses_ LLMs and the like, there are much better languages for performance, safety, and your sanity.


So the argument boils down to "untyped languages are dreadful for real programming"?


I'm not familiar with it. My first question would be: Are there any prominent projects that use it?

A lot of these frameworks are lauded, but if they were as good as they claim, you would run into them in all sorts of apps. The only agents that I ever end up using are coding agents; I think they're obviously the most popular implementations of agents. Do they use LangChain? No, I don't think so. They probably use in-house logic because it's just as easy and gives them more flexibility and fewer dependencies.


I have been experimenting with this same type of factory-pattern skill. Thanks for sharing.


After a session with Claude Code I just tell it "turn this into a skill, incorporate what we've learned in this session".


There is a deluge, every day. Just nobody notices or uses them.


A good system prompt goes a long way with the latest models. Even something as simple as "use DRY principles whenever possible", or prompting a plan-implement-evaluate cycle, gets pretty good results, at least for tasks AI is well trained on, like CRUD APIs.
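For example (illustrative only; the prompt wording is just a sketch of the plan-implement-evaluate idea, not something I've benchmarked):

```typescript
// Illustrative system prompt encoding a plan-implement-evaluate loop.
const SYSTEM_PROMPT = `
You are a senior engineer working on a CRUD API.
- Use DRY principles whenever possible.
- Before writing code, produce a short numbered plan.
- After implementing, re-read the diff and list anything that deviates from the plan.
`.trim();
```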


Aside from speed, what would the major selling points be on migrating from pnpm to bun?


A couple of points from this I'm trying to understand:

- Is the idea that MCP servers will provide tool use examples in their tool definitions (a sketch of what that could look like is below)? I'm assuming this is the case, but the announcement doesn't seem to be explicit about it, I assume because Anthropic wants to at least maintain the appearance that the MCP steering committee is independent from Anthropic.

- If there are tool use examples and programmatic tool calling (code mode), it could also make sense for tools to specify example code so the codegen step can be skipped. And I'm assuming the reason this isn't done is just that it's a security disaster to instruct a model to run code specified by a third party that may be malicious or compromised. I'm just curious if my reasoning about this seems correct.
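For reference, this is roughly what I have in mind. The name/description/inputSchema fields follow the usual MCP tool shape, but the `examples` field is purely hypothetical, not something the announcement specifies:

```typescript
// Sketch of an MCP-style tool definition carrying usage examples.
const searchIssuesTool = {
  name: "search_issues",
  description: "Search the issue tracker",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string" },
      limit: { type: "number" },
    },
    required: ["query"],
  },
  // Hypothetical: few-shot examples the client could surface to the model.
  examples: [
    { input: { query: "login bug", limit: 5 }, note: "plain keyword search" },
    { input: { query: "label:regression" }, note: "structured query syntax" },
  ],
};
```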


If it was example code, it wouldn't let codegen be skipped; it would just provide guidance. If it was a deterministically applied template, you could skip codegen, but that is different from an example, and it probably doesn't help with what codegen is for (you'd then just be moving canned code from the MCP server to the client, offering the same thing you get from a tool call with a fixed interface).


One aspect the report is very vague about is the nature of the monitoring Anthropic is doing on Claude Code. If they can detect attacks they can surely detect other things of interest (or value) to them. Is there any more information about this?


The rhetoric you see in some places about how social assistance is used on hair weaves says something about the underlying reasons for much of this concern.


Remember the only reason we have school lunch programs in the US at all is because the Black Panthers started a free breakfast program for black children in the 70s and the government wanted to undermine the political and propaganda power the Black Panthers had gained through that and other social programs. So the government created its own, then Reagan underfunded it.


No, that is not true. The first school lunch programs started with private initiatives in the 1890s. The first major federal program for student lunches was the National School Lunch Program enacted in 1946. That has since been updated several times: the Child Nutrition Act in 1966, the Child Care Food Program in 1975, etc.


What you're saying doesn't contradict the argument that the goal was to outdo the Black Panther lunch programs.

Certainly I'd like to read more about the idea before I buy into it, but it does make a lot of sense: schools in black neighborhoods are chronically underfunded, the Black Panthers were first and foremost a direct action and mutual aid group, and the US government viewed them as a huge threat to its authority and did many things to attempt to undermine them... including outright assassination.


> [Original, emphasis added]: the only reason we have school lunch programs in the US at all is because the Black Panthers started a free breakfast program for black children in the 70s

> [Response, emphasis added]: The first school lunch programs started with private initiatives in the 1890s. The first major federal program for student lunches was the National School Lunch Program enacted in 1946

Are you saying that the government started trying to one-up the Black Panther school lunches 30 years before the Black Panthers started offering them?

Is it possible that the people in charge of school lunches in the 1970s viewed the Black Panther program as some kind of competition? Sure. Was the 1970s Black Panther program "the only reason" the US started a national school lunch program in the 1940s? I don't see how that would be possible.


> the only reason we have school lunch programs in the US at all is because the Black Panthers started a free breakfast program for black children in the 70s

> The first school lunch programs started with private initiatives in the 1890s. The first major federal program for student lunches was the National School Lunch Program enacted in 1946

How does the existence of a food program in the 1890s, or 1946, automatically invalidate the notion that the promulgation of the food programs into 2025 is due to the efforts of the Black Panthers? Similarly, one could attribute gun control laws in California to the Black Panthers' focus on arming black neighborhoods, rather than some kind of liberal anti-gun attitude.


> automatically invalidate the notion that the promulgation

Goes the other way around too? Regardless, the government continuing to do what it had already been doing for the past half century seems reasonable. Without any additional evidence, that seems like an inherently much more valid argument than attributing it to the Black Panthers. So equating them seems disingenuous...


Where do these weird conspiracy theories come from?


This scenario only plays out if it is known what was or wasn't made with GenAI.


It would become known during discovery.


How can you find out if an AI created something versus a human with a pixel editor?


In a legal case? You question the authors under oath, subpoena communications records, billing records, etc.

If there's even a hint that you used AI output in the work and you failed to disclose it to the US Copyright Office, they can cancel your registration.

