Hacker News | new | past | comments | ask | show | jobs | submit | dragonwriter's comments

> To be clear, I'm suggesting that any specific format for "skills.md" is a red herring, and all you need to do is provide the LLM with good clear documentation.

Agent Skills isn't a spec for how information is presented to the model; it's a spec whose consumer is the model harness. The harness may present the information made available to it to the model in different ways for different harnesses, or even within the same harness for different models or tasks, considering things like the number and size of the skills available, the size of the model context, the purpose of the harness (is it a narrow-purpose agent where some of the skills are central to that purpose?), and user preference settings.

The site itself describes two main styles of harness integration ("tool based" and "filesystem based"), but those are more of a starting point for implementers than an exhaustive listing.

The idea is that skill authors don't need to know or care how the harness is presenting the information to the model.
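For illustration, a skill in this scheme is essentially a directory containing a SKILL.md file whose frontmatter gives the harness a name and description to work with; everything below the frontmatter is just documentation the harness can surface however it chooses. This is a hypothetical sketch (the skill name, description, and body content here are invented; only the `name` and `description` frontmatter fields are part of the convention):

```markdown
---
name: pdf-processing
description: Extract text and tables from PDF files. Use when asked to read or summarize a PDF.
---

# PDF Processing

1. Extract raw text first and check whether it is usable.
2. For scanned pages or complex tables, fall back to page-by-page visual reading.
```

Nothing in the file says how (or whether) the harness injects it into context, which is exactly the point: the author writes clear documentation, and the harness decides presentation.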


No, if it was maximizing suppression bang for the buck, it would be the Democratic precincts in swing states, not “swing precincts and states”. Electoral votes are decided by statewide (not precinct-level) outcomes, except for the 5 (out of the 9 in Nebraska and Maine) that are determined by congressional district, so you get the maximum effect on the outcome by suppressing the vote in Democratic-leaning areas of swing states, not by targeting precincts that are near parity in those same states.

That’s for the presidential elections right? For midterms wouldn’t they target swing districts that determine House seats?

Sure, I was thinking in terms of the presidential election; but it's pretty similar for midterms: you'd still mostly want to target Democratic, not swing, precincts, but those in swing congressional districts rather than swing states. Precincts typically contain several hundred to a few thousand people; congressional districts contain several hundred thousand to just over a million.

> Why not just extend the OpenAPI specification to skills?

Because approximately none of what exists in the current OpenAPI specification is relevant to this task, and nothing needed for this task is relevant to the existing OpenAPI use case, so trying to jam one use case into a tool designed for the other would be pure nonsense.

It’s like needing to drive nails and asking why grab a hammer when you already have a screwdriver.


> This is different from swagger / OpenAPI how?

Because the descriptions aren't API specs and the things described aren't APIs.

It's more like a structure for human-readable descriptions in an annotated table of contents for a recipe book than it is like OpenAPI.


> At some point in the near future I see a day where our work laptops are nothing more than a full screen streaming video to a different computer that is housed in a country that has no data extradition treaties and is business friendly.

Do you mean they will be pure worker surveillance systems, or did you mean “from” instead of “to”?


X is most definitely not a dumb pipe: you also have humans besides the sender and receiver choosing what content (whether directly or indirectly) is promoted for wide dissemination, relatively suppressed, or outright blocked.

> LLMs have access to the same tools --- they run on a computer.

That doesn't give them access to anything. Tool access is provided, if it is provided at all, either by the harness that runs the model or by downstream software, whether as specific tools or through common standard interfaces like MCP that allow the user to supply definitions for tools external to the harness. Otherwise, LLMs have no tools at all.
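The division of labor can be sketched in a few lines. This is a toy, with invented names throughout (`fake_model`, `run_harness`): the "model" only emits a structured tool-call request; the harness owns the tool registry and does all the actual execution.

```python
def fake_model(prompt: str) -> dict:
    """Stand-in for an LLM: it can only emit a structured tool-call request."""
    return {"type": "tool_call", "name": "get_time", "arguments": {}}

def run_harness(prompt: str, tools: dict) -> str:
    """The harness decides what is callable. The model never touches the computer."""
    response = fake_model(prompt)
    if response["type"] == "tool_call":
        tool = tools.get(response["name"])
        if tool is None:
            return f"error: no tool named {response['name']!r} is registered"
        return str(tool(**response["arguments"]))
    return response.get("text", "")

# With a registered tool, the call succeeds; with an empty registry, the same
# model output goes nowhere, because execution lives entirely in the harness.
result = run_harness("what time is it?", {"get_time": lambda: "12:00"})
no_tools = run_harness("what time is it?", {})
```

Remove the registry entry and the model "has" no tools, even though it runs on a computer the whole time.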

> The problem here is the basic implementation of LLMs. It is non-deterministic (i.e. probabilistic) which makes it inherently inadequate and unreliable for a lot of what people have come to expect from a computer.

LLMs, run with the usual software, are deterministic (having only pseudorandomness if nonzero temperature is used) but hard to predict. (This ignores hardware errors and cosmic-ray bit flips, which, if considered, make all software non-deterministic.) However, because implementations can allow interference between separate queries processed in a batch, and the end user doesn't know what other queries are in the batch, typical hosted models are non-deterministic when considered from the perspective where the known input is only what one user sends.

But your problem is probably actually that the result of untested combinations of configuration and input are not analytically predictable because of complexity, not that they are non-deterministic.
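The determinism-versus-pseudorandomness distinction can be shown with a toy next-token sampler (a sketch only; real inference stacks get their practical nondeterminism from batching and floating-point reduction order, not from the sampling math): temperature 0 is a plain argmax, and nonzero temperature just draws from a seeded pseudorandom generator.

```python
import math
import random

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(logits, temperature, seed=None):
    """Toy next-token choice: greedy at temperature 0, seeded draw otherwise."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])  # pure argmax
    probs = softmax([x / temperature for x in logits])
    rng = random.Random(seed)           # pseudorandom, fully seed-determined
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):       # inverse-CDF draw over token probs
        acc += p
        if r <= acc:
            return i
    return len(logits) - 1

logits = [1.0, 3.0, 2.0]
greedy = sample(logits, 0)              # always index 1, run after run
seeded_a = sample(logits, 1.0, seed=42)
seeded_b = sample(logits, 1.0, seed=42) # same seed, same "random" choice
```

Every path through this is reproducible given the inputs and seed; what makes it feel unpredictable in practice is complexity, not any genuine randomness.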


> We really need to peacefully explore the "national divorce" idea again. In the 1860s the concept was too intermingled with the evil of slavery to be considered separately.

The idea is still just as intermingled with fundamental human rights, plus the sides are more deeply geographically intermingled than in the 1860s, largely because the victors decided not to really root out the evil they had defeated and instead allowed it to metastasize. There may be no peaceful resolution; there is certainly no possibility of a peaceful divorce.


No, mypy existed before the type hints spec, and was created by Jukka Lehtosalo. Guido did, once he encountered it, work to make sure mypy could work with Python rather than being a separate, Python-like language, and the type hints spec was a big part of that.

> Static typing and duck typing both date back to the 1950s. You may have heard of Lisp.

> The last new significant thing invented in programming was OOP in the 1990s.

OOP is from the 1960s (Simula 67 is generally recognized as the first OOP language.) Probably not actually the last new significant thing invented in programming, though.

