Hacker News | podperson's comments

OP here.

I wrote this because I was frustrated by the recent "16 AI Agents build a C compiler" narrative. Building a C compiler is an exercise in implementation; I wanted to see if one dev could use AI for invention.

The result is TJS. It's my attempt to fix what I dislike about TypeScript (erased types) and solve the eval() problem for AI agents. It treats JS like the Lisp it was meant to be: types are real values, autocomplete works by introspection, and execution is strictly gas-metered.

The playground is live here if you want to test the sandbox or the types: https://platform.tosijs.net

Happy to answer technical questions about the compiler architecture, the AJS sandbox, or how the gas metering works.

I am in Finland so apologies in advance if I am slow to reply.


OP again:

The tjs compiler transpiles ts into itself: https://platform.tosijs.net/#example=Hello+TypeScript&sectio...

It's written in typescript and can transpile itself.

When it transpiles typescript, type declarations become contracts, and types are available at runtime.

It does all this with a 50-70% overhead (imperceptible in most cases), but there's an unsafe escape hatch: `(!) => {}` functions bypass type checks.

Oh yeah, it does inline WASM, and it handles the hard stuff other WASM implementations kind of leave as an exercise -- moving data across the boundary. And it supports SIMD.

https://platform.tosijs.net/#view=tjs&example=Vector+Search+...

Error propagation is monadic, so functions passed bad arguments just don't execute. Functions also know from whence they came, and you can execute code in debug mode to get a trace along with your monadic error, so it's very agent friendly.

If you code in tjs natively you can do inline tests of unexported functions at transpile time.

If you transpile ts using tjs it gives you the same capability.

Single pass transpilation does inline tests and generates documentation at the same time.

tjs itself is a TRUE js superset with predicate functions to handle complex types (if your type system is going to be Turing complete, own it), full introspection (so it's a true LISP, or what Dylan aspired to be), and safe eval using a language subset that is deeply async. Again, universal endpoints.

Simple types are declared by example, so:

function greet(name: 'Alice') -> 'Hello, Alice' => `Hello, ${name}`

is not just a declaration of a function that takes a string and returns a string: the string should look like 'Alice', and if it is 'Alice' the function will return 'Hello, Alice'. This is also an inline test that runs at transpile time.
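This isn't TJS syntax, but the "declare by example" idea can be sketched in plain TypeScript, where the example input/output pair doubles as an inline test (the `declare` helper here is hypothetical):

```typescript
// Hypothetical sketch of "types by example": an example value stands in
// for its type, and the declared input/output pair is run as an inline test.
function typeOfExample(example: unknown): string {
  return typeof example;  // 'Alice' stands for "string", 42 for "number", etc.
}

function declare<A, B>(
  exampleIn: A,
  exampleOut: B,
  fn: (a: A) => B
): (a: A) => B {
  // Inline test, run at "declaration time": the example input must
  // produce the example output, or the declaration itself fails.
  const actual = fn(exampleIn);
  if (JSON.stringify(actual) !== JSON.stringify(exampleOut)) {
    throw new Error(`inline test failed: expected ${exampleOut}, got ${actual}`);
  }
  return (a: A) => {
    if (typeof a !== typeOfExample(exampleIn)) {
      throw new Error("argument does not match example type");
    }
    return fn(a);
  };
}

const greet = declare("Alice", "Hello, Alice", name => `Hello, ${name}`);
const greeting = greet("Bob");  // "Hello, Bob"
```

In TJS the check happens at transpile time rather than at runtime as in this sketch, but the shape of the contract is the same.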


How does being in Finland make you slow? Finland is a country most western countries aspire to be!


It's evening when I posted. I was heading to dinner, not hovering over my computer.


> execution is strictly gas-metered

What does that mean?


Every execution atom in the safe eval sandbox has a 'gas' cost that also time-bounds it, so eval is safe from both the halting problem and type fuzzing. The only things the code can do are execute capabilities you explicitly give it, and you can give those capabilities the same access tokens the request had. So you get universal endpoints.
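A minimal sketch of the gas idea, assuming nothing about the real sandbox's internals: each execution atom pays a unit of fuel before it runs, so even an unbounded loop halts deterministically instead of hanging the host.

```typescript
// Hypothetical sketch of gas metering (not the real sandbox): each loop
// iteration pays one unit of gas up front; exhaustion aborts execution.
class OutOfGas extends Error {}

function meteredLoop(gasLimit: number): { halted: boolean; iterations: number } {
  let gas = gasLimit;
  let iterations = 0;
  try {
    while (true) {                // would spin forever without metering
      if (--gas < 0) throw new OutOfGas("gas exhausted");
      iterations++;
    }
  } catch (e) {
    if (!(e instanceof OutOfGas)) throw e;
  }
  return { halted: true, iterations };
}

const result = meteredLoop(1000);  // halts after exactly 1000 iterations
```

Because every atom is metered, the budget bounds both runtime and, with per-atom costs, total work, which is what makes untrusted eval tractable.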


While I was waiting for the night train to Oulu in Helsinki on Friday night, I pondered two problems I have been thinking about for years on the one hand, and for months on the other.

1. How can I build a simple *service-as-a-service* endpoint that pulls data, maybe does a little work on it (e.g. whittling it down, converting XML to JSON, etc.), caches it, and returns it? This seems to require *eval* to be really useful, and then it's a whole deployment, code-review thing (for good reason).

2. How can I build an LLM-powered agent that… you get the idea. If service-as-a-service, why not *agent-as-a-service*? I'd already played with LangChain and found it quite fiddly even for simple things, and this had led me to build a lighter, schema-first alternative to Zod.

The epiphany I had was that these are the same question, and the problem was *eval*. So why not make *eval* completely safe?

The result is *agent-99* – a Turing-complete, cost-limited virtual machine that enables "Safe Eval" anywhere (Node, Bun, Deno, and the Browser). The runtime model is so simple it could easily be ported to any of your favorite languages: Python, Rust, Go, Java, Zig… Haskell. It's a perfect fit for Erlang.

Oh and because the underlying language is JSON schema, it's easy to hand it to an agent as a set of tools. So *agent-99* makes it easy to do that, too.

It's the infrastructure for *sky-net* but, you know, type-safe, deeply asynchronous, and without a halting problem.

- *agent-99 repo:* https://github.com/tonioloew...
- *agent-99-playground (Vibe Coded in ~2 hours):* https://github.com/brainsnorkel/agent99-playground

### The Core Idea

Most Agent frameworks (like LangChain) rely on heavy "Chain" classes or graph definitions that are hard to inspect, hard to serialize, and hard to run safely on the client.

`agent-99` takes a different approach: *Code is Data.*

1. You define logic using a *Fluent TypeScript Builder* (which feels like writing standard JS).
2. This compiles to a *JSON AST* (Abstract Syntax Tree).
3. The AST is executed by a *Sandboxed VM* (~7kB gzipped core).

Or to put it another way:

1. A program is a function.
2. A function is data organized in a syntax tree.
3. An agent is a function that takes data and code.

`agent-99` provides a *builder* with a fluent-api for creating your language and creating programs, and a *virtual machine* for executing those programs with explicitly provided capabilities.

### Why use a VM?

* *Safety:* It solves the halting problem pragmatically with a "Fuel" (Gas) counter. If the Agent loops forever, it runs out of gas and dies.
* *Security:* It uses *Capability-Based Security*. The VM has zero access to fetch, disk, or DB unless you explicitly inject those capabilities at runtime.
* *Portability:* Because the "code" is just JSON, you can generate an agent on the server, send it to the client, and run it instantly. No build steps, no deployment.
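To make the "code is data" plus fuel plus capabilities combination concrete, here is a deliberately tiny sketch, not agent-99's real API: a program is plain JSON, every node costs fuel, and the only side effects available are capabilities the host injects.

```typescript
// Hypothetical mini-VM (not agent-99's actual implementation): JSON-shaped
// programs, a fuel counter per node, and host-injected capabilities.
type Node =
  | { op: "lit"; value: number }
  | { op: "add"; args: Node[] }
  | { op: "call"; cap: string; args: Node[] };

type Caps = Record<string, (...args: number[]) => number>;

function evaluate(node: Node, caps: Caps, fuel: { left: number }): number {
  if (--fuel.left < 0) throw new Error("out of fuel");
  switch (node.op) {
    case "lit":
      return node.value;
    case "add":
      return node.args.reduce((sum, n) => sum + evaluate(n, caps, fuel), 0);
    case "call": {
      // Zero ambient authority: a capability exists only if the host granted it.
      const fn = caps[node.cap];
      if (!fn) throw new Error(`capability not granted: ${node.cap}`);
      return fn(...node.args.map(n => evaluate(n, caps, fuel)));
    }
  }
}

// Because the program is just JSON, it can be serialized, shipped to a
// client, and run there with whatever capabilities that host chooses.
const program: Node = {
  op: "call",
  cap: "square",
  args: [{ op: "add", args: [{ op: "lit", value: 2 }, { op: "lit", value: 3 }] }],
};

const out = evaluate(program, { square: n => n * n }, { left: 100 });  // 25
```

Running the same `program` with an empty capability map fails with "capability not granted", which is the security model in miniature.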

### Batteries Included (But Optional)

While the core is tiny, I wanted a "batteries included" experience for local dev. We built a standard library that lazy-loads:

* *Vectors:* Local embeddings via `@xenova/transformers`.
* *Store:* In-memory Vector Search via `@orama/orama`.
* *LLM:* A bridge to local models (like LM Studio).

### Proof of Concept

I sent the repo to a friend (https://github.com/brainsnorkel). He literally "vibe coded" a full *visual playground* in a couple of hours using the library. You can see the definitions generating the JSON AST in real-time.

### The Stack

* *agent-99:* The runtime.
* *tosijs-schema:* The underlying schema/type-inference engine https://github.com/tonioloewald/tosijs-schema

I’d love to hear your thoughts on this approach to code-as-data and agents-as-data.


N is increasing. O(1) means constant (actually capped). We never check more than 100 items.


Then it's not 1%, because if you have 100k items and you check at most 100 you have checked at most 0.1% of items.


JSON Schema is a schema built on JSON and it’s already being used. Using XML would mean converting the XML into JSON schema to define the response from the LLM.

That said, JSON is “language neutral” but also super convenient for JavaScript developers and typically more convenient for most people than XML.


Maybe I missed a detail here, sorry if that's the case!

Why would we need to convert XML, which already supports schemas and is well understood by LLMs, back to JSON Schema?


Because most of the world uses JSON and has rich tooling for JSON Schemas; notably, many LLM providers allow a JSON Schema to be part of the request when trying to get structured output.


LLM providers allow sending any string of text though, right? In my experience the LLM understands XML really well, though obviously that doesn't stop it from understanding JSON Schema either.


No, it's more than just text now, and it's more than just an LLM for the most part too. They are agentic systems with multiple LLMs, tools, and guardrails.

When you provide a JSON Schema, the result from the LLM is validated in code before being passed on to the next step. Yes, the LLM is reading it too, but non-LLM parts of the system use the schema as well.

This is arguably much more important for tools and subagents, but these things are also being trained with JSON Schema for tool calling and structured output.


LLMs are not people.

We want a format for LLMs or for people?


As a person myself, I very much prefer JSON


MCP isn't meant for humans though; I'm not sure why it matters what a human would prefer.


JSON schema is very human readable.


Why does that matter though? MCP is meant for LLMs, not humans, and for something like this lib it seems the human side of the API is based on JavaScript, not JSON.


I wrote this library this weekend after realizing that Zod was really not designed for the use-cases I want JSON schemas for: 1) defining response formats for LLMs and 2) as a single source of truth for data structures.


What led you to that conclusion?


Zod's validation errors are awful, the JSON Schema it generates for LLMs is ugly and often confusing, the type structures Zod creates are often unintelligible, and there's no good way to pretty-print a schema when you're debugging. Things are even worse if you're stuck with zod/v3.


None of this makes a lot of sense. Validation errors are largely irrelevant for LLMs and they can understand them just fine. The type structure looks good for LLMs. You can definitely pretty print a schema at runtime.

This all seems pretty uninformed.


What's wrong with Zod validation errors?


And what makes this different? What makes it LLM-native?


It generates schemas that are strict by default while Zod requires you to set everything manually.

This is actually discussed in the linked article (README file).


That's not true based on zod docs. https://zod.dev/api?id=objects

Most of the claims you're making against Zod are inaccurate. The readme feels like AI-generated false claims.


It seems to be true to me. And aside from the API stuff (because I am far from an expert user of Zod), all of this has been carefully verified.


1. Zod's documentation, such as it is
2. Code examples


Happy to see more tools in the data schema space.

Will you support Standard Schema (https://standardschema.dev)? How does this compare to typebox (https://github.com/sinclairzx81/typebox)?


Unlike previous GUI platforms, there's no reason we should need to cross-compile to XR devices given the capabilities of the Vision Pro and Quest 3. This is literally the first demo of a multi-window dev environment building an XR scene in XR using the Quest 3 (because I don't have a Vision Pro to play with).


Well, the GP is talking about Apple's overall margins and the 70% figure is simply wrong. (Retail price - cost of goods is not margin.)


Apple does spend comparatively little on R&D (perhaps part of that is because they don't produce 400 different models of every darn device), but their margins on iPhones are still nowhere near 70%.


I was just speaking specifically to the notion that Apple's margins as a company are eaten up by R&D spend, which we agree they're clearly not. (The fact that they're doing multi-billion-dollar stock buybacks and issuing increasing dividends shows they have more money than they know what to do with.)

Although since you brought it up, I did a quick search and found that the margins on the iPhone as an individual product (vs. the company's overall margin) are speculated to be near 70% and that figure isn't just pulled out of thin air:

http://appleinsider.com/articles/13/09/30/iphone-5s-demand-h...


Clearly being an actual war criminal probably outweighs donating to prop 8, so there's that.

In any event, I think your problem is you took the wrong position on Eich -- first, there was no "lynch mob" -- being denied a high profile, well-paid job as head of a non-profit is not the same as being lynched. Second, no-one forced Eich to donate to prop 8, and the outcome was quite foreseeable. Isn't a bit of judgment an important job skill for the CEO of a non-profit?


What's odd to me is the choice of Condi Rice. Even assuming that she was relatively blameless compared to (say) Cheney, what the heck do they think they gain by appointing a polarizing political figure to their board?

