We have created a public registry of AI agent benchmarks and agent runtime traces to help everyone better understand how AI agents work and fail these days.
Our team and many agent builders we talked to wanted a better way of viewing what agents in these benchmarks actually do, e.g., how a particular coding agent approaches SWE-bench (https://www.swebench.com/). Right now, this is difficult for two reasons: benchmark traces are scattered across many different websites, and they are hard to read, often nothing more than huge raw JSON dumps of an agent's run.
To alleviate this, we built this repository, where it is easy to see what individual agents do and how they solve tasks (or fail to).
We hope that collecting these agent traces in one place makes it easier to understand and advance AI agent development in both industry and academia. We are happy to add more benchmarks and agents; let us know if you have something in mind.