I have a few useful examples of this. To make it work you need to define your quality gates and a rather detailed spec. I personally use https://github.com/probelabs/visor for creating the gates. A gate can be a code review, a check of how well the implementation aligns with the spec, etc., and it basically makes the agent loop until it passes. One tip, especially with Claude Code, is to explicitly ask it to create "tasks" and to use subagents. For example, if I want to validate and restructure all my documentation, I would ask it to create a task to research the state of my docs, then a task per specific detail, then a task to re-validate quality after it has finished. You can also play around with gates using simpler tooling, for example https://probelabs.com/vow/
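To make the loop concrete, here's a minimal sketch of the gate-and-retry idea in Python. This is not Visor's actual API; `run_agent` and the two gate functions are hypothetical stand-ins for your model call and whatever checks you configure:

```python
# Minimal sketch of the gate-and-retry loop. A gate returns a list of
# findings; an empty list means the gate passes. All functions here are
# hypothetical stand-ins, not Visor's API.

def run_agent(task: str, feedback: list[str]) -> str:
    """Ask the agent to (re)do the task, feeding prior gate findings back in."""
    prompt = task
    if feedback:
        prompt += "\n\nFix these review findings:\n" + "\n".join(feedback)
    # ... call your model / Claude Code session with `prompt` here ...
    return "<artifact produced by the agent>"

def code_review_gate(artifact: str) -> list[str]:
    return []  # e.g. run a reviewer model or linters over the artifact

def spec_alignment_gate(artifact: str) -> list[str]:
    return []  # e.g. ask a model to diff the artifact against the spec

GATES = [code_review_gate, spec_alignment_gate]

def run_with_gates(task: str, max_attempts: int = 5) -> str:
    feedback: list[str] = []
    for _ in range(max_attempts):
        artifact = run_agent(task, feedback)
        feedback = [finding for gate in GATES for finding in gate(artifact)]
        if not feedback:
            return artifact  # every gate passed
    raise RuntimeError(f"gates still failing after {max_attempts} attempts: {feedback}")
```

The useful property is that gate findings are fed back into the next attempt, so the agent isn't just retrying blindly.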
> One tip, especially with Claude Code, is to explicitly ask it to create "tasks" and to use subagents. For example, if I want to validate and restructure all my documentation, I would ask it to create a task to research the state of my docs, then a task per specific detail, then a task to re-validate quality after it has finished.
This is definitely a way to keep those who wear Program and Project manager hats busy.
That is interesting. Never considered trying to throw one or two into a loop together to try to keep it honest. Appreciate the Visor recommendation, I'll give it a look and see if I can make this all 'make sense'.
Nice one. Mermaid validation is a huge issue given how mermaid.js is architected.
I built a mermaid generation harness last year and even the best model at it (Claude Sonnet 3.7 at the time; 4o was okay, Gemini struggled) only produced valid mermaid ~95% of the time. That failure rate adds up quickly. Had to detect errors client-side and trigger retries to keep server load reasonable.
Having a lightweight parser with auto-fix like this back then would have simplified the flow quite a bit.
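For what it's worth, the retry flow was roughly this shape. A sketch, assuming mermaid-cli's `mmdc` is on PATH as the validity oracle; `generate_diagram` is a hypothetical model call, not a real API:

```python
# Sketch of the validate-and-retry harness. mermaid-cli (`mmdc`) exits
# non-zero on parse errors, so it works as a validity check; the model
# call below is a hypothetical stand-in.
import subprocess
import tempfile
from pathlib import Path

def generate_diagram(prompt: str, previous_error: str = "") -> str:
    """Hypothetical model call; feed the last parse error back into the prompt."""
    return "graph TD; A-->B"

def is_valid_mermaid(source: str) -> tuple[bool, str]:
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp, "diagram.mmd")
        src.write_text(source)
        result = subprocess.run(
            ["mmdc", "-i", str(src), "-o", str(Path(tmp, "diagram.svg"))],
            capture_output=True, text=True,
        )
        return result.returncode == 0, result.stderr

def generate_valid_diagram(prompt: str, max_retries: int = 3) -> str:
    error = ""
    for _ in range(max_retries + 1):
        source = generate_diagram(prompt, previous_error=error)
        ok, error = is_valid_mermaid(source)
        if ok:
            return source
    raise ValueError(f"no valid mermaid after {max_retries} retries: {error}")
```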
A2A is for communication between agents.
MCP is how an agent communicates with its tools.
An important aspect of A2A is that it has a notion of tasks, task readiness, etc. E.g. you can give an agent a task, expect completion in a few days, and get notified via a webhook or by polling.
For end users, A2A will surely cause a lot of confusion, and it can replace a lot of current MCP usage.
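To make the task notion concrete, here is a rough sketch of the polling side, paraphrased from my reading of the A2A spec. The endpoint is made up, and the JSON-RPC method and field names (`tasks/send`, `tasks/get`, `status.state`) should be double-checked against the current spec:

```python
# Rough sketch of A2A's task lifecycle: submit a task, then poll until it
# reaches a terminal state (push notifications via webhook are the
# alternative to polling). Names are from memory; verify against the spec.
import time
import uuid
import requests

AGENT_URL = "https://agent.example.com/a2a"  # hypothetical agent endpoint

def rpc(method: str, params: dict) -> dict:
    resp = requests.post(AGENT_URL, json={
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": method,
        "params": params,
    })
    resp.raise_for_status()
    return resp.json()["result"]

task_id = str(uuid.uuid4())
rpc("tasks/send", {
    "id": task_id,
    "message": {"role": "user", "parts": [{"type": "text", "text": "Audit my docs"}]},
})

# The task may take days; poll (or register a webhook) until it's done.
while True:
    task = rpc("tasks/get", {"id": task_id})
    state = task["status"]["state"]
    if state in ("completed", "failed", "canceled"):
        break
    time.sleep(30)
print(f"task finished in state: {state}")
```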
I've been building Probe https://probeai.dev/ for a while now, and this docs-mcp project is a showcase of its capabilities. It gives you local semantic search over any codebase or docs without indexing.
I maintain big OSS projects and try to contribute as well.
However, the contribution experience can be very bad if you follow the path of picking the most famous projects. Good luck contributing to Node, Rust, shadcn, etc.: they do not need your contribution, their PR queues are overloaded and they can't handle them. Plus you first need to get into their inner circles, which is quite a complex process.
The world is much bigger. Smaller but still active projects need so much help.
Just recently I raised 3 small PRs, and they were reviewed the same day!
Out of respect for the OSS community, I built https://helpwanted.dev/, a website which, in a nutshell, shows the latest "help wanted" and "good first issue" issues from all over GitHub in the last 24 hours.
You would be amazed how many cool projects out there are looking for help!
This is one of the cases where AI is not needed. There is a well-proven algorithm for extracting content from pages; one implementation: https://github.com/buriy/python-readability
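Basic usage of that library (packaged as `readability-lxml`) looks like this, if memory serves:

```python
# Basic usage of the linked python-readability; it's a port of the
# original Arc90 Readability heuristics, no ML involved.
import requests
from readability import Document

html = requests.get("https://example.com/article").text
doc = Document(html)
print(doc.title())    # extracted page title
print(doc.summary())  # cleaned HTML of the main article body
```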
Some years ago I compared those boilerplate-removal tools and I remember that jusText gave me the best results out of the box (I tried readability and a few other libraries too). I wonder what the state of the art is today?
Feel free to answer, then: how do you replicate the functions this does with GPT-3/4, without AI?
Edit -
This is an excellent use of it: free-text human input capable of doing things like extracting summaries. It does not seem to be used for the basic task of extracting content at all, only for post-filtering.
I think “copy from a PDF” could be improved with AI. It’s been 30 years and I still get new lines in the middle of sentences when I try to copy from one.
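A crude non-AI heuristic gets you surprisingly far, though: rejoin a line break whenever it doesn't look like a sentence or paragraph boundary. A toy sketch:

```python
# Toy heuristic for the PDF-copy problem: rejoin a newline unless it looks
# like a sentence or paragraph boundary. Real PDFs also need hyphenation
# and multi-column handling, so treat this as the idea, not a solution.
import re

def unwrap(text: str) -> str:
    # replace a newline with a space when the previous char doesn't end a
    # sentence and the break isn't part of a blank line
    return re.sub(r"(?<![.!?:\n])\n(?!\n)", " ", text)

print(unwrap("I still get new lines in the\nmiddle of sentences.\n\nNew paragraph."))
# -> "I still get new lines in the middle of sentences.\n\nNew paragraph."
```

Handling hyphenated words split across lines and multi-column layouts is where a model could genuinely help.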
Meh, it’s just the “how does it work?” question. How content extractors work is interesting and neither obvious nor trivial.
And even when you see how the readability parser works, AI handles most of the edge cases that content extractors fail on, so they are genuinely superseded by LLMs.
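For anyone curious, the core intuition of readability-style extractors fits in a few lines: score candidate nodes by how text-dense and link-sparse they are. A toy sketch (the real algorithm adds class/id hints, comma counts, and score propagation to parent nodes):

```python
# Toy version of the scoring idea behind readability-style extractors:
# prefer the DOM node with the most text and the fewest links.
from lxml import html

def link_density(node) -> float:
    text = node.text_content()
    link_text = "".join(a.text_content() for a in node.findall(".//a"))
    return len(link_text) / max(len(text), 1)

def best_content_node(page_html: str):
    tree = html.fromstring(page_html)
    candidates = tree.xpath("//div | //article | //section | //td")

    def score(node) -> float:
        # lots of text, few links => likely the main content
        return len(node.text_content().strip()) * (1.0 - link_density(node))

    return max(candidates, key=score, default=tree)

page = """<html><body>
  <div class="nav"><a href="/">Home</a> <a href="/about">About</a></div>
  <article><p>""" + "Real article text. " * 40 + """</p></article>
</body></html>"""
print(best_content_node(page).tag)  # -> article
```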
Macros? Any situation where code edits other code?
Sure, I could not write a regex engine, but the language itself can be fine if you keep it to straightforward stuff. Unlike the famous e-mail parsing regex.
I have had challenges with readability. The output is good for blogs, but when we try it on other types of content it misses important details, even when the page is quite text-heavy, just like a blog.
Hope it helps!