LeonidBugaev's comments

I have had a few useful examples of this. To make it work you need to define your quality gates and a rather detailed spec. I personally use https://github.com/probelabs/visor for creating the gates. A gate can be a code review, a check of how well the implementation aligns with the spec, and so on, and it basically makes the agent loop until it passes. One tip, especially when using Claude Code, is to explicitly ask it to create "tasks", and also to use subagents. For example, if I want to validate and restructure all my documentation, I would ask it to create a task to research the state of my docs, then a task per specific detail, then a task to re-validate quality after it has finished. You can also play around with gates using simpler tooling, for example https://probelabs.com/vow/
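
Roughly, the gate loop looks like this (a minimal TypeScript sketch, not Visor's actual API; runAgent and runQualityGate are hypothetical stand-ins for your agent and your gate runner):

    // Minimal sketch of the "loop until the quality gate passes" pattern.
    // runAgent() and runQualityGate() are hypothetical stand-ins: wire them
    // to your coding agent and to your gate runner (Visor, vow, etc.).
    type GateResult = { passed: boolean; comments: string[] };

    declare function runAgent(prompt: string): Promise<void>;
    declare function runQualityGate(): Promise<GateResult>;

    async function loopUntilGatePasses(task: string, maxAttempts = 5): Promise<void> {
      let feedback = "";
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        // Re-run the task, feeding the previous gate findings back in.
        await runAgent(feedback ? `${task}\n\nFix this review feedback:\n${feedback}` : task);

        const result = await runQualityGate(); // code review, spec alignment, etc.
        if (result.passed) return;
        feedback = result.comments.join("\n");
      }
      throw new Error(`Gate still failing after ${maxAttempts} attempts`);
    }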

Hope it helps!


> One tip, especially when using Claude Code, is to explicitly ask it to create "tasks", and also to use subagents. For example, if I want to validate and restructure all my documentation, I would ask it to create a task to research the state of my docs, then a task per specific detail, then a task to re-validate quality after it has finished.

This is definitely a way to keep those who wear Program and Project manager hats busy.


That is interesting. Never considered trying to throw one or two into a loop together to try to keep it honest. Appreciate the Visor recommendation, I'll give it a look and see if I can make this all 'make sense'.

Nice one! I had my own spin on this issue as well, but from the other angle https://github.com/probelabs/maid

Getting AI to generate valid Mermaid diagrams at scale is extremely hard. With Maid I'm hitting 100% accuracy.

Maid is basically a from-scratch Mermaid parser, without any dependencies, which knows how to auto-fix common AI slop diagramming issues.


Nice one. Mermaid validation is a huge issue given how mermaid.js is architected.

I built a mermaid generation harness last year and even the best model at it (Claude Sonnet 3.7 at the time; 4o was okay, Gemini struggled) only produced valid mermaid ~95% of the time. That failure rate adds up quickly. Had to detect errors client-side and trigger retries to keep server load reasonable.

Having a lightweight parser with auto-fix like this back then would have simplified the flow quite a bit.
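
The retry flow was roughly this shape (a simplified sketch; generateDiagram is a hypothetical model call, and mermaid.parse's exact behavior varies between mermaid versions):

    import mermaid from "mermaid";

    // Hypothetical model call: returns mermaid source, given the prompt and
    // the parser error from the previous attempt (if any).
    declare function generateDiagram(prompt: string, previousError: string): Promise<string>;

    async function diagramWithRetries(prompt: string, maxAttempts = 3): Promise<string> {
      let lastError = "";
      for (let i = 0; i < maxAttempts; i++) {
        const code = await generateDiagram(prompt, lastError);
        try {
          await mermaid.parse(code); // throws/rejects on invalid syntax
          return code;               // valid, safe to render
        } catch (err) {
          lastError = String(err);   // feed the error back into the next attempt
        }
      }
      throw new Error("Could not produce valid mermaid after retries");
    }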


It does not implement auth :)

(mcp auth is terrible btw)


I couldn't find any great examples of MCP auth, so I made this recently to demonstrate an OAuth flow - https://github.com/OBannon37/chatgpt-deep-research-connector...


For my app I'm bypassing MCP auth and doing the regular oauth2 flow to connect users to external apps.

Then I pass the stored oauth token directly to my (private) MCP servers alongside a bearer token.
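
Schematically, a call to one of those servers looks something like this (a sketch only; the endpoint URL and the X-External-OAuth-Token header name are made up for illustration, the body is a plain JSON-RPC request):

    // Call a private MCP server over HTTP, sending our own service bearer
    // token plus the user's stored third-party OAuth token.
    async function callPrivateMcp(method: string, params: unknown, userOauthToken: string) {
      const res = await fetch("https://mcp.internal.example.com/mcp", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${process.env.MCP_SERVICE_TOKEN}`, // our own auth
          "X-External-OAuth-Token": userOauthToken, // token from the regular oauth2 flow
        },
        body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
      });
      return res.json();
    }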


You should check out https://probeai.dev/ too. That's one of those building blocks that make AI truly understand the code.


To put it simply:

A2A is for communication between agents. MCP is how an agent communicates with its tools.

An important aspect of A2A is that it has a notion of tasks, task readiness, and so on. E.g. you can give it a task, expect it to complete in a few days, and get notified via a webhook or by polling it.
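
Roughly, the task lifecycle looks like this (schematic only; the JSON-RPC method and field names are simplified, check the A2A spec for the real ones):

    // Submit a task to a remote agent, then poll until it is done.
    // (A webhook / push notification is the alternative to polling.)
    async function runRemoteTask(agentUrl: string, taskInput: string): Promise<any> {
      const rpc = async (method: string, params: unknown): Promise<any> => {
        const res = await fetch(agentUrl, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ jsonrpc: "2.0", id: Date.now(), method, params }),
        });
        return (await res.json()).result;
      };

      // 1. Hand over the task; the remote agent may take minutes or days.
      const task = await rpc("tasks/send", { message: taskInput });

      // 2. Poll for readiness.
      while (true) {
        const status = await rpc("tasks/get", { id: task.id });
        if (status.state === "completed") return status.result;
        if (status.state === "failed") throw new Error("Remote task failed");
        await new Promise((r) => setTimeout(r, 60_000)); // check again in a minute
      }
    }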

For end users, A2A will surely cause a lot of confusion, and it can replace a lot of current MCP usage.


If an agent could wrap itself in an MCP server, would that make A2A redundant?


The same question came to my mind.

What if I wrap the agent as a tool in MCP?

Since the agents I get from the A2A protocol are passed as tools to another agent...

https://github.com/google/A2A/blob/72a70c2f98ffdb9bd543a57c8...


You mean wrap the MCP server in itself?


Hello HN!

I've been building Probe https://probeai.dev/ for a while now, and this docs-mcp project is a showcase of what it's capable of. It gives you local semantic search over any codebase or docs, without indexing.

Feel free to ask any questions!


Nope, it is simply fresh issues with "help wanted" and "good first issue" labels.


I do maintain big OSS projects and try to contribute as well.

However, the contribution experience can be very bad if you follow the path of picking the most famous projects. Good luck contributing to Node, Rust, Shadcn, etc. - they do not need your contribution, their PR queue is overloaded, and they can't handle it. Plus you need to get into their inner circles first, through a quite complex process.

The world is much bigger. There is so much help needed on smaller but still active projects.

Just recently I raised 3 small PRs, and they were reviewed the same day!

Out of respect for the whole OSS community, I have built the https://helpwanted.dev/ website, which in a nutshell shows the latest "help wanted" and "good first issue" issues from all over GitHub in the last 24 hours.

You would be amazed how many cool projects out there are looking for help!


One of the cases where AI is not needed. There is a very good, working algorithm for extracting content from pages; one of the implementations: https://github.com/buriy/python-readability


Some years ago I compared those boilerplate removal tools and I remember that jusText was giving me the best results out of the box (I tried readability and a few other libraries too). I wonder what the state of the art is today?


This is worth having a look at: https://mixmark-io.github.io/turndown/

With some configuration you can get most of the way there.
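
For example, combined with Readability it gives you an HTML-to-Markdown pipeline in a few lines (a minimal sketch using the jsdom, @mozilla/readability and turndown npm packages):

    import { JSDOM } from "jsdom";
    import { Readability } from "@mozilla/readability";
    import TurndownService from "turndown";

    // Strip boilerplate with Readability, then convert the remaining
    // article HTML to Markdown with Turndown.
    function htmlToMarkdown(html: string, url: string): string {
      const dom = new JSDOM(html, { url });
      const article = new Readability(dom.window.document).parse();
      if (!article) throw new Error("Readability could not extract an article");

      const turndown = new TurndownService({ headingStyle: "atx", codeBlockStyle: "fenced" });
      return turndown.turndown(article.content ?? "");
    }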


Oh, AI is optional here. I do use readability to clean the HTML before converting to .md.


Last time I tried readability it worked well with articles but struggled with other kinds of pages. Took away far more content than I wanted it to.


How do you achieve the same things without AI here using that tool?


"How do you do it without AI" is a question I (sadly) expect to see more often.


Feel free to answer, then: how do you do the same things this does with GPT (3/4) without AI?

Edit -

This is an excellent use of it: free-text human input capable of doing things like extracting summaries. It does not seem to be used for the basic task of extracting content at all, but for post-filtering.


I think “copy from a PDF” could be improved with AI. It’s been 30 years and I still get new lines in the middle of sentences when I try to copy from one.


That's a great use case. You might be able to do this if you've got copy and paste on the command line, with

https://github.com/simonw/llm

in between. An alias like pdfwtf translating to "paste | llm command | copy".


I've long assumed that is a "feature" of PDF akin to DRM. Making it hard to copy text from a PDF makes sense from a publisher's standpoint.


Meh, it's just the "how does it work?" question. How content extractors work is interesting, and neither obvious nor trivial.

And even when you see how the readability parser works, AI handles most of the edge cases that content extractors fail on, so they are genuinely superseded by LLMs.


I was honestly expecting it to be mostly black magic, but it looks like the meat of the project is a bunch of (surely hard won) regexes. Nifty.


> I was … expecting it to be mostly black magic, but … the meat of the project is a bunch of … regexes

Wait, regexes are the epitome of black magic. What do you consider as black magic?


Macros? Any situation where code edits other code?

Sure, I could not write a regex engine, but the language itself can be fine if you keep it to straightforward stuff. Unlike the famous e-mail parsing regex.


How does it compare to mozilla/readability?


It uses readability but does some additional stuff, like relinking images to local paths etc., which I needed.


I have had challenges with readability. The output is good for blogs, but when we try it on other types of content, it misses important details even when the page is quite text-heavy, just like a blog.


Yeah, that's correct. I put in a checkbox to disable the readability filter if needed…


Plus, in production with high load, a Redis cluster is way more common, which kind of solves the single-threaded concern.


I've always found Redis Cluster to just bring problems with it.

