It’s designed to plug into frameworks like CrewAI, AutoGen, or LangChain and help agents learn from both successful and failed interactions: instead of each execution being isolated, the system builds up knowledge about what actually works in specific scenarios and applies it as contextual guidance on the next run. The aim is to move beyond static prompts and manual tweaks by letting agents improve continuously from their own runs.
I'm currently also working on an MCP interface to it, so people can easily try it in tools like Cursor.
Steady | Berlin | Full Stack Elixir Developer | Full-time | ONSITE
We are looking for developers who want to write Elixir code at Steady. Join us in Berlin and help empower independent media makers. Find out more at steady-media-jobs.personio.de/job/96567.
I've recently launched https://pryin.io, an application performance monitoring tool made for Elixir and Phoenix.
It hooks into Phoenix and gives you insights into how long your requests and channels take, which Ecto queries are run, and how long those take. You can also manually instrument pretty much anything else (background jobs, API calls, ...).
Plus it keeps track of some important BEAM metrics like memory consumption.
I added the GitHub issue importer functionality in part because of some threads I saw here, e.g. https://news.ycombinator.com/item?id=8712035.
Now maintainers only need to tag issues with the label "Moved to ProjectTalk" and we will automatically import them and post a comment with a link.
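For anyone curious what that flow looks like mechanically, here's a rough sketch against the public GitHub REST API. This is illustrative only, not ProjectTalk's actual implementation; the helper names and comment text are made up.

```python
import json
import urllib.request
from urllib.parse import urlencode

GITHUB_API = "https://api.github.com"
IMPORT_LABEL = "Moved to ProjectTalk"  # the trigger label maintainers apply

def issues_url(owner, repo, label):
    # Build the URL listing all issues that carry the trigger label.
    query = urlencode({"labels": label, "state": "all"})
    return f"{GITHUB_API}/repos/{owner}/{repo}/issues?{query}"

def fetch_labeled_issues(owner, repo, token):
    # Fetch the labeled issues so they can be imported.
    req = urllib.request.Request(
        issues_url(owner, repo, IMPORT_LABEL),
        headers={"Authorization": f"token {token}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def comment_payload(discussion_url):
    # Body of the comment posted back on the GitHub issue after import
    # (POST /repos/{owner}/{repo}/issues/{number}/comments expects {"body": ...}).
    return {"body": f"This discussion has moved to ProjectTalk: {discussion_url}"}
```

A poller (or a webhook on the `labeled` issue event) would call `fetch_labeled_issues`, import each issue, and then post `comment_payload(...)` back to the issue's comments endpoint.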
Mainly because a lot of projects use live chat (Gitter, Slack), which I think isn't always the best solution (time zones, discoverability of past discussions, ...).
There was a post about https://showoff.io/ here not long ago. They seem to be doing quite similar things, except showoff.io costs a little. I didn't really compare features, though.
I've been using showoff to develop against the GitHub service hook. I've got a paid account, so I have a static URL, which means I only had to set it up once in GitHub. I use a lot of cloud services and don't bother with a VPS. For $5 a month it's a really simple service that does just what I want.