That's what I ended up doing: made my daily briefing a cron job that runs via "claude -p". Wired it up to make a podcast, with one MCP tool I made to create an MP3 with OpenAI and another to upload it to one of my sites with an updated RSS feed, so I can listen in the AntennaPod podcast app each morning.
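For what it's worth, the cron side is just a one-liner. A sketch, where the schedule, paths, and prompt file name are all hypothetical:

```shell
# hypothetical crontab entry: build the briefing at 6am daily
# (headless Claude Code via -p; output appended to a log)
0 6 * * * cd $HOME/briefing && claude -p "$(cat briefing-prompt.md)" >> briefing.log 2>&1
```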
Nice. Even better would be having your agent write code for the deterministic bits and telling the agent it should “invoke the script called blah” to do uploads (or whatever you want to have happen deterministically).
Yep, I agree! My MCP tools are local compiled Go binaries, and the tool that uploads my podcast is actually a local Go CLI that Claude calls. Claude's main role / intelligence is in evaluating which of the morning's HN & Lobsters news is most relevant to me specifically, and writing the podcast script. I'm all for deterministic tools, and it saves on tokens too.
One advantage of splitting it into MCP tools, though: one day I ran out of pre-paid OpenAI TTS credit, and Claude was smart enough to try using Mistral TTS instead. I could have done that fallback deterministically too, but it wasn't something I'd thought of yet.
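If you did want that fallback deterministic after all, it's a tiny wrapper. A Python sketch — the provider names and synthesize functions are stand-ins, not real SDK calls:

```python
def tts_with_fallback(text, providers):
    """Try each (name, synthesize) pair in order; return the first success.

    `providers` is a list like [("openai", openai_tts), ("mistral", mistral_tts)],
    where each synthesize(text) returns MP3 bytes or raises on failure
    (e.g. out of credit). The names here are placeholders - wire in
    whatever TTS clients you actually use.
    """
    errors = []
    for name, synthesize in providers:
        try:
            return name, synthesize(text)
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all TTS providers failed: " + "; ".join(errors))
```

The agent can still pick the voice and write the script; the "which vendor do we bill today" decision doesn't need tokens.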
I once had a friend tell me they'd got their AI to tell them the weather every morning... and the thought of that poor AI, web researching Weather APIs & writing a new python script to call the API every morning, instead of just doing the research once and making it a binary (or even just a curl line)... drove me crazy. All that wasted time and compute. Some people just like to watch tokens burn.
Raw AI output is dangerous to just use. And yet - we do, because that’s 2026’s state of the art.
It’s like your raw thoughts - you wouldn’t act on them, you’d pass them through many filters you’ve designed over the course of your life.
This is a tricky harness engineering problem - but it’s solvable. We need deterministic shells around these things.
Don’t use raw AI output. Paste it back with feedback, or build tools and scripts that automate that self-reflection loop. Don’t ask it for financial advice; instead have it build - and then populate - financial models. Request that it use symbolic modeling to reason about problems (this nudge was all it took for Gemini to ace the “walk 50m to the car wash” question.) Ask it to contemplate its essay in the context of Wikipedia’s “signs of AI writing” article and clean it up a bit. Have it build you a tool that automates the “clean this essay up” step for you, so you only see cleaned up essays.
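That "automate the self-reflection loop" step is a few lines of glue. A Python sketch of the loop shape, where ask() is a stand-in for however you call the model (an SDK client, a subprocess around "claude -p", etc.) — nothing here is a real API:

```python
def refine(prompt, ask, rounds=2):
    """Draft, critique, and revise before any output reaches the user.

    `ask` is a hypothetical callable wrapping your model of choice;
    this shows the loop shape, not a particular vendor's SDK.
    """
    draft = ask(prompt)
    for _ in range(rounds):
        critique = ask(
            "Critique this draft against Wikipedia's 'signs of AI writing' "
            "list and name concrete fixes:\n\n" + draft
        )
        draft = ask(
            "Revise the draft, applying only these fixes:\n\n"
            f"FIXES:\n{critique}\n\nDRAFT:\n{draft}"
        )
    return draft
```

The point is that you only ever see the output of the last step.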
We all refine our work until it’s ready - the culture of AI use needs to mirror that.
> Security testing has to become an automated, integral part of the CI/CD pipeline. When a developer opens a pull request, an AI agent should immediately attempt to exploit it. When infrastructure changes, an AI should autonomously validate the new attack surface. You do not beat automated attackers by turning off the lights; you beat them by running better automation on the inside.
This feels like the core of the article, but it doesn’t prove the need for open source.
It was announced in like November of last year, so it's certainly taken some time. The announcement was by some senior management at GitHub, so it has some degree of buy-in.
You can write your own linters for every dumb AI mistake, add them as pre-commit checks, and never see that mistake in committed code ever again. It’s really empowering.
You don’t even have to code the linters yourself. The agent can write a python script that walks the AST of the code, or uses regex, or tries to run it or compile it. Give it a non-zero exit code and a line number, and the agent will fix the problem, rerun the linter, and loop until it passes.
Lint your architecture too - block any commit that directly imports the database from a route handler. Whatever rule you have in mind, ask the coding agent for recommendations on an approach!
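As a concrete sketch of that kind of architecture lint, here's a minimal checker using Python's stdlib ast module. The module name "app.db" and the assumption that route-handler files are passed on the command line are illustrative — adapt to your own layout:

```python
import ast

# Hypothetical database module that route handlers must never import directly.
FORBIDDEN = "app.db"

def check(path, source):
    """Return (path, line, module) for every direct import of the DB layer."""
    violations = []
    for node in ast.walk(ast.parse(source, filename=path)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            if name == FORBIDDEN or name.startswith(FORBIDDEN + "."):
                violations.append((path, node.lineno, name))
    return violations

def main(paths):
    """Lint each file; print agent-friendly line numbers, return exit code."""
    bad = []
    for path in paths:
        with open(path) as f:
            bad += check(path, f.read())
    for path, lineno, name in bad:
        print(f"{path}:{lineno}: route handler imports {name} directly")
    return 1 if bad else 0
```

Hook `sys.exit(main(sys.argv[1:]))` over your handler files in a pre-commit check and the rule enforces itself — the non-zero exit plus line number is exactly what the agent needs to self-correct.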
Get out of the business of low level code review. That stuff is automatable and codifiable and it’s not where you are best poised to add value, dear human.
There should be volunteer groups at local libraries running these services for their local communities.
It’d be a great way for kids to learn to operate services and a great alternative for anyone who wants to use the fantastic open source stuff that’s out there but lacks expertise or time.
> There should be volunteer groups at local libraries running these services for their local communities.
The problem with bespoke anything in computers is always the support.
No one wants to be on the hook for customer support. I absolutely agree with them.
There are a ton of "services" that exist solely to enable people to cut a check and say "Customer support is over there. Go talk to them and leave me alone."