I gave a similar presentation in January covering the AI features that emerged in 2025, culminating in the step-function capability jump in Nov '25, and where I went from there... (my GitHub activity has certainly been bright green since)
The presentation was created with Claude Code to prove itself; I'm never going back to Keynote/PowerPoint. Press the 'X' key to disable "safe mode". Prompts are in the repo.
I agree with you, but we will also start sharing these conversation traces more and more. That's why it's important for redaction to be in the export pipeline. There can be both deterministic (e.g. regex) and LLM-based redaction.
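A deterministic pass can be as small as a table of regexes run before export. A minimal Go sketch - the patterns and replacement tokens here are illustrative, not from any real pipeline:

```go
package main

import (
	"fmt"
	"regexp"
)

// Hypothetical deterministic redaction rules; real pipelines would
// carry many more patterns (emails, keys, IPs, paths, names).
var redactions = []struct {
	pattern *regexp.Regexp
	token   string
}{
	{regexp.MustCompile(`[\w.+-]+@[\w-]+\.[\w.]+`), "[EMAIL]"},
	{regexp.MustCompile(`sk-[A-Za-z0-9]{20,}`), "[API_KEY]"},
}

// Redact applies each deterministic rule in order; an LLM-based pass
// could run afterwards to catch what the regexes miss.
func Redact(s string) string {
	for _, r := range redactions {
		s = r.pattern.ReplaceAllString(s, r.token)
	}
	return s
}

func main() {
	fmt.Println(Redact("contact alice@example.com, key sk-abcdefghijklmnopqrstuv"))
	// contact [EMAIL], key [API_KEY]
}
```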
Building a TUI was already easy, especially with the great toolkits for their respective languages: BubbleTea (Go), Textual (Python), and Ratatui (Rust). And thanks to those frameworks, LLMs can manifest useful tools.
As with web apps, it's only since the November '25 renaissance that I've felt I could use LLMs to create TUIs. Once I had that revelation, I started working through my backlog.
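Part of why LLMs do well with these frameworks is that they share the Elm architecture: a model, an update function reacting to messages, and a view that renders. A dependency-free Go sketch of that shape - not the real BubbleTea API:

```go
package main

import "fmt"

// msg is any event fed to the update loop; BubbleTea calls these tea.Msg.
type msg interface{}

// keyMsg stands in for a keypress event.
type keyMsg string

// model holds all application state.
type model struct{ count int }

// update consumes a message and returns the next model state.
func (m model) update(ms msg) model {
	if k, ok := ms.(keyMsg); ok && k == "+" {
		m.count++
	}
	return m
}

// view renders the current state as a string for the terminal.
func (m model) view() string {
	return fmt.Sprintf("count: %d (press + to increment)", m.count)
}

func main() {
	m := model{}
	// Simulate two keypresses instead of a real event loop.
	for _, ms := range []msg{keyMsg("+"), keyMsg("+")} {
		m = m.update(ms)
	}
	fmt.Println(m.view()) // count: 2 (press + to increment)
}
```

The real frameworks add the event loop, terminal handling, and styling; the model/update/view split is the part an LLM has to get right.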
I maintain a TUI charting library, NTCharts. In January I fixed a bug - totally obvious once identified - that I had personally failed to find earlier. But the test harness, prompting, and Gemini got it done [1]. Gemini's spatial understanding was critical to completing the task.
I've been vibe-crafting a local LLM conversation viewing tool called thinkt. After scraping ~/.claude and making a data model, this is the point in PROMPTS.md where I start creating the TUI using BubbleTea. [2].
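For context, the scraping step mostly boils down to decoding JSONL session lines into a data model. A hedged Go sketch - the field names are my guess at the shape, not the actual ~/.claude schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Entry is a guess at the shape of one line in a Claude Code session
// JSONL file; the field names here are illustrative, not the real schema.
type Entry struct {
	Type      string `json:"type"`
	Timestamp string `json:"timestamp"`
	Message   struct {
		Role    string `json:"role"`
		Content string `json:"content"`
	} `json:"message"`
}

// ParseEntry decodes a single JSONL line into an Entry.
func ParseEntry(line string) (Entry, error) {
	var e Entry
	err := json.Unmarshal([]byte(line), &e)
	return e, err
}

func main() {
	// One synthetic line standing in for real session data.
	line := `{"type":"assistant","timestamp":"2025-11-30T12:00:00Z","message":{"role":"assistant","content":"done"}}`
	e, err := ParseEntry(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Message.Role, e.Message.Content) // assistant assistant done
}
```

A real scraper would walk the directory tree with filepath.WalkDir and feed each line through a decoder like this.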
The second bubble there is a tool for 3D visualization and analytics of Claude Code sessions. The sample conversation is the one that made the tool itself!
That was a fun toy I learned a lot from. I'm not expanding that, but am working intensely on the first bubble:
thinkt is a CLI/TUI/webapp for exploring your LLM conversations. It makes it easy to see all your local projects, view them, and export them. It has an embedded OpenAPI server and an MCP server.
So you can open Kimi and say “use thinkt mcp to look at my last Claude session in this project, look at the thinking at the end and report on the issues we were facing”.
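To make the embedded-server idea concrete, here's a minimal stdlib-only Go sketch; the /sessions path, the Session fields, and the sample data are all hypothetical, not thinkt's actual API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// Session is a stand-in record; thinkt's real data model will differ.
type Session struct {
	Project string `json:"project"`
	Turns   int    `json:"turns"`
}

// sessionsJSON renders a fake local session list; a real server would
// read it from the scraped conversation data instead.
func sessionsJSON() string {
	b, _ := json.Marshal([]Session{{Project: "thinkt", Turns: 42}})
	return string(b)
}

func main() {
	mux := http.NewServeMux()
	// Hypothetical endpoint path; the MCP server would expose the same
	// data as tool calls rather than HTTP routes.
	mux.HandleFunc("/sessions", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprint(w, sessionsJSON())
	})

	srv := httptest.NewServer(mux) // in-process server for the demo
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/sessions")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var got []Session
	json.NewDecoder(resp.Body).Decode(&got)
	fmt.Println(got[0].Project, got[0].Turns) // thinkt 42
}
```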
I added Claude Teams support by launching a Team and having that team look at its own traces and the changing ~/.claude folder. Similar for Gemini CLI and Copilot (which still need work).
Doing it in the open. Only two weeks old - usable, but early. I'm only posting because it's what I'm working on. Still working on polish and deeper review (it is vibe-crafted). There are ergonomic issues with ports and DuckDB. Coming up next: a VSCode extension and an exporter/collector for remote agents.
The Claude Code analytics space is really interesting to me right now as well, this is cool.
I'm coming at it from more of the data-infrastructure side (e.g. send all of your logs and metrics to a cheap Iceberg catalog in the cloud so you have a central place to query [1]), but also check out https://github.com/tobilg/ai-observer -- DuckDB is popping up everywhere, making this kind of thing interesting and easy.
Over the weekend I took pictures of the four walls of my office and asked Claude Desktop to examine them and give me a plan for tackling it. It absolutely “understood” my room, identifying the different (messy) workspaces and various piles of stuff on the ground. It generated a checklist with targeted advice and said that I should be motivated to clean up because the “welcome back daddy” sign up on the wall indicates that my kids love me and want a nice space to share with me.
I vibe-code TUIs and GUIs by making statements like "make the panel on the right side two pixels thinner".
Related to this thread, I explored agentic looping for 3D models (with a Swift library; it could be done with this Rust one by following the same workflow):
https://github.com/ConAcademy/WeaselToonCadova
My running joke after showing off some amazing LLM-driven work is...
if you think this is impressive, I once opened a modal dialog on an Apple IIGS in 65C816 assembly
I don't think you need to learn BASIC if you know concepts like conditionals, looping, and indexing. It is interesting to compare the higher-level language of the time with its companion assembly. And you might find yourself writing BASIC programs to complement your assembly if you stick with that platform.
<lore>
A friend dropped me a BASIC program that ran and wrote text to the Apple IIGS border. He asked me to figure it out, because it wasn't obvious what was going on. An OG hacker puzzle... it was a BASIC program that jumped to hidden assembly after the apparent end of the text file (hidden chars, maybe - I forget), and the assembly was changing the border at the right rate to "draw" on it. Those were the days... I've been trying to find some reference to this and am failing.
</lore>
I certainly credit my stack-frame debugging ability to dealing with that stuff so long ago. Oddly enough, I didn't really find it helpful for computer architecture class. Just because you know registers exist and how to manipulate them doesn't exactly map to architecting modern hardware systems. But being fluent in logic operations, bit-twiddling, and indexing does help a lot.
I really appreciate all of his message -- responsibility and actual engineering are critical and can't be (deceptively) lost just because pull-request and CI/CD workflows exist. I hate the term vibe-coding because it seems flippant, and I've leaned on "LLM-assistance" to frame it better.
I consider vibe coding and LLM-assistance to be distinctively separate things.
I am vibe coding if I need X: I lay out that task with some degree of specificity and ask for the whole result. Maybe it's good; I gave the LLM a lot of rope to hang me with.
I am using an LLM for assistance if I need something like this file renamed, all its functions renamed to match, all the project metadata changed, and every comment that mentions the old name updated. There is an objectively correct result.
I've been evangelizing vibe coding, because we are wielding something much more powerful now than even ~3 months prior (Nov was the turning point).
Now that Prometheus (the myth, not the o11y tool) has dropped these LLMs on us, I've been using this thought experiment to consider the multi-layered implications:
In a world where everyone can cook, why would anybody buy prepared food?
>In a world where everyone can cook, why would anybody buy prepared food?
I would guess for convenience and saving time. While vibe-coding might be faster, you still have to "do it" - as in, think about what you want your software to do, write out or dictate your prompts, test that it works, etc. That takes time (maybe less time than writing it out by hand, but still a non-zero amount).
I think my comment was misconstrued as siding either way; I didn't communicate it well. The point is that it frames questions - like the ones you were exploring.
Because of course we all buy prepared food of all sorts, from street vendors to fast food to local restaurants to chains to Michelin stars. While there are many reasons one will cook for oneself, there are many reasons one will buy from someone else, too.
https://neomantra.github.io/presentations/GolangMeetupJan202...