
Including a coding style guide can help the code look like what you want. Also include an explanation of the project structure and the overall design of the code base. Always specify which libraries it should use (or it'll pull in anything, or reimplement things a library already provides).

You can also make the AI review itself. Have it modify code, then ask it to review the code, then ask it to address the review comments, and iterate until it has no more comments.

Use an agentic tool like Claude Code or Amazon Q CLI. Then ask it to run tests after code changes and to address all issues until the tests pass. Make sure to tell it not to change the test code.
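For example, you might put something like this in your prompt or in the tool's instructions file (the wording here is just an illustration, adapt it to your setup):

    After every code change, run the full test suite.
    If any test fails, fix the implementation and re-run until all tests pass.
    Do not modify any test files; treat the tests as the spec.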


Unless your employer pays for you to use agentic tools, avoid them. They burn through money and tokens like there's no tomorrow.

It is. AI gives you an ad-free web browsing experience, and that's a huge part of it as well.

I agree with the potential of AI. I use it daily for coding and other tasks. However, there are two fundamental issues that make this different from the Photoshop comparison.

The models are trained primarily on copyrighted material and code written by the very professionals who now must "upskill" to remain relevant. This raises complex questions about compensation and ownership that didn't exist with traditional tools. Even if current laws permit it, the ethical implications are different from Photoshop-like tools.

Previous innovations created new mediums and opportunities. Photoshop didn't replace artists; it enabled new art forms. Film reduced theater jobs but created an entirely new industry where skills could mostly transfer. Manufacturing automation made products like cars accessible to everyone.

AI is fundamentally different. It's designed to produce identical output to human workers, just more cheaply and/or faster. Instead of creating new possibilities, it's primarily focused on substitution. Say AI could eliminate 20% of coding jobs and reduce wages by 30%:

    * Unlike previous innovations, this won't make software more accessible
    * Software already scales essentially for free (built once, used by many)
    * Most consumer software is already free (ad-supported)
The primary outcome appears to be increased profit margins rather than societal advancement. While previous technological revolutions created new industries and democratized access, AI seems focused on optimizing existing processes without providing comparable societal benefits.

This isn't an argument against progress, but we should be clear-eyed about how this transition differs from historical parallels, and why it might not repeat the same historical outcomes. I'm not claiming it won't, only that you can see some pretty significant differences, and reasons to be skeptical that the same creation of new jobs, or the same improvements to human lifestyle and capabilities, will emerge as they did with, say, film or Photoshop.

AI can also be used to achieve things we could not do without it; that's the good use of AI: things like cancer detection, self-driving cars, and so on. I'm speaking specifically of the use of AI to automate white-collar work like software development, making it cheaper and/or faster.


For me this is the "issue" I have with AI. Unlike, say, the internet, mobile, and other tech revolutions, where I could see new use cases or optimisations of existing use cases spring up all the time (new apps, new ways of interacting, more efficiency than physical systems, etc.), AI seems focused more on efficiency and substitution of labour than on pushing the frontier of "quality of life". Maybe this will change, but the buzz is around job replacement atm.

It's why it is impacting so many people while making only very small changes to everyday "quality of life" metrics (e.g. the ability to eat, communicate, live somewhere, etc.). Arguably it is more about enabling greater inequality and the gatekeeping of wealth by capital, a future world where intelligence and merit matter less. For most people it's hard to see where the long-term positives are for them in this story; most everyday folks don't believe the utopia story is in any way probable.


> The primary outcome appears to be increased profit margins rather than societal advancement. While previous technological revolutions created new industries and democratized access, AI seems focused on optimizing existing processes without providing comparable societal benefits.

This is the thing that worries me the most about AI.

The author's ramblings dovetail with this a bit in their "but the craft" section. They vaguely attack the idea of code golfing and coding for the craft as essentially incompatible with the corporate model of programming work. And perhaps they're right. If they are, though, this AI wave/hype being mostly about process streamlining and such seems to be a distillation of that fact.


Maybe it's like automation that makes webdev accessible to anyone. You take a week-long AI coaching course, talk to an AI, let it throw together a website in an hour, and then you self-host it.

Interesting thing to keep an eye on.

Though personally, I'm not sure whether I'm more scared of safety issues with the models themselves, or of the impact these models will have on people's well-being, lifestyles, and so on, which might fall under human law.


Ya, I've always wondered: do the blood cells in my body have any awareness that I'm not just a planet they live on? Would we know if the Earth was just part of some bigger living structure with its own consciousness? Does it even need to be conscious, or just show movement that is non-random and influenced in some way by goals or agendas? Many organisms act towards the goal of survival even if not conscious, and so can probably be considered life-forms? Corporations are an example of that, like you said.

    - Data to train on
    - Being the first GenAI experience for users, who might then associate this stuff with xAI branding.

Not sure I buy your second bullet.

Windows has had Copilot baked in for a while now, where GenAI stuff is already possible.

Meta has their AI baked into WhatsApp, and probably into Instagram as well (not sure though).

Google is rolling out Gemini on Android.

I would posit that for the majority of Telegram users, xAI is just going to be "yet another AI integration", and it'll be nothing novel.


Where is the LLM in my WhatsApp? I've never seen it, and I'm on the latest version.

On the Chats tab, right above the green "+" floating button, there is a "Meta AI" icon in my version of WhatsApp. If you tap it, it opens up a chat with the AI like normal.

I guess they are still rolling out the feature.


Amazon Q CLI, Claude Code CLI, Goose with any model, OpenAI's recently released Codex CLI, and Google's recently released Jules.

You can also just use Claude Desktop and hook it up to MCP servers.


I still need to catch up on MCP; it's still new to me and, truth be told, I'm clueless about it.

I thought Cursor had support for MCP now? So in theory it can now navigate the code base, query for code structure, and so on as well no?

I think you need to try the 3rd way:

    3. Agentic Coding (aka Vibe Coding)
This is what clojure-mcp with Claude Desktop lets you try. Or you can try Amazon Q CLI (there is a free tier https://aws.amazon.com/q/developer/pricing/). Not Clojure specific.

You need to find a workflow to leverage it. There are two approaches.

    1. Developer Guided
Here you set up the project and basic project structure. Add the dependencies you want to use, set up your src and test folders, and so on.

Then you start creating the namespaces you want, but you don't implement them; just create the `(ns ...)` form with a doc-string that describes it. You can also start adding the public functions you want for its API. Don't implement those either; just add a signature and doc-string.
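For example, a stub might look like this (the namespace and function names are just made up for illustration):

    (ns myapp.accounts
      "Account management: creating, finding, and deactivating user accounts.")

    (defn create-account
      "Creates an account for the given email.
      Returns the new account map, or nil if the email is already taken."
      [email])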

Then you create the test namespace for it. Create a deftest for each function you want to test, and add `(testing ...)` forms, but don't add the bodies; just write the test descriptions.
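Continuing the hypothetical example above, the test skeleton might look like:

    (ns myapp.accounts-test
      (:require [clojure.test :refer [deftest testing]]
                [myapp.accounts :as accounts]))

    (deftest create-account-test
      (testing "creates an account for a new email")
      (testing "returns nil when the email is already taken"))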

Now you tell the AI to fill in the implementation of the tests and the namespace so that all the described test cases pass, and to run the tests and iterate until they all do.

Then ask the AI to code review itself, and iterate on the code until it has no more comments.

Mention security, exception handling, logging, and so on as you see fit; if you explicitly call out those concerns, it'll work on them.

Rinse and repeat. You can add your own tests to be more sure, and also test things out and ask it to fix.

    2. Product Guided
Here you pretend to be the Product Manager. You create a project and start adding markdown files to it that describe the user stories, the features of the app/service, and so on.
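For example, a hypothetical stories/login.md might look like:

    # Feature: User login

    As a returning user, I want to log in with my email and password
    so that I can access my saved data.

    Acceptance criteria:
    - Invalid credentials show an error message
    - Successful login redirects to the home screen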

Then you ask AI to generate a design specification. You review that, and have it iterate on it until you like it.

Then you ask AI to break down a delivery plan, and a test plan to implement it. Review and iterate until you like it.

Then you ask AI to break the delivery up into milestones, and to create a breakdown of tasks for the first milestone. Review and iterate here.

Then you ask AI to implement the first task, with tests. Review and iterate. Then the next, and so on.


I'm skeptical because I don't think generating the Clojure code is the hard part. These ideas seem more like wishful thinking than actual productivity improvements with the current state of tech.

Developer guided: For the projects I'm currently working on, the understanding is the most difficult part, and writing the code is a way for me to check my understanding as I go. I do use LLMs to generate code when I feel like it can save me time, such as setting up a new project or scaffolding tests, but I think there are diminishing returns the larger and/or more complex the project is. Furthermore, I work on code that other people (or LLMs) are meant to understand, so I value code that is consistent and concise.

Product guided: Even with meat-based agents (i.e. humans), there's a limit to how many Jira tickets I can write and junior engineers I can babysit, and this is one of the worst parts of the job to begin with. Furthermore, junior engineers often make mistakes, which means I need to have my own understanding to fix the issues. That said, getting feedback from experienced colleagues is invaluable, and that's what I'm currently simulating with LLMs.


I'm not sure what you are asking exactly?

Say you have a Clojure project; it's in a folder on your computer where you likely cloned or initialized a git repo.

Now you want to leverage an agentic LLM that can connect to clojure-mcp so you can prompt the LLM to make real edits to your project source files, folder structure, resources, documentation, etc.

Your options are kind of limited:

    - Amazon Q CLI
    - Claude Code CLI
    - OpenAI Codex CLI

Those are the best. Then you have the IDE based ones, like Cursor, Windsurf, Copilot agent mode (in public preview currently), and so on.

What they are saying, though, is that Claude Desktop also supports MCP, and can be used without incurring API charges.

Honestly, the in-IDE ones, for me, are not very good; you really don't need this stuff tied up inside an editor. I prefer the CLIs personally, but I can see how you could just as easily run Claude Desktop in a side-bar rather than needing something inside your editor.


And the present state isn't even that important. In 2-4 years we'll be two hardware generations into the future; people will be able to buy hardware tailored to 2024-25 models, and VRAM will be creeping up (or mitigations for low VRAM will be found). The models something uses today don't tell us much about what a project will look like in 3 years. None of the current crop of leading models is going to last that long. A project might easily be looking at the medium term, not the present.


> Your options are kind of limited: - Amazon Q CLI - Claude Code CLI - OpenAI Codex CLI

Ampcode has a CLI, which is their agent using Claude 4.

Google also came out with Jules a few days ago.

There's aider, with which you can use whichever LLM you'd like.

I'm pretty sure that there are others...


Aider does not have MCP support yet. Neither does Jules, I believe.

Ampcode I've heard of, but I also heard it's very expensive; same for Devin. I also don't know if either of them supports MCP.

I'm sure there are others, of varying quality, but realistically, the options you'd want to use are the ones I listed I think.

P.S.: I've been looking for alternatives, by the way, something that lets me use OpenAI models. I've yet to try it, but I've heard good things about Goose: https://block.github.io/goose/


> Your options are kind of limited: - Amazon Q CLI - Claude Code CLI - OpenAI Codex CLI

Out of curiosity, which option do you go for?


For clojure-mcp, you really should try just Claude Desktop. That's because clojure-mcp already provides all the tools you need: reading files, running shell commands, running code in the REPL, running tests, listing directories, linting code, etc.
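For reference, MCP servers are registered with Claude Desktop in its claude_desktop_config.json. The exact command to launch clojure-mcp depends on how your deps.edn aliases are set up, so treat this as a rough sketch and check the clojure-mcp README for the real invocation:

    {
      "mcpServers": {
        "clojure-mcp": {
          "command": "clojure",
          "args": ["-X:mcp"]
        }
      }
    }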

The others I listed above come with a lot of tools baked in, and I'm not sure if those could interfere; the LLM might prefer a bundled tool over the clojure-mcp ones.

Otherwise I use Amazon Q CLI, because it is the cheapest of the bunch. I'd say Claude Code CLI is the other I'd use personally.

