Hacker News | ChadMoran's comments

I launched a technical feature on Amazon's retail platform that is responsible for nine figures' worth of revenue. When I launched it, it had no infrastructure. It was a collection of small changes across every core platform (Detail Page, Cart, Checkout, etc.).

At first, people were like, "Well, you didn't do much," but when they saw the value, things changed drastically. It's a bit of marketing you have to do to bring people along.

Often perceived impact is correlated with complexity, sadly.


Coincidentally similar story, but with a different ending: I eliminated a test station from the final assembly line for Astro (Amazon's home robot) because there were no unique failure modes the station could catch. We collected ideas, then went through and analyzed every failure mode suggested; all of them would be detected by other stations. I think we estimated ~$1 million in savings, in addition to simplifying the workflow and reducing the time spent on final testing. I brought this up with my manager when pushing for a promo, but he told me "somebody else made that value, you just stopped it from being thrown away", lol. Maybe I should have pushed harder, but I definitely felt underappreciated, and I'm still not sure if I could have gotten better recognition for it.

Don't feel sorry for me though, they still paid me well enough, and I'm happily doing my own stuff now :)


How do you ascribe a revenue number like that based on one collection of changes in a huge system? Presumably there were a bunch of other features being released around the same time as it. Was there a lot of A/B testing around it?


Amazon uses a complicated process called "attributed OPS", meaning you may not be directly responsible for the revenue, but you contributed to it in some way.


Doesn’t matter, @ChadMoran is already on the fast track whilst you are on a PIP.


Hah, well I've done something right. They've let me stay here almost 15 years.


Model aside, the harness of Claude Code is just a much better experience. Agent teams, liberal use of tasks, and other small ergonomic touches make it a better dev tool for me.


I've heard a lot of people prefer OpenCode to Claude Code, myself included. Having tried both, I find myself having a much better time in OpenCode. Have you tried it?

I'll admit it's lacking on the agent teams side, but I tend to use AI sparingly compared to others on my team.


I’ve been using Claude Code for about six months and evaluated OpenCode on my Windows work laptop a few weeks ago. Found 3 dealbreakers that sent me back:

1. No clipboard image paste. In Claude Code I constantly paste screenshots – a broken layout, an error dialog, a hand-drawn schema – and just say “fix this.” OpenCode on Windows Terminal can’t do that without hacky workarounds (save to file, drag-and-drop, helper scripts). I honestly don’t understand how people iterate on UI without this.

2. Ctrl+C kills the process instead of copying. And you can’t resume with --continue either, so an accidental Ctrl+C means losing your session context. Yes, I know about Ctrl+Ins/Shift+Ins, but muscle memory is muscle memory. I also frequently need to select a specific line from the output and say “this part is wrong, do it differently” – that workflow becomes painful.

3. No step-by-step approval for individual edits. Claude Code’s manual edit mode lets me review and approve each change before it’s applied. When I need tight control over implementation details, I can catch issues early and redirect on the spot. OpenCode doesn’t have an equivalent.

All three might sound minor in isolation, but together they define my daily workflow. OpenCode is impressive for how fast it’s moving, but for my Windows-based workflow it’s just not there yet.


I've launched 3 Rails SaaS products in the last 6 months, all profitable. In the world of LLMs things like this feel less valuable. I can kick off a Claude Code prompt and in 1 hour have a decent design system with Rails components.

Things like this likely need to be AI-first moving forward. This feels built for humans.


Personally, if I feel like you vibe-coded your SaaS, I’m probably not gonna pay for it. You can obviously tell when a project is vibe-coded just from the way it looks, the weird bugs you see, and the poor documentation.

There’s definitely a market for good looking UI that actually works and stands out from the vibe coded junk. Artisanal corn fed UI I guess.


Same here. This was human driven UI. I used AI sparingly for mostly architecture decisions on the gem. Otherwise all by hand. I'm a product designer by trade.


AI is an effort leverage tool, not a thinking one.

If you know what you're doing it can truly accelerate your work.


I agree.

None of mine are vibe-coded. They're making a competent engineer like me more efficient.

The roadblock for me is the coding part, not the idea part.


That's fair. I think there's a future where some folks won't want AI to generate all the things. I replied to another comment before, but this was very little AI, aside from some architecture direction on the underlying Ruby gem.


I definitely don't AI-generate all of the things.

For me it's a bandwidth increaser for the tedious parts.


Any chance I could reach out to you? I'd like to ask you some questions about those SaaS products (not in a bad way, just trying to learn).


Maybe they used AI to make this? But really, though, I hope they didn't and did some of the designing themselves... I'm worried we're approaching a world where we never get new human designs, just regurgitated designs from pre-2025.


I used AI sparingly, actually. Mostly just some help with the Ruby gem architecture and how to approach swapping themes on the fly; otherwise it was all me. I'm a product designer by day, so I do this stuff constantly.


I came here to say: is someone going to tell him? Glad I'm not the only one thinking, "Wait... I can do this with an agent in no time."

In fact, armed with Context7, Claude could recreate this whole business model in a day.


Definitely aware. I built it to scratch my own itch to be honest. I'm going the non AI route with it. Lotta slop out there. I'm sure it will improve but I'm fine with this being a side gig.


This is how I say it in my head.


Now every time I see UTC, I hear the voice of Yoda: "Universal Time, Coordinated it is."


Interesting take. As a low vision person, the icons help me scan menus like this.


This is what holds me back from Zed.


This is the crux of knowledge/tool enrichment in LLMs. The idea that we can have knowledge bases and LLMs will know WHEN to use them is a bit of a pipe dream right now.


Can you be more specific? The simple case seems to be solved; e.g., if I have an MCP for foo enabled and then ask for a list of foo, Claude will go and call the list function on foo.


> […] and then ask about a list of foo

Not OP, but this is the part that I take issue with. I want to forget what tools are there and have the LLM figure out on its own which tool to use. Having to remember to add special words to encourage it to use specific tools (required a lot of the time, especially with esoteric tools) is annoying. I’m not saying this renders the whole thing “useless” because it’s good to have some idea of what you’re doing to guide the LLM anyway, but I wish it could do better here.


I've got a project that needs to run a special script, not just "make $target" at the command line, in order to build. Even with instructions in multiple .md files, Codex with gpt-5-high still forgets and runs make blindly, which fails, and it gets confused annoyingly often.

Ooh, it does call make when I ask it to compile, and it's able to call a couple of other popular tools without having to refer to them by name. If I ask it to resize an image, it'll call ImageMagick or run ffmpeg, and I don't need to refer to ffmpeg by name.

So at the end of the day, it seems they are their training data. Better to write a popular blog post about your one-off MCP and the tools it exposes; maybe the next version of the LLM will have your blog post in its training data and will automatically know how to use it without having to be told.


Yeah, I've done this just now:

1. Installed ImageMagick on Windows.

2. Created a ".claude/skills/Image Files/" folder.

3. Put an empty SKILLS.md file in it.

4. Told Claude Code to fill in the SKILLS.md file itself with the path to the binaries, and it created all the instructions itself, including examples and troubleshooting.

Then in my project I prompted: "@image.png is my base icon file, create all the .ico files for this project using your image skill", and it all went smoothly.
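For anyone wanting to try it, the setup part boils down to something like this (paths taken from my comment above; adjust to your project, and the SKILLS.md contents are whatever Claude writes for you):

```shell
# Create the skills folder Claude Code will scan
mkdir -p ".claude/skills/Image Files"
# Seed an empty SKILLS.md for Claude to fill in with the binary paths and examples
touch ".claude/skills/Image Files/SKILLS.md"
```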


It doesn't do it reliably. You need to inject context into the prompt to instruct the LLM to use tools/KB/etc. Whether and when it will follow through isn't deterministic.


Sub-agents. I've had Claude Code run a prompt for hours on end.


What kind of agents do you have setup?


You can use the built-in task agent. When you have a plan and are ready for Claude to implement, just say something along the lines of “begin implementation, split each step into its own subagent, run them sequentially”.


Subagents are where Claude Code shines and Codex still lags behind. Claude Code can do some things in parallel within a single session with subagents; Codex cannot.


By parallel, do you mean editing the codebase in parallel? Does it use some kind of mechanism to prevent collisions (e.g. work trees)?


Yeah, in parallel. They don't call it yolo mode for nothing! I have Claude configured to commit units of work to git, and after reviewing the commits by hand, they're cleanly separated by file. The todos don't conflict in the first place, though; e.g., changes to the admin API code won't conflict with changes to the submission frontend code, so that's the limited human mechanism I'm using for that.

I'll admit it's a bit insane to have it make changes in the same directory simultaneously. I'm sure I could ask it to use git worktrees with separate directories, but I haven't needed to try that yet, so I won't comment on how well it would actually do.
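For reference, a rough sketch of what the worktree route could look like (branch and directory names here are made up for illustration; this isn't something I've actually run with agents):

```shell
set -e
# Throwaway repo just for illustration
git init -q demo
git -C demo -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -qm init
# One worktree (a separate checkout directory) per parallel task,
# each on its own branch, so agents never edit the same tree
git -C demo worktree add -b task-frontend ../demo-frontend
git -C demo worktree add -b task-admin ../demo-admin
git -C demo worktree list
```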


I personally do not do any writes in parallel but parallel works great for read operations like investigating multiple failing tests.


Claude Code with a good prompt can run for hours.


Okay, but when will we get visibility into this other than being told we're at 50% of the limit? If you're going to introduce week-long limits, transparency into usage is critical.

