That test does not mean anything. I can also spin up a large LLM on my 5090 and say these models are ready for on-device deployment now. However, that would not be true for most people. You should test a Golang hello world binary as well. I bet it will take less than 40 milliseconds.
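If you want to check that bet yourself, a rough sketch of a startup-time measurement looks like this. It times a full run-to-exit of any command; the command shown is just a stand-in, and you would swap in the path to your compiled Go hello-world binary.

```python
import subprocess
import sys
import time

def startup_ms(cmd, runs=5):
    """Median wall-clock time (ms) to run a command to completion."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
        samples.append((time.perf_counter() - start) * 1000)
    return sorted(samples)[len(samples) // 2]

# Placeholder command; replace with e.g. ["./hello"] for a Go binary.
print(f"{startup_ms([sys.executable, '-c', 'pass']):.1f} ms")
```

Note this measures run-to-exit, not just process spawn, so it is an upper bound on startup cost.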
I had almost forgotten about that subreddit. Sadly it has been in a zombie state for years now. Despite having millions of members, you can hardly find even 100+ comments on any post on the front page.
Last time I checked, only political posts (like those related to offshore programmers) got any kind of attention. Most technical posts barely get 10 comments. Some of the smaller subreddits (like /r/ProgrammingLanguages) are much better.
It's badly moderated (not enough mod resources, or something); there is essentially only one mod. Bad comments have a 99.9% chance of not being moderated out at all, and that killed my interest in participating.
Actually, they are using everything they have to combat these cheap drones. That includes Patriot and THAAD systems as well. Especially the UAE, which was struck by more drones than Israel. That is how Iran was able to take out a THAAD radar: it was deployed so close to them.
You can’t really deduce THAAD was used against drones based on this. As the number of US assets in the Middle East increases, it is logical to deploy more THAAD to protect them from Iranian ballistic missiles. Israel intercepted a few[0] at the beginning of the current conflict, so it is common sense to presume there are still some remaining.
Getting early into any technology only makes sense if you are building your business on top of it. Or you are making money from it in some way. Other than that it makes sense for the rest of us to wait.
Of course, those who believe that AI will turn into AGI and destroy society as we know it won't be convinced.
Apple's ecosystem is the 8th wonder of this world. Nowhere else can you put a logo on a piece of cloth or an aluminum wheel and sell it for hundreds of dollars. Greatest capitalist company of all time.
> The spec ended up being 6KiB of English prose. The final implementation was 14KiB of TypeScript.
Wait, this is how people vibe code? I thought it was just giving instruction line by line and refining your program. People are really creating a dense, huge spec for their project first?
I have not seen any benefit of AI in programming yet, so maybe I should try it with specs and as an auto-complete as well.
Yes, definitely! The AI tooling works much like a human: it does better if you have a solid specification in place before you start coding. My best successes have come from using a design document with clear steps and phases; usually the AI creates and reviews that as well, and I eyeball it.
I've been using checklists and asking it to check off items as it works.
Another nice feature of using these specs is that you can give the AI tools multiple kicks at the can and see which one you like the most, or have multiple tools work on competing implementations, or have better tools rebuild them a few months down the line.
So I might have a spec that starts off:
#### Project Setup
- [ ] Create new module structure (`client.py`, `config.py`, `output.py`, `errors.py`)
- [ ] Move `ApiClient` class to `client.py`
- [ ] Add PyYAML dependency to `pyproject.toml`
- [ ] Update package metadata in `pyproject.toml`
And then I just iterate with a prompt like:
Please continue implementing the software described in the file "dashcli.md". Please implement the software one phase at a time, using the checkboxes to keep track of what has already been implemented. Checkboxes ("[ ]") that are checked ("[X]") are done. When complete, please do a git commit of the changes. Then run the skill "codex-review" to review the changes and address any findings or questions/concerns it raises. When complete, please commit that review changeset. Please make sure to implement tests, and use tests of both the backend and frontend to ensure correctness after making code changes.
I’ve always heard (despite the incredibly fluid definition) that “vibe” coding specifically was much more on the “not reading/writing code” side of the spectrum, vs. AI-assisted code writing where you review and tweak it manually.
> Yes, OpenAI is burning $8-12B in 2025. Compute infrastructure is obviously not cheap when serving 190M people daily.
So casual. Actual ad giants like Meta and Google are serving many more people than 190M while bringing in actual profit.
Yes, let's say these are just the early days and they are burning money like any other VC-backed company. But how are they going to scale up their hardware/usage and turn a profit at the same time?
AI hardware is getting optimized YOY too, but the flagship models are getting bigger every year as well. I don't see how they are going to turn a profit without jacking up their prices, and price increases always hit usage growth.
Serving an ad is very cheap these days, while serving a big model is very expensive.
I just looked at Gmail on my Android phone and it is only 164 MB. That is a big difference.
Also, one thing that annoyed me when I used iPhones is that you can't clear an app's cache without reinstalling it and losing all your data. And most modern applications treat cache as free, so they use a lot of it; many times it will exceed even the app's install size or your data size.