This article was a bit confusing for me. It starts off by describing what "doing it wrong" looks like (okay). It then goes on to talk about Agents. Perhaps it's just that my human brain needs a firmware update, but I was expecting the "what doing it wrong looks like" section to be followed by a "what doing it right looks like" section. Instead, the next paragraph just begins with "Agents".
Sure, one could surmise that perhaps "doing it right" means "using Agents", but that's not even how the article reads:
> "To make AI development work for you, you’ll need to provide your AI assistant with two things: the proper context and specific instructions (prompts) on how to behave under certain circumstances."
This, to me, doesn't necessitate using agents, so jumping straight into a section on Agents skips over the implied logical connection between the problem described in the "doing it wrong" section and how the "Agents" section is supposed to solve it.
Copying code snippets into web UIs and testing manually is slow and clunky, but Agents are essentially just automations around these same core actions. I feel this article could've made a stronger point by getting at the core of what it means to do it wrong.
• Is "doing it wrong" indicated by the time wasted by not using an agentic mechanism vs manual manipulation?
• Is "doing it wrong" indicated by manually switching between tools instead of using MCP to automate tool delegation?
Having written several non-trivial agents myself using Gemini and OpenAI's APIs, the main difference between handing off a task to an agent and manually copy/pasting into chat UIs is efficiency. I usually do a task manually in a chat UI first; once I have a pattern established, or have identified a set of tools to validate responses, I can then "agentify" it if it's something I need to do repeatedly. But the quality of both approaches still depends on the same core principles: adequate context (no more and no less than what keeps the LLM's attention on the task at hand) and adequate instructions for the task (often with a handful of examples). In this regard I agree with the author: correct context + instructions are the key ingredients of a useful response. The agentic element is an efficiency layer on top of those ingredients that frees the dev from manual orchestration, potentially avoiding human error (and potentially introducing LLM error).
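To make "agentify" concrete, here's a minimal sketch of the kind of loop I mean, using the OpenAI Python SDK (the Gemini API works similarly). The model name, prompt, and validator are placeholders I made up, not anything from the article:

    # Minimal "agentify" sketch: the loop replaces a human copy/pasting into
    # a chat UI and re-prompting until a validation tool passes.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are a careful code reviewer. "      # the "adequate instructions",
        "Reply with a bulleted list of issues."  # established manually first
    )

    def validate(reply: str) -> bool:
        # Placeholder validation tool. In a real agent this is where a
        # linter, test suite, or schema check would go.
        return reply.strip().startswith("-")

    def run_task(task_input: str, max_retries: int = 3) -> str:
        messages = [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": task_input},  # the "adequate context"
        ]
        for _ in range(max_retries):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=messages,
            )
            reply = resp.choices[0].message.content or ""
            if validate(reply):
                return reply
            # Feed the failure back instead of a human noticing and re-pasting.
            messages.append({"role": "assistant", "content": reply})
            messages.append(
                {"role": "user",
                 "content": "That wasn't a bulleted list. Try again."}
            )
        raise RuntimeError("validation failed after retries")

Nothing in that loop is beyond a person with a chat UI; it just runs without me babysitting it.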
>>>"The biggest thing about the Facebook deal is that Oculus will have hundreds of millions of dollars to develop the hardware they dream of, not the hardware they have to settle for."
I didn't realize this was public knowledge, yet you state it as if it were obvious. Do you have evidence that they were settling for lame hardware that $75M of investment couldn't fix? Where did this news surface? Sounds interesting.
>You are right that screens with big lenses in front of your eyes is essentially a brute force design, a design that relies on utilizing the scraps of the mobile phone industry to provide a good VR experience at the cost of performance and form factor. Doing better requires insane resources, which we now have.
> We can make custom hardware, not rely on the scraps of the mobile phone industry. That is insanely expensive, think hundreds of millions of dollars. More news soon.
- Palmer [1]
I don't get what point you're refuting here. Why did you start this off with "No, no, and triple no. Wake up call here."? That seems like a really drastic response to "billionaires aren't like the rest of us", which seems like a fairly accurate statement, no? (Unless we're all billionaire$ here.)
So I'm guessing you were maybe referring to a different aspect of his post?
If you feel like a person's possessions define who they are, yes, a billionaire can never be like you. Don't you think that's a little limiting?
What about the millionaire businessman you meet at a conference? What about your neighbour who earns three times as much as you do? What about your other neighbour who makes half?
As someone who has done plenty of indie game development, I understand their gripes, but I also know this is the "Game" we're playing here: anyone who has any form of success should expect to be cloned. Period. So think about that from the get-go, make peace with it, and set your strategy and expectations accordingly. The second I put any app out there, I assume it's out in the general "mindspace" and will probably manifest somewhere in another form at some point or another. The speed at which it manifests usually correlates with the popularity of the idea.
10x annual revenue, I think, flies in many investors' books and wouldn't seem "outlandish". If need be, you could probably meet somewhere between 5x and 10x. 2x seems like too low a multiple.
I don't know if I agree. 10x seems to be the common figure a lot of people quote, but I've never personally seen a statement that it's an established standard (which books are claiming this?). There are multiple factors to consider when buying and selling, and claiming 10x annual revenue is standard is foolhardy.
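Just to put rough numbers on the spread being debated (the revenue figure is made up purely for illustration):

    # Toy arithmetic only; figures are hypothetical.
    annual_revenue = 500_000  # $/yr

    for multiple in (2, 5, 10):
        print(f"{multiple}x revenue -> asking price ${annual_revenue * multiple:,}")

    # 2x revenue -> asking price $1,000,000
    # 5x revenue -> asking price $2,500,000
    # 10x revenue -> asking price $5,000,000

Whether any given multiple is "fair" depends on things like growth, margins, and risk, which is exactly why a blanket 10x claim seems off to me.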
Am I missing something here?