In my opinion the separation isn’t between AI and not-AI. It’s between software that is restrained and well reasoned in its development vs. software that crams in unnecessary features without enough testing, creating bugs and pushing useful UI features into a submenu. Software that updates just so the company can sell a new version and get a product manager a promotion almost always sucks.
This is conflated right now with AI because every CEO and PM is pushing AI into their product as a first step toward increased monetization, once they figure out how. This is not well thought out, and it ruins the experience of users who were previously happy with their software. A smart company would wait until the AI actually served a real use case and then roll out a well-tested update, but such companies are exceedingly rare.
I honestly don’t care if you add AI features, but make it so I can ignore them and make sure they don’t get in the way of the actual use of your software.
It’s sad that the best example of such restraint I can think of is Apple, who was planning on rolling out AI everywhere but realized that a lot of it sucked and halted the rollout partway through. It would have been better if they hadn’t made AI the key selling point of the iPhone 16 before failing to deliver, but at least they realized they should stop.
Microsoft also had a partially self-aware moment with their Recall feature, but it took a massive public outcry to get there, and their modifications only partially addressed the privacy concerns.
But the volume of shit that gets funding because of AI overwhelms even the partial success stories: agents that don’t do what they claim, self-driving cars that have never materialized, Rabbit AI, the Humane AI pin, Meta AI glasses, whatever shit Jony Ive’s deal with OpenAI will produce. All of that makes people very skeptical, because product managers and CEOs can’t hold back and wait until an idea is fully baked.
And let me continue my rant: the pricing on these things sucks. You can charge $20-25 a month for a general AI service like ChatGPT that has a bunch of features like image generation and is very versatile. You can’t cram a shitty feature into your single-application software and charge the same amount. If a PM thinks this is appropriate pricing, they are insane.
Additionally, I said previously that I want to be able to ignore the features I don’t use. If you only have one premium tier, and the price went from $5 a month to $20+ a month because you added AI, then I’m probably cancelling, because now I have to decide whether the features I actually use are worth the extra money. Plus, companies that do AI this way will inevitably restrict my usage if I do find the feature useful.
So I guess if I had the choice, I would prefer a company keep a lower premium tier even if it means I get no access to AI features. But what I would really like is an option to enable whatever AI you’ve developed and bring my own API key, without changing your premium tier pricing (or changing it only minimally). But clearly there are enough people out there shelling out for the shitty $20+ monthly subscriptions (or getting their company to pay for it) that my opinion doesn’t matter.