Just a hot take, but if you ask someone to complete a rote task that AI can do, you should not be surprised when they use AI to do it.
The author does not mention whether the generated project plan actually looked good or plausible. If it did, where is the harm? Just that the manager had their feelings hurt?
Having seen a lot of AI-generated ads, I'm so skeptical that AI is actually improving marketing metrics. Every time I see one of these abominations on YouTube I think "is this working for you?"
With that said, I'm long-term bullish on AI. A lot of companies will over-invest, just as they did during the dot-com bubble. But some of those investments will actually pay off, because this technology is not going anywhere.
Regarding your first paragraph, I've even talked with people who go out of their way to actively _avoid_ said product after encountering AI-generated advertising.
So that'll probably continue to have an effect for as long as average people with good eyes can still distinguish "AI"/generative media from "real"/traditional footage.
I have observed this as well, and we've already seen some pushback when major brands use AI in their creative. I wonder if we're entering an era where AI will actually taint a brand.
> Having seen a lot of AI-generated ads, I'm so skeptical that AI is actually improving marketing metrics.
I'm sure that replacing tonnes of marketing people with prompt engineers will be a great return on investment... Perhaps it reveals something deeper: that a 30-second YouTube ad doesn't generate much revenue at all.
> With that said, I'm long-term bullish on AI
In the long run, yes. But a lot of people will end up losing their shirts over this.
> Having seen a lot of AI-generated ads, I'm so skeptical that AI is actually improving marketing metrics. Every time I see one of these abominations on YouTube I think "is this working for you?"
they likely track conversions, so someone is clicking and buying.
Measuring the effectiveness of ad campaigns, particularly in the short term, is notoriously difficult. They likely mostly don't _know_ if it's working at this point (though, yeah, I'd kind of assume it isn't.)
Really? It shouldn't be difficult to measure the performance of a YouTube campaign. There are very clear metrics related to watch time, CTR and conversion rate (if applicable).
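For what it's worth, the funnel math itself is trivial; the genuinely hard part is attribution. A minimal sketch of the metrics in question, with entirely made-up numbers:

```python
# Illustrative only: the kind of funnel metrics a YouTube campaign
# dashboard reports. All numbers here are invented for the example.
impressions = 100_000   # ad was shown this many times
clicks = 1_200          # viewers who clicked through
conversions = 48        # clickers who actually bought

ctr = clicks / impressions      # click-through rate
cvr = conversions / clicks      # conversion rate

print(f"CTR: {ctr:.2%}")   # 1.20%
print(f"CVR: {cvr:.2%}")   # 4.00%
```

What these numbers can't tell you is whether the AI creative caused the conversions or merely coincided with them, which is the attribution problem the parent comment is pointing at.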
Honestly the point of this is not to help app developers—it's to replace the need for apps altogether.
The vision here is that you can chat with Gemini, and it can generate an app on the fly to solve your problem. For the visualized landscaping app, it could just connect to landscapers via their Google Business Profile.
As an app developer, I'm actually not even against this. The amount of human effort that goes into creating and maintaining thousands of duplicative apps is wasteful.
This sounds like the creators think that even more duplicative apps, ones where no one knows how they work or what the code even looks like, are somehow a better idea?
How many times are users going to spin GPUs to create the same app?
I mean it's hard to tell if this story is even real, but on a serious note, I do think Anthropic should only allow `--dangerously-skip-permissions` to be applied if it's running in a container.
Oof, you are bringing out the big philosophical question there. Many people have wondered whether we are running in a simulation or not. So far inconclusive and not answerable unfortunately.
I asked Claude and it had a few good ideas… Not bulletproof, but if the main point is to keep average users from shooting themselves in the foot, anything is better than nothing.
I'm not sure how much you should do to stop people who enabled `--dangerously-skip-permissions` from shooting themselves in the foot. They're literally telling us to let them shoot their foot. Ultimately we have to trust that if we make good information and tools available to our users, they will exercise good judgment.
I think it would be better to focus on providing good sandboxing tools and a good UX for those tools so that people don't feel the need to enable footgun mode.
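To make the container idea concrete, here is one possible sketch (the flag and the `@anthropic-ai/claude-code` npm package are real; the specific container setup is just one illustrative way to do it, not an official recommendation):

```shell
# Sketch: run Claude Code inside a throwaway container so that
# --dangerously-skip-permissions can only affect the container,
# not the host machine.
docker run --rm -it \
  --network none \
  -v "$(pwd)":/workspace \
  -w /workspace \
  node:20 \
  bash -c "npm install -g @anthropic-ai/claude-code && \
           claude --dangerously-skip-permissions"
```

Note that `--network none` also blocks the API calls the agent needs, so in practice you would allow egress to the Anthropic API while still isolating the filesystem; the point is that the blast radius is the container, which `--rm` discards on exit.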
Color is so hard. From colorspaces, to bit-depth, to gamuts, to HDR vs SDR, to ICC profiles. And your hard work is getting displayed on a $20 Wal-mart Android tablet in bright sunlight.
The amazing thing is that it's one of the oldest fields of study when it comes to human perception, yet it's still an active space with tons of new technologies, techniques and discoveries.
If you dig deep enough into the "best" way to map HDR values to a monitor (HDR or SDR), you'll eventually reach active discussions on the ACES forum, with new techniques and transforms posted constantly.
The nice thing about Apple owning the whole stack is that color management is pretty decent. It's really good for testing your stuff to see how it would look if everyone had calibrated displays with the correct settings all the way through the entire software stack.
I keep a janky 10 year old display hooked up so I can drag my content over to it and see how bad it's going to look on everyone else's systems.
The other trick they have is really good ambient lighting compensation. Google just added something similar to the Pixel series, but it's not quite as good as Apple's implementation. AFAIK Apple have custom driver ICs and panels, which probably gives them way more control.
It's not merely inevitable that LLMs will be providing mental health care; it's already happening.
Terrible idea or not, it's probably helpful to think of LLMs not as "AI mental healthcare" but rather as another form of potentially bad advice. From a therapeutic perspective, Claude is not all that different from the patient having a friend who is sometimes counterproductive, or from the patient reading a self-help book that doesn't align with the therapist's approach.
As a color company, I'm surprised they didn't lean into some of the current trends that involve shocks of bright color. I'm seeing a lot of Y2K-inspired design, holographic/dichroic color, and super expressive packaging. There are some GenZ/GenA pastel trends, but I do think these pastels have a Millennial stigma.