> it has advanced significantly since early releases
I have used Claude Code for the past few weeks and honestly can't tell if it is any better, or does anything differently, than Copilot in agent mode with the same models.
I'd say it is quite the opposite: a deep understanding of what you like, and consequently an understanding of what will turn a creation into exactly what you like. (Well, I guess some people can create without understanding, just by directly expressing their likes.)
Since many of our likes are driven by our shared culture and physiology, many other people will appreciate such a creation (even if they don't understand exactly why they like it). Others will appreciate the depth of nuance and the uniqueness of your creation.
The opposite of taste is an approximated "good" average, which is likeable but never hits all the right notes, while already suffering from sameness fatigue.
For me, even if I drop into "mental space" completely and stop seeing (or being aware of) the real world while thinking about something I saw or did recently, the vividness of the mental image depends on how close I am to a dream state. Even then, I don't think I can ever see the image in much detail; even in my dreams I never see a very detailed image.
It is like seeing with peripheral vision: I know it is there and sometimes catch it in quick glances, but details only appear if I focus on some part of it, and they disappear quickly when focus shifts.
Good erotic literature does not only describe images; it also describes desires, emotions, and sensations, all of which, I think, have different channels of imagination/recall.
I didn't mean it describes images, I meant it elicits them. If you cannot imagine what's happening, you cannot get aroused. Words are just words, they must conjure an image.
Aphantasiacs often cannot imagine sensations either (at least, my friend can't; he cannot imagine the smell of coffee, for example).
For me, visualization by itself is mostly useless; it is more a concept of something arousing happening, plus vague visual flashes of something similar I have seen. It somewhat works, but nowhere near as effectively as real pictures.
What works for me is imagining sensations: they can enhance both real and vague pictures, and I feel them directly in the body, which makes them very effective.
I feel like "intuition" really fits what an LLM does. From the input, an LLM intuitively produces some tokens/text. And a "thinking" LLM essentially just applies intuition again to the previously generated tokens, which produces another text that may (or may not) be a better version.
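That "intuition applied twice" idea can be sketched as a tiny loop. This is just an illustration, not a real API: `generate` is a hypothetical stand-in for one sampling pass of a model, here replaced with canned strings so the sketch runs.

```python
# Sketch: a "thinking" model as intuition applied twice.
# generate() is a hypothetical placeholder for a single LLM sampling pass.

def generate(prompt: str) -> str:
    # Placeholder for one intuitive forward pass of a model.
    # Canned outputs keep the sketch self-contained and runnable.
    if "Revise" in prompt:
        return "refined answer"
    return "rough draft"

def think_then_answer(user_input: str) -> str:
    # First pass: raw intuition on the input.
    draft = generate(user_input)
    # Second pass: intuition again, now over the model's own tokens.
    # The result may (or may not) be better than the first draft.
    return generate(f"Revise this draft: {draft}\nOriginal question: {user_input}")

print(think_then_answer("What is taste?"))  # → refined answer
```

The point of the sketch is that nothing categorically new happens in the second step; it is the same token-producing reflex, just pointed at its own earlier output.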
A bit sad that a language and framework so enjoyable to write and read will be mostly hidden in a coding box.
And thinking about it made me realize that soon there will be a completely different programming language used solely by coding agents.
ChatGPT gives an interesting take on this: "The fundamental shift is that such a language wouldn't be written or read, but reasoned about and generated. It would be more like an interlingua between symbolic goals and executable semantics: verbose, unambiguous, self-modifying, auto-verifiable, evolving alongside the agents that use it."
I am not really interested in the product, so I can't comment on that, but I want to give my 2c about the logo: it gave me an uncomfortable feeling, which is not good for business.
The piggy on fire is fine, I guess, but its tail looks more like a hole with a disturbing shape.