Why A.I. Isn't Going to Make Art (newyorker.com)
49 points by eludwig on Aug 31, 2024 | 139 comments


I love articles that tell me why something they can't know is something they like to guess at anyway. This is pointless. Art will get made. Artists will use AI to make some of it. The debate about whether or not it's art will be part of the art. This article is not part of anything, it's just throwing punches at air in a tantrum.


> The debate about whether or not it's art will be part of the art

> This article is not part of anything

Seems incoherent, I think?


In my mind no - because it was hardly concerned with looking at what artists are producing with AI and why it's interesting or not, and felt much more concerned with just putting a stop to something it doesn't care for. Which seems anti-creative in a way.


I agree that the article is poorly argued. I think the reason why AI can't create art (at least in the near term) comes down to your definition of art. To me, it is a form of communication. This certainly doesn't mean AI can't produce something that people consider art; it means it is incapable of differentiating something humans will consider artistic vs. copy pasta vs. hallucination.

In the end, there is a human curator that decides which creation by the AI is art. There is a human that writes the prompt that creates it. There is human intent as part of the mix, so AI can certainly create art, but it is no different than a camera that removes many of the choices people make and presents new ones - which means it is a tool for creating art at a different level of abstraction.


Yes! It reminds me of electronic music, which went through a similar inquisition. I recall Björk at one point saying that if the person behind the computers puts their soul into it, it will have soul.

Something about this article just felt like a kind of gatekeeping in hopes that they can convince people it's not worth even trying.


I agree; we can't know what AI will do at this point, so punditry like this is just meh.

… and this analysis generalizes all types of AI based on characteristics of just one type - LLMs - that select the "next best word" (my layman's understanding of what they do). With so many other types of AI out there, we will eventually get new approaches to AI where the arguments presented won't make sense.

About throwing punches in the air in a tantrum - that was a funny comment. So funny that I wanted to see if someone made art of it. Found something and sharing a link here that is pretty short and to the point, sweet and cool

(I hate mystery links so some info - it's a performance art piece called "Plastic Bag" on YouTube, 5 mins long, about 60k views, where someone throws punches in the air at a plastic bag)

https://youtu.be/-W6rn2cWs2g


> Art will get made.

There's art. And then there is Art.


"Whereof one cannot speak, thereof one must be silent." — Wittgenstein(Tractatus 7)


"I love articles that tell me why something they can't know is something they like to guess at anyway."

HN is an excellent place to find hundreds of articles and comments predicting the future, i.e., something authors cannot know but like to guess at anyway, especially regarding the future capabilities and predicted use of "AI". There has been a steady stream of this gibberish pertaining to "AI" ever since the announcement of ChatGPT.

In a recent HN poll a majority of voters indicated they thought that "AI" was overhyped.

The parent comment is yet another example of a comment that tells us something the commenter cannot know but wants to guess at anyway. For example,

"Artists will use AI to make some of it. The debate about whether or not its art will be part of the art."


This is an amusingly ignorant article. Its initial argument, that AI tools don't produce art because they offer meaningfully fewer knobs and dials to creators than a camera, is a classic example of mistaking the contingent for the essential. Those knobs and dials do exist. Download a copy of Stable Diffusion and see all the things you can tweak, iteratively, using the same seed, to work towards an image you desire. The same applies to text.
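To make that concrete, here's roughly what that knob-turning looks like with Hugging Face's diffusers library (an untested sketch; the model id, prompt, and parameter values are just illustrative). Fixing the seed lets you vary one setting at a time and compare:

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a Stable Diffusion checkpoint (any compatible model id works).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a foggy harbor at dawn, oil painting"

    # A fixed seed makes runs reproducible, so each loop iteration changes
    # exactly one knob (guidance) while everything else stays constant.
    for guidance in (4.0, 7.5, 12.0):
        generator = torch.Generator("cuda").manual_seed(42)
        image = pipe(
            prompt,
            negative_prompt="blurry, low contrast",  # another knob
            guidance_scale=guidance,   # how strongly to follow the prompt
            num_inference_steps=50,    # number of denoising steps
            generator=generator,
        ).images[0]
        image.save(f"harbor_guidance_{guidance}.png")

And that's before you get to samplers, LoRAs, ControlNets, or inpainting.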

As it happens I have been using Claude quite extensively as a drafting partner over the past few months for writing a novel. I enjoy plotting, planning and editing, but not drafting, so I let it do the zeroth draft for me. It has been quite a productive arrangement.


My friend is a very good painter, but not a famous painter. In the art world, it's an open secret that most famous painters, especially older ones, don't really paint much. They hire "ghost-painters" to do the actual work for them, and they simply set the direction of the art pieces and collaborate with the hired-on-contract ghost-painters. My friend has painted for a bunch of these artists, and when I ask her whether it's unethical, she just shrugs her shoulders because she needs to pay rent - but also, importantly, she thinks that the painting really does belong to the artist setting the direction, and that she's merely doing the grunt work.

Are the thousands of choices of which brush strokes to put where actually the seed of creativity? According to some artists - not really.


https://www.througheternity.com/en/blog/art/how-did-michelan...

"the old tale that Michelangelo painted the entire vault by himself is not quite true; he did have help from assistants, and not just to assist in menial tasks such as mixing the plaster, grinding the pigments, moving the scaffolding and aligning the cartoons.

Some less important aspects of the painting were delegated too – minor angels fluttering around the fringes of the main images for example, as well as oak leaves and other ornamental details.

We even know the names of four assistants that arrived from Michelangelo’s native Florence in 1508: Bastiano da Sangallo, Giuliano Bugiardini, Agnolo di Donnino and Jacopo del Tedesco. They were relatively poorly paid, however, and it seems unlikely that they were entrusted with any significant tasks in the project. "


It's weird that in, let's call it "static visual art", so many people define "artistry" so narrowly, when in, say, film, we recognize - just in regard to the visual production, not even talking about script, acting, etc. - that the director, cinematographer, camera operators, and others are all artists on some level, and no one would argue that automating the functions of one of those roles would negate the artistry of the others; whereas in static visual art, narrowing the human involvement to a directorial role is seen as making it non-artistic.


It doesn’t seem at all weird to me that paintings have completely different standards than films. Using a ghost painter does seem a little like cheating, if the painting is sold as being the creative and manual work of the named painter. It seems slightly dishonest to advertise it and talk about it as the creative manual work of one person, but create it using other people. I mean, I’m sure it really does happen and that some people don’t care, but still it doesn’t seem clearly and cleanly ethical unless they advertise the paintings as cooperative works and credit the painter.

In the art world, painting and sculpting are known as 'fine art' partly because they're supposed to be singular creative works that aren't reproducible and that came from an artist, or in other words they're judged by the purity of the discipline (according to the Wikipedia page on Fine Art). Prints and photography are second-tier because they use machinery to duplicate images, and they go to lengths artificially limiting the number of copies and including certificates of authenticity just to try to sit above commercial art and ads, closer to fine art. Directing a ghost painter isn't that far a step from selling a print as though it's a unique original. Artists caught doing that would quickly lose any standing.

Films inherently take many people to create, and more importantly many people get the credit. The lay public knows that films are created by large teams & production companies. Paintings, on the other hand, are supposed to be a one-person show, the sole painter gets all the credit. Painting is taught as a solo creative endeavor, the public thinks of it as a solo process, and the narrative of the artistry includes technique because it’s a manual solo process.


I wonder whether a (legal? social?) requirement for a full "credits" on any painting involving more than one person would change things.


Yeah good question. I would think so. To me, the problem goes away if there are full credits, or even just very clear expectations. I think it's fine to have people collaborate on a painting, or to have a director and a painter, as long as the audience (and importantly, buyers) understand exactly what it is. What's problematic is to have an artist charging a lot of money on the basis of their name and reputation, and let people think they're doing all the painting, but fail to disclose they don't actually do the brush work or make the detailed decisions.

This is somewhat common in other arts too, maybe more common than with painting. I know Stradivarius didn't make his own violins, but had hordes of apprentices doing the work, almost like a company. Architecture is labeled a fine art, and gets a special exception there, but everyone knows the architect is the director of other people who labor.


That's an even better example. Although I can see Ted Chiang arguing that the really good directors painstakingly redo shots, cut, edit etc. Maybe the essence of art is the grind? Not sure.


> Its initial argument, that AI tools don't produce art because they offer meaningfully fewer knobs and dials to creators than a camera

Chiang is not saying this at all. I'm not sure how you interpreted it this way.


> > Its initial argument, that AI tools don't produce art because they offer meaningfully fewer knobs and dials to creators than a camera

> Chiang is not saying this at all. I'm not sure how you interpreted it this way.

He says this, very explicitly, in his direct comparison with photography. I'm not sure how you managed to miss it.

"When photography was first developed, I suspect it didn’t seem like an artistic medium because it wasn’t apparent that there were a lot of choices to be made; you just set up the camera and start the exposure. But over time people realized that there were a vast number of things you could do with cameras, and the artistry lies in the many choices that a photographer makes. It might not always be easy to articulate what the choices are, but when you compare an amateur’s photos to a professional’s, you can see the difference. So then the question becomes: Is there a similar opportunity to make a vast number of choices using a text-to-image generator? I think the answer is no."


I found Chiang's argument a little incoherent, but I think he _is_ essentially saying this. "Knobs and dials" are the opportunity to make all the little decisions that Chiang's definition of art requires. He says explicitly that someone who has developed "hacks" for an image generator has engaged in the artistic process. I think he uses the term "hacks" to mean that the artist found additional, unexposed controls (aka knobs and dials) over the model.


Reminds me of when Ebert argued video games can't be art.

https://www.rogerebert.com/roger-ebert/video-games-can-never...

The argument rings just as silly today as it did 12 years ago.


The problem with this discussion is not only that the definition of "art" lacks a consensus. The main issue is that instead of using "art" descriptively, a lot of people use it as a compliment. Which is utterly silly, by the way.

Because at that point "art" becomes simply something that is "aesthetically pleasant", and that will change from person to person. "Art" as a compliment is useless.

If we try to use "art" descriptively, then we would need to draw a hypothetical Venn diagram and define things that are art and things that are not, so we could try to categorize videogames, or whatever AI produces. This implies a lot more agreement than there is currently.


'Art completes what nature cannot bring to a finish. The artist gives us knowledge of nature's unrealised ends.' — Aristotle (384-322 BC)

'The true work of art is but a shadow of the divine perfection.' — Michelangelo (1475-1564)

'Art is a mediator of the unspeakable.' — Goethe (1749-1832)

'A work of art is a corner of nature seen through a temperament.' — Emile Zola (1840-1929)

'Art has absolutely no existence as veracity, as truth.' — Marcel Duchamp (1887-1968)

'Art is science made clear.' — Jean Cocteau (1889-1963)

'Art is anything you can get away with.' — Andy Warhol (1928-1987)


Over the summer I have had about half a dozen conversations with people who work in non-technical fields (university administration, healthcare, government administration, teachers, etc) who are furtively using ChatGPT to augment their communication tasks. There is a hushed-tones quality to them admitting it.

I suspect the rate of individualized adoption of AI augmented writing is well beyond what a casual observer here on HN would think it is.

I also share Chiang's worry about this:

> We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?

I do not think OpenAI et al. set out to create a self-perpetuating slop machine like this, but it sure feels like that is where it is going. For individuals it improves their lives, I guess, but zoomed out there is something quite dystopian about it.


Artists have been spending decades redefining art to broaden it as much as possible, to the point where it seems to be defined now as anything that makes you feel something. It sure seems like AI art is making some people feel something alright.


If Jackson Pollock made art I don’t see why whoever programmed the AI didn’t.


and "93% of Paint Splatters are Valid Perl Programs" - https://www.mcmillen.dev/sigbovik/ (discussion: https://news.ycombinator.com/item?id=27929730 )


Do you feel anything when looking at the DALL-E 3 images on this page? https://openai.com/index/dall-e-3/

I can’t tell you why, but I don’t really react to any of them or really any AI art I’ve seen.


Perhaps you just feel that way because it's made by a machine. Would need some double-blind experiment really, preferably with people who don't know the 'style' of these models.


I think we've already had those experiments, really, when that generated piece won an art competition ('Théâtre D'opéra Spatial').


It won a Colorado State Fair contest.


The camel's nose under the tent.


I'd be willing to bet that a lot of it is influenced by context and expectations.

Take one of those images, put it in a nice frame, hang it in a quiet art museum, stick a little placard on the wall next to it with a made up backstory, and your emotional response will probably differ.

Conversely, go to your local art museum and randomly pick 10 paintings. Take a hi-res picture of each and place them on a web page entitled "AI is getting better at generating art" and you'll probably pick out a bunch of tell-tale flaws that are evidence of machine-generated pictures. :)


I would hazard a guess that there will eventually be art you consume where it won't be immediately obvious to you that no human artist made it.

In the context of looking at images on the DALL-E page, you're looking at a gallery of AI images, not art in its natural habitat.

Art deserves to be contextualized properly, not evaluated in large batches.


My current background on my PC is AI generated. I only know it's AI because, when I was looking for a higher-res version, I found the origin. It's my current favourite image (which always changes).

My point is that your argument is nonsensical. Just because you don't feel anything from AI art doesn't mean others don't.


I was making an observation not an argument.

With hundreds of millions of such images, someone is going to feel something when looking at some of them. But so far I don't, which seems kind of odd.

Which is why I was asking if you feel anything looking at those specific images on their home page?


No I don't feel anything. But it's very rare for me to feel anything from a piece of art. Even something as renowned as the Mona Lisa evokes nothing for me. The only classical piece of art I know off the top of my head that makes me feel something is Edvard Munch's "Anxiety". Most of it seems like drivel to me, whereas well-constructed AI art is better.


Most people respond to movies and songs etc.

If you specifically mean still images, it's worth remembering the context. A great deal of "art" in museums isn't really aiming for emotional responses. Portraits are just selfies before camera phones etc. It's also degraded: the colors have changed over time as the paint aged. At the extreme, ancient sculptures weren't just stone, they got painted. Further, even modern art was meant to be viewed in person; a life-sized animal sculpture hits very different than that same art on a screen.

Anyway personally walking around museums I responded to quite a bit of the art excluding portraits etc.


I like them, if anything a little more than the contents of the pre-AI art galleries I've walked around.

But some of my friends say as you do.

I wonder if this could be the uncanny valley? I understand that sense of off-ness will be in different places for everyone, as we know (and therefore spot) different things about what we're looking at.


I also don't feel anything looking at most "modern art".

It is still considered art.


Modern art is considered art because it's a form of human expression, regardless of whether you "get it" or not - it matters that a human being made it, as opposed to a machine.


If a machine makes something that elicits genuine emotional response it is not art?

Art must necessarily be made by a human to be considered art?

The definition gets less useful by the second.


The definition I'm using is a commonly held one. Art is subjective, and obviously people disagree about what is and isn't art, but most people agree that some degree of human intent and expression is required, at a baseline, for something to be considered art.

But your definition seems to be that anything which elicits an emotional response is art, which seems far less useful.

I suppose it could be argued that because AI requires models and prompts, the end result could be considered art. Also that it's art simply due to the controversy it provokes. Then again, I have a difficult time considering something art if it can be exactly duplicated with the correct inputs. To that end, the human being doesn't really matter. To me, if the human being doesn't matter, it isn't art. I'd also dismiss most "generative" art for the same reason - even if fractals are pretty I wouldn't consider them art.


> But your definition seems to be that anything which elicits an emotional response is art, which seems far less useful.

Your definition is not very useful either: "some degree of human intent and expression is required, at a baseline, for something to be considered art."

A lot of mundane things done by humans - from smoking a cigarette to filling in a form - qualify as something that involves a "degree of human intent and expression".

> I have a difficult time considering something art if it can be exactly duplicated with the correct inputs

If I make a perfect copy of "Starry Night Over the Rhône", is it not art? Impasto on canvas and all?

If I sit at the organ and play Toccata and Fugue in D Minor, is it not art?

I would argue that in both examples I am duplicating art with the correct inputs.


Oh of course not, they're not accompanied by 5 paragraph essays about what they mean.


Do you think that will always be true?


There are people already creating content with AI that resonates with me as much as content without. AI is just another tool in the box.

One example I recommend is the Unanswered oddities series. Really funny. All visuals and audio AI generated.

https://www.reddit.com/r/singularity/comments/1e6c1d4/unansw...


I do.


The problem with this article is thinking that art is some special category vs. a demonstrative one. The reason AI can’t make good art (generally) is because it is limited by the ability of its users to have knowledge of how to make those choices, relative to their own abilities, taste, knowledge etc. This is just as true of code as it is of art. An LLM isn’t going to implement your request using efficient or novel data structures unless you know those things exist and can instruct it to use or assist you in developing them. While models as they currently exist (and are fine-tuned) may be biased toward code slop and art slop, this is because slop is the level at which most people are operating.


For an essay that hinges on the notion that “art requires making choices,” I wish the author had chosen to delve a bit more into the question of what choosing is. Do humans really make choices? If so, how free are those choices? Is there something about human choice that will never be convincingly imitated by computer? If so, what is it, and how do you know it cannot be imitated?


> For an essay that hinges on the notion that “art requires making choices,” I wish the author had chosen to delve a bit more into the question of what choosing is.

For an essay that hinges on the notion that "art requires making choices", and attempts to apply it to AI image generation, and even specifically tries to draw a contrast between that and photography, I wish the author had demonstrated even a superficial knowledge of what choices go into AI art and how it compares with photography, much less delved into the kind of deep philosophical examination of the underlying premise that you propose. But it looks like exploring the subject beyond the shallowest text-box-only UIs presented by a few big firms is too much to ask before the author does exactly what he says was naively done early on about photography.


Exactly my thoughts. I think people who make arguments like Chiang's are unwilling to examine our own decision-making process, and in particular are unwilling to entertain the idea that it is as mechanistic as an artificial neural network.


> 'Are You Living in a Computer Simulation?' (2001)

https://simulation-argument.com/simulation.pdf


I don't need AI to make art; I need it to extrapolate. Suppose I want to make a movie or a game. I can come up with some distinct art for characters and places and things. I want the AI to extrapolate from that information and build the rest of the world. I don't have time to make the artwork for the entire world/movie/etc. I want to feed it some minimal amount of artwork plus a bunch of scene scripts and end up with a unique-looking movie. I could then go back and enhance areas of the movie by uploading additional artwork (or descriptive text) to scenes that I feel were lacking something.


If you give it the year's headlines and today's newspaper and ask for a picture that's a social commentary on some current affair, how is that not art?

And you didn't prompt it any more than commissioning a piece, or making a thematic suggestion to a painter friend.

You may not like its art, and it may not come up with some whole new original style, but that doesn't mean it isn't making art in known styles.

TFA is just a bit of a silly fearful protest, IMO.


Art is an act of human expression.


Well if you make that your definition then of course it isn't; it's pointless talking or writing articles about it, so I still think it's a silly fearful protest.

I just checked Wiktionary, which indeed says it's the 'conscious arrangement [of whatever medium]', so it just becomes the same question as whether AI can be conscious, which I think is just pointless semantics; it hinges entirely on how you define it, and there's no deeper meaning or interest for that debate to reveal.


What about the art of elephants? https://elephantartgallery.com/


Humans express themselves through tools.


Sure. And no one is arguing "does a paintbrush make art" or "does a camera make art." I'm not saying people can't use AI as a tool to make art; I'm saying AI can't make art.


I'm not particularly interested in gatekeeping art, but that makes perfect sense to me, so I think we fundamentally agree.


Humans are not metaphysically special entities, any more than the Earth is a special planet.


We're the type specimen for a category that has no other known members. Possibly at some point we'll need to extend the category of "person" beyond "human" but for now using them interchangeably seems fine.


> it has to fill in for all of the choices that you are not making. There are various ways it can do this. One is to take an average of the choices that other writers have made, as represented by text found on the Internet; that average is equivalent to the least interesting choices possible, which is why A.I.-generated text is often really bland.

The model can't output the average because the average is usually completely meaningless; that's why it's a generative model and not a regressive one. As always, these articles are written by people who don't really understand the technology and create their own interpretation of how it works, whether or not it ends up being right.
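To make the distinction concrete, a toy sketch in plain Python/numpy (made-up scores for a hypothetical four-word vocabulary): averaging the options would land between tokens, while a generative model samples an actual token from the distribution.

    import numpy as np

    rng = np.random.default_rng()

    # Hypothetical next-token scores for a tiny vocabulary.
    vocab = ["dog", "cat", "pigeon", "axolotl"]
    logits = np.array([2.0, 1.8, 0.3, -1.0])

    # Softmax turns the scores into a probability distribution.
    probs = np.exp(logits) / np.exp(logits).sum()

    # A generative model samples a concrete token from the distribution,
    # so different runs give different - but always valid - words.
    token = rng.choice(vocab, p=probs)

    # "Averaging the choices" would be something like the probability-
    # weighted mean over token indices: a point between words that
    # corresponds to no word at all.
    mean_index = probs @ np.arange(len(vocab))

    print(token)       # e.g. "dog" or "cat"
    print(mean_index)  # ~0.65 - not a valid vocabulary entry

Whether sampling from a distribution fitted to internet text still tends toward bland prose is a separate question, but "outputs the average" is the wrong mental model.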


One thing we can guarantee is that human hubris isn't going to go away.


The author seems to argue that, before photography existed, if someone commissioned a painter to create a portrait (without further qualification), then the painter didn't create an art piece.

I think it's pretty clear that generative AI makes a lot of small decisions. They might not be groundbreaking, or novel (as they aren't in a custom portrait), or somehow lack overall vision, but they are there.


The definition of art seems to be the polarizing issue here. "But is the world better off with more documents that have had minimal effort expended on them?" Value is attributed to high-effort tasks that are a scarce resource [1]. When AI creates something in a short amount of time, derivative of the past, and available to everyone, it will be considered cheap. By this definition, there's still room for mass-market creative works to be created by AI. Netflix already does that, even with humans in control. But high economic value or notoriety will only come from taking the first draft and doing something physically or intellectually innovative with it for the first time - which I relate back to what Chiang writes.

[1] https://www.sciencedirect.com/science/article/abs/pii/S09218...


It's refreshing to see a science fiction writer underplay the capabilities of AI, but if anyone can speak to the nuances and implications of generative AI on art and writing it's probably someone like Ted Chiang.

We can debate his generalized definition of art as making creative choices that carry subjective, intentional, and performative value for human beings (and therefore LLMs fall short of this), but I think he makes a couple strong points nonetheless:

1. The argument others like François Chollet have also made, that we have yet to see any AI systems capable of exhibiting intelligence beyond stylistic mimicry or forming generalized knowledge about concepts from large data sets.

2. The subjective experience of human interaction is valuable and desirable, and will remain so in the face of increasingly capable models, not because they won't be able to compete in producing inspiring art or enjoyable fiction, but because of the inherent primacy of human intentionality and experience.


> we have yet to see any AI systems capable of exhibiting intelligence beyond stylistic mimicry or forming generalized knowledge about concepts from large data sets.

I said this long before useful LLMs existed, but I don't think we've observed this in humans, either. Human creativity can be put into two very similar categories:

1) Metaphor; the arbitrary application of the dynamics of one thing to another. "What if information is like water?" "What if the economy is like the human body?" "That woman is like a bird."

2) Bad copies. When you see someone's output and try to imitate it, but have to speculate about the creative and mechanical process that resulted in that output. You sometimes guess right and sometimes wrong, but the output is similar. Then you vary the parameters in order to create a new example, but since your process was different, with different parameters and different interactions, you create something different than the person you copied would have created.

1+2) Both often randomly create emergent effects that are then copied by others, sometimes badly.

This is how Japanese metal can be the result of Black Americans copying songs from musicals and English/Irish drinking songs, British people copying the blues from Black Americans, Americans copying British Invasion music and NWOBHM, and then Japanese people copying American metal.


Human beings are currently capable of productively searching through the space of possible knowledge and experience in ways no current AI systems are. This is not to say AI will never do this, but I think it's fair to say there are things human beings are capable of doing today that AI is not, and it remains very much unclear whether AI will ever be able to achieve important milestones like being conscious in the sense of having a subjective experience, and therefore forming the special knowledge that can only come from that, like qualia.


> Human beings are currently capable of productively searching through the space of possible knowledge and experience in ways no current AI systems are.

I would say the opposite: AI are very good at searching through information spaces, much better than we are.

They're terrible at learning from experience, the points made in the article about that are I think valid, but they're wildly super-human at searching.

> like Qualia

For me, the biggest issue here is: we don't know what that is, it's just what we do.

Without knowing what qualia actually is, we can't tell if an AI does or doesn't have it, we can't deliberately make a machine which does or doesn't have it.

I really hope we figure that question out before someone tries full-brain uploading/emulation.


> I would say the opposite: AI are very good at searching through information spaces, much better than we are.

We are likely talking past each other here. By "searching" I don't mean how inference is currently carried out by efficiently analyzing the context window using weights trained on large data sets fine tuned on specific goals.

I mean the process by which novel information is discovered, which is why many proponents of AI will concede that it's not currently capable of "doing science" or making novel discoveries.

> we don't know what that is, it's just what we do.

Not sure I understand, we have a pretty good understanding of what qualia actually is, even if it can be difficult or awkward to talk about conceptually. The gap between having a subjective experience and not having one is a large one, just ask anyone who's alive but under general anaesthesia that induces loss of consciousness. Qualia is simply what arises from the quality and character of having a subjective experience.


> We are likely talking past each other here. By "searching" I don't mean how inference is currently carried out by efficiently analyzing the context window using weights trained on large data sets fine tuned on specific goals.

> I mean the process by which novel information is discovered, which is why many proponents of AI will concede that it's not currently capable of "doing science" or making novel discoveries.

Huh. I think those are the same thing?

But then, I say that AI can do science. Not that I would recommend specifically an LLM for this, but what was AlphaFold doing if not science? Or even GOFAI having been used for the four colour theorem back in the day.

> Not sure I understand, we have a pretty good understanding of what qualia actually is, even if it can be difficult or awkward to talk about conceptually.

Hm. How to rephrase…

Can you create a testable definition of it?

> The gap between having a subjective experience and not having one is a large one, just ask anyone who's alive but under general anaesthesia that induces loss of consciousness. Qualia is simply what arises from the quality and character of having a subjective experience.

Purely from asking them questions, how will you differentiate between each of these cases?

1) A person under general anaesthesia that induces loss of consciousness

2) A person under the influence of a paralytic agent without anaesthesia, who is fully aware of their surroundings but unable to respond

3) A brain-dead person

4) A person with locked-in syndrome

5) A person in REM sleep who is currently dreaming but unaware of the surrounding real world

6) A person in deep (non-dream) sleep who also has no awareness of the surrounding real world

7) An unborn foetus (any species)

There are people with locked-in syndrome who later recover, who report that those around them treated them as if they were non-conscious.


AlphaFold is really impressive and made scientific advancements and discoveries in the field of protein folding, and is now even expanding into more molecules and biology, but it was explicitly trained to do just that. You're not going to see AlphaFold write compelling science fiction.

We can build models that are specifically trained and fine tuned on scientific fields to make advancements in them, but that's different from what I'm talking about, which is building a model that forms its own hypothesis, designs its own experiments, and contributes to the wide and deep wealth of knowledge that, crucially, goes well beyond the scope of its training data.


Ah, sorry, I was editing while you responded.

> You're not going to see AlphaFold write compelling science fiction.

Yes? But not many biology PhDs do that either.

> I'm talking about, which is building a model that forms its own hypothesis, designs its own experiments, and contributes to the wide and deep wealth of knowledge that, crucially, goes well beyond the scope of its training data.

A few weeks ago we saw LLMs do the first half of that. I think AlphaFold demonstrates the last.

Don't get me wrong, I trust the person who told me these are not good papers, but it does do those things: https://sakana.ai/ai-scientist/


I'm happy to leave the conversation here. I don't necessarily disagree with what you're saying, but we appear to be making different points, or at least at different levels of description, and it's not really productive anymore.


Certainly, my confusion would be compatible with that.

Have a good weekend :)


We should ask an artist if it's art. The testimony of an expert firsthand witness is vastly superior to interpretation of a secondhand abstraction by inexpert persons.


I'd argue that an LLM has a fuller understanding of what we mean when we say "art" than most individual users of the term (Chiang's definition sounds more like the one for "craft" to me). Indeed, asking ChatGPT about the difference between art and craft gives me a more nuanced contemplation of the subject than I found in the article. Maybe these models are closer to the experts than we are willing to give them credit for? At the very least, they are the best mirror we've ever invented.


> the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception

Sure. But suppose we had an AGI that was just as smart as a human: clearly it would be able to make all these decisions just fine and make art. If current AIs are somewhere between that and dirt, then they ought to be able to make decisions of less complexity, but still of some importance, to the final product. As AIs improve, we would expect the decisions they make to become more complex.

Recall that traditional artists have always had a number of assistants to help them. They produced the sketch and outline, and had other artists - skilled, but not as much as them - fill in the details. A modern artist, who already is less skilled than these artists, and furthermore has less need for their creation to stand the test of time, can benefit from an even less skilled assistant helping them.


"AI" (a stupid term to begin with) is just a tool like any other - you can use it to make art.

Of course it’s not going to be creative on its own - it obviously is not intelligent.

But for me ComfyUI is an incredibly cool tool for being creative.

Such a boring topic after all - all the noise it attracts won’t amount to much once people understand this technology


I think it's an interesting debate: what is this thing we've made? And what does its existence teach us about ourselves?

You're right that just going "yes it is - no it isn't" isn't so interesting, but this mainly stems from the fact that "intelligence" is a poorly defined, pre-scientific term. And really, most times you're talking about whether some X is a Y, you're not so much talking about X as about your definition of Y.

I think the thing is, with LLMs/generative AI we see some aspects of ourselves, but not enough that we can accept that it is fully like us, hence the resistance. To me, the answer is clear: what is usually called intelligence is actually several different things, of which whatever it is an LLM does is one.


People do understand it, no one is creating art by thumping "tracer overwatch big boobs trending on artstation" on a keyboard and then heading to lunch.

This idea that people who disagree couldn't possibly understand is misguided at best.


does the tooling augment a human, or is the tooling sold to replace a human?


> does the tooling augment a human, or is the tooling sold to replace a human?

It's bizarre on HN of all places, given its connection to the tech industry and the history of tools sold by that industry, that someone would imagine these are mutually exclusive options.

As is often the case with tech industry tools, the tool in fact augments a human, and is (often) sold to replace a human. That's barely even a superficial contradiction -- enhancing the productivity of each human doing a task, from the perspective of consumer of the kind of human labor involved that is making short-term decisions around a fixed amount of needed output -- replaces some share of those humans rather directly. (On a broader analysis, this is often not true, because it also expands the market for the kind of work it augments by reducing the price per unit output to the point where more marginal uses become viable, but the individual existing purchaser of output often isn't concerned with that.)


The one-liner above is an adaptation of a significant question asked by the inventor of the computer mouse, Doug Engelbart. Doug spent decades in tech and is the man behind "the Mother of All Demos," which changed technology history. Doug's audience was sometimes Defense, through the Stanford area. Teams and communication were his everyday world.


For me it’s solely the former even if the marketing/press around it hypes up the latter


Also from this author:

https://www.newyorker.com/tech/annals-of-technology/chatgpt-...

ChatGPT is a blurry JPEG of the web (newyorker.com)

574 points by ssaddi on Feb 9, 2023

306 comments

https://news.ycombinator.com/item?id=34724477

About the author:

Lunch with the FT: Sci-fi writer Ted Chiang (ft.com)

https://cc.bingj.com/cache.aspx?d=4753927952212231&w=P5zSV2b...

Recommended by HN moderator:

https://news.ycombinator.com/item?id=34728965



AI can't make art in the way we define art, because we usually define art by who made it, rather than any characteristic of the object in and of itself. But if anything, a criterion that makes an object into art is its uselessness relative to its value:

Painting a fence? Not art, because it keeps the wood from rotting.

Painting a fence hot pink? Art because there's no good reason to paint a fence that color.

If we discover that birds hate hot pink fences, and that makes them last longer? Not art again.

A rich guy pays a million dollars for a hot pink fencepost? Art. Who's the guy who sold the hot pink fencepost? Does he have any other colors?


Those analysts and commentators who use the shortcomings of a nascent technology to rule out possibilities far down the line are extremely foolish.

Whatever anyone thinks about the limitations of LLMs, or whether AI in its current form is sales hype - can anyone sensibly claim that AI 1000 years from now won't be capable of an artistic sensibility? Until there's some proof that there is a secret ingredient in human consciousness that can never be developed by AI - not even a self-aware AI - anyone attempting to lay an imaginary ceiling over the tech is deceiving themselves.


The problem with this article is that it presupposes that the technology will remain the same, and in its current state I'd agree it isn't art

- but even assuming the rate of advancement slows down, eventually it will be making Art…


The market for Art is very small. For the vast amount of visual content some generic AI placeholder is entirely sufficient. Won’t end up in an art gallery but it also doesn’t need to.


LoRAs on things most people don't make LoRAs for are already producing amazing art in my experiments (far away from the cookie-cutter AI output style).


LLMs are just art as code. From the article:

> he crafted detailed text prompts and then instructed DALL-E to revise and manipulate the generated images again and again. He generated more than a hundred thousand images to arrive at the twenty images in the exhibit.


The hand is wiser than the mind.

Doing art via the mind is like breathing through a soda straw.

Mind promises and promises but just keeps missing the mark.

I can make a perfect line with my hand in a moment. I can spend a year creating a "perfect line drawing device" and never get there.

The promise of mind is so tempting tho. Succumb to it and you end up living in a flavorless cartoon.


I'm disappointed in Ted here. For a writer that likes to delve into the possibilities of tech, he's sharing a surprisingly underbaked view. People outside of tech seem to think that AI creation is just "fat finger a prompt -> take output and claim to be an artist on the interwebs" but the reality is that all the people I know who actually call themselves AI artists do photobashing, image2image, controlnets, inpainting, custom models, etc. Likewise, the people I know using AI to write fiction are meticulously developing characters, timelines, scenes, story arcs, style samples, etc and using AI to handle creating rough drafts that they then hand tune.


I believe he demonstrated awareness of the difference between lazy use and effortful use; he appeared to me to acknowledge the latter as art:

"""The film director Bennett Miller has used dall-e 2 to generate some very striking images that have been exhibited at the Gagosian gallery; to create them, he crafted detailed text prompts and then instructed dall-e to revise and manipulate the generated images again and again. He generated more than a hundred thousand images to arrive at the twenty images in the exhibit."""


Something I've had in the back of my mind is that gen AI has enabled a new generation of outsider artists. That's all. It has lowered the barriers to entry for creativity so much that a whole host of people who have had no formal training or dialogue with Real Artists are able to jump in and just make things they want to see. No surprise their creations are ugly or bad or soulless or weird by conventional standards; that's the norm for outsider art


> People outside of tech seem to think that AI creation is just "fat finger a prompt -> take output and claim to be an artist on the interwebs"

Probably because that's the predominant experience of people encountering AI art on the Internet. I have no doubt whatsoever that there are people out there using AI to do interesting things, but like with basically every technology, the vast, vast majority of the output you're going to see is people who see a labor saving device that can make doing... something, at scale, brain-dead easy. Be that generating shitty coloring books and selling them to overworked parents, generating shitty books on niche, dumb topics like the crystal healing woo shit and selling them to uncritical audiences, or just generating page upon page of boring, shitty artwork and uploading it to DeviantArt and paywalling it.

And that's just individuals. Many online businesses are actively enshittifying themselves too, adding AI generated content alongside (or in place of) human created content. On the note of DeviantArt, they built an AI generator into the damn site so people can fill it with even more low-effort garbage than was already getting uploaded. And of course Google now headlines your search results with a shitty LLM summary that runs the gamut between "dull, uninteresting summary of somewhat relevant information" to "complete nonsense that actively endangers lives" while also depriving even more websites of even more traffic that gave them whatever information in the first place.

Like, again, I have no problem envisioning some people and some orgs some place are doing interesting stuff with this tech. However I cannot overemphasize how utterly, completely, totally dog-shit my experience personally has been with it and how harshly I now tend to judge any project parading around AI integration. I'm open to being wrong... but I'm usually not.

There was that Vaudeville game that made the rounds that I felt was at least trying to do something interesting with LLMs, but like... the tech just wasn't there yet. You're talking to characters and can say basically whatever you want, and then an LLM generates an answer based on the context of that character and it's read back to you by text-to-speech. It's... neat? For like ten minutes, and then you're just playing a detective game with impressively bad writing and zero-effort VO, and the fact that the entire game was built of pre-built, unchanged assets made it feel incredibly cheap and low-effort. The only thing it's really good for is as streamer fodder, weird garbage for people to overreact to and fuck with for an audience.


I think this is mostly just Sturgeon's law (ninety percent of everything is crap) as opposed to anything unique to ML tools. The vast majority of photographs you see on the Internet have also likely been quick phone-camera snaps, with comparatively very little "high art" photography. Lower entry barriers result in more works total, but with disproportionately many at the lower end.

There is still a lot of interesting work making use of ML tools. Maybe I'm biased towards art that embraces experimentation and new technology, but I found even images like https://i.imgur.com/Jybvj0r.png (zoom out) far more interesting than most of what I see in galleries.


> I think this is mostly just Sturgeon's law (ninety percent of everything is crap) as opposed to anything unique to ML tools.

I think the unique point to ML tools is the sheer volume that, IMHO, vastly outpaces 90%. And I think that's partially down to the fact that image generators are themselves, by their existence, low-effort easy-to-use tools to create images. It's not as though there wasn't already a vast and comprehensive existing tool-set for aspiring artists, even ones with not a penny to their name, to use. Tons of open source art programs exist, and if you are ready to jump to paying for your tools, you have an incredibly diverse set of options over all manner of capabilities, price points, and focuses. The notion that these tools "democratize" art has been silly to me from the beginning; there were already tons of tools available to anyone who wanted to learn the skills to use them. These tools, instead, seem directly aimed at people who don't want to learn those skills, and like... unskilled artisans don't make good things. Sorry not sorry. If you lack the interest in the subject to learn the skills of how to make a thing, you probably also lack the interest to learn what constitutes a good version of that thing, and even if your AI is very well made, you won't know what to really ask it for. Which I think is the reason behind so many prompts including stuff like "octane render, unreal engine, featured on artstation."

So while yes I agree in principle that Sturgeon's law is definitely at work here, I think it's important to note that the tools themselves are largely just... not really going to fit into a creative's workflow who has the skills already, not even just the literal skill to put a pencil to paper, but the skill to know what a good version of whatever they want would look like. It's the same reason I don't really use copilot all that much, because it's easier to just write the stuff I know needs writing than asking it to generate it, and then modify it to suit my code-style and existing environment. I don't find that a compelling time saver, it's more like a time cul-de-sac. Yes I'm spending the time writing the prompt instead of writing the code, but I'm frankly pleased as punch to just write the code.

I guess to TLDR my own comment here: if you knew how to make the things, you'd just make the things. Image generators are explicitly for people who don't know how, and that reflects in the quality of what's made.


> And I think that's partially down to the fact that image generators are themselves, by their existence, low-effort easy-to-use tools to create images

Is the same not true of point-and-shoot photography? Or crayons? There's a near-endless supply of low-effort content due to tools designed to be easy-to-use. Anecdotally I still see more "crappy photographs" (many of which my own) than "crappy AI art".

With both you can get deeper into the details, making choices about ControlNets, LoRAs, inpainting, etc.

> [...] people who don't want to learn those skills, and like... unskilled artisans don't make good things. Sorry not sorry. If you lack the interest in the subject to learn the skills of how to make a thing [...]

I probably wouldn't make claims about ML tools "democratizing art", but at the same time I feel this is too reductive in the opposite direction.

There are reasons why working-class people are vastly under-represented in arts. I think limited ability to dedicate a huge chunk of time to a creative pursuit is a largely overlooked reason, not just lack of interest.

I think it's also fine to want to, say, design a game without hand-painting all the normal maps - instead generating them with ML tools based on your textures. Someone not specializing to have fine-level technical skills in all relevant areas doesn't imply lack of creativity/interest at a broader scale.

> I think it's important to note that the tools themselves are largely just... not really going to fit into a creative's workflow who has the skills already

I'd claim lot of ML tools, like generative fill built into various image editors, already do even for those who aren't going out of their way to experiment with ML.

Sometimes it's useful to work at the higher level allowed by automation, and sometimes it's useful to have fine-grained creative control. These aren't mutually exclusive - the approaches can/should be mixed where appropriate. I've had good success with sketching out an initial block-color image, then iteratively diffusing and tweaking it.
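That loop looks roughly like this with diffusers' img2img pipeline (a hedged sketch; the model id, file names, and strength values are placeholders you'd tune by eye). The strength parameter controls how far the model may drift from your sketch:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Start from a rough block-color sketch painted by hand.
    sketch = Image.open("block_color_sketch.png").convert("RGB").resize((512, 512))

    result = pipe(
        prompt="sunlit mountain village, gouache",
        image=sketch,
        strength=0.55,       # lower = stay closer to the sketch
        guidance_scale=7.0,
    ).images[0]
    result.save("pass1.png")

    # Iterate: touch up pass1.png by hand, reload it, and run again with
    # a lower strength to refine details without losing the composition.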


> Is the same not true of point-and-shoot photography? Or crayons?

Those have a skill floor though, even if it is quite, quite low. If you can't manage to get the object you're trying to take a photo of in-frame, or manage to draw the thing you're trying to draw, there's no amount the tool can do to compensate for that.

> There's a near-endless supply of low-effort content due to tools designed to be easy-to-use. Anecdotally I still see more "crappy photographs" (many of which my own) than "crappy AI art".

I mean it depends how you define crappy photographs. My phone camera is a tool, and I use that tool to document things for all manner of purposes. I wouldn't call those photos artistic in any way at all. It feels like you're deliberately saying "all photos are art, and most of them are bad" when I think the vast, vast, vast majority of those, including by the people who took them, would not be referred to as art.

> There are reasons why working-class people are vastly under-represented in arts. I think limited ability to dedicate a huge chunk of time to a creative pursuit is a largely overlooked reason, not just lack of interest.

Agreed wholeheartedly. But a working-class person who has things they want to express artistically is going to hit various walls with generative models very quickly, in much the same way I did. Like, if you feel a creative verve at all, I just can't fathom you looking at the wide assortment of all manner of tooling and choosing the one where you're playing telephone with a toddler that paints over-smoothed, nonsensical photo-realistic pictures.

And again we go back to the notion that "the process is the point" and as a creative, I completely agree. There are certainly times I feel frustration at my tools and wish they would just make what the hell I'm trying to make, but if that was the entire process, I would get nothing from it. Figuring out what prompt will get you what kind of output is interesting, but it's not fulfilling.

> I think it's also fine to want to, say, design a game without hand-painting all the normal maps - instead generating them with ML tools based on your textures.

To be totally real I've never heard of someone drawing normal maps. I thought the traditional way you went about making those was having a high-detail model inside a low-detail one, and generating them that way.

> Someone not specializing to have fine-level technical skills in all relevant areas doesn't imply lack of creativity/interest at a broader scale.

It's not a matter of high or low skills, it's a matter of wanting skills versus wanting easily made repetitive crap. If you're the kind of person who finds it fulfilling to slam text into one of these things and get your teddy-bear-smoking-weed pictures, and that's what fulfills you, more power to you. I wouldn't personally call that art, nor would I find it nourishing to my creative spirit; I would say that's just instant gratification, and there's absolutely nothing wrong with that. Now if you take that stuff and then go try to sell it... I mean that's your prerogative. I'm definitely not buying, and I would encourage anyone else to just type a similar prompt into a generator and get it that way.


> Those have a skill floor though

Point-and-shoot cameras, finger painting, or crayons have a lower skill floor than even basic text-to-image generation, I'd claim. You can give those to children prior to the age where they'd have a proper grasp on describing visuals through language/writing.

Yet, I don't feel as though the glut of low-skill content subtracts from any of those mediums - regardless of whether you disqualify a child's macaroni art from being art. Probably even the opposite; I've enjoyed areas that have lowered the technical skill barrier to allow people to create who otherwise wouldn't have been able to (like the creative explosion around Flash games, with ActionScript and the tooling being relatively beginner-friendly) in addition to it leading to more (even if proportionally less) high-skill content.

> But, a working class person who has things they want to express artistically is going hit various walls with generative models very quickly, in much the same way I did. Like, if you feel a creative verve at all, I just can't fathom you looking at the wide assortment of all manner of tooling, and choosing the one where you're playing telephone with a toddler that paints over-smoothed, nonsensical photo-realistic pictures.

I think the walls of what's possible using generative techniques in your workflow are almost by definition* further out than with only traditional techniques, and that the idea generative tools must be like "playing telephone with a toddler" comes largely from not having tried out most of the generative tools available or typical workflows.

I'd recommend checking out ComfyUI, starting with some existing examples (https://comfyworkflows.com/ seems to show workflows, when you click on the image) then playing around to see what's possible. Or for something a bit less technical, NVIDIA Canvas is fun, and useful for skyboxes: https://www.nvidia.com/en-gb/studio/canvas/

*: For a while 3D ML tools in particular did somewhat lock you out of other tools due to working on NeRF representations, but increasingly there's the option for regular meshes with sensible topology.

> And again we go back to the notion that "the process is the point" and as a creative, I completely agree. There are certainly times I feel frustration at my tools and wish they would just make what the hell I'm trying to make, but if that was the entire process, I would get nothing from it. Figuring out what prompt will get you what kind of output is interesting, but it's not fulfilling.

Do you not think you could be fulfilled with tools that let you focus on the bigger picture? I've worked with traditional procedural generation for cityscapes before and I don't feel it necessarily took away - just widened the scale I could create at, while still allowing me to zoom in and tweak individual buildings where I needed to.
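To make that concrete, here's a toy sketch of the pattern in Python (not my actual pipeline; the seed, grid size, and override are invented for the example). Because generation is seeded and deterministic, the whole city regenerates identically every run, so hand-tweaks layered on top of it stay meaningful:

    import random

    # Deterministic seed: the generated base is reproducible, so the
    # large-scale structure comes for free on every run.
    random.seed(42)
    city = [[{"height": random.randint(2, 30)} for _ in range(16)]
            for _ in range(16)]

    # Zooming in: a manual override of one building on top of the
    # generated base -- the landmark tower I actually care about.
    city[3][7]["height"] = 80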

> To be totally real I've never heard of someone drawing normal maps. I thought the traditional way you went about making those was having a high-detail model inside a low-detail one, and generating them that way.

If you have a sculpted 3D mesh, then yeah, you'd bake its normals from geometry - but you don't have that if you've just, for instance, painted a planks texture in Photoshop. You could hand-paint a normal map, hand-paint a height map and generate the normal map from that, or - as is increasingly common - generate normal/specular/roughness maps based on the texture.
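(The height-to-normal step is just a gradient computation, incidentally. A minimal sketch in Python with numpy and Pillow; the file names and the strength factor are placeholders:)

    import numpy as np
    from PIL import Image

    # Load a grayscale height map (white = high, black = low).
    height = np.asarray(
        Image.open("planks_height.png").convert("L"), dtype=np.float32
    ) / 255.0

    # Surface slope in x and y via finite differences.
    dz_dx = np.gradient(height, axis=1)
    dz_dy = np.gradient(height, axis=0)

    # The normal is perpendicular to the slope; "strength" scales bumpiness.
    strength = 4.0
    nx, ny, nz = -dz_dx * strength, -dz_dy * strength, np.ones_like(height)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    nx, ny, nz = nx / length, ny / length, nz / length

    # Pack [-1, 1] components into [0, 255] RGB (the familiar bluish map).
    rgb = np.stack([nx + 1, ny + 1, nz + 1], axis=-1) * 127.5
    Image.fromarray(rgb.astype(np.uint8)).save("planks_normal.png")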


> Point-and-shoot cameras, finger painting, or crayons have a lower skill floor than even basic text-to-image generation, I'd claim.

To be fair to the people with the opposite view; a basic t2i system will generate some result with an empty prompt. (In many cases, because of their biases, this will tend to be a portrait.)
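This is easy to verify with the diffusers library, for the curious. A minimal sketch, assuming a CUDA GPU and that the Stable Diffusion 1.5 weights are available (the model id and output name are just examples):

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a Stable Diffusion checkpoint and generate from an empty prompt.
    # You still get *some* image -- often a portrait-ish one, reflecting
    # the training-data biases mentioned above.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe(prompt="").images[0].save("empty_prompt.png")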


> Art is notoriously hard to define, and so are the differences between good art and bad art.

Which makes ChatGPT (or whatever) just as valid as any tool for creating art.

> What I’m saying is that art requires making choices at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception.

As a life-long artist and musician, I agree with this. However, I find the artist's perspective lacking from this article. For many artists (myself included), the process is why we do it. It's truly therapeutic. I honestly cannot imagine my life without creative expression. Whether entering a prompt fulfills that for someone is up to them to decide. But, for me, it would remove the parts of creating art that give me joy.


Mostly just highlighting how misinformed the author is. AI art tools are very much able to adjust small details.


Same. I don't know if I would call myself an artist, despite creating art for... Jesus, most of my life at this point, and making a bit of cash off it, at least enough to cover the power bill each month. I went into programming because I was keenly aware of how hard it is to make a living as an artist (and getting harder all the time!) but like... I simply cannot fathom enjoying "prompt engineering" nearly as much as my current creative processes.

I've used AI generators a few times because they're interesting little toys, but fundamentally, a creative process is literally thousands if not millions of tiny decisions that are informed by other decisions. If anything, that's what I would call an "artist's voice" in any given creative product: an at least somewhat consistent through-line through those decisions that gives the final piece the "life" that is so clearly missing from AI art. All those millions of decisions, instead of being made by one or a few "voices," if you will, are replaced by millions of weighted-average decisions designed to reduce "error" in the product. It's quite literally soulless, and people pick up on this. No matter how much the AI lovers want to scream Luddite at me, it's true.

That's not to say it's completely without purpose; I think this stuff is going to do gangbusters for corporate news pieces, blogs, spam sites, etc. If you want royalty-free imagery for a thing and don't give much of a shit about what it is, AI can handle that quite well. But I simply can't fathom someone with an intention, who wants to say something with an art piece, using AI much, if at all.


Shocked to see so much criticism here of up-to-now god-like Ted Chiang.

https://archive.ph/QVg0P


If you ever wonder why everything made with Stable Diffusion looks the same, it's because it can't generate images that are too dark or too bright. The denoising process involves recognizing shapes, and shapes naturally have bright spots and dark spots. If you try to render "sea at night," you'll get some huge, bright moon, for example.

The "AI artists" using this tool lack the technical and artistic competency to realize this. They didn't write the algorithm, draw the dataset, or train the model. They prompted. They have the smallest amount of creative input into this whole pipeline.

I do believe AI can be used in the process of creating art (it's just an image generator, like fractal art), but the problem is that most people are going to use AI not as a means to create art, but as an end. You could fix the problem above by simply importing the image into GIMP and changing the brightness, but nobody does that, because they aren't interested in creating an art piece with a set goal in mind; they're just being entertained by generating dozens or hundreds of images with this technology.
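(That fix doesn't even require opening GIMP, for what it's worth. A minimal sketch with Pillow; the file names and the 0.6 factor are just examples:)

    from PIL import Image, ImageEnhance

    # Darken a generated image: a factor below 1.0 darkens, above 1.0 brightens.
    img = Image.open("sea_at_night.png")
    ImageEnhance.Brightness(img).enhance(0.6).save("sea_at_night_dark.png")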

Amusingly, you could also just type text in GIMP. Instead there is now something called "flux" that can do text literals.

While I see the point in making a prompt interpreter capable of generating text literally, if I were creating something, I wouldn't let an AI randomly pick a font, color, weight, serifs/slabs, etc. for me. These are creative choices in design that make all the difference. Prompting gives the illusion of (creative) choice. You get something that looks good, but "getting something that looks good" is the default state. Anyone can do that. It's the AI art equivalent of drawing a stickman. The prompters just don't realize it because they're comparing themselves to artists of other media, not to other AI artists.

When everything is AI, and anyone can generate an image with a prompt, the whole market will be so saturated with this (perhaps it already is at the rate these are generated) that all the novelty will be gone.

It was cool when AI was able to generate video, just like it was able to generate text. But in my opinion, those are feats of the technology, not artistic feats. The piece itself isn't interesting. It could be any video. Just the fact that the tech can do this is impressive. But it's just the tech that is impressive, not its output. Once the tech can do it once, it can do it every time, so the second time AI generates video is never going to be as impressive as the first time. By the thousandth time it will be as impressive as my ability to send this message to the other side of the world at the speed of light.


> If you ever wonder why everything made with Stable Diffusion looks the same

Everything that you know was made with Stable Diffusion looks the same, because if it didn't look the same, you probably wouldn't know it was made with Stable Diffusion.

> The "AI artists" using this tool lack the technical and artistic competency to realize this.

No, they don't. It's been a frequent topic in the AI art community, and one for which the community has sought and produced both in-generation and auxiliary-tooling solutions from very early on.

> They didn't write the algorithm, draw the dataset, or train the model.

Perhaps not for the base model, but people in the AI art community have done all three of those for improvements to, and tools built around, the base models and the original code implementation of them.

> I do believe AI can be used in the process to create art as it's just an image generator like fractal art, but the problem is most people are going to use AI not as a means to create art, but as an end.

Most of the people who are using any tool that can be used artistically are going to use it at the most superficial level. Is that true of AI image generators? Sure. But no more so than it is true of, say, pencils.

> You could fix the problem above by simply importing the image into GIMP and changing the brightness, but nobody does that because they aren't interesting into creating an art piece with a set goal in mind, they're just being entertained by generating dozens or hundreds of images with this technology.

People are using AI image generation with a set goal in mind, and people absolutely do import generated images into traditional image editors for adjustments. Though a lot of the people who really know what they are doing have that built into their workflows, reducing the need to do manual spot correction in a separate editor.

> Amusingly, you could also just type text in GIMP. Instead there is now something called "flux" that can do text literals.

Image generation models have been able to do text to a certain extent for a while, and improvements in text generation have been a major trumpeted feature of many recent model releases. Flux isn't interesting because "it can do text literals"; it is interesting because the community has discovered that it can be finetuned (specifically, that LoRAs can be trained for it) in ways that allow control of text style, similar to fonts.

I wasn't aware that GIMP could conform typed text to the implicit 3D shape of the surfaces it is being placed on in a 2D image, though.

> When everything is AI, and anyone can generate an image with a prompt, the whole market will be so saturated with this (perhaps it already is at the rate these are generated) that all the novelty will be gone.

Probably. So what? Novelty isn't the point in every image people produce. Lowering the cost and effort to produce basically "looks good" images for lots of casual uses isn't, itself, an advance in fine art, sure. But it is, in itself, useful.


>Lowering the cost and effort to produce basically "looks good" images for lots of casual uses isn't, itself, an advance in fine art, sure. But it is, in itself, useful.

What you misunderstand is what "looks good" means. Before AI, art that looked good was valuable and impressive precisely because few artists could produce it. It was amazing (and it still is) that a human being can reproduce realistic or surrealistic imagery with just pencil and paint.

If anyone can do it with one click, there is no value.

Like I said before, you're using AI art and comparing it to other media, like pencil and paint. That's like comparing photography to pencil and paint. Just because photography is more "realistic" that doesn't mean people will value a photo more than an artist's realistic rendering of the same thing.

When AI just looks good, that is actually the most worthless possible thing, as valuable as a sketch of a stickman.


> What you misunderstand is what "looks good" means.

No, I don't.

> Before AI, art that looked good was valuable and impressive precisely because few artists could produce it.

I didn't say anything about valuable and impressive; I said "useful". For an analogy, clip art isn't particularly valuable or impressive, but it's useful for lots of things. Yes, exchange value is dependent on scarcity; utility is not.


Because the definition of 'art' is somewhat philosophical, the more salient question is "will AI make something indistinguishable from art?" and the answer is easy: yes.


There are none so confident as the ignorant.

Ask an artist what art is? Hell no.


Some I agree with, some I disagree with. I think this author mainly speaks to the idea of art being the human equivalent of a peacock's tail: the effort is the point, not the result.

Myself, I like results: A metaphor about the scent of roses is just as sweet, after I find it came from an LLM.

> I think the answer is no. An artist—whether working digitally or with paint—implicitly makes far more decisions during the process of making a painting than would fit into a text prompt of a few hundred words.

In the art of words,
Even the briefest form has weight,
Prompt and haiku both.

> This hypothetical writing program might require you to enter a hundred thousand words of prompts in order for it to generate an entirely different hundred thousand words that make up the novel you’re envisioning.

That would be an improvement on what I've been going through with the novel I started writing before the Attention Is All You Need paper — I've probably written 200,000 words, and it's currently stuck at 90% complete and 90,000 words long.

> Believing that inspiration outweighs everything else is, I suspect, a sign that someone is unfamiliar with the medium. I contend that this is true even if one’s goal is to create entertainment rather than high art.

I agree completely. The better and worse examples of AI-generated content are very obvious, and I think relate to how much attention to detail people pay to the result. This also applies to both text and images — think of all the cases in the first few months where you could spot fake reviews and fake books because they started "As a large language model…"

The quality of the output then comes down to how good the user is at reviewing the result: I can't draw hands, but that doesn't stop me from being able to reject the incorrect outputs. Conversely, I know essentially nothing about motorbikes, so if an AI (image or text) makes a fundamental error about them, I won't notice the error and would therefore let it pass.

> Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it.

This has been the case so far, but even then not entirely. To use the example of photographs, even CCTV footage can be interesting and amusing. Yes, this involves filtering out all the irrelevant stuff, and yes, that filtering is itself an act of effort, but even there the greatest effort is the easiest to automate: has anything at all even happened in this image?

To me, this matches the argument between the value of hand-made vs. factory made items. Especially in the early days, the work of an artisan is better than the same mass-produced item. An automated loom replacing artisans, pre-recorded music replacing live bands in cinemas and bars, cameras replacing painters, were all strictly worse in the first instance, but despite this they remained worth consuming — even in, as per the acknowledgement in the article itself: "When photography was first developed, I suspect it didn’t seem like an artistic medium because it wasn’t apparent that there were a lot of choices to be made; you just set up the camera and start the exposure."

> Language is, by definition, a system of communication, and it requires an intention to communicate.

I do not see any requirement for "intention", but perhaps it is a question of definitions — at most I would reverse the causality, and say that if you believe such a requirement exists, then whatever it is you mean by "intention" must be present in an AI that behaves like an LLM.

> There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you.

Despite knowing how they work, I am unsure of this. I do not know how it is that I, a bag of mostly-water whose thinking bits are salty protein electrochemical gradients, can have subjective experiences.

I do know that ChatGPT is learning to act like us. On the one hand, it is conceivable that it could use some of its vector space to represent emotional affect that closely corresponds to the levels of serotonin, adrenaline, dopamine, and oxytocin in a real human — and I can even test this simply by asking it to pretend it has elevated or suppressed levels of these things.

On the other, don't get me wrong, my base assumption here is that it's just acting: I know that there are many other things, such as VHS tapes, which can reproduce the emotional affect of a real human, present any argument about their own personhood, beg not to be switched off, and I know that none of it is real. Even the human who was filmed, whose affect and words got onto the tape, was most likely faking all those things.

I have no way to tell if what ChatGPT is doing is more like consciousness, or more like a cargo-cult's hand-carved walkie-talkie shaped object is to the US forces in the Pacific in WW2.

But when it's good enough at pretending… if you can't tell, does it matter?

> Because language comes so easily to us, it’s easy to forget that it lies on top of these other experiences of subjective feeling and of wanting to communicate that feeling.

> it’s the same phenomenon as when butterflies evolve large dark spots on their wings that can fool birds into thinking they’re predators with big eyes.

100% true. Even if, for the sake of argument, I assume that an LLM has feelings, there's absolutely no reason to assume that those feelings are the ones that it appears to have to our eyes. The author gives an example of dogs, writing "A dog can communicate that it is happy to see you" — but we know from tests, that owners believe dogs have a "guilty face" which is really a "submission face", because we can't really read canine body language as well as we think we can: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4310318/

Also, these models are trained to maximise our happiness with their output. One thing I can be sure of is they're sycophants.

> The point of writing essays is to strengthen students’ critical-thinking skills; in the same way that lifting weights is useful no matter what sport an athlete plays, writing essays develops skills necessary for whatever job a college student will eventually get. Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.

> By Chollet’s definition, programs like AlphaZero are highly skilled, but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills.

Both fantastic examples.


Art is made by people. It’s not complicated.


Is that just your axiomatic definition or do you have an actual reason to claim so?


It’s the denotative definition.


You have opened a big can of worms that was explored a century ago: if I throw a bucket of paint on a canvas and let it drip, is it made by people? I could argue that the entire painting was made by gravity, which is definitely not human. Where do you draw the line? Is it art if the work is completely digital and made with digital tools? What if I use smart brushes? A camera?


Humans have consciousness and free will, which is expressed through intention, which forms a causal chain. Photography on its own is usually vapid, but its subject can be meaningful.


You dodged my questions, and from your word soup I can infer that the matter is not as simple as you said ahah


You can believe whatever you want but your question is irrelevant. Art is made by people, there’s no ambiguity.


"Your questions disprove my point so they are irrelevant" damn ahaha


I don’t know who you’re quoting. Art is made by people, it’s literally in the definition of the word art. Google it. You haven’t disproven it by observing gravity. Idk what to tell you.


If there is no ambiguity: is a man pushing a button on a camera art? A man pushing a button on an image generator? Yes or no answer because it's very simple as you say, no whining or word soup ahah


A man who just pushes a button on a digital camera isn't making art. A guy who prompts an LLM to generate an image isn't making art either. I don't understand why you think a denotative definition is "whining" or "word soup". Just look up the definition of art.


>is a man pushing a button on a camera art?

"No"

>A man pushing a button on an image generator?

"No"

Thank you, I don't need anything else.


Eh okay? Again don’t take my word for it, it’s not my opinion. Look it up.


I agree that art is human expression, in case it wasn't clear, and a human using AI is human expression, just like a photographer's is, and therefore it's art. No two humans are ever going to agree on what tools make a person an artist: some people don't think you're an artist if you draw in Photoshop; some think you are even if you use AI. So there is nothing simple or unambiguous about it, and if you want to prove my point, you can argue that AI is not a tool the way a brush or a camera is. And no, Google is not going to give you a universal answer ahah


“A.I.” is a stochastic parrot. You prompt it with input and it outputs a complex copy and paste based on real human made art. You’re not making anything and the software isn’t producing art. What some hypothetical people might have briefly thought about Photoshop has nothing to do with what LLMs are. It’s a false comparison. A brush, on the other hand, is analogous to a digital brush.


Thank you for proving my point.


You’re making “a point” like you make “A.I. art” which is not at all. You’re derivative.


And Google will give you a denotative definition for art. Just because people have differing opinions about what art is doesn’t mean all those opinions are equally valid or true.


Art is perceived by people. It's not complicated.


Your statement isn’t complicated but it’s also not logical or accurate. If art is what is perceived by people then everything is art and art is a redundant term for everything.


My statement wasn't complicated, yet somehow you still didn't understand it, or you're purposefully misunderstanding it to try and strawman.


Art is made by people therefore not everything we perceive is art. It’s not personal.



