People have said similar things about every advancement in technology that made art easier. I would consider video games to be art but I know people who completely disagree.
The thing I dislike about AI is how mundane it has made “art”, where anyone with a weird idea can generate what used to take a lifetime of skill to develop, and get it so wrong at the same time.
Making art easier by allowing an artist to get the thing they actually wanna make a little bit more practical to realize = halal.
Making “art” easier by making a stochastic model that just copy pastes training data from other artists to make a crude representation of what the user wrote into a text box = haram.
i use stable diffusion to generate npc images in my ttrpgs
it’s handy to be able to churn out a bunch of decent looking tokens without having to trawl the internet for ages
Yeah, I agree, it’s kind of a blurry line. If someone draws something and uses AI to enhance it it’s not the end of the world, and I think it’s still art unless the “enhancement” is totally replacing big parts, or all, of the input. Otherwise, it’s no different than any other tool that has made art easier to make.
But I think in most cases generative AI can’t make anything that could reasonably be considered art, because the substance they’re taking from to make the output isn’t even the user’s. It’s nothing more than a very advanced plagiarism machine where your prompt tells it which works to plagiarize from.
It’s nothing more than a very advanced plagiarism machine where your prompt tells it which works to plagiarize from.
Probably an unpopular opinion but I disagree. People have been learning from other artists for so long that I want to say art is iterative and not just transformative. Just because an art style is copied doesn’t make it less art in my opinion. As soon as you create art you are creating a template to be copied and to be iterated upon. It’s why we have genres in art, it’s why so many songs use the same chords, and why art progresses.
In my opinion AI doesn’t create art, not because it copies but because it doesn’t understand what it is making. But if you were to use samples from AI work to piece together something, you could have that understanding and it could be art. The same way a photographer might create art of a landscape they didn’t create, but they didn’t just copy it either. I wouldn’t doubt that 200 years ago there were artists who accused the camera of just copying art, but I think we can look at some pictures and see them as art, and others not.
I think the photographer example you put forward does touch upon an interesting point, since there really were people who ridiculed photography as not art. And honestly the criteria I gave earlier kinda would disqualify photography, which is unfair.
Would AI be able to create art if it really did understand how the pieces it’s putting together are part of what the user wants? I think it might be a useless question, because skeptics (like me) can keep shifting the goalposts of what understanding really means. So it’s unfalsifiable in a way. Some techbros claim AI can understand because it is capable of minimizing a loss function. But I’m not satisfied by that, because it amounts to claiming that if a system performs a task well, the system has a cognitive understanding of the task. It’s a non sequitur, and I’ve seen AI enthusiasts make the same form of non sequitur a thousand times.
Maybe the conclusion we can draw from it is that trying to define what exactly is and isn’t art is hard, but clearly, the OP is not.
It is incredibly difficult, and even among art made by people there are some who would say it isn’t art. Honestly I feel like art is only art when the audience can understand, at least roughly, how it was made. Personally I think that art can be more than nonsensical, that it can have purpose; even a smartphone can be art. People will disassemble and hang smartphones or electronics because they see beauty in a collection of components.
I don’t know enough to say what is or isn’t art, only that I have an opinion of what I see as art. I don’t think the OP posted art, not because a machine made it but because it looks wrong to me. I’m sure there are some who might see it as art, and I think we’re allowed to disagree. But here we are with cameras on our person that most of humanity would have killed for, and we use them to take shitty selfies of ourselves and the food we’re eating. A tool can fail to make art 99.99% of the time and still be capable of making art.
I would agree with you, if that was at all how the AIs generate images.
They don’t “copy and paste” anything. The images they make are novel. The AI is only trained on other images. It doesn’t have access to them to copy them once the training ends.
The way the AI generates new images is really similar to humans. It goes over its references and literally creates a brand new image.
Now, just like a person, you can ask it to make something as an exact copy of something that exists. And it can do it like a human, through “technique” and references. But it’s not copying directly, it’s making a new image that is like the one you asked it to copy.
I really wish people would realise this. Idk why the idea that image-generating AI is “copying” from a database of images is so prevalent…
The database of images is literally only used during training. Once the model is trained, the database doesn’t exist to it anymore.
There’s a difference between an artist who studied their whole life, seeing paintings, seeing references, going to classes, to then create new images from their own mind, and one who traces images from Google. AI currently does the first, not the latter.
Look, I know how deep learning works. I know it doesn’t literally copy the images from the training dataset. But the entire point of supervised learning is to burn information about the training data into the weights and biases of a neural network in such a way that it generalizes over some domain, and can correlate the desired inputs with the desired outputs. Just because you’re using stochastic methods to indirectly reproduce the training data (of course, in a way that’s invisible to humans because of the nature of deep neural networks), doesn’t suddenly erase the fact that the only substance an AI has to draw from is the training data itself.
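To make that claim concrete, here’s a toy sketch (plain Python, a single weight fit by gradient descent, nothing like a real diffusion model): training bakes the data’s pattern into the parameter, and afterward the model reproduces that pattern with no access to the dataset at all.

```python
# Tiny supervised-learning example: fit one weight to points drawn
# from y = 2x by gradient descent on a squared loss.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # training set, y = 2x

w = 0.0    # the single "weight"
lr = 0.01  # learning rate
for _ in range(1000):
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
        w -= lr * grad

del data  # the model no longer has the training data...
print(round(w, 3))  # ...yet the weight encodes it: prints 2.0
```

The only “substance” the trained parameter has is what the training data put there, which is the point being made above, just at a scale of one weight instead of billions.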
I think it’s really oversimplifying how humans make art to say that it’s just going over references and creating something new from it. As humans, we are influenced by the work we’ve seen, but because of our unique experience we inject something completely new into any art we make, no matter how derivative. An AI is incapable of doing the same (except for some random noise), because literally all it’s capable of doing is composing together information that has been baked into its weights and biases. It’s not like when you ask a generative AI to make something for you, it will decide to get funky with it. All it’s doing is drawing from the information that has been baked into it.
Just like how ChatGPT doesn’t actually understand what it’s saying because it’s only capable of predicting statistical relationships between words one word at a time, and has no model of meaning, only of how words go together in the training data, AI that generates images doesn’t actually know what it’s making or why. That is totally different from humans who make a piece of art step by step and do so very deliberately.
Edit: I recommend you watch this video by an astrophysicist who works with machine learning regularly, she makes my point a lot better than I can. https://youtu.be/EUrOxh_0leE
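The “statistical relationships between words” point can be illustrated with a bigram count table, which is far cruder than a real LLM but shares the next-word-prediction objective and, like it, stores only co-occurrence statistics, not meaning. The corpus here is made up for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word followed which in the
# training text, then predict the most frequent follower. There is
# no model of meaning anywhere, only counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" ("cat" followed "the" twice)
```

Real LLMs replace the count table with a neural network and generalize far better, but the training signal is the same: which token tends to come next.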
How would you classify those “experiences” people have that influence their art or work other than data? Honest question.
And very interesting video. I still don’t 100% align with this perspective, cause I feel it tries to give the brain something extra beyond materiality. While I’m no material reductionist, I don’t think our human creativity is “special” or “metaphysical”. It’s our brain, and it’s physical. It can be physically replicated.
I think AI will have a “soul” or consciousness because I think everything already has it. It’s just our human biology that allows this consciousness to be self-experiential and experience other things, such as thoughts and ideas and feelings. A rock doesn’t have those, but it has a “soul” or consciousness. But I feel I digressed a lot lol
Also to make it clear, I don’t think AI exists already. I think these models and developments we have are part of AI though.
I don’t disagree that experiences are data. The major distinction I’m making is that the human creative process uses more than just data: we have intention, aesthetics, we make mistakes, change our minds, iterate, etc. For a generative AI, the “creative process” is tokenizing a string, running the tokens through an attention matrix, plugging that into a thousand different matrices that then feed into a post-processing layer and spit out an image. At no point does it look at what it’s doing and evaluate how it’s gonna fit into the final picture.
As for the rest of your reasoning, I neither agree nor disagree, I think we just don’t have the same definition of consciousness.
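The pipeline described above can be caricatured in a few lines. Everything here is invented for illustration (real tokenizers and models work very differently), but it shows the structural point being argued: one fixed feed-forward pass, where injected noise is the only source of variation and there is no step where the system evaluates its own intermediate output.

```python
import random

def tokenize(prompt):
    # stand-in for a real tokenizer: map each word to a small number
    return [sum(map(ord, word)) % 100 for word in prompt.split()]

def generate(tokens, seed=0):
    rng = random.Random(seed)
    x = tokens
    for _ in range(3):  # stand-in for the stacks of matrices
        x = [(7 * v + rng.randrange(5)) % 256 for v in x]
    return x            # stand-in for the output "image"

a = generate(tokenize("a cat in a hat"), seed=42)
b = generate(tokenize("a cat in a hat"), seed=42)
assert a == b  # same prompt + same noise -> the exact same output
```

Nothing in the loop inspects the result and revises it; whether that disqualifies the output as art is exactly the disagreement in this thread.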
I feel your description of what a generative AI does is pretty reductive. The middle part, “plugging that into a thousand different matrices”, is not at all well understood. We don’t know how the AI generates the images or text. It can’t explain itself.
And we have ample research showing these models have internal models of the world and can have “thoughts”.
In any case, what would you say consciousness is? This is a more interesting question to me tbh.
Made printing imitations of art easier. Sure, those imitations can be used as part of a larger work, but the point still stands.
people said the same thing about photography. And that was before digital photography, back when some level of knowledge of photochemistry was required, you needed a dark room to develop in, etc. People said that it was just an imitation of painting. That turned out to not be quite the case: photography developed into its own art form, and painting became less focused on realism and documenting reality, since that became the domain of photography. What photography really accomplished was reducing the amount of time and technical ability required to produce art. Same with AI stuff: even if it’s reactionary junk a lot of the time, that says more about who’s writing the prompt and who’s curating the database the model is trained on. I imagine sculptors were also upset when 3D modeling and 3D printing showed up.
I go with Marx on this and stress that the problem isn’t the means of production but who controls it. Even in the context of AI generated art, the labor is reduced to the amount of time needed to think up and write a prompt (which is very little), but you can then take the output and manually refine it using traditional methods if you’re capable, or refine/iterate the prompt, etc. So there is some creativity going into it. And then of course AI models usually have a database of already-created art to draw statistical data from when generating new art. The process of curating/maintaining/labelling that database requires a huge amount of labor, as does the process of writing and maintaining the model itself. Technology is what Marx called constant capital. Constant capital is just dead labor, i.e. labor that was already performed in the past. When you generate AI art it’s not that there’s no labor going into it; it’s just that the labor was performed in the past by countless people. Same as when you use a hammer you bought from a store: you still exercise labor power to use the hammer, it’s just that the labor of making the hammer was performed in the past for you by different people.
It’s also not only prompt writing but also image-to-image. So you can take a crudely drawn input image and have the AI refine it. So that still requires creativity on your part, as well.
is AI generated but I also think it’s creative and it’s not just reactionary slop like the pilgrim shit in the OP.
I’ve generated a lot of images with these tools, but they’re not art.
The problem with image generation is that, unlike a camera which replaces the choices that a painter would make while painting a scene with the choices a photographer makes while shooting it, image generation robs an artist of the ability to make choices while constructing their work and doesn’t replace them with anything substantial.
Now you might be tempted to say that engineering the prompt lets you make choices while generating images. It does so only in the most surface-level way possible, alienating the “artist” from 99% of what goes into creating an image. All of the choices that a painter would make while the paint is hitting the canvas are instead made by the algorithm, and they are made specifically to copy the choices artists in the past have made, not to come up with anything novel or unique. Then the person who wrote the prompt views the output and decides if it’s “close enough” to what they had in mind or not, without exercising artistic control over the process at any point save for the very beginning and end of it.
That is not to say that you can’t take a generated image and start making art with it, but every pixel that was generated that you don’t change is a choice that the bot made that you didn’t, a tiny bit of alienation from your own work that you have invited into the artistic process, and frankly that cheapens it.
but every pixel that was generated that you don’t change is a choice that the bot made that you didn’t
the bot made zero choices. It consulted your prompt (a choice you made) and then it consulted a database full of pre-existing human-made art that has been curated and labelled and statistically sorted. Also at some point some random noise is introduced so it doesn’t generate the same thing twice. Bots do not make choices. These are statistical models. It’s helpful not to mystify them or attribute agency to them.
Also I don’t get the point of gatekeeping art according to technical ability, which just comes down to how much free time you have to practice, your level of educational attainment, how much disposable income you have to pay for said education, and your physical ability. If a person with no arms decides to generate a painting with AI from a carefully written prompt they came up with, and someone says “that’s not real art because you didn’t use your hands”… what is the point of that? If an idea comes to mind, you should be free to make it however you want.
I never said the bot made choices. I said it removed choices from the artist. Whoops, thanks for replying in good faith though.
Edit: The important thing is that the choices from the artist are getting taken away.
Also I never said you need technical ability to make art; I’m working from the unstated assumption that it is the choices we make when we create art that make it… art. A person who is bad at doodling who nevertheless makes a drawing has made art; that same person putting a couple words into a generator prompt has not.
Last thing: don’t fucking come at me with an argument about gatekeeping based on class and wealth when the only reason this fucking toy exists for you to play with in the first place is untold billions of hours of stolen labor from poor countries.
I never said the bot made choices. I said it removed choices from the artist.
you said
but every pixel that was generated that you don’t change is a choice that the bot made that you didn’t
Now that you have clarified what you really meant, that is helpful, but I hope you can see why I was confused by your original wording (quoted above). Also I don’t think it removes choices from the artist, since the artist is still free to discard whatever the AI makes and re-generate it, or use a more traditional method. The freedom to reject the output if you don’t like it is a choice, along with the choice to make the prompt.
Also I never said you need technical ability to make art
That’s fair. I’m sorry for misunderstanding you in that regard. I just find this subject interesting. I’m not coming at this from a place of anger or trying to annoy people.
untold billions of hours of stolen labor from poor countries.
You’re correct and I wouldn’t dream of disagreeing with this.
Last thing: don’t fucking come at me with an argument about gatekeeping based on class and wealth when the only reason this fucking toy exists for you to play with in the first place is
Nevertheless I think it’s more of a tool than a toy. The problem is that the tools are made by the workers and owned by the capitalists. We should be reacting to that economic arrangement and not the tools themselves.
every pixel that was generated that you don’t change is a choice that the bot made that you didn’t, a tiny bit of alienation from your own work that you have invited into the artistic process, and frankly that cheapens it.
This is a really good argument, I think, for excluding most AI content from “art”. You put into words that disappointing, unamused feeling you get after the novelty wears off, when you learn an image is AI. Like it’s illegitimate, but you can’t explain why, because historically art has always been pushing boundaries and making people question art itself. Maybe in the future, people saying this will be considered luddites who can’t appreciate “painting with words” or whatever.
I already said it can be a viable tool and addition to an artist’s work; my point is that on its own, using the model’s output directly with no further interaction is like taking random photographs. I mean completely random photographs, no framing or lighting considerations or anything of the sort, just flashing the bulb a bunch of times with no further decisions made (yeah, yeah, that itself could be an intentional art project if we go down that rabbit hole, I know, I know).
The pic you provided had enough artist input in its parameters to be like an artfully-taken photograph.
I go with Marx on this and stress that the problem isn’t the means of production but who controls it.
content gives it a bit too much credit. there’s almost always body horror (look at the baby’s fingers sinking into her collarbone) and I’m so sick of it that I block AI crap as spam on my feed.
I think we need to just start calling it “content” AI generated content is not art, especially in a context like this
It’s actually really easy to use an already made distinction: they’re Computer-Generated Images (CGI).
i use stable diffusion to generate npc images in my ttrpgs
Great for you, but you can hardly claim you created art
point to where i said i did
I’m not arguing with you, that’s just what the conversation is about
the comment i replied to called using an llm to generate images in general haram
i was presenting a use case he may not have thought of
The same fundamental tech can and has been tuned to do both
Made printing imitations of art easier. Sure, those imitations can be used as part of a larger work, but the point still stands.
people said the same thing about photography. And that was before digital photography, back when some level of knowledge of photochemistry was required, and you needed a dark room to develop in, etc. People said that it was just an imitation of painting. That turned out to not be quite the case and photography developed into its own art form, and painting became less focused on realism and documenting reality since that became the domain of photography. What photography really accomplished was reducing the amount of time and technical ability required to produce art. Same with AI stuff even if it’s reactionary junk a lot of the time, that says more about who’s writing the prompt and who’s curating the database that the model is trained on. I imagine sculptors were also upset when 3D modeling and 3D printing showed up.
I go with Marx on this and stress that the problem isn’t the means of production but who controls it. Even in the context of AI generated art, the labor is reduced to the amount of time needed to think up and write a prompt (the labor of thinking of and writing a prompt is very small) but you can then take the output and manually refine it using traditional methods if you’re capable, or refine/iterate the prompt etc. So there is some creativity going into it. And then of course AI models usually have a database of art that has already been created to draw statistical data from when generating new art. The process of curating/maintaining/labelling that database requires a huge amount of labor, as does the process of writing and maintaining the model itself. Technology is what Marx called constant capital. Constant capital is just dead labor. i.e. labor that was already performed in the past. When you generate AI art it’s not that there’s no labor going into it, it’s just that the labor was performed in the past by countless people. Same as when you use a hammer you bought from a store. You still exercise labor power to use the hammer, it’s just that the labor of making the hammer was performed in the past for you by different people.
It’s also not only prompt writing but also image-to-image. So you can take a crudely drawn input image and have the AI refine it. So that still requires creativity on your part, as well.
is AI generated but I also think it’s creative and it’s not just reactionary slop like the pilgrim shit in the OP.
I’ve generated a lot of images with these tools, but they’re not art.
The problem with image generation is that, unlike a camera which replaces the choices that a painter would make while painting a scene with the choices a photographer makes while shooting it, image generation robs an artist of the ability to make choices while constructing their work and doesn’t replace them with anything substantial.
Now you might be tempted to say that engineering the prompt lets you make choices while generating images. It does so only in the most surface-level way possible, alienating the “artist” from 99% of what goes into creating an image. All of the choices that a painter would make while the paint is hitting the canvas are instead made by the algorithm, and they are made specifically to copy the choices artists in the past have made, not to come up with anything novel or unique. Then the person who wrote the prompt views the output and decides if it’s “close enough” to what they had in mind or not, without exercising artistic control over the process at any point save for the very beginning and end of it.
That is not to say that you can’t take a generated image and start making art with it, but every pixel that was generated that you don’t change is a choice that the bot made that you didn’t, a tiny bit of alienation from your own work that you have invited into the artistic process, and frankly that cheapens it.
The bot made zero choices. It consulted your prompt (a choice you made), and then it consulted a database of pre-existing, human-made art that has been curated, labelled, and statistically sorted. Some random noise is also introduced so it doesn’t generate the same thing twice. Bots do not make choices; these are statistical models. It’s helpful not to mystify them or attribute agency to them.
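To make the "no choices" point concrete: strip away the scale and a generator is just a deterministic function of the inputs the user supplies. Here's a toy sketch (this is not a real diffusion model; the seeded RNG is a hypothetical stand-in for the statistical machinery) showing that all the "variation" traces back to the prompt and seed the human chose:

```python
import random

def generate(prompt: str, seed: int, n_pixels: int = 8) -> list[int]:
    """Toy stand-in for an image generator.

    A pure function of (prompt, seed): everything downstream is
    determined by the user's inputs -- the 'model' exercises no agency.
    """
    # Seed an RNG from the two user-supplied inputs. A string seed is
    # hashed deterministically by random.Random, so the same inputs
    # always produce the same stream.
    rng = random.Random(f"{prompt}|{seed}")
    return [rng.randrange(256) for _ in range(n_pixels)]

# Same prompt + same seed -> identical output, every time.
a = generate("a portrait of a pilgrim", seed=42)
b = generate("a portrait of a pilgrim", seed=42)
assert a == b

# Only a change in the user's inputs changes the result.
c = generate("a portrait of a pilgrim", seed=43)
assert a != c
```

Real diffusion models work the same way at this level: fix the prompt, seed, and weights, and you get the same image back. The "randomness" is an input someone chose, not a decision the model made.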
Also I don’t get the point of gatekeeping art according to technical ability, which just comes down to how much free time you have to practice, your level of educational attainment, how much disposable income you have to pay for said education, and your physical ability. If a person with no arms decides to generate a painting with AI from a carefully written prompt they came up with, and someone says “that’s not real art because you didn’t use your hands”… what is the point of that? If an idea comes to mind, you should be free to make it however you want.
I never said the bot made choices. I said it removed choices from the artist.

Whoops, thanks for replying in good faith though.

Edit: The important thing is that the choices from the artist are getting taken away.
Also I never said you need technical ability to make art. I’m working from the unstated assumption that it is the choices we make when we create art that make it… art. A person who is bad at doodling who nevertheless makes a drawing has made art; that same person putting a couple of words into a generator prompt has not.
Last thing: don’t fucking come at me with an argument about gatekeeping based on class and wealth when the only reason this fucking toy exists for you to play with in the first place is untold billions of hours of stolen labor from poor countries.
you said
Now that you have clarified what you really meant, that is helpful, but I hope you can see why I was confused by your original wording (bolded above). Also I don’t think it removes choices from the artist since the artist is still free to discard whatever the AI makes and re-generate it or use a more traditional method. The freedom to reject the output if you don’t like it is a choice along with the choice to make the prompt.
That’s fair. I’m sorry for misunderstanding you in that regard. I just find this subject interesting. I’m not coming at this from a place of anger or trying to annoy people.
You’re correct, and I wouldn’t dream of disagreeing with this.
Nevertheless I think it’s more of a tool than a toy. The problem is that the tools are made by the workers and owned by the capitalists. We should be reacting to that economic arrangement and not the tools themselves.
This is a really good argument, I think, for excluding most AI content from “art”. You put into words that disappointing, unamused feeling you get after the novelty wears off, when you learn an image is AI. Like it’s illegitimate but you can’t explain why, because historically art has always been pushing boundaries and making people question art itself. Maybe in the future, people saying this will be considered Luddites who can’t appreciate “painting with words” or whatever.
I already said it can be a viable tool and an addition to an artist’s work; my point is that, on its own, using generator output directly with no further interaction is like taking completely random photographs: no framing, no lighting considerations, nothing of the sort, just flashing the bulb a bunch of times with no further decisions made. (Yes, that itself could be an intentional art project if we go down that rabbit hole, I know.)
The pic you provided had enough artist input in its parameters to be like an artfully-taken photograph.
No arguments from me there.
I got ya; no worries
Fair point; my bad
I was just thinking about the original reaction painters had to photography and extending that to other fields which was lazy on my part.
Nah you didn’t sound like an ass, no worries
“Content” gives it a bit too much credit. There’s almost always body horror (look at the baby’s fingers sinking into her collarbone), and I’m so sick of it that I block AI crap as spam on my feed.