5/18/2025 at 6:16:31 AM
Some of these are very obviously trained on webtoons and manga, probably Pixiv as well. This is very clear from seeing CG buildings and other misc artifacts, so this is obviously trained on copyrighted material. Art is something that cannot be generated like synthetic text, so it will have to be powered by human artists nearly forever, or else you will continue to end up with artifacting. So it makes me wonder if artists will just be downgraded to an "AI training" position, but it could be for the best: people could draw what they like instead and have that input feed into a model for training, which doesn't sound too bad.
While being very pro-AI in terms of any kind of trademarking and copyright, it still makes me wonder what will happen to all the people who provided us with entertainment, and whether the quality will continue to increase or if we're going to start losing challenging styles because "it's too hard for AI" and everything will start 'feeling' the same.
It doesn't feel the same as people being replaced with computers and machines; this feels like the end of a road.
by kachapopopow
5/18/2025 at 6:53:27 AM
It’s great that you have sympathy for illustrators, but I don’t see a big difference whether the training data is a novel, a picture, a song, a piece of code, or even a piece of legal text.
As my mom retired from being a translator, she went from typewriter to machine-assisted translation with centralised corpus databases. All the while the available work became less and less, and the wages became lower and lower.
In the end, the work we do that is heavily robotic will be done by less expensive robots.
by sshine
5/18/2025 at 7:17:05 AM
Here’s the argument:
The output of her translations had no copyright. Language developed independently of translators.
The output of artists has copyright. Artists shape the space in which they’re generating output.
The fear now is that if we no longer have a market where people generate novel arts, that space will stagnate.
by earthnail
5/18/2025 at 9:58:17 AM
A translation is absolutely under copyright; it is a creative process, after all. This means the original text of a book can be in the public domain because it's very old, but not the translation, because it's newer.
For example, Julius Caesar's "Gallic War" in the original Latin is clearly not subject to copyright, but a recent English translation will be.
by nickserv
5/19/2025 at 4:11:26 AM
So if a machine were to do the translation, should that also be considered a creative work?
If not, that would put pressure on production companies to use machines so they don’t have to pay future royalties.
by dcsan
5/19/2025 at 7:04:50 AM
Well, that's the real question, isn't it?
Our current best technology, LLMs, is good enough for translating an email or meeting transcript and getting the general message across. Anything more creative, technical, or nuanced, and they fall apart.
Meaning for anything of value like books, plays, movies, poetry, humans will necessarily be part of the process: coaxing, prompting, correcting...
If we consider the machine a tool, it's easy, the work would fall under copyright.
If we consider the machine the creator, then things get tricky. Are only the parts reworked/corrected under copyright? Do we consider a work under copyright only if a certain portion of it was machine-generated? Is the prompt under copyright, but not its output?
Without even getting into the issue of training data under copyright...
There is some movement regarding copyright of AI art, legislation being drawn up and debated in some countries. It's likely translations would be impacted by those decisions.
by nickserv
5/19/2025 at 5:32:17 AM
> So if a machine was to do the translation, should that also be considered a creative work?
No, but it would be a derivative work covered by the same copyright as the original.
The quality of human translation is better, for now.
by MoonGhost
5/18/2025 at 9:03:57 AM
> The output of artists has copyright.
Copyright is a very messy and divisive topic. How exactly can an artist claim ownership of a thought or an image? It is often difficult to ascertain whether a piece of art infringes on the copyright of another. There are grey areas like "fair use", which complicate this further. In many cases copyright is also abused by holders to censor art that they don't like for a myriad of unrelated reasons. And there's the argument that copyright stunts innovation. There are entire art movements and music genres that wouldn't exist if copyright were strictly enforced on art.
> Artists shape the space in which they’re generating output.
Art created by humans is not entirely original. Artists are inspired by each other, they follow trends and movements, and often tiptoe the line between copyright infringement and inspiration. Groundbreaking artists are rare, and if we consider that machines can create a practically infinite number of permutations based on their source data, it's not unthinkable that they could also create art that humans consider unique and novel, if nothing else because we're not able to trace the output to all of its source inputs. Then again, those human groundbreaking artists are also inspired by others in ways we often can't perceive. Art is never created in a vacuum. "Good artists copy; great artists steal", etc.
So I guess my point is: it doesn't make sense to apply copyright to art, but there's nothing stopping us from doing the same for machine-generated art, if we wanted to make our laws even more insane. And machine-generated art can also set trends and shape the space they're generated in.
The thing is that technology advances far more rapidly than laws do. AI is raising many questions that we'll have to answer eventually, but it will take a long time to get there. And on that path it's worth rethinking traditional laws like copyright, and considering whether we can implement a new framework that's fair towards creators without the drawbacks of the current system.
by imiric
5/18/2025 at 10:23:56 AM
Ambiguities are not a good argument against laws that still have positive outcomes.
There are very few laws that are not giant ambiguities. Where is the line between murder, self-defense, and accident? There are no lines in reality.
(A law about spectrum use, or registered real estate borders, etc. can be clear. But a large amount of law isn’t.)
Something must change regarding copyright and AI model training.
But it doesn’t have to be the law, it could be technological. Perhaps some of both, but I wouldn’t rule out a technical way to avoid the implicit or explicit incorporation of copyrighted material into models yet.
by Nevermark
5/18/2025 at 11:08:30 AM
> There are very few laws that are not giant ambiguities. Where is the line between murder, self-defense and accident? There are no lines in reality.
These things are very well and precisely defined in just about every jurisdiction. The "ambiguities" arise from ascertaining the facts of the matter, and whether a specific set of facts fits within a given set of rules.
> Something must change regarding copyright and AI model training.
Yes, but this problem is not specific to AI; it is the question of what constitutes a derivative, and that is a rather subjective matter in light of the good ol' axiom that "there is nothing new under the sun".
by omeid2
5/19/2025 at 5:39:34 AM
> These things are very well and precisely defined in just about every jurisdiction.
Yes, we have lots of wording attempting to be precise. And legal uses of terms are certainly more precise, by definition and precedent, than normal language.
But ambiguities about facts are only half of it. Even when all the facts appear to be clear, human juries have to use their subjective human judgement to pair up what the law says, which may be clear in theory, but is often subjective at the borders, vs. the facts. And reasonable people often differ on how they match the two up in many borderline cases.
We resolve both types of ambiguities case-by-case by having a jury decide, which is not going to be consistent from jury to jury but it is the best system we have. Attorneys vetting prospective jurors are very much aware that the law comes down to humans interpreting human language and concepts, none of which are truly precise, unless we are talking about objective measures (like frequency band use).
---
> it is the question of what constitutes a derivative
Yes, the legal side can adapt.
And the technical side can adapt too.
The problem isn't that material was trained on, but that the resulting model facilitates reproducing individual works (or close variations), and repurposing individuals' unique styles.
I.e. they violate fair use by using what they learn in a way that devalues others' creative efforts. Being exposed to copyrighted works available to the public is not the violation. (Even though it is the way training currently happens that produces models that violate fair use.)
We need models that one way or another, stay within fair use once trained. Either by not training on copyrighted material, or by training on copyrighted material in a way that doesn't create models that facilitate specific reproduction and repurposing of creative works and styles.
This has already been solved for simple data problems, where memorization of particular samples can be precluded by adding noise to a dataset. Important generalities are learned, but specific samples don't leave their mark.
Obviously something more sophisticated would need to be done to preclude memorization of rich creative works and styles, but a lot of people are motivated to solve this problem.
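To illustrate the simple-data version of that idea, here's a toy sketch (purely hypothetical code, not any particular production method; real systems use calibrated mechanisms such as differential privacy):

```python
import random

def add_noise(samples, sigma=0.1, seed=None):
    """Return a copy of `samples` (rows of floats) with Gaussian noise added.

    The intent of the sketch: noise large enough that no single sample
    leaves a recoverable mark, small enough that aggregate statistics
    (what the model should actually learn) survive.
    """
    rng = random.Random(seed)
    return [[x + rng.gauss(0.0, sigma) for x in row] for row in samples]

# A model trained on `add_noise(data)` sees the general distribution,
# but never the exact original rows.
```

With a fixed seed the perturbation is reproducible for debugging; in a real privacy mechanism the noise scale would be calibrated to the sensitivity of the data rather than picked by hand.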
by Nevermark
5/20/2025 at 1:03:27 PM
It seems like your concern is about how easy it is going to be to create derivative and similar work, rather than a genuine concern for copyright. Do I understand correctly?
by omeid2
5/20/2025 at 10:44:38 PM
No, I am just narrowing down the problem definition to the actual damage, which is a very fair-use- and copyright-respecting approach.
Taking/obtaining value from works is ok, up until the point where damage to the value of the original works happens. And that is not ok, because copyright protects that value to incentivize the creation and sharing of works.
The problem is that models are shipping that inherently make it easy to reproduce copyrighted works, and apply specific styles lifted from single author's copyrighted bodies of work.
I am very strongly against this.
Note that prohibiting copying of a recognizable specific single author's style is even more strict than fair use limits on humans. Stricter makes sense to me, because unlike humans, models are mass producers.
So I am extremely respectful of protecting copyright value.
But it is not the same thing as not training on something. It is worth exploring training algorithms that can learn useful generalities about bodies of work, without retaining biases toward the specifics of any one work, or any single authored style. That would be in the spirit of fair use. You can learn from any art, if it's publicly displayed, or you have paid for a copy, but you can't create mass copiers of it.
Maybe that is impossible, but I doubt it. There are many ways to train that steer important properties of the resulting models.
Models that make it trivial to create new art deco works, consistent with the total body of art deco works: ok. Models that make it trivial to recreate Erte works, or works in a specifically accurate Erte style: not ok.
by Nevermark
5/21/2025 at 6:06:12 AM
> The problem is that models are shipping that inherently make it easy to reproduce copyrighted works, and apply specific styles lifted from single author's copyrighted bodies of work.
> I am very strongly against this.
> Note that prohibiting copying of a recognizable specific single author's style is even more strict than fair use limits on humans. Stricter makes sense to me, because unlike humans, models are mass producers.
This sounds like gate-keeping rather than genuine copyright concerns.
> Models that make it trivial to create new art deco works, consistent with the total body of art deco works, ok. Models that make it trivial to recreate Erte works, or with an accurately Erte style specifically. Not ok.
Yeah, again, this sounds like gate-keeping more than an economic and incentives argument, which is, in my opinion, the only legitimate concern underpinning copyright's moral ground.
Every step of progress has made doing things easier and easier, to the point that now arguing with some stranger across the world seems trivial, almost natural. Surely there are some arguments to curtail this dangerous machinery that undermines the control of information flow and corrupts the minds of the naive! We must shut it down!
Jokes aside, "making things easier/trivial" is the name of the game of progress. You can't stop progress. Everything will get easier and easier as time goes on.
by omeid2
5/18/2025 at 10:21:55 AM
> Art created by humans is not entirely original.
The catch here is that a human can use a single sample as input, but AI needs a torrent of training data. Also, when AI generates permutations of samples, do their statistics match the training data?
by GoblinSlayer
5/18/2025 at 2:40:41 PM
No human could use a single sample if it was literally the first piece of art they had ever seen.
Humans have that torrent of training data baked in from years of lived experience. That’s why people who go to art school or otherwise study art are generally (not always, of course) better artists.
by brookst
5/18/2025 at 5:53:50 PM
I don't think the claim that the value of art school is simply more exposure to art holds water.
by collingreen
5/18/2025 at 11:09:55 AM
Not without a torrent of pre-training data. The qualitative differences are rapidly becoming intangible ‘soul’ type things.
by taneq
5/18/2025 at 12:31:21 PM
A skilled artist can imitate a single art style or draw a specific object from a single reference. But becoming a skilled artist takes years of training. As a society we like to pretend some humans are randomly gifted with the ability to draw, but in reality it's 5% talent and 95% spending countless hours practising the craft. And if you count the years' worth of visual data the average human has experienced by the time they can recreate a van Gogh, then humans take magnitudes more training data than state-of-the-art ML models.
by wongarsu
5/18/2025 at 3:03:39 PM
In the case of an ML model, either a very good description or that single reference could be added to the context.
by startupsfail
5/18/2025 at 2:27:10 PM
That makes no sense, neither legally nor philosophically.
> Language developed independently of translators.
And it also developed independently of writers and poets.
> Artists shape the space in which they’re generating output.
Not writers and poets, apparently. And so maybe not even artists, who typically mostly painted book references. Color perception and symbolism developed independently of professional artists, too. Moreover, all of the things you mention predate copyright.
> The fear now is that if we no longer have a market where people generate novel arts, that space will stagnate.
But that will never happen; it's near-impossible to stop humans from generating novel arts. They just do it as a matter of course - and the more accessible the tools are, the more people participate.
Yes, memes are a form of art, too.
What's a real threat is lack of shared consumption of art. This has been happening for the past couple decades now, first with books, then with visual arts. AI will make this problem worse by both further increasing the volume of "novel arts" and by enabling personalization. The real value we're losing is the role of art as social objects: the ability to relate to each other by means of experiencing the same works of art, and thus being able to discuss and reference them. If no two people ever experienced the same works of art, there's not much about art they can talk about; if there's no shared set of art seen by most people in a society, a social baseline is lost. That problem does worry me.
by TeMPOraL
5/18/2025 at 11:28:09 PM
If you think memes are art too and we lack shared consumption of art due to personalization, you clearly don't have kids into YouTube or Minecraft or Frozen, or ...
by gregw2
5/19/2025 at 4:29:34 AM
I don't get what you are trying to say here. Yes, memes are art, however foreign that might be to older folks. To your second point, you know about Frozen because everyone else also watches it. We are about to lose that if there are a million variations of a "Frozen"-esque movie that people can watch.
I don't think having an AI partner that is trained from zero from childhood to adulthood with goals such as "make me laugh" is too far-fetched. The problem is you will never be able to connect with this child, because the AI is feeding it insanely obscure, highly specific videos that match the neurons of the kid perfectly.
by anon-3988
5/20/2025 at 6:01:50 AM
I actually have kids, two of them in kindergarten, and I can already see this problem affecting them: beyond Frozen and Paw Patrol and a couple others, every kid also has their own favorite series on YouTube that few other kids have heard of, and I can see kids trying and failing to bond over those.
I never thought I'd be thankful for global, toy-pushing franchises, but they at least serve as a social object for kids, where the current glut of kids' videos on YouTube doesn't.
by TeMPOraL
5/18/2025 at 11:02:11 AM
You are wrong; translations have copyright. That is why a new translation of, for example, an ancient book has copyright and you are not allowed to reproduce it without permission.
by victorbjorklund
5/18/2025 at 8:53:10 AM
I don't think the Berne Convention on Copyright was meant as a complete list of things where humans have valuable input. Translators do shape the space in which they generate output. Their space isn't any single language but rather the connecting space between languages.
Most translation work is simple, just as the day-to-day of many creative professions is rather uncreative. But translating a book, comic, or movie requires creative decisions on how to best convey the original meaning in the idioms and cultural context of a different language. The difference between a good and a bad translation can be stark.
by wongarsu
5/18/2025 at 8:30:01 AM
Makes me wonder: if the generous copyright protections afforded to artists had not become so abhorrent (thanks, Disney), then this kind of thing might not have happened.
by briansm
5/18/2025 at 12:08:04 PM
Wrong from the first sentence…
by wahnfrieden
5/18/2025 at 2:37:25 PM
Translations absolutely have copyright.
by brookst
5/19/2025 at 2:28:12 AM
Stagnate just like hand-thatched roofs? Or like weavers, ever since Jacquard?
I don't see too many people defending artists also calling for people to start buying handmade clothing and fabrics again.
That said and because people on here are feisty, I have many artist friends and I deeply appreciate their work at the same time as appreciating how cool diffusion models are.
The difference being of course that we live in a modern society and we should be able to find a solution that works for all.
That said, humans can't even get something as basic as UBI in place for people, and humans consistently vote against each other in favour of discriminating on skin colour, sex, sexuality, and culture. Meanwhile the billionaires that are soon to become trillionaires are actively defended by many members of our species, sometimes even by the poor. The industrial age broke our evolved monkey brains.
by fennecbutt
5/18/2025 at 10:02:18 AM
Piracy is promotion; look at all the fanfiction.
Also, in the case of graphic and voice artists, a unique style looks more valuable than the output itself, but style isn't protected by copyright.
by GoblinSlayer
5/18/2025 at 2:58:00 PM
My prediction: It will be like furniture.
A long time ago, every piece of furniture was handmade. It might have been good furniture, or crude, poorly constructed furniture, but it was all quite expensive, in terms of hours per piece. Now, furniture is almost completely mass produced, and can be purchased in a variety of styles and qualities relatively cheaply. Any customization or uniqueness puts it right back into the hand-made category. And that arrangement works for almost everyone.
Media will be like that. There will be a vast quantity of personalized media of decent quality. It will be produced almost entirely automatically based on what the algorithm knows about you and your preferences.
There will be a niche industry of 'hand made' media with real acting and writing from human brains, but it will be expensive, a mark of conspicuous consumption and class differentiation.
by idiotsecant
5/18/2025 at 10:22:12 PM
This. Except one should also disabuse themselves of the idea that there will always be a higher quality to the 'hand made' versions. AI will almost certainly outpace us in every way, including the ability to make something beautiful that looks 'hand-made', even with artificial flaws and illusions of the history and natural rugged beauty of the piece.
The only discernible difference that won't be replicable is a cryptographic signature "Certified 100% Human-Made!" sticker, which will probably become the mark of the niche industry.
Somewhat more accurate analogy would be the custom car market. Beautiful collectible convertibles with fine detailing everywhere, priced thousands of times higher than normal cars, that actually run far worse and basically break apart after a few thousand miles and are impossible to find parts for. Automated factories certainly could churn them out but they don't because they're impractical poorly-designed status items kept artificially scarce for the very rich to peacock with.
Except AI will probably still produce equivalent impractical stuff anyway, just because production (digital and physical) will eventually be easy enough that resources are negligible, and everyone can have flashy impractical stuff. So again, only that "100% Human!" seal will distinguish, eventually.
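That "100% Human!" seal could be as simple as a registry vouching for a work's hash. A minimal hypothetical sketch, using HMAC as a stand-in (a real scheme would use public-key signatures so anyone can verify without the registry's secret):

```python
import hashlib
import hmac

def certify(work_bytes: bytes, registry_key: bytes) -> str:
    """Registry-side: issue a tag over the work's hash."""
    digest = hashlib.sha256(work_bytes).digest()
    return hmac.new(registry_key, digest, hashlib.sha256).hexdigest()

def verify(work_bytes: bytes, tag: str, registry_key: bytes) -> bool:
    """Check a claimed tag in constant time; fails if the work was altered."""
    return hmac.compare_digest(certify(work_bytes, registry_key), tag)
```

The hard part, of course, isn't the crypto; it's the registry attesting that a human actually made the work in the first place.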
by dogcomplex
5/18/2025 at 5:59:08 PM
This prediction implies that people will value consuming tailored media, knowing 100% that it was generated because they wanted it (as opposed to because someone wanted to express something), with no deeper story or connection or exploration to it.If people instead care about the creation story and influences (the idea of "behind the scenes" and "creator interviews" for on demand ai generated media is pretty funny) then this won't have much value.
Time will tell - it's an exciting, discouraging time to be alive, which has probably always been the case.
by collingreen
5/20/2025 at 12:25:50 AM
I think the proportion of people who care about 'the making of' type content is vanishingly small. Almost everyone is looking for a dopamine hit and that's it.
by idiotsecant
5/18/2025 at 8:02:35 PM
> There will be a niche industry of 'hand made' media with real acting and writing from human brains, but it will be expensive, a mark of conspicuous consumption and class differentiation.
This addresses one axis of development.
Meanwhile, there's lots of people around willing to express themselves for advertisement money.
Like with translation: We're going to see tool-assisted work where the tools get more and more sophisticated.
Your example with furniture is good. Another is cars: From horses to robotaxis. Humans are in the loop somewhere still.
by sshine
5/18/2025 at 10:42:23 PM
The reproduction cost for the 2nd copy of media is near zero, just like software. Handmade or customized furniture is more expensive because it takes more labor for each copy. With media, the cost is fixed, even if it is large. Once the first version of handmade media has been created, the owner is incentivized to get as much value from it as possible. The optimal demand curve is probably not a few rich people paying as much as possible.
by mmcconnell1618
5/19/2025 at 1:46:17 AM
[flagged]
by 15123121
5/18/2025 at 8:04:22 AM
> As my mom retired from being a translator, she went from typewriter to machine-assisted translation with centralised corpus-databases. All the while the available work became less and less, and the wages became lower and lower.
She was lucky to be able to retire when she did, as the job of a translator is definitely going to become extinct.
You can already get higher quality translations from machine learning models than you get from the majority of commercial human translations (sans occasional mistakes, which you still need editors to fix), and it's only going to get better. And unlike human translators, LLMs don't mangle the translations because they're too lazy to actually translate so they just rewrite the text as that's easier, or (unfortunately this is starting to become more and more common lately) deliberately mistranslate because of their personal political beliefs.
by kouteiheika
5/18/2025 at 10:47:35 AM
While LLMs are pretty good, and likely to improve, my experience is OpenAI's offerings *absolutely* make stuff up after a few thousand words or so, and they're one of the better ones.
It also varies by language. Every time I give an example here of machine-translated English-to-Chinese, it's so bad that the responses are all people who can read Chinese being confused because it's gibberish.
And as for politics, as Grok has just been demonstrating, they're quite capable of whatever bias they've been trained to have or told to express.
But it's worse than that, because different languages cut the world at different joints, so most translations have to make a choice between literal correctness and readability — for example, you can have gender-neutral "software developer" in English, but in German to maintain neutrality you have to choose between various unwieldy affixes such as "Softwareentwickler (m/w/d)" or "Softwareentwickler*innen" (https://de.indeed.com/karriere-guide/jobsuche/wie-wird-man-s...), or pick a gender because "Softwareentwickler" by itself means they're male.
by ben_w
5/18/2025 at 3:22:42 PM
No, "Softwareentwickler" does NOT mean the person is male. It's the correct German form for either male OR generic (generisches Maskulinum).
by kiney
5/18/2025 at 4:44:00 PM
The same is true in Polish, but the feminist movement insists this is not acceptable and tries to push feminatives.
I personally have no strong opinion on this, FWIW, just confirming GP's making a good point there. A translated word or phrase may be technically, grammatically correct, but still not be culturally correct.
by TeMPOraL
5/18/2025 at 6:36:42 PM
> While LLMs are pretty good, and likely to improve, my experience is OpenAI's offerings absolutely make stuff up after a few thousand words or so, and they're one of the better ones.
That's not how you get good translations from off-the-shelf LLMs! If you give a model the whole book and expect it to translate it in one shot then it will eventually hallucinate and give you bad results.
What you want is to give it a small chunk of text to translate, plus previously translated context so that it can keep the continuity.
And for the best quality translations what you want is to use a dedicated model that's specifically trained for your language pairs.
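The chunk-plus-context loop is only a few lines. A hypothetical sketch, where `translate_chunk` stands in for whatever model call you actually use (API request, local model, etc.):

```python
def translate_book(chunks, translate_chunk, context_size=3):
    """Translate `chunks` one at a time, passing the last few already-
    translated chunks back in as context so the model keeps continuity
    (names, tone, terminology) without ever seeing the whole book at once.

    `translate_chunk(text, context)` is a stand-in, not a real API.
    """
    translated = []
    for chunk in chunks:
        # Only recent output goes back in -- keeps the prompt small and
        # avoids the long-input hallucination problem described above.
        context = translated[-context_size:]
        translated.append(translate_chunk(chunk, context))
    return translated
```

The key design choice is that the context window stays bounded no matter how long the book is, so quality doesn't degrade as you get deeper into the text.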
> And as for politics, as Grok has just been demonstrating, they're quite capable of whatever bias they've been trained to have or told to express.
In an open ended questions - sure. But that doesn't apply to translations where you're not asking the model to come up with something entirely by itself, but only getting it to accurately translate what you wrote into another language.
I can give you an example. Let's say we want to translate the following sentence:
"いつも言われるから、露出度抑えたんだ。"
Let's ask a general purpose LLMs to translate it without any context (you could get a better translation if you'd give it context and more instructions):
ChatGPT (1): "Since people always comment on it, I toned down how revealing it is."
ChatGPT (2): "People always say something, so I made it less revealing."
Qwen3-235B-A22B: "I always get told, so I toned down how revealing my outfit is."
gemma-3-27b-it (1): "Because I always get told, I toned down how much skin I show."
gemma-3-27b-it (2): "Since I'm always getting comments about it, I decided to dress more conservatively."
gemma-3-27b-it (3): "I've been told so often, I decided to be more modest."
Grok: "I was always told, so I toned down the exposure."
And how humans would translate it:
Competent human translator (I can confirm this is an accurate translation, but perhaps a little too literal): "Everyone was always saying something to me, so I tried toning down the exposure."
Activist human translator: "Oh those pesky patriarchal societal demands were getting on my nerves, so I changed clothes."
(Source: https://www.youtube.com/watch?v=dqaAgAyBFQY)
It should be fairly obvious which one is the biased one, and I don't think it's the Grok one (which is a little funny, because it's actually the most literal translation of them all).
by kouteiheika
5/19/2025 at 10:09:06 AM
>> While LLMs are pretty good, and likely to improve, my experience is OpenAI's offerings absolutely make stuff up after a few thousand words or so, and they're one of the better ones.
> That's not how you get good translations from off-the-shelf LLMs! If you give a model the whole book and expect it to translate it in one-shot then it will eventually hallucinate and give you bad results.
You're assuming something about how I used ChatGPT, but I don't know what exactly you're assuming.
> What you want is to give it a small chunk of text to translate, plus previously translated context so that it can keep the continuity
I tried translating a Wikipedia page to support a new language, and ChatGPT rather than Google translate because I wanted to retain the wiki formatting as part of the task.
LLM goes OK for a bit, then makes stuff up. I feed in a new bit starting from its first mistake, until I reach a list at which point the LLM invented random entries in that list. I tried just that list in a bunch of different ways, including completely new chat sessions and the existing session, it couldn't help but invent things.
> In an open ended questions - sure. But that doesn't apply to translations where you're not asking the model to come up with something entirely by itself, but only getting it to accurately translate what you wrote into another language.
"Only" rather understates how hard translation is.
Also, "explain this in Fortnite terms" is a kind of translation: https://x.com/MattBinder/status/1922713839566561313/photo/3
by ben_w
5/18/2025 at 8:38:31 AM
This is just not true; LLMs struggle very hard with even basic recursive questions, nuances, and dialects.
by khalic
5/18/2025 at 8:51:38 AM
But since a customer cannot know that, they will tend to consume (and mostly trust) whatever LLM result is given.
5/18/2025 at 9:38:00 AM
Yes indeed. After a few years humans will be trained to accept the low tier AI translations as the new normal, hopefully I'm dead by then already.by emsign
5/19/2025 at 3:30:33 AM
ChatGPT is always in my pocket; I can use it effortlessly when I'm travelling.
My Chinese isn't good enough to explain the difference between ice cream and gelato to my in-laws, but ChatGPT gave me a good-enough output in seconds; this far exceeds anything that has come before. A friend (who speaks zero Chinese) was able to have conversations with his in-laws using one of those in-ear translation devices.
Normal people would never ever hire translators in this type of situations and now our spouses can also relax on vacation :)
by djtango
5/18/2025 at 10:42:45 AM
Maybe for dry text. Translation of art is art too, and there's no such thing as higher-quality art.
by GoblinSlayer
5/18/2025 at 11:14:57 AM
I’m intrigued by this statement. It seems obvious to me that some artworks are ‘higher quality’ than others. You wouldn’t, I’d presume, consider the Sistine Chapel or the Mona Lisa to be the same quality as a dickbutt scribbled on a napkin?
by taneq
5/18/2025 at 1:49:41 PM
> You wouldn’t, I’d presume, consider the Sistine Chapel or the Mona Lisa to be the same quality as a dickbutt scribbled on a napkin?
To paraphrase Frank Zappa: art just needs a frame. If you poo on a table, that's not art. If you declare 'my poo on the table will last from the idea until the poo disappears', then that is art. Similarly, Banksy is just graffiti unless you understand (or not) the framing of the work.
by Ylpertnodi
5/18/2025 at 9:35:55 AM
You can't compare translation to creating new works of art. Sorry mom, but that's apples and oranges. A dangerously false comparison.
by emsign
5/18/2025 at 3:14:07 PM
If you speak more than one language (especially something like Chinese or Japanese), you understand how subjective some choices are. It certainly takes creative decision-making.
by anton-c
5/18/2025 at 7:45:59 PM
I speak Japanese natively, and hell, I'm just going to say it: there is no such thing as translation, only foreign-language ghostwriting.
I'm not even sure if bilingualism is real or if it's just an alternate expression for relatively benign forced split personality. Could very well be.
by numpad0
5/18/2025 at 2:35:29 PM
As noted in another sub-thread, translations are indeed works of art. As evidence, my mom has received royalties for her translations for decades, both from sales and from library lending. And she could sue for copyright infringement if someone stole her translation. The only difference is that she needs permission to distribute the translation, unless it’s translated from the public domain.
by sshine
5/18/2025 at 12:51:14 PM
Disclaimer: I'm an artist with 30+ years of experience.
Downgraded to AI training? Nonsense. You forget artists do more than just draw for money; we also draw for FUN, and that little detail escapes every single AI-related discussion I've been reading for the last 3 years.
by falsaberN1
5/18/2025 at 2:25:19 PM
Not an artist myself. I think some artists may become more like head chefs in a Chinese restaurant, acting as QA and giving direction to cooks to improve their work. It is hard to notice the details and give concrete feedback if you haven't worked on the craft professionally for a long time.
by pca006132
5/18/2025 at 2:52:26 PM
This is probably true. I've noticed some people have a better critical eye for AI output than others. People with artistic skill can make stuff of much higher quality, it seems. I guess they immediately get bored of the default settings that compose most of the low-quality slop being pushed around.
by falsaberN1
5/19/2025 at 1:01:31 AM
This is exactly what I meant when I said "people can draw what they want *for fun*".
by kachapopopow
5/18/2025 at 1:57:14 PM
The issue is whether the artists creating things for love of the game will be crowded out even further by studios churning out slop (or in HN terms, Minimum Viable Products) for cash. There are probably 15 disposable reality TV shows created for every scripted sitcom or drama that needs good writers, set designers, and directors.
by rchaud
5/18/2025 at 2:38:18 PM
They already are; have been for decades now. AI is amplifying this, true, but art done for love and for money are already pretty much disjoint ventures, and in areas where they mix (like TV shows), it's an uphill battle for the artist - and they're not always right, either; a good show is more than just great writing or beautiful art.
by TeMPOraL
5/19/2025 at 5:32:58 AM
I'd argue that better graphical genai is a solution for this.
It's a fact of life that creative production companies will always attempt to optimize costs -- which means most efficiently using any human labor.
In the 00s/10s, animation studios especially tried to do this with... mixed results (cough, Toei, cough)
More capable models should allow better keyframe-to-keyframe animation.
by ethbr1
5/18/2025 at 2:58:53 PM
The fact that those TV shows exist indicates the root of the problem has nothing to do with AI.
Those shows are cheap because they employ fewer people. They still need to employ some people, though. To me the greater tragedy is that they make a product that the people who make it do not care about. People are working to make things they don't like because they need income to survive.
The problem is not that AI is taking jobs, it is that it is taking incomes. If we really are heading to a world where most jobs can be done by AI (I have my doubts about most, but I'll accept many), we need a strategy to preserve incomes. We already desperately need a system to prevent massive wealth inequality.
We need to have a discussion about the future we want to have, but we are just attacking the tools used by people making a future we don't want. We should be looking at the hands that hold the tools.
Discussions like this often lead to talking about universal basic income. I think that is a mistake. We need a broader strategy than that. The income needs to be far better than 'basic'. Education needs to change toward developing the individual instead of producing worker units.
Imagine a world where the only TV shows made were the ones that could attract people who care about the program enough to offer their time to work on it.
That too would generate a lot of poor quality content, because not everyone is good at the things they like to do. It would be heartless to call it slop though. More importantly those people who are afforded the lifestyle that enables them to produce low quality things are doing precisely the work they need to be doing to become people who produce high quality things.
Some of those hands learning to make high quality things may be holding the tools of AI. People making things because they want to make will produce some incredible things with or without AI. A lot of astounding creations we haven't even seen or perhaps even imagined will be produced by people creatively using new tools.
(This is what I get for checking HN when I let the dog out to toilet in the middle of the night.)
by Lerc
5/18/2025 at 2:18:20 PM
The ones doing it because they like it don't need to care about the mass-produced slop.
by rererereferred
5/18/2025 at 10:47:26 AM
> So it makes me wonder if artists will just be downgraded to an "AI" training position, but it could be for the best as people can draw what they like instead and have that input feed into a model for training which doesn't sound too bad.Doesn’t sound too bad? It sounds like the premise of a dystopian novel. Most artists would be profoundly unhappy making “art” to be fed to and deconstructed by a machine. You’re not creating art at that point, you’re simply another cog feeding the machine. “Art” is not drawing random pictures. And how, pray tell, will these artists survive? Who is going to be paying them to “draw whatever they like” to feed to models? And why would they employ more than two or three?
> it still make me wonder (…) if we're going to start losing challenging styles (…) and everything will start 'felling' the same.
It already does. There are outliers, sure, but the web is already inundated by shit images which nonetheless fool people. I bet scamming and spamming with fake images and creating fake content for monetisation is already a bigger market than people “genuinely” using the tools. And it will get worse.
by latexr
5/18/2025 at 2:02:50 PM
> You’re not creating art at that point, you’re simply another cog feeding the machine.
That's the definition of commercial art, which is what most art is.
> “Art” is not drawing random pictures.
It's exactly what it is, if you're talking about people churning out art by volume for money. It's drawing whatever they get told to, in endless variations. Those are the people you're really talking about, because those are the ones whose livelihoods are being consumed by AI right now.
The kind of art you're thinking of, the art that isn't just "drawing random pictures", the art that the term "deconstruction" could even sensibly apply to - that art isn't in as much danger just yet. GenAI can't replicate human expression, because models aren't people. In time, they'll probably become so, but then art will still be art, and we'll have bigger issues to worry about.
> There are outliers, sure, but the web is already inundated by shit images which nonetheless fool people. I bet scamming and spamming with fake images and creating fake content for monetisation is already a bigger market than people “genuinely” using the tools. And it will get worse.
Now that is just marketing communications - advertising, sales, and associated fraud. GenAI is making everyone's lives worse by making the job of marketers easier. But that's not really the fault of AI, it's just the people who were already making everything shitty picking up new tools. It's not the AI that's malevolent here, it's the wielder.
by TeMPOraL
5/19/2025 at 1:01:57 AM
I don't consider Facebook a good sample group.
by kachapopopow
5/18/2025 at 11:17:15 AM
Surely we’re way past the point now that models could be improved via RLHF using upvotes, or something equally banal?
by taneq
5/18/2025 at 11:45:21 AM
The situation will get worse, not the models.
by latexr
5/18/2025 at 10:31:24 AM
The problem I have with the whole AI copyright thing is that the big players benefit. If you reference any famous copyrighted work in ChatGPT etc., you will get blocked, but a small artist's stuff is not.
Open it for all or nothing.
by sschueller
5/18/2025 at 12:48:15 PM
We probably should just stop enforcing copyright. “Stealing” my idea doesn’t deprive me of its use. Think about what the US market might look like if scaling and efficiency were rewarded rather than legal capture of markets. That large companies can buy and bury technology IP to maintain a market position is a tremendous loss for the rest of us.
by dughnut
5/18/2025 at 2:01:32 PM
"Might makes right" is how we got here. Airbnb and Uber can break hotel and taxi regulations openly, but if you start your own ride-for-cash service, the state will shut you down for any number of by-law violations. They have law firms and lobbyists on retainer and you don't. Similarly, copyright infringement could be a jail sentence for you, but a "legal gray area" for them.
by rchaud
5/18/2025 at 7:09:24 AM
I find it interesting that you echo the concerns of people who defend artists’ copyright claims, while stating that you are very pro AI in terms of copyright.
It’s a very emotionally loaded space for many, meaning most comments I read lean to the extremes of either argument, so seeing a comment like yours that combines both makes me curious.
Would be interesting to hear a bit more about how you see the role of copyright in the AI space.
by earthnail
5/19/2025 at 1:03:26 AM
At first it will obviously make it easier for artists to create what they want, at the expense of doing everything yourself, which will take the fun out of it. We might see a rise in the money some people can make, but as I said, the choice artists will have in the end is being someone who draws pictures for a machine to be trained on.
I also think AI is the next evolution of humanity.
by kachapopopow
5/18/2025 at 7:29:15 AM
Not GP, though I agree with their views, and make my money from copyrighted work (writing novels).
The role of the artist has always been to provide excellent training data for future minds to educate themselves with.
This is why public libraries, free galleries, etc are so important.
Historically, art has been ‘best’ when the process of its creation has been heavily funded by a wealthy body (the church or state, for example).
‘Copyright’, as a legal idea, hasn’t existed for very long, relative to ‘subsidizing the creation of excellent training data’.
If ‘excellent training data for educating minds’ genuinely becomes a bottleneck for AI (though I’d argue it’s always a bottleneck for humanity!), funding its creation seems a no-brainer for an AI company, though they may balk at the messiness of that process.
I would strongly prefer that my taxes paid for this subsidization, so that the training data could be freely accessed by human minds or other types of mind.
Copyright isn’t anything more than a paywall, in my opinion. Art isn’t for revenue generation - it’s for catalyzing revenue generation.
by gabriel666smith
5/18/2025 at 3:41:46 PM
"The role of the artist has always been to provide excellent training data for future minds to educate themselves with."
We are not aware of the implications of this sentence. This is it. The only "source" is play. Joyful play.
by blaeks
5/18/2025 at 7:09:18 AM
Artists push the envelope.
With AI tools, artists will be able to push further, doing things that AI can't do yet.
by wordpad
5/18/2025 at 9:40:42 AM
Audiences too. People lose interest fast in anything something faceless can provide, whether the provider is machines or humans, and whether the act is drawing art or assembling iPhones.
by numpad0
5/18/2025 at 10:51:49 AM
Only artists who weren't crippled by AI can push further.
by GoblinSlayer
5/18/2025 at 1:09:49 PM
What do you mean? How can AI cripple an artist? Even if the AI can do stuff better than I can, in less time, it doesn't affect my art at all. It's the same as better human artists existing. Then again, I've seen people get jealous to a raging degree because artist X can do better than them, so...
Every artist worth anything strives to be better at their craft daily. If an artist gets discouraged because there's something "better", that artist is not in a good place: those negative emotions come from competitiveness rather than self-improvement and care for their craft or their audience. Art is only a competition with oneself, and artists who don't understand or who refuse this fact are doomed from the start.
by falsaberN1
5/18/2025 at 1:13:25 PM
Nice idealistic view. It doesn't pay the bills. Artists quit doing art when they have to flip burgers instead. And AI is absolutely unconditionally a competitor in that arena.
by JoeAltmaier
5/18/2025 at 2:44:56 PM
Then they were never real artists. I spend 14 hours a day at the office in a rather stressful job and still make time to draw, and I'm anything but superhuman.
by falsaberN1
5/18/2025 at 5:47:43 PM
The No True Scotsman can always be counted on to rear its head.
by JoeAltmaier
5/18/2025 at 1:54:58 PM
Online artists are more likely to be consultants and marketing experts. They "flip burgers", or rather make PowerPoints and lay out magazine articles, 12 hours a day, 8 days a week, anyway. So AI only "financially" hurts them in the sense that it hurts their dopamine income.
by numpad0
5/18/2025 at 3:02:09 PM
This is more like it. Every dedicated artist I know does something else to pay the bills, from actual burger flippers to sysadmins like me. They will make time to draw things because they simply like doing it.
by falsaberN1
5/18/2025 at 8:52:08 PM
I really think this is why a lot of discussions and projects around generative AI and AI-relevant art don't go well. It's a one-way outside influence that also affects the economy as a second-order effect of its cultural impact. Because the economic impact of this online art is a mere downstream effect, manipulations in that domain just don't work.
by numpad0
5/18/2025 at 9:08:19 AM
> Art is something that cannot be generated like synthetic text
10 years ago: "Real text cannot be generated like stock phrases, so writing will be nearly forever powered by human writers."
by exe34
5/18/2025 at 11:42:49 AM
I think "text" is irrelevant; the distinction is between art and the synthetic, where art might be written or visual. It's a vague term that's often used to mean "graphics", confusing matters, and the meaning of art is endlessly debated, like the meaning of intelligence.
Obviously we have synthetic graphics (like synthetic text). So something else must be meant by "art" here.
by card_zero
5/18/2025 at 5:26:15 PM
If somebody comes up with a new pun, is that art?
by exe34
5/18/2025 at 6:03:24 PM
Atelier later.
by card_zero
5/18/2025 at 6:51:33 PM
What?
by exe34
5/18/2025 at 6:53:01 PM
I tried to invent a new pun, it seemed a crime not to make the effort.
by card_zero
5/18/2025 at 8:45:03 PM
How did you come up with it?
by exe34
5/18/2025 at 9:02:57 PM
Let's say I used an AI. Actually I browsed unrelated word lists in Onelook - I think I was on synonyms for "confusion" - until I remembered the word atelier because it turns up in fantasy anime a lot. But that's a kind of machine assistance, so let's pretend it was an LLM, if that helps with where you're taking this. Now what?
by card_zero
5/19/2025 at 9:16:29 AM
It's exactly where I was going with it. Just because you used tools or some mechanical process along the way doesn't detract from the fact that you might be able to come up with a good pun. So if you have a computer model that can predict whether people will like a given pun (or a given response in any other domain), then the work produced will have the same effect on the audience. You could try to ascribe the prime-mover title to the human who says "write me a pun, a good one!", but ultimately the machine is producing art.
The machine could also just produce lots of examples and test them on a large number of humans - in which case none of them individually is the artist, but the art is still being produced.
by exe34
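The "produce lots of examples and test them" process described in this exchange is essentially best-of-N selection. A minimal sketch in Python, where both the candidate generator and the approval scorer are hypothetical stand-ins for a generative model and an audience-preference model, not anything real:

```python
import random

WORDS = ["atelier", "alligator", "elevator", "later"]

def generate_candidates(n, rng):
    # Stand-in for a generative model: emits n random two-word phrases.
    return [" ".join(rng.sample(WORDS, 2)) for _ in range(n)]

def approval_score(candidate):
    # Stand-in for a learned audience-preference model.
    # Here it just rewards longer phrases, purely for illustration.
    return len(candidate)

def best_of_n(n, seed=0):
    # Produce many examples and keep whichever one the scorer "likes" most.
    # No single step here is the artist, yet a selected work comes out.
    rng = random.Random(seed)
    candidates = generate_candidates(n, rng)
    return max(candidates, key=approval_score)

print(best_of_n(8))
```

As card_zero's later reply notes, this is strong on testing for approval but says nothing about design or intention; the scorer only ranks what the generator happens to produce.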
5/22/2025 at 12:02:24 PM
Much hinges on the meaning of "like". This resembles a focus group. It's strong on testing for approval, but sloppy on design, inspiration, and intention. So how interesting can the resulting art be? I'd say "moderately". But less so in some contexts, such as the context of drowning in slop already.
by card_zero
5/19/2025 at 1:06:13 AM
That was a happy accident.
by kachapopopow
5/18/2025 at 12:51:49 PM
Now: "you can block that AI slop with uBlock, switch to Firefox if you haven't"
by numpad0
5/18/2025 at 5:25:17 PM
If you could, I imagine universities wouldn't be so worried about students using ChatGPT to cheat.
by exe34
5/18/2025 at 7:39:46 AM
I think the “paper rock cross blade” short films by Corridor are absolutely great and can by all accounts be called art, and if they make a third they will probably use this model.
In terms of losing styles, that has already been happening for ages. Disney moved to xeroxing instead of inking, which changed the style, because inking was “too hard”. In the late 90s/early 2000s we saw a burst of cartoons with a Flash animation style on TV because it was a lot easier and cheaper to animate in Flash.
by wodenokoto
5/18/2025 at 10:57:59 AM
I disagree with the positive characterisation. Those videos have a funny schtick of exaggerating anime tropes for a couple of minutes, and that’s the extent of it. The animation is all over the place; reactions, expressions, and mouth movements often fail, and the style changes from frame to frame. It maybe kind of works precisely because it’s a short exaggerated parody and we have a high tolerance for flaws in comedy, but even then the seams are showing. Anything even remotely more substantive would no longer have worked.
by latexr
5/19/2025 at 4:55:26 AM
I think it’s a successful creative endeavor for two reasons:
They took the weaknesses of last year's style-transfer models and used them as a style, working around and with their shortcomings. That is a far cry from “type a prompt and be done with it”.
Secondly, I think the story is fun and the whole thing is fun, not in a "Will Smith eats spaghetti" kind of way, but fun as in an actually fun short film.
I think it shows that AI can be a tool that empowers creativity and creative work, and is more than a PowerPoint stock-photo generator.
by wodenokoto
5/19/2025 at 1:00:58 PM
> I think it’s a successful creative endeavor
Fair. But I wouldn’t say that automatically translates to “absolutely great” or that it can “by all accounts be called art”. Though there is a high degree of subjectivity there, which is why I simply said I disagree.
> actually fun short film
Sure, you like what you like, no judgement. But again, “it’s a short exaggerated parody and we have a high tolerance for flaws in comedy”. Had they tried to make something more substantial, serious, provocative, emotional, or slower paced, I believe they would’ve fallen flat.
> I think it shows that AI can be a tool that empowers creativity and creative work
Plenty of actual artists disagreed, though. And the backlash wasn’t just limited to the end product, but also to Corridor Crew’s attitude toward it (creator intentions matter a lot when defining something as art) and their lack of understanding of the very real and very negative impacts on the industry.
by latexr
5/18/2025 at 11:33:29 AM
> Art is something that cannot be generated
Of course it can be; you're seeing it firsthand with your very own eyes.
by perching_aix
5/18/2025 at 12:37:13 PM
I think we're seeing machine generation of derivative visual materials.
There's a difference, in my mind at least. "Art" is cultural activity and expression; there needs to be intent, creativity, imagination...
A printer spooling out wallpaper is not making art, even if there was artistry involved in making the initial pattern that is now being spooled out.
by sbarre
5/18/2025 at 1:27:50 PM
The way I think of art has two main components: the aesthetics, and the higher-level impressions invoked through those aesthetics. For me, art is specifically about experimentation with, and then intentional use of, aesthetics to evoke a specific set of impressions in its audience. A form of communication; a transfer of experiences, frames of mind, and ideas. The more effectively and intelligently one can do this, the more skilled an artist they are in my book.
When I see generative-AI-produced illustrations, they'll usually be at least aesthetically pleasing. But sometimes they are already more than that. I've found lots and lots of illustrations that deliver higher-level experiences going beyond just their quality of aesthetic delivery; they deliver on the goal those aesthetics were being used for to begin with. Whether this happens through tedious prompting and "borrowed" illustrational techniques is difficult to debate right now. But based on what I've seen of this field so far, and considering my views and definitions, I have absolutely zero doubt that AI will generate artworks that are more and more "legitimately" artful, that there's no actual hard dividing line one can draw between these and manmade art, and that what difference does exist now will gradually fade away.
I do not believe humans are any more special than whatever the bare fact of being human provides them. And that, it seems, is ever-dwindling now.
by perching_aix
5/18/2025 at 3:14:02 PM
It's human intent.
AI is technically another tool, and it can be used poorly (what people refer to as "AI slop": using default settings, some LoRA, and calling it a day) or it can be used properly (forcing compositions, editing, fixing errors...) to convey an idea or emotion or tell a story. A critical eye does the rest.
After all, the machine doesn't do anything on its own; it needs a driver. The quality of the output is directly proportional to the operator's passion.
by falsaberN1
5/18/2025 at 7:24:21 PM
Sure, but intent is a very fickle thing.
Consider zero- and single-click deployments in IT operations. With single-click deployments, you need to have everything automated, but the go sign is still given by a human. With zero-click, you'll have a deployment policy instead - the human decision is now out of the critical path completely, and only plays a part during the authoring and later editing of said policy. And you can also then generate those policies, and so on.
The same can be applied to AI. You can have canned prompts that you keep refining to encode your intent and preferences, then just use them with a single click. But you can also build a harness that generates prompts and continuously monitors trends and the world as a whole for any kind of arbitrary criteria (potentially of its own random or even shifting choice), and then follows that: a reward policy. And then, like with regular IT, you can keep layering onto that.
Because of this, I don't think intent is necessarily the point of differentiation, but rather the experience and shared understanding of human intent: that people have varying individual, arbitrary preferences, go through lives of arbitrary and endless differences, and then draw on those to create. Indeed, this is never going to be replicated, exactly because of what I said: this is humans being human, and that gives them a unique, inalienable position by definition.
It's like if instead of planes we called aircraft "mechanical birds" and dunked on them for not flying by flapping their wings, despite their more than a century long history and claims of "flying". But just like I think planes do actually fly, I do also think that these models produce art. [0]
by perching_aix
5/18/2025 at 4:46:03 PM
You know, I wouldn't short what AI can do in the future, even if it's not trained on lots of art. It does not seem far out to me to think an AI could be trained to identify in images concepts like structure, balance, contrast, composition, narrative, etc., and then to pursue generating them in procedural, iterative loops of drawing/painting, using test-time compute and a prompt for an objective.
by SubiculumCode
5/18/2025 at 9:39:40 AM
> So this is obviously trained on copyrighted material.
Is it? I have no knowledge of this product, but I recall NovelAI paid for a database of tagged anime-style images. It's not impossible for something similar to have happened here.
by protocolture
5/18/2025 at 1:18:26 PM
That wouldn’t change the fact that the images are copyrighted material.
by layer8
5/18/2025 at 10:09:15 AM
If you believe NovelAI is only trained on legally acquired content, I have a bridge to sell you.
by raincole
5/18/2025 at 10:47:58 PM
The argument they made, at least for the 1.0 of their image model, is that the database of images they purchased was heavily tagged, reducing the work they had to do.
by protocolture
5/18/2025 at 3:42:55 PM
AI is just going to absolutely blow the bottom 50% out of any market it's in.
Examples:
Disney isn't going to start using AI art. But all those gacha games on the iOS app store are ABSOLUTELY going to. And I suspect gacha apps support at least 10-100x more artists than Disney staffs.
Staff engineers aren't going anywhere - AI can't tell leadership the truth. But junior engineers are going to be gutted by this, because now their already somewhat dubious direct value proposition - turning tickets into code while they train up enough to participate more in the creative and social process of Software Engineering - now gets blasted by LLMs. Mind you, I don't personally hold this ultra-myopic view of juniors - but mgmt absolutely does, and they pick headcount.
Hmm, y'know, I could actually see Big Books getting the "top" end eaten by AI instead of the bottom. All the penny dreadfuls you see lining the shelves of Barnes and Noble, versus the truly creative work, which already happens at the bottom anyway and is self-published.
Also, as someone who's watched copyright from the perspective of a GPL fanboy, good fucking luck actually enforcing anything copyright related. The legal system is pay to play and if you're a small (or even medium!) fry, you will probably never even know your copyright is being violated. Much less enforcing it or getting any kind of judgement.
by atomicnumber3
5/18/2025 at 7:56:28 AM
I think many artists will see that if they publish anything original, AI companies will immediately use it as training data without regard to copyright.
The result will be less original art. They will simply stop creating it or publishing it.
IMO music streaming has similarly led to a collapse in quality music artistry, as fewer talented individuals are incentivised to go down that path.
AI will do the same for illustration.
It won’t do the same for _art_ in the “contemporary art” sense, as great art is mostly beyond the abilities of AI models. That’s probably an AGI-complete task. That’s the good news.
I’m kinda sad about it. The abilities of the models are impressive, but they rely on harvesting the collective efforts of so many talented and hardworking artists, who are facing a double whammy: their own work is being dubiously used to put them out of a job.
Sometimes I feel like the tech community had an opportunity to create a wonderful future powered by technology. And what we decided to do instead was enshittify the world with ads, undermine the legal system, and extract value from people’s work without their permission.
Back in the day real hackers used to gather online to “stick it to the man”. They despised the greed and exploitation of Wall Street. And now we have become torch bearers for the very same greed.
by rhubarbtree
5/18/2025 at 2:24:23 PM
> music streaming has similarly led to a collapse in quality music artistry, as fewer talented individuals are incentivised to go down that path.
Is there data for this? I feel there are more musicians than ever, more very talented musicians than ever, and the most famous ones are more famous than ever, so I would like to see if that's correct.
by rererereferred
5/18/2025 at 5:04:57 PM
I’ve heard other people say that.
I think there are more musicians with reach than ever.
I would say it’s very likely there are far fewer musicians making a living out of their music than there were in the past. That’s the key difference.
And the truth is that for most people incentives matter, so not being able to make a living from music means very talented people who are financially motivated (ie most of them) do something else instead.
by rhubarbtree
5/18/2025 at 10:17:17 PM
The internet has allowed for more artists to get exposure for sure, but it's still down to luck / prettiness / virality etc. In terms of slop (not necessarily AI) there's shit like this https://www.musicbusinessworldwide.com/spotify-denies-its-pl...
by robotblake
5/18/2025 at 9:46:04 AM
I don't think the future tense is appropriate here, as it's been a few years since the appearance of open-weights image models. We're already transitioning through the gap phase between Napster and Vocaloid.
by numpad0
5/18/2025 at 10:35:19 AM
100% agree.
I wonder if there is a mitigation strategy for this. Is there a way to make (human-made-art) scraping robustly difficult, while leaving human discovery and exploration intact?
by aaclark
5/18/2025 at 11:56:58 AM
Yes, going offline/physical only. If it's digital, it can be scraped/ingested/trained on.
by danielbln
5/18/2025 at 8:47:36 AM
It is a fluke that visual training sets are far less amenable to sabotage than textual ones. Not that I suggest engaging in such horrible, terrible, very bad manners, do I?
5/18/2025 at 3:33:51 PM
I'm sorry to inform you that the mere automated pre-processing used in building a training set will most likely disable any form of poisoning, because the image is altered before training. All popular training tools do this.
Art stealing is a thing. I've had my art stolen regularly. Multiple Doom mods use sprites I made and only one person (the DRLA guy) asked for permission. I've had my art traced and even used in advertisements, with me only finding out by sheer chance. I've had people use it for coloring without crediting the source. This has happened for more than thirty years. You can only learn to live with it, lest you risk going absolutely insane. If you are popular, people will do stupid stuff with your stuff. And if you aren't popular, your art is not going to be used for training anyway (sets are ordered by popularity and only the top stuff gets used; the one with 3 upvotes is not going in).
by falsaberN1
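A deliberately toy, one-dimensional illustration of the preprocessing point above: even crude resizing can average away a pixel-level perturbation before training ever sees it. The box filter and the numbers here are made up for the sketch and bear no relation to any real training tool or poisoning scheme:

```python
def box_downscale(pixels):
    # Halve resolution by averaging adjacent pixel pairs: a crude box
    # filter, loosely analogous to the resizing that dataset-building
    # pipelines routinely apply before training.
    return [(pixels[i] + pixels[i + 1]) / 2 for i in range(0, len(pixels) - 1, 2)]

# A "clean" one-dimensional image, and a poisoned copy carrying an
# alternating high-frequency perturbation (+e, -e, +e, ...), the kind of
# pixel-level signal image-poisoning schemes tend to rely on.
clean = [10, 12, 14, 16, 18, 20, 22, 24]
e = 3
poisoned = [p + (e if i % 2 == 0 else -e) for i, p in enumerate(clean)]

# After one downscaling pass, the two images are identical:
# the perturbation cancelled itself out in the averaging.
print(box_downscale(clean) == box_downscale(poisoned))  # True
```

Real poisoning attacks and real resampling filters are far more sophisticated than this, but the basic tension is the same: a perturbation that lives in high-frequency pixel detail is exactly what resizing and recompression discard first.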
5/18/2025 at 10:10:57 PM
Well said.
by robotblake
5/18/2025 at 8:18:12 AM
[dead]
by varelse
5/19/2025 at 3:10:15 AM
> Art is something that cannot be generated like synthetic text so it will have to be nearly forever powered by human artists or else you will continue to end up with artifacting.
The rise of GPT slop is making it increasingly clear to me that this distinction doesn't exist, and it's just an under-appreciation of the skill that goes into good writing. That thing where LLMs generate overly-wordy mealy-mouthed text is just what bad writing looks like: the writing equivalent of a bad drawing. Subtle inaccuracies and ill-fitting metaphors are just the text version of visual artifacts.
Not to diminish the plight of art and artists, but it's the same as the plight of writers and writing. Writers are also having their copyrighted works used against their will to destroy their own industry. LLMs also need big human-written datasets to keep the magic running, that are drying up as they get poisoned by their own output.
by rossy