this post was submitted on 30 Jul 2023
221 points (100.0% liked)

Technology


Greg Rutkowski, a digital artist known for his surreal style, opposes AI art but his name and style have been frequently used by AI art generators without his consent. In response, Stable Diffusion removed his work from their dataset in version 2.0. However, the community has now created a tool to emulate Rutkowski's style against his wishes using a LoRA model. While some argue this is unethical, others justify it since Rutkowski's art has already been widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.
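For context on why a LoRA can recreate a removed style so cheaply: a LoRA doesn't retrain the base model, it adds a small low-rank update to existing weight matrices. The sketch below is a minimal, hypothetical illustration of that idea in NumPy (toy shapes, not the actual Stable Diffusion weights):

```python
import numpy as np

# Toy sketch of the LoRA idea (hypothetical dimensions, not real SD weights):
# instead of fine-tuning a full weight matrix W, a LoRA trains two small
# matrices A and B and applies W' = W + scale * (B @ A).
rng = np.random.default_rng(0)

d, rank = 8, 2                       # full dimension vs. low rank
W = rng.standard_normal((d, d))      # frozen base weight
A = rng.standard_normal((rank, d))   # trained "down" projection
B = rng.standard_normal((d, rank))   # trained "up" projection
scale = 0.5

W_adapted = W + scale * (B @ A)

# The update touches every entry of W but carries only 2 * d * rank
# parameters instead of d * d -- which is why LoRA files are tiny
# and easy for a community to train and share.
print(W_adapted.shape, np.linalg.matrix_rank(B @ A))
```

Because the update is so small relative to the base model, a style LoRA can be trained on a modest set of images and distributed as a file of a few megabytes.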

you are viewing a single comment's thread
[–] Thevenin@beehaw.org 4 points 1 year ago (1 children)

It’s absolutely true that the training process requires downloading and storing images

This is the process I was referring to when I said it makes copies. We're on the same page there.

I don't know what the solution to the problem is, and I doubt I'm the right person to propose one. I don't think copyright law applies here, but I'm certainly not arguing that copyright should be expanded to include the statistical matrices used in LLMs and DPMs. I suppose plagiarism law might apply for copying a specific style, but that's not the argument I'm trying to make, either.

The argument I'm trying to make is that while it might be true that artificial minds should have the same rights as human minds, the LLMs and DPMs of today absolutely aren't artificial minds. Allowing them to run amok as if they were is not just unfair to living artists... it could deal irreparable damage to our culture, because those LLMs and DPMs of today cannot take up the mantle of the artists they crowd out or pass their knowledge down to the next generation.

[–] Fauxreigner@beehaw.org 2 points 1 year ago

Thanks for clarifying. There are a lot of misconceptions about how this technology works, and I think it's worth making sure that everyone in these thorny conversations has the right information.

I completely agree with your larger point about culture; to the best of my knowledge, we haven't seen any real ability to innovate, because the current models are built to replicate the form and structure of what they've seen before. They're getting extremely good at combining those elements, but they can't really create anything new without a person involved. There's a risk of significant stagnation if we leave art to the machines, especially since we're already seeing issues with new models including the output of existing models in their training data. I don't know how likely that stagnation is, though; I think it's much more likely that we see these tools used to replace humans for mundane, "boring" tasks rather than genuinely creative work.
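The feedback-loop worry mentioned above (models training on the output of earlier models) has a simple statistical analogue. The following is a toy illustration, not a diffusion model: repeatedly fit a Gaussian to samples drawn from the previous generation's fit. Because the fitted variance slightly underestimates the true variance on average, the distribution tends to narrow across generations — a minimal sketch of the "model collapse" effect:

```python
import random

def fit(samples):
    """Maximum-likelihood fit of a 1-D Gaussian (biased variance estimator)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var

random.seed(0)
mean, var = 0.0, 1.0            # generation 0: the "real" data distribution
variances = []
for generation in range(10):
    # Draw training data from the current model, then refit on it --
    # analogous to training a new model on an older model's output.
    samples = [random.gauss(mean, var ** 0.5) for _ in range(200)]
    mean, var = fit(samples)
    variances.append(var)

print(variances)  # variance tends to drift away from the original 1.0
```

In expectation the variance shrinks by a factor of (n - 1)/n each generation, and any single run also accumulates random drift in the mean; diversity in the tails is what disappears first, which maps loosely onto the stagnation concern.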

And you're absolutely right that these are not artificial minds; the language models remind me of a quote from David Langford in his short story Answering Machine: "It's so very hard to realize something that talks is not intelligent." But we are getting to the point where the question of "how will we know" isn't purely theoretical anymore.