• FaceDeer@kbin.social · 1 year ago

    Their previous release used existing public-domain art; were they already at the “not interested in quality and want to make $$ by cutting all corners” level when they did that?

    • MoogleMaestro@kbin.social · 1 year ago

      They were.

      But AI is industrial plagiarism. There’s a big difference in the legality of using AI versus using publicly licensed materials.

      • FaceDeer@kbin.social · 1 year ago

        But AI is industrial plagiarism.

        You don’t know how generative AI works. Or what plagiarism means. Or possibly both.

        BTW, what if these folks are using Adobe Firefly? It was trained entirely on licensed materials.

        • circuitfarmer@lemmy.sdf.org · 1 year ago

          I work in generative AI, specifically curated training sets.

          The issue is training on “licensed materials”. If all AI were trained that way, no one would have a problem. But it’s disingenuous to suggest that’s how most AI is currently being trained. A lot of material has been scraped off the web, especially for image generation, meaning some portion of the training data was used without the authors’ consent or, often, even their knowledge. It’s important to note that scraping training data in this way usually violates a site’s terms of service.

          The number of people I’ve seen supporting AI usage in this context is staggering. One commenter even told me it was about the “greed” of artists, whose work may be in a training set without consent, wanting royalties for someone slightly changing a parameter with their art (which is, of course, a straw man).

          To me, the only issue here is handling the ethics of what goes into training data and what doesn’t. Authors should have the choice of their materials not being used. Adobe understood this, which is why Firefly being trained on explicitly licensed materials makes it a different beast, to which you allude.

          But it’s clear a lot of people don’t understand why using data without consent is a bad thing in this context, and for that reason, some people will choose not to support companies that use it until the issue is resolved. That seems quite reasonable to me.

          • FaceDeer@kbin.social · 1 year ago

            The issue is training on “licensed materials”.

            People usually say that’s the issue, until you show them that it’s possible to generate images and whatnot from models trained on “fully licensed” data. Then they come up with some other reason why evil AI is awful and evil. I’ve been involved in these debates for a long time now and those goalposts have well-worn tracks from how frequently they shift that way.

            But it’s clear a lot of people don’t understand why using data without consent is a bad thing in this context

            No, they don’t agree that using data without consent is a bad thing. Saying “they don’t understand” it is begging the question, in the literal sense. You’re saying that people who disagree about that are simply being ignorant of some underlying “truth.”

            • circuitfarmer@lemmy.sdf.org · 1 year ago

              No, they don’t agree that using data without consent is a bad thing.

              If this developer doesn’t mind taking data without consent, I hope they don’t have an issue with people pirating their game. That’s a slippery slope if I ever saw one.

              • FaceDeer@kbin.social · 1 year ago

                “Slippery slope” is also a fallacy. Training an AI and copying a game are two different things and it’s entirely reasonable to hold the position that one is ok and the other is not.

                • circuitfarmer@lemmy.sdf.org · 1 year ago

                  You’re missing the point. Both are using data (work of the dev on a game, work of an artist on art) without consent.

                  • FaceDeer@kbin.social · 1 year ago

                    I’m not missing the point. Just because they’re both “using data without consent” doesn’t mean they’re the same thing. Playing baseball and smashing someone’s car both involve swinging a bat but that’s where the similarity ends.

                    There are many ways that you can “use data without consent” that are perfectly legal.

        • goldenbug@kbin.social · 1 year ago

          If you’re going to assert that the person replying is ignorant, the custom should be to explain the concepts they supposedly don’t understand.