‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says: Pressure grows on artificial intelligence firms over the content used to train their products

      • dhork@lemmy.world · 1 year ago

        ¿Por qué no los dos? (Why not both?)

        I don’t understand why people are defending AI companies sucking up all human knowledge by saying “well, yeah, copyrights are too long anyway”.

        Even if we went back to the pre-1976 term of 28 years, renewable once for a total of 56 years, there’s still a ton of recent works that AI are using without any compensation to their creators.

        I think it’s because people are taking this “intelligence” metaphor a bit too far and think if we restrict how the AI uses copyrighted works, that would restrict how humans use them too. But AI isn’t human, it’s just a glorified search engine. At least all standard search engines do is return a link to the actual content. These AI models chew up the content and spit out something based on it. It simply makes sense that this new process should be licensed separately, and I don’t care if it makes some AI companies go bankrupt. Maybe they can work adequate payment for content into their business model going forward.

        • deweydecibel@lemmy.world · 1 year ago

          It shouldn’t be cheap to absorb and regurgitate the works of humans the world over in an effort to replace those humans and subsequently enrich a handful of silicon valley people.

          Like, I don’t care what you think about copyright law and how corporations abuse it, AI itself is corporate abuse.

          And unlike copyright, which does serve its intended purpose of helping small time creators as much as it helps Disney, the true benefits of AI are overwhelmingly for corporations and investors. If our draconian copyright system is the best tool we have to combat that, good. It’s absolutely the lesser of the two evils.

          • lolcatnip@reddthat.com · 1 year ago

            Do you believe it’s reasonable, in general, to develop technology that has the potential to replace some human labor?

            Do you believe compensating copyright holders would benefit the individuals whose livelihood is at risk?

            the true benefits of AI are overwhelmingly for corporations and investors

            “True” is doing a lot of work here, I think. From my perspective the main beneficiaries of technology like LLMs and stable diffusion are people trying to do their work more efficiently, people playing around, and small-time creators who suddenly have custom graphics to illustrate their videos, articles, etc. Maybe you’re talking about something different, like deep fakes? The downside of using a vague term like “AI” is that it’s too easy to accidentally conflate things that have little in common.

            • EldritchFeminity@lemmy.blahaj.zone · 1 year ago

              There are two general groups when it comes to AI, in my mind: those whose work would benefit from the increased efficiency that AI in its various forms can bring, and those who want the rewards of work without putting in the effort of working.

              The former include people like artists who could do stuff like creating iterations of concept sketches before choosing one to use for a piece to make that part of their job easier/faster.

              Much of the opposition to AI comes from people who worry about, or who have been harmed by, the latter group. And it all comes down to the way that the data sets are sourced.

              These are people who want to use the hard work of others for their own benefit, without giving them compensation; and the corporations fall pretty squarely into this group. As does your comment about “small-time creators who suddenly have custom graphics to illustrate their videos, articles, etc.” Before AI, they were free to hire an artist to do that for them. MidJourney, for example, falls into this same category - the developers were caught discussing various artists that they “launder through a fine tuned Codex” (their words, not mine, here for source) for prompts. If these sorts of generators were using opt-in data sets, paying licensing fees to the creators, or some other way to get permission to use their work, this tech could have tons of wonderful uses, like for those small-time creators. This is how music works. There are entire businesses that run on licensing copyright free music out to small-time creators for their videos and stuff, but they don’t go out recording bands and then splicing their songs up to create synthesizers to sell. They pay musicians to create those songs.

              Instead of doing what the guy behind IKEA did when he thought “people besides the rich deserve to be able to have furniture”, they’re cutting up Bob Ross paintings to sell as part of their collages to people who want to make art without having to actually learn how to make it or pay somebody to turn their idea into reality. Artists already struggle in a world that devalues creativity (I could make an entire rant on that, but the short is that the starving artist stereotype exists for a reason), and the way companies want to use AI like this is to turn the act of creating art into a commodity even more; to further divest the inherently human part of art from it. They don’t want to give people more time to create and think and enjoy life; they merely want to wring even more value out of them more efficiently. They want to take the writings of their journalists and use them to train the AI that they’re going to replace them with, like a video game journalism company did last fall with all of the writers they had on staff in their subsidiary companies. They think, “why keep 20 writers on staff when we can have a computer churn out articles for our 10 subsidiaries?” Last year, some guy took a screenshot of a piece of art that one of the artists for Genshin Impact was working on while livestreaming, ran it through some form of image generator, and then came back threatening to sue the artist for stealing his work.

              Copyright laws don’t favor the small guy, but they do help them protect their work as a byproduct of working for corporate interests. In the case of the Genshin artist, the fact that they were livestreaming their work and had undeniable, recorded proof that the work was theirs and not some rando in their stream meant that copyright law would’ve been on their side if it had actually gone anywhere rather than some asshole just being an asshole. Trademark isn’t quite the same, but I always love telling the story of the time my dad got a cease and desist letter from a company in another state for the name of a product his small business made. So he did some research, found out that they didn’t have the trademark for it in that state, got the trademark himself, and then sent them back their own letter with the names cut out and pasted in the opposite spots. He never heard from them again!

        • AnneBonny@lemmy.dbzer0.com · 1 year ago

          I don’t understand why people are defending AI companies sucking up all human knowledge by saying “well, yeah, copyrights are too long anyway”.

          Would you characterize projects like wikipedia or the internet archive as “sucking up all human knowledge”?

          • MBM@lemmings.world · 1 year ago

            Does Wikipedia ever have issues with copyright? If you don’t cite your sources or use a copyrighted image, it will get removed

          • dhork@lemmy.world · 1 year ago

            In Wikipedia’s case, the text is (well, at least so far), written by actual humans. And no matter what you think about the ethics of Wikipedia editors, they are humans also. Human oversight is required for Wikipedia to function properly. If Wikipedia were to go to a model where some AI crawls the web for knowledge and writes articles based on that with limited human involvement, then it would be similar. But that’s not what they are doing.

            The Internet Archive is on a bit less steady legal ground (see the recent legal actions), but in its favor it is only storing information for archival and lending purposes, and not using that information to generate derivative works which it is then selling. (And it is the lending that is getting it into trouble right now, not the archiving.)

            • phillaholic@lemm.ee · 1 year ago

              The Internet Archive has no ground to stand on at all. It would be one thing if they only allowed downloading of orphaned or unavailable works, but that’s not the case.

          • assassin_aragorn@lemmy.world · 1 year ago

            Wikipedia is free to the public. OpenAI is more than welcome to use whatever they want if they become free to the public too.

        • lolcatnip@reddthat.com · 1 year ago

          I don’t understand why people are defending AI companies

          Because it’s not just big companies that are affected; it’s the technology itself. People saying you can’t train a model on copyrighted works are essentially saying nobody can develop those kinds of models at all. A lot of people here are naturally opposed to the idea that the development of any useful technology should be effectively illegal.

          • dhork@lemmy.world · 1 year ago

            I am not saying you can’t train on copyrighted works at all, I am saying you can’t train on copyrighted works without permission. There are fair use exemptions for copyright, but training AI shouldn’t qualify as one. AI companies will have to acknowledge this and get permission (probably by paying money) before incorporating content into their models. They’ll be able to afford it.

            • lolcatnip@reddthat.com · 1 year ago

              What if I do it myself? Do I still need to get permission? And if so, why should I?

              I don’t believe the legality of doing something should depend on who’s doing it.

              • BURN@lemmy.world · 1 year ago

                Yes you would need permission. Just because you’re a hobbyist doesn’t mean you’re exempt from needing to follow the rules.

                As soon as it goes beyond a completely offline, personal, non-replicable project, it should be subject to the same copyright laws.

                If you purely create a data agnostic AI model and share the code, there’s no problem, as you’re not profiting off of the training data. If you create an AI model that’s available for others to use, then you’d need to have the licensing rights to all of the training data.

          • BURN@lemmy.world · 1 year ago

            You can make these models just fine using licensed data. So can any hobbyist.

            You just can’t steal other people’s creations to make your models.

            • lolcatnip@reddthat.com · 1 year ago

              Of course it sounds bad when you use the word “steal”, but I’m far from convinced that training is theft, and using inflammatory language just makes me less inclined to listen to what you have to say.

              • BURN@lemmy.world · 1 year ago

                Training is theft imo. You have to scrape and store the training data, which amounts to copyright violation based on replication. It’s an incredibly simple concept. The model isn’t the problem here, the training data is.

      • HelloThere@sh.itjust.works · 1 year ago

        I’m no fan of the current copyright law - the Statute of Anne was much better - but let’s not kid ourselves that some of the richest companies in the world have any desire whatsoever to change it.

        • Gutless2615@ttrpg.network · 1 year ago

          My brother in Christ I’m begging you to look just a little bit into the history of copyright expansion.

            • Gutless2615@ttrpg.network · 1 year ago

              I only discuss copyright on posts about AI copyright issues. Yes, brilliant observation. I also talk about privacy issues on privacy-relevant posts, labor issues on worker-rights articles, and environmental justice on global-warming pieces. Truly a brilliant and skewering observation. You’re a true internet private eye.

              Fair use and pushing back against (corporate serving) copyright maximalism is an issue I am passionate about and engage in. Is that a problem for you?

                  • LWD@lemm.ee · 1 year ago

                    Creators in your circles? Does that include your clients, because apparently small artists hire you?

                    In my current job I fight back against the tech giants and try to rein in specifically Google, Amazon, and Meta with consumer protection regulations.

                    Well now I’m intrigued.

                    Exactly how do you prevent your clients from getting their content stolen by a corporation created by Sam Altman, who is worth half a billion dollars on his own?

      • Fisk400@feddit.nu · 1 year ago

        As long as capitalism exists in society, just being able to go “yoink” and take everyone’s art will never be a practical rule set.

    • S410@lemmy.ml · 1 year ago

      Every work is protected by copyright, unless stated otherwise by the author.
      If you want to create a capable system, you want real data and you want a wide range of it, including data that is rarely considered to be a protected work, despite being one.
      I can guarantee you that you’re going to have a pretty hard time finding a dataset with diverse data containing things like napkin doodles or bathroom stall writing that’s compiled with permission of every copyright holder involved.

      • Exatron@lemmy.world · 1 year ago

        How hard it is doesn’t matter. If you can’t compensate people for using their work, or exclude work people don’t want used, you just don’t get that data.

        There’s plenty of stuff in the public domain.

      • HelloThere@sh.itjust.works · 1 year ago

        I never said it was going to be easy - and clearly that is why OpenAI didn’t bother.

        If they want to advocate for changes to copyright law then I’m all ears, but let’s not pretend they actually have any interest in that.

      • deweydecibel@lemmy.world · 1 year ago

        I can guarantee you that you’re going to have a pretty hard time finding a dataset with diverse data containing things like napkin doodles or bathroom stall writing that’s compiled with permission of every copyright holder involved.

        You make this sound like a bad thing.

      • BURN@lemmy.world · 1 year ago

        And why is that a bad thing?

        Why are you entitled to other peoples work, just because “it’s hard to find data”?

        • S410@lemmy.ml · 1 year ago

          Why are you entitled to other peoples work?

          Do you really think you’ve never consumed data that was not intended for you? Never used copyrighted works or their elements in your own works?

          Re-purposing other people’s work is literally what humanity has been doing for far longer than the term “license” existed.

          If the original inventor of the fire drill didn’t want others to use it and barred them from creating a fire bow, arguing it’s “plagiarism” and “a tool that’s intended to replace me”, we wouldn’t have a civilization.

          If artists could bar other artists from creating music or art based on theirs, we wouldn’t have such a thing as “genres”. There are genres of music that are almost entirely based around sampling and many, many popular samples were never explicitly allowed or licensed to anyone. Listen to a hundred most popular tracks of the last 50 years, and I guarantee you, a dozen or more would contain the amen break, for example.

          Whatever it is you do with data - consume and use it yourself, or train a machine learning model on it - you’re either disregarding a large number of copyright restrictions and using all of it, or existing in an informational vacuum.

          • BURN@lemmy.world · 1 year ago

            People do not consume and process data the same way an AI model does, so how humans learn doesn’t matter, because AIs don’t learn. This isn’t repurposing work; it’s using work in a way the copyright holder doesn’t allow, just as copyright holders are allowed to prohibit commercial use.

            • S410@lemmy.ml · 1 year ago

              It’s called “machine learning”, not “AI”, and it’s called that for a reason.

              “AI” models are, essentially, solvers for mathematical systems that we, humans, cannot describe and create solvers for ourselves, due to their complexity.

              For example, a calculator for pure numbers is a pretty simple device all the logic of which can be designed by a human directly. For the device to be useful, however, the creator will have to analyze mathematical works of other people (to figure out how math works to begin with) and to test their creation against them. That is, they’d run formulas derived and solved by other people to verify that the results are correct.

              With “AI”, instead of designing all the logic manually, we create a system which can end up in a finite, yet still near-infinite, number of states, each of which defines behavior different from the others. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for some incredibly complex system, such as languages or images.

              If we were training a regular calculator this way, we might feed it things like “2+2=4”, “3x3=9”, “10/5=2”, etc.

              If, after we’re done, the model can only solve those three expressions - we have failed. The model didn’t learn the mathematical system, it just memorized the examples. That’s called overfitting and that’s what every single “AI” company in the world is trying to avoid. (And to do so, they need a lot of diverse data)

              Of course, if instead of those expressions the training set consisted of Portrait of Dora Maar, Mona Lisa, and Girl with a Pearl Earring, the model would only generate those three paintings.

              However, if the training was successful, we can ask the model to solve 3x10/5+2 - an expression it has never seen before - and it’d give us the correct result: 8. Or, in the case of paintings, if we ask for a “Portrait of Mona Lisa with a Pearl Earring”, it would give us a brand-new image that contains elements and styles of the three paintings from the training set merged into a new one.
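              The learning-versus-memorizing distinction above can be sketched with a toy, one-parameter model. This is purely illustrative (the rule "y = 3x", the learning rate, and the step count are all made-up numbers, nothing like a real training setup), but it shows the same idea: if training succeeds, the model recovers the underlying rule and can answer an input it has never seen, which pure memorization of the training pairs could not do.

```python
# Toy "training" of a one-parameter model y = w * x by gradient descent.
# The hidden rule behind the examples is y = 3x; if the model truly learns
# the rule (rather than memorizing the three pairs), it can solve an input
# it was never shown.

def train(examples, lr=0.001, steps=2000):
    w = 0.0  # the single learnable parameter
    for _ in range(steps):
        for x, y in examples:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of squared error w.r.t. w
            w -= lr * grad
    return w

examples = [(2.0, 6.0), (3.0, 9.0), (10.0, 30.0)]  # all follow y = 3x
w = train(examples)
print(round(w, 3))        # learned weight, close to 3.0
print(round(w * 7.0, 2))  # prediction for the unseen input 7, close to 21.0
```

              A real model has billions of parameters instead of one, and the "rule" it has to recover (language, images) is vastly more complex - which is exactly why, as noted above, it needs a large and diverse training set to generalize instead of overfit.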

              Of course, the architecture of a machine learning model and the architecture of the human brain don’t match, but the things both can do are quite similar. Creating new works based on existing ones is not, by any means, a new invention. Here’s a picture that merges elements of “Fear and Loathing in Las Vegas” and “My Little Pony”, for example.

              The major difference is that the skills and knowledge of individual humans necessary to do things like that cannot be transferred or lent to other people. Machine learning models can be. This tech is probably the closest we’ll ever get to being able to share skills and knowledge “telepathically”, so to speak.

              • BURN@lemmy.world · 1 year ago

                I’m well aware of how machine learning works. I did 90% of the work for a degree in exactly it. I’ve written semi-basic neural networks from scratch, and am familiar with terminology around training and how the process works.

                Humans learn, process, and, most importantly, transform data in a different manner than machines do. The sum total of experiences each individual goes through produces a transformation that can’t be replicated by machines.

                A human can replicate other styles, as you show with your example, but that doesn’t mean that is the total extent of new creation. It’s been proven in many cases that civilizations create art in isolation, not needing to draw from any previous art to create new ideas. That’s the human element that can’t be replicated in anything less than true General AI with real intelligence.

                Machine learning models such as the LLMs and generative AI of today are statistically based on what they have seen before. While a model doesn’t store the data, it does often replicate it in its outputs. That shows that the models that exist now are not creating new ideas, but rather mixing up what they already have.