Apparently, stealing other people’s work to create a product for money is now “fair use,” according to OpenAI, because they’re “innovating” (stealing). Yeah. Move fast and break things, huh?

“Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials,” wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit “misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

  • jarfil@beehaw.org · 11 months ago

    Alternatively, you might be vastly overestimating human “understanding and insight”, or how much of it is really needed to create stuff.

    • frog 🐸@beehaw.org · 11 months ago

      Average humans, sure, don’t have a lot of understanding and insight, and little is needed to draw a doodle on some paper. But trained artists have a lot of it, because part of the process is learning to interpret artworks and work out why the artist used a particular composition, colour, or object. To create really great art, you do actually need a lot of understanding and insight, because everything in your work will have been put there deliberately, not just to fill up space.

      An AI doesn’t know why it’s put an apple on the table rather than an orange; it just does it because human artists have done it. It doesn’t know what apples mean on a semiotic level to the human artist or to the humans who look at the painting. But humans do understand what apples represent. They may not pick up on it consciously, but somewhere in the back of their minds, they’ll see an apple in a painting and it’ll make the painting mean something different than if the fruit had been an orange.

      • jarfil@beehaw.org · 11 months ago

        “it doesn’t know what apples mean on a semiotic level”

        Interestingly, LLMs seem to show emergent semiotic organization. Analyses of the network’s activation space suggest that related concepts get trained into similar activation patterns, which is what allows LLMs to zero-shot relationships when run at a “temperature” (randomness level) in the right range.
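        A rough way to see that clustering effect, using an off-the-shelf sentence-embedding model as a stand-in for an LLM’s internal activation space (an assumption for illustration only, not the actual hidden activations), plus a toy temperature-scaled sampler:

        ```python
        # Illustration of "related concepts end up with similar representations",
        # using a sentence-embedding model as a stand-in for an LLM's internal
        # activation space (an assumption: embeddings are not the hidden
        # activations discussed above, but they show the same clustering effect).
        # Requires: pip install numpy sentence-transformers
        import numpy as np
        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("all-MiniLM-L6-v2")
        words = ["apple", "orange", "red sports car", "beige people-carrier"]
        embeddings = model.encode(words)

        # Cosine similarity: the two fruits (and the two vehicles) score higher
        # with each other than across groups.
        sim = util.cos_sim(embeddings, embeddings)
        for i in range(len(words)):
            for j in range(i + 1, len(words)):
                print(f"{words[i]!r} vs {words[j]!r}: {float(sim[i][j]):.2f}")

        # "Temperature" rescales the model's output scores before sampling:
        # low temperature sharpens the distribution (more deterministic),
        # high temperature flattens it (more random).
        def sample_with_temperature(logits, temperature=1.0):
            scaled = np.asarray(logits, dtype=float) / temperature
            probs = np.exp(scaled - scaled.max())
            probs /= probs.sum()
            return np.random.default_rng().choice(len(probs), p=probs)
        ```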

        Pairing an LLM with a Stable Diffusion model allows the resulting AI to… well, judge for yourself: https://llm-grounded-diffusion.github.io/
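        For what it’s worth, the general shape of that pairing, as I read the project page, is a two-stage pipeline: the LLM turns the prompt into an explicit scene layout (object names plus bounding boxes), and a layout-conditioned diffusion model then renders the image. A minimal sketch with both stages stubbed out (the hard-coded layout and the rectangle preview are placeholders, not the real system or its API):

        ```python
        # Loose sketch of the two-stage idea behind LLM-grounded diffusion:
        # an LLM proposes a scene layout, then a layout-conditioned diffusion
        # model renders it. Both stages are stubbed here for illustration.
        from PIL import Image, ImageDraw

        def propose_layout(prompt: str) -> list[tuple[str, tuple[int, int, int, int]]]:
            """Stub for the LLM stage: (object name, (x0, y0, x1, y1)) boxes for the prompt."""
            # A real system would ask the LLM to emit this layout in a structured format.
            return [("table", (40, 340, 470, 500)), ("apple", (210, 280, 290, 360))]

        def render_layout(layout, size=(512, 512)) -> Image.Image:
            """Stub for the diffusion stage: just visualise the boxes the LLM proposed."""
            img = Image.new("RGB", size, "white")
            draw = ImageDraw.Draw(img)
            for name, box in layout:
                draw.rectangle(box, outline="black", width=2)
                draw.text((box[0] + 4, box[1] + 4), name, fill="black")
            return img

        if __name__ == "__main__":
            render_layout(propose_layout("an apple on a wooden table")).save("layout_preview.png")
        ```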

        • frog 🐸@beehaw.org · 11 months ago

          I’m unconvinced that getting better at following instructions (putting objects where the prompter specifies, changing the colour, producing the right number of them, etc.) means the model actually understands what the objects mean beyond their appearance. It doesn’t understand the cultural meanings attached to each object, and thus is unable to truly decide why it should place an apple rather than an orange, or how the message within the picture changes when it’s a red sports car rather than a beige people-carrier.