It just feels too good to be true.

I’m currently using it for formatting technical texts and it’s amazing. It can’t generate them from scratch properly, but if I give it the bulk of the info it makes it pretty af.

Also just talking and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

  • Big P@feddit.uk · 1 year ago
    • it’s expensive to run; OpenAI is subsidising it heavily, and that will come back to bite us in the ass soon
    • it can be both intentionally and unintentionally biased
    • the text it generates has a certain style to it that can be easy to pick up on
    • it can mix made-up information with real information
    • it’s a black box
    • Feyter@programming.dev · 1 year ago

      Did we mention that it’s a closed-source, proprietary service controlled by a single company that can dictate the terms of its usage?

      • TehPers@beehaw.org · 1 year ago

        LLMs as a whole exist outside OpenAI, but ChatGPT does run exclusively on OpenAI’s services. And Azure, I guess.

        • Feyter@programming.dev · 1 year ago

          Exactly. ChatGPT is just the most prominent service using an LLM. I’d be less concerned about the hype if all the free training data from thousands of users went back into an open system.

          Maybe AI is not stealing our jobs, but if you end up depending on it to stay competitive at your job, it would be good if it weren’t controlled by a single company…

          • blindsight@beehaw.org · 1 year ago

            But there’s been huge movement in open-source LLMs since the Meta source code leak (which within a few months evolved to use no proprietary code at all). And some of these models can be run on consumer laptops.

            I haven’t had a chance to do a deep dive on those yet, but I want to spin one up in the fall so I can present it to teachers/principals and try to convince schools not to buy snake-oil “AI detection” tools that are doomed to be ineffective.