The literal judgement is in on using AI to write speeches

  • Fisk400@beehaw.org
    1 year ago

    Yeah, the illusion is quickly dispelled once you spend any time with it. I was trying it out when I was doing worldbuilding for a story I was writing. If you ask it to name 20 towns and describe them it spits out a numbered list. Same with characters. But then I asked it to make the names quirkier and it just used the same names again and described all the towns as quirky. I also asked it to make characters with names that are also adjectives and then describe them. Names like Able, Dusty, Sunny, Major.

    The first iteration had a list of names and descriptions, but each description always related to the adjective. Sunny had a sunny disposition and a bright smile. I told it the description should be unrelated to the name and it did the same thing again. I told it to change the name but not the description, and it still rewrote the descriptions to match the names, but didn’t change the structure.

    Nothing I told it could make it move off the idea that a man called Sunny must be sunny. It basically can’t be creative or even random when completing tasks.

    This is fine when writing dry material that nobody will read, but if you want someone to enjoy reading or listening to what is written, then the human spark is required.

    • Stumblinbear@pawb.social
      1 year ago

      I just tested the character name thing and it got it on the first try. Maybe GPT-4 just handles it better?

      • Fisk400@beehaw.org
        1 year ago

        It was GPT-4 I was using. It could be that you wrote it as one instruction and your intentions were very clear from the beginning, while I explained it across multiple changes and clarifications as I noticed it wasn’t giving me quite what I wanted.

        Part of it is that I was intentionally being very human in my instructions, leaving them open to interpretation and then clarifying or adding things as I brainstormed. It’s a messy way of doing it, but AI needs to be able to handle messy instructions in order to be considered on par with people.

        Edit: turns out it wasn’t GPT-4 I was using; I was using the free chat on OpenAI’s website. I was not aware that they were different.