• gay_king_prince_charles [she/her, he/him]@hexbear.net · 9 days ago

    They’re not talking about how great it is at counting letters. This is just using a technology for something it wasn’t meant for and then going on about how it’s useless. If you want to disprove the hype, evidence that hasn’t been known for the entire production run of commercial LLMs would probably be better.

    • chgxvjh [he/him, comrade/them]@hexbear.net · 9 days ago

      It sucks at other things too. Counting errors are just really easy to objectively verify.

      People like Altman claim they can use LLMs to create formal proofs, advance our knowledge of physics and shit. Fat chance when it can’t even compete with a toddler at counting.

    • Nacarbac [any]@hexbear.net · 8 days ago

      If it cannot be used for something it wasn’t intended for, then it isn’t intelligence. And since language processing is both what it is made from and what it is intended for, this shows that there is no emergent intelligent understanding of its actual speciality function; it’s just a highly refined autocomplete with some bolted-on extras.

      Not that more research couldn’t eventually find that mysterious theoretical threshold, but the focus on public-facing implementations and mass application is inefficient to the point of worthlessness for legitimate improvement. Good for killing people and disrupting things though.

      • robot_dog_with_gun [they/them]@hexbear.net · 8 days ago

        If it cannot be used for something it wasn’t intended for, then it isn’t intelligence.

        no shit. death to ad men. but LLMs aren’t for most of these stunts. that’s part of the problem but it’s like saying my bike is bad at climbing trees. at least the bike isn’t being advertised for arbory

        • chgxvjh [he/him, comrade/them]@hexbear.net · 8 days ago

          But what is it for, other than being a bottomless pit for resources?

          It does seem cultish to present the thinking machine, watch it regularly shit the bed when given easily verifiable tasks, and then be told we are supposed to blindly trust it with more complicated matters that aren’t easily verified.

          • robot_dog_with_gun [they/them]@hexbear.net · 8 days ago

            it’s not a thinking machine, that’s the advertisers lying again. if you want a thinking machine, that simply doesn’t exist. Maybe wolfram.alpha or IBM’s Watson are better for the tasks you have in mind. an LLM would probably give you a correct Python script to check the last character of each string in an array, and even populate that array with NFL team names, and that code would tell you zero of them end with non-“s” chars. it might also end the code with an unclosed code block you need to delete.
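            The throwaway script described above would be trivially short. A minimal sketch (the hand-typed team list here is mine, not generated output):

```python
# List the 32 current NFL team names, then check which do NOT end in "s".
NFL_TEAMS = [
    "Cardinals", "Falcons", "Ravens", "Bills", "Panthers", "Bears",
    "Bengals", "Browns", "Cowboys", "Broncos", "Lions", "Packers",
    "Texans", "Colts", "Jaguars", "Chiefs", "Raiders", "Chargers",
    "Rams", "Dolphins", "Vikings", "Patriots", "Saints", "Giants",
    "Jets", "Eagles", "Steelers", "49ers", "Seahawks", "Buccaneers",
    "Titans", "Commanders",
]

# Keep only the names whose last character is not "s".
non_s = [team for team in NFL_TEAMS if not team.endswith("s")]
print(non_s)  # [] -- every current team name ends in "s"
```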

            LLMs are statistical models, and they’re sometimes useful for outputting text that’s similar to existing related text. That’s why they’re sometimes better than Google search, since search is so degraded by SEO and advertising. They’re very bad at solving new programming tasks, so if you wanted to implement something in Godot where you’re the first person doing it and there’s no tutorial in the training data, it’s just going to fuck up constantly.