I’ve fucked around a bit with ChatGPT and while, yeah, it frequently says wrong or weird stuff, it’s usually fairly subtle shit, like crap I actually had to look up to verify it was wrong.

Now I’m seeing Google telling people to put glue on pizza. That’s a bit bigger than getting the name of George Washington’s wife wrong or Jay Leno’s birthday off by 3 years. Some of these answers seem almost cartoonish in their wrongness; I half suspect some engineer at Google is fucking with it to prove a point.

Is it just that Google’s AI sucks? I’ve seen other people say that AI is now getting info from other AIs and it’s leading to responses getting increasingly bizarre and wrong, so… idk.

  • FunkyStuff [he/him]@hexbear.net

    My experience using Gemini for generating filler text leads me to believe it has a far worse grip on reality than ChatGPT does. ChatGPT generates conversational, somewhat convincing language. Gemini generates barely grammatical language that you can’t even skim without noticing glaring mistakes and nonsense sentences.

    I think Google’s AI is particularly bad but there’s a larger point to be made about how irresponsible it is to deploy an LLM to do what Google is doing, regardless of how good it is.

    Fundamentally, all an LLM knows how to do is put words together to form sentences that are close to what it has seen in its training data. Attention, the mechanism LLMs use to analyze the meaning of words before generating a response, is supposed to model the real-world meaning of each word, and is context dependent. While there’s a lot of potential in attention as a mechanism to bring AI models closer to actually being intelligent, the fundamental problem is that no fancy word embedding process can actually give the AI a model of meaning that takes words from symbols to concepts, because you can’t “conceptualize” anything by multiplying a bunch of matrices. Attention isn’t all you need: even if it’s all you need for sequence processing, it’s not enough to make an AI anywhere close to intelligent.
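
    To be concrete about what I mean by “multiplying a bunch of matrices”: here’s a rough sketch of scaled dot-product attention, the core operation from the “Attention Is All You Need” paper. This is just a toy numpy version with made-up numbers, not code from Gemini or any real model, but it shows that the whole mechanism is matrix products and a softmax, nothing that “understands” anything.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        # subtract the max for numerical stability, then normalize
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (seq_len, d_k) matrices of query/key/value vectors
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)      # how strongly each token "attends" to every other token
        weights = softmax(scores, axis=-1)   # each row sums to 1
        return weights @ V                   # output is just a weighted mix of the value vectors

    # toy example: 3 tokens, 4-dimensional embeddings (random numbers, purely illustrative)
    rng = np.random.default_rng(0)
    x = rng.normal(size=(3, 4))
    out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
    print(out.shape)  # (3, 4)
    ```

    Every step there is arithmetic over vectors of numbers; the “meaning” only exists to the extent the training process happened to arrange those numbers usefully.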

    You can’t expect a language model to function as a search engine or an assistant, because those are tasks that need the AI to understand the world, not just how words work, and I think ultimately it’s gonna take a lot of duds and weird failures like this Google product before tech companies find the place to put the current generation of LLMs. It’s like crypto: it blew up and got everywhere before people quickly realized how much of a scam it was, and now it still exists but it’s niche. LLMs aren’t gonna be niche at all, even once the VC money dries up, but I highly doubt we’ll see too much more of this overgeneralization of AI. Maybe once the next generation of carbon-guzzling machine learning models that take several gigawatts to operate in a datacenter the size of my hometown is finally ready to go, they’ll figure out a search engine assistant.