Silicon Valley has bet big on generative AI, but it’s not totally clear whether that bet will pay off. A new report from the Wall Street Journal claims that, despite the endless hype around large language models and the automated platforms they power, tech companies are struggling to turn a profit on AI.

Microsoft, which has bet big on the generative AI boom with billions invested in its partner OpenAI, has been losing money on one of its major AI platforms. GitHub Copilot, which launched in 2021, was designed to automate parts of a coder’s workflow and, while immensely popular with its user base, has been a huge “money loser,” the Journal reports. The problem is that users pay a $10-a-month subscription fee for Copilot but, according to a source interviewed by the Journal, Microsoft lost an average of $20 per user during the first few months of this year. On some users, the company lost more than $80 a month, the source told the paper.

OpenAI’s ChatGPT, meanwhile, has seen an ever-declining user base while its operating costs remain incredibly high. A report from the Washington Post in June claimed that chatbots like ChatGPT lose money pretty much every time a customer uses them.

AI platforms are notoriously expensive to operate. Platforms like ChatGPT and DALL-E burn through an enormous amount of computing power, and companies are struggling to figure out how to reduce that footprint. At the same time, the infrastructure needed to run AI systems, like powerful, high-priced AI chips, is costly, and the cloud capacity necessary to train algorithms and run AI systems is expanding at a frightening rate. All of that energy consumption also means that AI is about as environmentally unfriendly as you can get.

  • Lvxferre@lemmy.ml

    Okay… let’s call wine “wine” and bread “bread”: the acronym “AI” is mostly an advertising stunt.

    This is not artificial intelligence; and even if it were, “AI” is used for a bag of a thousand cats: game mob pathfinding, chess engines, swarm heuristic methods, and so on.

    What the article is talking about is far more specific: it’s what the industry calls “machine learning”¹.

    So the text is saying that machine learning is costly. No surprise: it’s a relatively new tech, and even the state of the art is still damn coarse². Over time those technologies will get further refined, under different models, and the cost of operation is bound to drop. Microsoft and the likes aren’t playing the short game; they’re looking for a long-term return on investment.

    1. I’d go a step further and claim here that “model-based generation” is more accurate. But that’s me.
    2. LLMs are a good example of that; GPT-4 has ~2×10¹² parameters. If you equate each to a neuron (kind of a sloppy comparison, but whatever), it’s more than an order of magnitude larger than the ~1×10¹¹ neurons in a human brain. It’s basically brute force.
    • interolivary@beehaw.org

      The comparison of GPT parameters to neurons really is kinda sloppy, since they’re not at all comparable. To start with, “parameters” encompasses both weights (i.e. the “importance” of a connection between any two neurons) and biases (roughly the starting value of an individual neuron, which then shifts the activation function), so the parameter count doesn’t tell you anything about the number of neurons. Secondly, biological neurons have way more dynamic behavior than current “static” NNs like GPT use, so it wouldn’t really be surprising if you needed many more of them to mimic the behavior of meatbag neurons. Also, LLM architecture is incredibly weird, so the whole concept of a neuron isn’t as relevant as it is in more traditional networks (although they do have neurons in their layers).
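
      To make the weights-plus-biases arithmetic concrete, here’s a minimal sketch of how parameters are counted in a plain feed-forward network (the layer sizes are made up for illustration):

          # Each layer contributes (inputs x outputs) weights plus one bias
          # per output neuron, so parameters vastly outnumber neurons.
          def count_params(layer_sizes):
              total = 0
              for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
                  total += n_in * n_out  # weights: one per connection
                  total += n_out         # biases: one per neuron
              return total

          sizes = [784, 512, 256, 10]  # hypothetical toy network
          print(count_params(sizes))   # 535818 parameters...
          print(sum(sizes[1:]))        # ...but only 778 neurons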

      • Lvxferre@lemmy.ml

        Another sloppiness that I didn’t mention is that a lot of human neurons are there for things that have nothing to do with either reasoning or language: making your heart beat, transmitting pain, and so on. However, I think that the comparison is still useful in this context - it shows how big those LLMs are, even in comparison with a system created by messy natural selection. The process behind the LLMs seems inefficient.

        • interolivary@beehaw.org

          I wouldn’t write natural selection off as messy. The reason LLMs are as inefficient as they are, relative to their complexity, is exactly that they were designed by us meatbags; evolutionary processes can result in some astonishingly efficient solutions, although by no means “perfect” ones. I’ve done research in evolutionary computation, and while it does have its problems – results can be unpredictable, it’s ridiculously hard to design a good fitness function, designing a “digital DNA” that mimics the best parts of actual DNA is nontrivial to say the least, etc. – I think it might be at least part of the solution to building, or rather growing, better neural networks / AI architectures.
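
          For a sense of what that looks like, here’s a minimal toy sketch of an evolutionary loop (the bitstring target and every parameter are invented for illustration); note that the fitness function here is trivial, which is exactly the part that’s hard to design for real problems:

              import random

              TARGET = [1] * 32  # toy goal: evolve an all-ones bitstring
              POP_SIZE, GENERATIONS, MUT_RATE = 50, 200, 0.02

              def fitness(genome):
                  # Deceptively easy here; for real problems, designing this
                  # well is the hard research question mentioned above.
                  return sum(g == t for g, t in zip(genome, TARGET))

              def mutate(genome):
                  return [1 - g if random.random() < MUT_RATE else g
                          for g in genome]

              def crossover(a, b):
                  cut = random.randrange(1, len(a))
                  return a[:cut] + b[cut:]

              pop = [[random.randint(0, 1) for _ in range(32)]
                     for _ in range(POP_SIZE)]
              for gen in range(GENERATIONS):
                  pop.sort(key=fitness, reverse=True)
                  if fitness(pop[0]) == len(TARGET):
                      break                       # perfect genome found
                  parents = pop[: POP_SIZE // 2]  # truncation selection
                  children = [mutate(crossover(random.choice(parents),
                                               random.choice(parents)))
                              for _ in range(POP_SIZE - len(parents))]
                  pop = parents + children

              print(gen, fitness(pop[0]))  # generations used, best fitness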

          • Lvxferre@lemmy.ml

            It’s less about “discounting” it and more about acknowledging that the human brain is not as efficient as people might think. As such, LLMs using an order of magnitude more parameters than the number of cells in a brain hints that LLMs are far less efficient than language models could be.

            I’m aware that evolutionary algorithms can yield useful results.

            • interolivary@beehaw.org

              But the point is that the human brain actually is remarkably efficient for what it is, and that you’re still confusing parameter count with neuron count. The parameter count is essentially the number of connections between neurons plus the number of neurons in the network.

              If I recall correctly, the average human brain has something like 80 billion neurons, and each neuron can have anywhere from 1,000 to 10,000 connections. This means that, in neural net terms, we meatbags have brains with trillions of parameters. I just meant that it wouldn’t be surprising if an “artificial brain” needed more neurons to do (a part of) the same thing our brains do, since artificial neurons are vastly simpler.
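
              Running that back-of-the-envelope arithmetic with the figures quoted above (estimates, not measurements):

                  # One parameter per connection (weight) plus one per
                  # neuron (bias), using the 80-billion-neuron figure above.
                  neurons = 80e9
                  for conns_per_neuron in (1_000, 10_000):
                      params = neurons * conns_per_neuron + neurons
                      print(f"{conns_per_neuron} conns/neuron -> "
                            f"~{params:.0e} parameters")
                  # Prints ~8e+13 to ~8e+14: tens to hundreds of trillions,
                  # well above the ~2e12 parameters attributed to GPT-4 above.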

    • Fushuan [he/him]@lemm.ee

      But ML is being used in the industry in tons of places, and it’s definitely cost-effective. There are simple models that take the input of machinery sensors and detect when something is faulty or needs repairing - not just malfunctioning parts but worn-out parts too. It’s used heavily in image processing: TikTok is used by a lot of people, and the silly AR thingies use image recognition and tracking in real time on your phone. I’m not saying that these features create revenue directly, but they do go viral and attract users, so yeah. Image processing is also used in almost every supermarket to control the number of people in the store; at least since COVID I see a dynamically updated counter in every supermarket I visit.
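
      As a minimal sketch of that sensor case (the baseline data, threshold, and readings are all invented; a real system would be trained on actual machine telemetry):

          import numpy as np

          # Toy fault detector: flag readings that deviate strongly from a
          # machine's historical baseline - a crude stand-in for the simple
          # models described above.
          baseline = np.random.normal(loc=50.0, scale=2.0, size=10_000)
          mean, std = baseline.mean(), baseline.std()

          def is_faulty(reading, threshold=4.0):
              return abs(reading - mean) / std > threshold

          print(is_faulty(50.7))  # normal wear -> False
          print(is_faulty(73.2))  # way off baseline -> True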

      It’s also used in time estimations: how much traffic influences a trip is not set manually at all, it gets updated dynamically with real-time data, through trained models.

      It’s also used in language models, for real usages like translation and recommendation engines (search engines, store product recommendations…).

      The article is talking about generative models, more specifically text prediction engines (ChatGPT, Copilot). ChatGPT is a chatbot, and I don’t see a good way to monetise it while keeping it free to use, and Copilot is a silly idea to me as a programmer since it feels very dangerous and not very practical. And again, not something I would pay for, so meh.

      • Lvxferre@lemmy.ml

        But (+> contradiction) ML is being used in the industry in tons of places […] store product recommendations…).

        By context it’s rather clear which type of machine learning I’m talking about, given the OP. I’m not talking about the simple models that you mention, which, as you said, have already found economically viable applications.

        Past that, “it’s generative models, not machine learning” is on the same level as “it’s a cat, not a mammal”. One is a subset of the other, and by calling it “machine learning” my point was to highlight that it is not the “toothed and furry chicken” that the term AI implies.

        The article is talking about generative models

        I’m aware, as footnote #1 shows.

        • Fushuan [he/him]@lemm.ee

          By context it’s rather clear which type of machine learning I’m talking about

          Eh, it was to you and me, but we are not in a specialised community. This is a general one about technology, and since people tend to misunderstand this stuff, I prefer to be specific. I get that you then wrote footnote #1, but why write statements like this one:

          So the text is saying that machine learning is costly. No surprise: it’s a relatively new tech, and even the state of the art is still damn coarse²

          I know which branch of ML you are talking about, but in written form, on a public forum that people might use as a reference, I’d prefer to be more specific. Yeah, you then mention LLMs as an example, but the new models are basically those - and ML has several other branches with plenty of maturity.

          “it’s generative models, not machine learning”

          IDK why you are quoting me on that; I never said it. I’d just want people to be more specific. I only mentioned several branches of machine learning, and generative models are one of them.

          Also, what’s that about a contradiction? In the first paragraph I was talking about the machinery industry, since I mentioned machines. Then I talked about language models and some of their applications; I don’t get why that contradicts anything. Store product recommendations are done with supervised ML models that track your clicks, views, and past purchases to build an interest model about you, which is then combined with the purchases of people with similar tastes to generate a recommendation list. This is ML too.
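
          A minimal sketch of that “people with similar tastes” step (user-based collaborative filtering; the interaction matrix is made up):

              import numpy as np

              # Rows are users, columns are products; values are interaction
              # scores built from clicks, views, and purchases.
              interactions = np.array([
                  [5, 3, 0, 1],   # you
                  [4, 2, 0, 1],   # a similar shopper
                  [0, 0, 5, 4],   # very different tastes
              ], dtype=float)

              def cosine(u, v):
                  return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

              you = interactions[0]
              sims = np.array([cosine(you, other)
                               for other in interactions[1:]])

              # Weight other users' rows by their similarity to you, then
              # recommend products you haven't interacted with yet.
              scores = sims @ interactions[1:]
              scores[you > 0] = 0              # mask what you already know
              print(np.argsort(scores)[::-1])  # product indices, best first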

          Dunno, you read as quite angry, misquoting me and all.