• MNByChoice@midwest.social · 4 months ago

    It is telling that there is no AI tax prep, or AI in any other field where being wrong carries legal consequences.

    Edit: I was wrong, they do exist. Just nothing flashy.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 4 months ago

      If you bother to RTFA, you’ll see they mean it’s useless from an investor’s perspective, because it uses a lot of energy and there are limited practical applications for it. The AI services that companies like MS and Google offer are being subsidized by those companies, and they’re subsidizing them because they expect to be able to make them profitable somehow going forward. If they can’t, the bubble will burst, because investors and shareholders aren’t gonna keep dumping cash into this tech. The hype cycle around AI isn’t new either; we’ve been here many times: https://en.wikipedia.org/wiki/AI_winter

      • jsomae@lemmy.ml · 4 months ago

        There are many practical uses, and more to be discovered, but I think most of them won’t be user-facing.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 4 months ago

          I think the tech does have legitimate uses, and that it will continue improving over time. I’m just pointing out that it’s being massively oversold right now. I actually think it’s better if the bubble bursts early on in the hype cycle, and then people can focus on figuring out how to apply this tech where it actually works well.

      • Aurenkin@sh.itjust.works · 4 months ago

        It’s quite sad that a take like “if you find LLMs useful you must have a bullshit job” is getting upvoted in a technology community.

      • BlameThePeacock@lemmy.ca · 4 months ago

        I automate office processes, improving productivity. So literally the opposite of a bullshit job.

        • SaltySalamander@fedia.io · 4 months ago

          It’s silly how the anti-AI hivemind will downvote you for simply informing them that you find AI useful.

          • Aurenkin@sh.itjust.works · 4 months ago

            Things are either wholly good or wholly bad. Don’t confuse people by trying to pretend like there’s such a thing as nuance.

    • ISOmorph@feddit.org · 4 months ago

      What’s your field of work? I work in IT and really try to find day-to-day use cases where AI might help, and I just don’t seem to find any. The odd presentation or maybe sprint review minutes, but nothing recurring or anything that feels game-changing.

      • floofloof@lemmy.ca · 4 months ago

        When programming, it’s useful for generating straightforward code and giving high-level advice on well-trodden topics, though you do have to check both. And typing in questions does help me clarify my thinking, so it also serves as that premium rubber duck. Nothing I couldn’t do without it, but sometimes I do find it convenient and helpful.
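
        A rough sketch of that workflow in Python: the helper below stands in for the kind of straightforward, easily checked code an LLM tends to produce, and the asserts are the "check it yourself" step. The function name and dates are illustrative, not from the comment.

        ```python
        # Illustrative LLM-style output: a small, well-trodden utility that is
        # trivial to verify by hand before relying on it.
        from datetime import date

        def business_days_between(start: date, end: date) -> int:
            """Count weekdays in the half-open range [start, end)."""
            return sum(
                1
                for offset in range((end - start).days)
                if date.fromordinal(start.toordinal() + offset).weekday() < 5
            )

        # The checking step: a couple of hand-verified cases before trusting it.
        assert business_days_between(date(2024, 1, 1), date(2024, 1, 8)) == 5  # Mon 1st through Sun 7th
        assert business_days_between(date(2024, 1, 6), date(2024, 1, 8)) == 0  # Sat and Sun only
        ```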

        • SkyNTP@lemmy.ml · 4 months ago

          You use LLMs for everything? Seems strange, as they don’t reason. They are specifically designed to mimic human speech, so they’re great for tasks where the output just needs to look intelligible, or is at least very easily testable, but beyond that you run into serious issues with hallucination fast…

          Or do you mean "AI" as in data science and automation? That’s a very different thing, and a bit off topic. That kind of "AI" is neither new, nor does it have the hallucination/ecological/cost/training-effort issues.

          I dunno dude, all your answers talk about “AI” in suspiciously vague terms. “I use AI to …” is the new “built with blockchain”. Skip the marketing terms and talk shop.

          • BlameThePeacock@lemmy.ca · 4 months ago

            LLMs

            For example, if I need to generate a long formula for Excel or Power Apps, I could write it myself, or I can just give the LLM the parameters and it will do it in a tenth of the time.
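
            As a rough sketch of that first workflow (the prompt, model name, and cell ranges below are made up, and the OpenAI Python client is just one example of a chat-style LLM API):

            ```python
            # Sketch: ask an LLM for an Excel formula from a plain-language
            # description of the parameters. Prompt, model, and ranges are illustrative.
            from openai import OpenAI

            client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

            prompt = (
                "Write a single Excel formula. Inputs: order totals in B2:B500 and "
                "region codes in C2:C500. Output: the sum of totals for region 'EMEA', "
                "counting only orders over 1000. Return only the formula."
            )

            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )

            print(response.choices[0].message.content)
            # Expected shape of the answer (verify before pasting into the sheet):
            # =SUMIFS(B2:B500, C2:C500, "EMEA", B2:B500, ">1000")
            ```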

            If I need to create course material, I can shove in bullet points for my key topics and have it expand them into a first draft for me. I actually use this approach for a lot more than course material: it’s great at generating drafts of everything from statements of work and quotes to presentations (Copilot in Microsoft 365).

            I know how to do the job without it, so hallucinations are easily caught and cleaned up. It’s a tool to speed me up, not replace me.