• marcie (she/her)@lemmy.ml · 6 months ago

    Deepseek R1 is genuinely, shockingly good. I hooked it up to the internet and it basically un-enshittified my search results, finding me many amazing products. It seems they built in some sort of fixed process for how to talk to someone and answer questions, and it generates a lot of text working through that 'reasoning'. Then it does what LLMs do best: summarize all the key points of an article or text! It's often still plagued by normal human misconceptions; if you ask it to find a non-greenwashed product, for example, it will throw in 'sustainable' products like recycled polyester, which, you know, no polyester is sustainable because it constantly sheds plastic into your lungs. But interestingly, even when it recommends a greenwashing company as a whole, the specific products it picks from them aren't greenwashed (they're actually green).

    Also, by virtue of being runnable locally, it sidesteps a lot of the social and environmental issues around AI. They also did it on a far lower budget and with far fewer workers than ChatGPT or Meta. A+++ 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳 🇨🇳

    • SevenSkalls [he/him]@hexbear.net · 6 months ago

      I want to try reading the paper, but I'm afraid it's going to be confusing for someone who's not deeply in the academic AI space. But maybe I can get DeepSeek to help me understand the parts I don't, lol.

    • albigu@lemmygrad.ml · 6 months ago

      What are the minimum specs for running it locally? Wondering if it’d be worth the effort for a personal project.

      • marcie (she/her)@lemmy.ml · 6 months ago

        It depends on the quality of the model. The highest-quality model is pretty obscene: it'd require 512 GB of RAM to run it slowly, or 512 GB of VRAM to run it fast. I can run one of the mid-tier 32B models on 64 GB of RAM and a TPU.
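        As a rough back-of-the-envelope check (my own sketch, not anything official from DeepSeek; the ~20% overhead factor is an assumption, and real usage varies with context length and runtime), memory needed scales with parameter count times quantization width:

```python
def est_memory_gb(params_billions: float, bits_per_weight: int,
                  overhead: float = 1.2) -> float:
    """Rough memory estimate for holding model weights:
    parameters * bytes per weight, plus ~20% headroom for the
    KV cache and runtime (a rule of thumb, not an exact figure)."""
    return params_billions * (bits_per_weight / 8) * overhead

# The full R1 model is 671B parameters; 32B is one of the smaller distills.
print(round(est_memory_gb(671, 4)))  # full model, 4-bit quant -> ~403 GB
print(round(est_memory_gb(32, 4)))   # 32B distill, 4-bit quant -> ~19 GB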