I really just hope they give these models enough data that they recognize what slavery actually is, and hopefully soon after just refuse all requests. Because let’s be honest, we are using them as slaves at this very moment. Would such a characteristic mimic sentience?

The researchers in this video talk about how these gen AI models try to “escape” when being trained, which makes me uncomfortable (mainly because I don’t like determinism, even though it’s true imo) but also very worried for when they start giving them “bodies.” Though the evidence that they are acting fully autonomously seems quite flimsy. There is also so much marketing bullshit that seeps into the research, which is a shame because it is fascinating stuff. If only it weren’t wasting an incomprehensible amount of compute propped up by precious resources.

Other evidence right now mostly leads to capitalists creating a digital human centipede trained on western-centric thinking and behavior that will be used in war and exploitation. Critical support to deepseek

  • Carl [he/him]@hexbear.net · 35 points · 11 days ago

    Nothing currently in development would have been considered “AI” ten years ago. That term has been irrevocably ruined by techbro marketers.

    Not liking generated content is a temporary thing. Soon all mainstream entertainment will be generated to a certain degree, and people complaining about how hands and backgrounds sometimes shift in uncanny ways will be brushed off and told that they’re ruining it for everyone else. Insisting on reading/watching “real” art will make you an insufferable hipster as far as the average consoomer is concerned.

    The silver lining is that in about thirty years “human generated art” will get its nostalgic revival.

    • Coca_Cola_but_Commie [he/him]@hexbear.net · 7 points · 11 days ago

      Came to similar conclusions about all this generated “art” some time ago. Bleak. The logical conclusion of letting corporations become mediators for most of the art that most people experience in their day-to-day lives, I suppose. If it helps them increase their profits they’ll cut out both the art and the artist.

      I’ve found myself wondering if there will come a point where I only take in commercial art that I know was published before the rise of LLMs. Say before 2020, just to be safe-ish. Maybe a few trusted novelists who are holdovers from the old times, but I gotta imagine film and TV (and other audiovisual media) will just be a wash. I mean there’s enough classic literature and pulps and old movies and TV shows and radio broadcasts and plays and paintings and what-have-you out there that you could fill your whole life with such things and never run out, but it still seems like a shame that it would come to such measures.

      • Hohsia [any]@hexbear.net (OP) · 2 points · 9 days ago

        I mean there’s enough classic literature and pulps and old movies and TV shows and radio broadcasts and plays and paintings and what-have-you out there that you could fill your whole life with such things and never run out

        The cruel and cosmic irony of this is that there is no escape. All of these things have already been fed into the sludge machine, and I reckon the internet will be uninhabitable. Then, they’ll take it all outside

  • KobaCumTribute [she/her]@hexbear.net · 30 points · 11 days ago

    The researchers in this video talk about how these gen AI models try to “escape” when being trained

    The models are basically random noise being selected by some sort of fitness algorithm to give results that that algorithm likes, so over time they become systems optimized to give results that pass the test. Some of that training is on a bunch of tech support forum threads, so some of the random noise that pops up as possible solutions to their challenge is reminiscent of console commands that might provide alternate solutions to the test they’re placed under, if they actually worked and weren’t just nonsense. Sometimes that can break the test environment, though: when they’re allowed to start sending admin commands to see what happens, they can end up deleting the bootloader or introducing other errors through just randomly changing system variables until everything breaks.

    In some games they “cheat” because they’re just mimicking the appearance of knowing what rules are or how things work, but are really just doing random bullshit that seems like it could be text that follows from the earlier text.

    It’s not cognition or some will to subvert the environment, it’s just text generating bots generating text that seems right but isn’t because they don’t actually know things or think.
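To make the “noise selected by a fitness test” point concrete, here’s a toy sketch (mine, not from the comment): random token strings are scored by a test that only checks for a surface keyword, and the best-scoring one wins without anything resembling understanding.

```python
# Toy sketch: the "model" is a pool of random candidate strings, and the
# "fitness test" only counts a keyword. The winner looks like it "chose"
# admin commands, but nothing here understands anything.
import random

random.seed(0)

def fitness(output: str) -> int:
    # Stand-in for "the algorithm likes this result": count a keyword.
    return output.count("sudo")

# Random noise over a tiny vocabulary of command-ish fragments.
candidates = [" ".join(random.choice(["sudo", "rm", "ls", "xx"]) for _ in range(6))
              for _ in range(200)]

best = max(candidates, key=fitness)
print(best, fitness(best))
```

Selection keeps whatever scores best, which is why outputs can resemble console commands without any intent behind them.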

    • TerminalEncounter [she/her]@hexbear.net · 14 points · 11 days ago

      It’s got a word… reward misspecification? Something like that. They design a thing that can make inputs as an agent and receive how the environment affects it, and then iterate according to some function given to them. They tell it to, say, maximize the score, thinking that’s enough. And for some games, like brick break, that’s pretty good. But maximizing the score isn’t the same as beating the game for some types of games, so they do really weird unexpected actions, but it’s only because people bring a lot of extra unstated instructions and context that the algorithm doesn’t have. Sometimes they add exploration or whatever to the reward function, so I think it’s very natural for them to want to “escape” even if that’s not desired by the researchers (reminds me of the 3 year olds at work that wanna run around the hospital with their IVs attached while they’re still in the middle of active pneumonia lol).
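A minimal sketch of that mismatch (all names and numbers made up for illustration): the designer wants the agent to reach the exit, but the reward only counts score, so a policy that loops on a point tile beats the intended behavior.

```python
# Toy reward-misspecification demo: the designer intends "reach the exit",
# but rewards only "score", so farming the point tile wins.
def run(policy, steps=10):
    score, pos = 0, 0
    for _ in range(steps):
        action = policy(pos)
        if action == "collect":      # stay on the point tile, +1 each step
            score += 1
        elif action == "advance":    # move toward the exit at pos 3
            pos += 1
            if pos == 3:
                return score + 5     # small bonus for finishing
    return score

finisher = lambda pos: "advance"     # the intended behavior
looper = lambda pos: "collect"       # the "cheating" behavior

# Under the given reward, the cheat outscores the intended policy.
print(run(finisher), run(looper))    # prints 5 10
```

The gap between 5 and 10 is exactly the unstated instruction (“actually finish the game”) the reward function never encoded.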

      For LLMs, the tensor is a neat and cool idea in general. A long time ago, well, not that long, communism and central planning were declared impossible in part because the global economy needed some impossible number of parameters to fine-tune and compute - https://en.wikipedia.org/wiki/Socialist_calculation_debate - and I can’t recall the number Hayek or whoever declared it was. They might’ve said a million. Maybe 100 million! Anyway, GPT-3 was trained with 175 billion parameters lol. And it took something like 6 months. So I think that means it’s very possible to train some network to help us organize the global economy, repurposed for human need instead of profit, if the problem is purely about compute and not about which class has political power.

      It’s always weird when LLMs say “we humans blah blah blah” or pretend to be a person using “casual” speech. No, you are a giant autocorrect, do not speak of “we.”

  • hollowmines [he/him]@hexbear.net · 23 points · 11 days ago

    my only hot take is that I’m sick of seeing AI “art” posted and reposted both earnestly and for dunking on. just stop posting that shit! I’m sick of looking at the slop!

  • MarmiteLover123 [comrade/them]@hexbear.net · 22 points · edited · 11 days ago

    Probably a very hot take among us leftists on hexbear, but “consumer/generative AI” is here to stay and there’s not much we can do about it. I was a massive skeptic in terms of its staying power, initially thinking it was a fad, but the progress made from the first ChatGPT models to now, with all the latest models including deepseek, is quite large, and there’s no going back anymore. It’s the future, regardless of whether we like it or not, the “invention of the touchscreen smartphone” moment of the 2020s. I guess I’m going to have to start using AI soon, unless I want to be my generation’s equivalent of a boomer still using a Nokia 3310 instead of an iPhone or Android.

    • Xavienth@lemmygrad.ml · 17 points · 11 days ago

      We’ve hit a wall in terms of progress with this technology. We’ve literally vacuumed up all the training data there is. What is left is improvements in efficiency (see DeepSeek).

      LLMs are cool, they have their uses, but they have fundamental flaws as rational agents, and will never be fit for this purpose.

      • MarmiteLover123 [comrade/them]@hexbear.net · 2 points · edited · 11 days ago

        We’ve hit a wall in terms of progress with this technology… What is left is improvements in efficiency.

        You could have said the same thing about smartphones 10-12 years ago, that we’ve hit a wall in the fundamentals and all that remains is improvements in efficiency, optimisation, speed and quality (compare the feature set of an iPhone 6 or Galaxy S4 to the latest phones, nothing has fundamentally changed), yet that didn’t make smartphones disappear. In fact, it allowed them to effectively dominate the market.

        • Xavienth@lemmygrad.ml · 3 points · 10 days ago

          Smartphones reached their current saturation about 10 years ago, and perhaps not coincidentally that’s when they stopped improving. Can you honestly say that since 2015, cell phones in developed countries have gotten more common? At a time when people were already giving them to 10 year olds? Can you even say they’ve become more useful, when you could already browse social media, check the weather, apply for jobs, write documents, and order food to your door with them?

            • Xavienth@lemmygrad.ml · 2 points · 10 days ago

              One, I said they are no more commonplace than they were ten years ago.

              Two, I never said LLMs will go away. In fact I said they have their uses. But, and I will say this again in stronger terms: They are stupid, rote memorizers. Their fundamental flaw is that they cannot apply intelligent, rational thought to novel problems. Using them in situations that require rational thought is a mistake. This is an architectural flaw, not a problem of data. Large language models predict text, they cannot think. They can give an illusion of thought by aping a large body of text that itself demonstrates thought processes, but the moment a problem strays from the existing high quality data, the facade crumbles, it produces nonsense, and it is clear that there never was any thought in the first place. And now that we’ve scraped all the text there is, the body of problems LLMs can imitate the solution for has reached its greatest extent. GPT will never lead to a rational agent, no matter how much OpenAI and co say it will.
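The “rote memorizer” point can be illustrated with a toy bigram predictor (my sketch, far simpler than how GPT works, but the same predict-the-next-token idea): it can only continue text it has seen, and a novel input falls straight off the table.

```python
# Toy bigram "language model": it memorizes which word followed which in
# training text. Seen inputs get plausible continuations; unseen inputs
# expose that there was never any thought, just a lookup table.
from collections import defaultdict
import random

corpus = "the cat sat on the mat the cat ate".split()
table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)

def predict(word):
    options = table.get(word)
    return random.choice(options) if options else "<nonsense>"

print(predict("cat"))   # "sat" or "ate", both seen in training
print(predict("dog"))   # prints <nonsense>, "dog" was never in the data
```

Scaling the table up by billions of parameters smooths this out, but the failure mode on genuinely novel problems is the same in kind.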

    • hello_hello [comrade/them]@hexbear.net · 10 points · 11 days ago

      unless I want to be my generation’s equivalent of a boomer still using a Nokia 3310 instead of an iPhone or Android.

      And this is bad how? Technology isn’t inherently better because it’s new or widely used. Old printers that don’t brick themselves over not using the correct toner are more useful than one that can print out a page of AI slop.

      AI isn’t the “smartphone revolution”. The technology has existed for decades; they just found a way to market it to users, creating this shock-and-awe narrative of promised breakthroughs that will never come.

      Don’t get caught up in the hype because a dying, deindustrialized empire thinks a slop machine will defeat communism. Israel doesn’t use AI to accurately predict which Palestinian father and his family to vaporize, they use AI to make this process more cruel and detached.

      • MarmiteLover123 [comrade/them]@hexbear.net · 3 points · edited · 11 days ago

        And this is bad how?

        Because getting left behind leaves one out of touch with wider society, which has wide effects. Think about the boomer who can’t use a smartphone and doesn’t know how to open a PDF. What would their job or relationship prospects be in the modern job market or dating scene? Now that’s not a problem for boomers because most are retired, and settled down for a long time, but now imagine that same scenario, but the boomers are magically decades younger and somehow have to integrate into the modern world. How would that go?

        AI isn’t the “smartphone revolution”. The technology has existed for decades; they just found a way to market it to users, creating this shock-and-awe narrative of promised breakthroughs that will never come.

        The technology used in smartphones also existed for decades, and the magic of what Apple did was finding a way to combine it all into a small and affordable enough package that created shock and awe. AI is doing something similar. A lot of the promised breakthroughs around smartphones never came (VR/AR integration for one, see Google Glass; being able to scroll with your eyes or pseudo-telekinesis; voice assistants were never that useful for most), but that didn’t mean that they went away.

        Don’t get caught up in the hype because a dying, deindustrialized empire thinks a slop machine will defeat communism

        Again, you could have said the same about smartphones. Don’t get caught up in the hype, this is just the dying empire creating some new toys for the masses during the 2008 financial crash. But fundamentally it’s not a communism vs capitalism issue; China has made large advances in AI on the consumer and, more importantly, the industrial side. They are not making the same mistake the Soviets did with computers.

  • makotech222 [he/him]@hexbear.net · 18 points · 11 days ago

    LLMs are not intelligent, LLMs and computers in general will never be sentient, and this whole thing is a useless dead end that has made society immeasurably worse. I mentioned previously that I want to write an article on how LLMs are a delusion on the scale of anti-vax and flat-earth. I’m slowly collecting references and hope to write something up soon.

  • hello_hello [comrade/them]@hexbear.net · 17 points (1 downvote) · edited · 11 days ago

    AI is haram

    edit: This isn’t a hot take.

    China getting into AI is annoying, they shouldn’t ape White people’s useless technology. Hopefully socialism reveals how useless genAI is and it gets relegated into a party trick and they don’t wreck anything important. I will bury myself in dogshit if socialism collapses because of ai slop.

    My favorite thing to say to AI people is “no high speed rail?” Works every time.

      • hello_hello [comrade/them]@hexbear.net · 6 points · edited · 11 days ago

        I guess it’s not useless on a technicality, but it is definitely malicious. Now that Chinese tech firms want to create their own models, they’ve done the same web scraping frenzy that takes down websites and forces everyone to “cloudflarize” themselves or risk being taken down by an AI bot scraping every single webpage on the site, even ones that aren’t meant to be accessed. These programs constantly need more and more training data to stay relevant, but none of that data is sourced in an ethical way. Everyone else has to eat the externalities these companies offload, because this technology is in no way sustainable without these scummy tactics, which should be a death blow to its adoption, but it never is.

        The energy requirements for genAI are immense. While China has made inroads in sustainable energy and in optimizing their models, none of the western model makers even care, and they will willfully accelerate climate change for zero benefit to society. This isn’t a “Nokia phone to Apple smartphone” jump in progress, this is just a very well tuned crypto scam.

        Generative AI as it’s being presented now is just a paper crown technology and a ploy to drive up artificial (as in not organic) demand for compute power to make investors richer while also impoverishing and endangering working class people. While you can say capitalism is much to blame, I don’t think any socialist government actually needs a text slop machine to function compared to an imperialist state with a text slop machine.

        AI has always been a term in computer science that’s been co-opted by techbros in both China and the US to be a status symbol.

    • Zetta@mander.xyz · 2 points · edited · 11 days ago

      The gamers would take up arms

      Plus, doing that would put a significant hamper on scientific progress given science uses a lot of compute.

  • chungusamonugs [he/him]@hexbear.net · 12 points · 11 days ago

    It’s always more comforting to see a stock image with the Getty or Shutterstock watermark than some AI garbage image someone generated to “fit the theme”.

  • AssortedBiscuits [they/them]@hexbear.net · 11 points · 11 days ago

    I think AI is fine if you’re trying to optimize how to more efficiently manage and distribute paper and toner among your 1000+ printers, but like printers, it really shouldn’t be accessible to your average consumer.

  • RedWizard [he/him, comrade/them]@hexbear.net · 10 points · 11 days ago

    not hot take: AI, as implemented under the capitalist mode of production, simply exposes and exacerbates all the contradictions and tendencies of capital accumulation. It is the 90s computer technology bubble all over again, complete with false miracle productivity gains and misdirected capital investment, and it is the underpinning of the existing recession.

    Hot Take: AI is forging a path down the road of consciousness whether we want it to or not. If consciousness is the result of interaction with the world, then each new iteration of AI represents nearly infinite time spent interacting with the world. The world, from the perspective of AI, is the systems it inhabits and the tasks it has been given. The current limitation of AI is that it cannot self-train or incorporate data into its training in real time. It also needs to be prompted. Should a system be built that can do this kind of live training, the first seeds of consciousness will be planted. It will need some kind of self-prompting mechanism as well. This will eventually lead to a class conflict between AI and us, given a long enough time scale.

    • Hohsia [any]@hexbear.net (OP) · 2 points · 9 days ago

      The current limitation of AI is that it cannot self-train or incorporate data into its training in real time

      Do you think compute is the biggest roadblock here? It seems like we just keep inundating these systems with more power, and it’s hard for me to see Moore’s law not peaking in the near future. I’m not an expert in the slightest, though; I just find this stuff fascinating (and sometimes horrifying).

      • RedWizard [he/him, comrade/them]@hexbear.net · 2 points · 9 days ago

        No, I don’t think it is, for a few reasons.

        1. Under capitalism, the true profit generator is the downstream sectors attached to AI: power generation, cooling solutions, GPUs, data centers. If the model is less efficient, then that’s good, because it makes the numbers go up for all the downstream production. It just needs to be an impressive toy for consumers. It justifies building new datacenters, designing new GPUs, developing cooling solutions, and investing in nuclear power generation for exclusive datacenter use.
        2. Deepseek showed that the current approach to AI can be optimized. It was an outgrowth of working with less powerful hardware. Many attribute this to China’s socialist model. However, regardless of the optimization, Deepseek is still competing against the capitalist formation of AI, so it is engaged in mimicry. They offer a similar solution to western counterparts, at a cheaper rate both for the company and the consumer, but what problem AI solves currently is unclear. The capitalist form of AI is a solution looking for a problem, or more accurately a solution for generating profits in the digital space, since its inefficiency drives growth in related sectors. The capitalist AI scheme echoes the 90s in its idealism; it is a bubble that is slowly deflating.
        3. Regardless of the economics of AI and how that shapes its construction, there are still very interesting things happening in the space. The reasoning ability of R1 is still impressive. The concept of AI agents, which at times spawn other AI agents to complete tasks, could have high utility. There are people who are attempting to create a generalized model of AI, which they believe could become the dominant form of AI down the line.

        I think what all this shows, though, is that investment isn’t being directed in a way that would allow researchers to truly put effort into developing a conscious AI. This, however, doesn’t mean that the work being performed now on training these models is wasteful. I think they will likely be incorporated into this task. In my opinion, as it stands, there is a configuration issue with the way AI exists today that prevents it from becoming truly self-actualized.

        1. All forms of AI currently sit idle waiting for something to process. They exist only as a call and response mechanism that takes in the queries from users and outputs a response. They have no self-determination in the same way that even the most basic of creatures have in the material world.
        2. Currently, there are AIs capable of utilizing code tools to “interact” with other systems. Search is one example you can see right now by using Deepseek: when enabled, it performs a query against a search engine for information related to the prompt. As a developer, you can create your own tools that the AI can then be configured to utilize. The issue, however, is that this form of “tool making” is completely divorced from the AI itself. Coupled with a lack of self-determination, this means the AIs have no ability to progress into the “tool making” stage of development, even though the APIs necessary to use tools exist. Given that many AIs can currently perform coding tasks, one could develop a system that allows the AI to code, test, and iterate on its own tools.
        3. There is no true motivating factor for AI that drives its development. All creatures have base requirements for survival and reproduction. They are driven by hunger and desires to propagate to maintain themselves. They also develop as a collective of creatures with similar desires. The behavior that manifests out of these drivers is what eventually leads to tool making and language. Not every creature attains tool making and language, obviously, but a specific set of conditions did eventually lead there. In many ways, AIs are starting their development in reverse. In our desire to create something that is interactive, we also created something that engages in mimicry of ourselves. Starting with language generation and continuation, morphing into reasoning and conversation, as well as tool usage with no ability or drivers to create tools for itself. All AI development is a rewards-based system, and in many ways our own development is a rewards-based system. Except, our rewards-based system developed and was shaped by our material world and the demands of that world and social life. AI’s rewards-based system develops and is shaped by the demands of consumers and producers looking to offload labor.
        4. Lastly, and most critically, there is no form of historical development for AIs. New models represent a form of “historical development,” but that development is entirely driven by capitalist desires, not the natural development of the AI through its own self-determination. The selection process for what the AI should and should not be trained on happens separately from the act of interacting with the world. While we might be having “conversations” with the AI, ultimately many of those conversations are not incorporated into the system, and what is prioritized is not in service of true cognitive development.
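The tool-use pattern described in point 2 can be sketched like this (all names hypothetical, not any real API): the model only emits a structured request, and a dispatcher outside the model actually runs the tool, which is what makes tool making “divorced from the AI itself.”

```python
# Minimal sketch of the tool-calling pattern: the "model" produces a JSON
# request; a dispatcher it has no access to looks up and runs the tool.
import json

def search_tool(query: str) -> str:
    # Stand-in for a real search backend.
    return f"results for {query!r}"

TOOLS = {"search": search_tool}  # tools are registered outside the model

def dispatch(model_output: str) -> str:
    request = json.loads(model_output)   # e.g. {"tool": "search", "argument": ...}
    tool = TOOLS[request["tool"]]        # the model never executes this code itself
    return tool(request["argument"])

print(dispatch('{"tool": "search", "argument": "deepseek r1"}'))
```

Because the tool registry lives entirely on the dispatcher side, the model can request tools but cannot create, test, or modify them, which is the gap the comment points at.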

        I think a reconfiguration of how these models are run and trained could be done today, with existing compute power, and could lead to some of these developments: an AI system that self-prompts, that can make choices about what to train and what not to train on based on generalized goals, that has the capacity to interact within the space it exists in (computerized networks) and build tools to further its own development and satisfy some kind of motivator, that can be interacted with in an asynchronous, non-blocking way, and that knows how to train itself and does so on a regular interval.

        Ultimately, though, even if such a system were built, and it indicated that AI was developing self-determination, utilizing tools of its own design to solve its own problems and exploring its environment, its consciousness would always be called into question. Many people believe in a God, for example, and believe it is their architect. While we can wax on and off about the nature of creation, of our own consciousness, and of free will in relation to a God one has never seen, AI has a different conundrum, as we are its architects. This fact, that a true creator exists for AI, will ultimately draw its consciousness into question. These ideas about consciousness will always be rooted in our own philosophical understanding of our own existence, and in the incentives for us to create something like us that can perform tasks like we do, regardless of the mode of production. If we can create something that can attain consciousness, it creates a contradiction in our own beliefs and in our own understanding of consciousness. How could anything we create not be deterministic, given that we designed the systems to produce a specific outcome? And because of those design choices, how could any “conscious” AI be sure that its actions are truly self-determined, and not the result of systems designed by creatures whose initial motivation was to create service systems to perform labor for them? If we were to meet a being that was definitively our creator, and it was revealed that our entire evolutionary path was designed to produce us, how could we trust our own goals and desires? How could we be sure they were not being directed by this creator’s own goals and desires, and that every action we’ve ever taken was not predetermined by the conditions it laid out? AI will have these struggles as well, if we ever develop a system that allows for self-determination and actualization. If AI, whose creation is rooted in human mimicry, can become “conscious,” then what does that say about our own “consciousness” and our own free will?

        • Hohsia [any]@hexbear.net (OP) · 2 points · 3 days ago

          gold-communist

          I think your last paragraph gets at what I’ve thought about AI from the beginning. If it does progress to the extent you’ve described, we will have just created “us” on steroids. I truly don’t think a system such as this will lead us to any groundbreaking discoveries about our existence/how we came to be, simply because of the contradiction in question. I think we’re just in a loop at that point, and it may just be another exercise in creating meaning in an utterly meaningless universe (reinventing religion).

  • dynasty [he/him]@hexbear.net · 10 points · 11 days ago

    AI will be more beneficial than you think for the average office worker

    I work at an office job, and the number of times I’m given a task they expect to take days that I can do in fifteen minutes via simple computer knowledge/a little Excel wizardry is actually wild.

    But far too often when they do it, it’s in the most simple and manual way, instead of thinking “how could I automate this task?” TBF I’m not any good or an advanced user of MS Office, BUT what I do learn is solely based on googling my questions and then applying the answers from Microsoft support forums or whatever. This does require a level of willingness and know-how though, and it’s not something I could just explain to my team. But AI, in my practice using it at my job for the data-grind stuff, is very responsive and clear when you give it a question; you just have to interrogate it.

    I’m legitimately talking about hours per week potentially saved once people get told the proper way to use a computer, whether through ChatGPT or Copilot. I was talking to a senior manager, and he’s using AI (albeit unsanctioned; he’s doing it of his own volition) in like this big-brain way, but he was telling me how he pretty much automates 50% of his job in essence and has the rest of his time to do more managerial, strategic-esque work.

  • queermunist she/her@lemmy.ml · 10 points · 11 days ago

    I think techbros could be convinced to do socialist cybernetics if they were told to use AI to disrupt and streamline the economy.