• 0 Posts
  • 41 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • As always, the problem is our economic system that has funneled every gain and advance to the benefit of the few. The speed of this change will make it impossible to ignore the need for a new system. If it wasn’t for AI, we would just boil the frog like always. But let’s remember the real issue.

    If a free food generating machine is seen as evil for taking jobs, the free food machine wouldn’t be the issue. Stop protesting AI, start protesting affluent society. We would still be suffering under them even if we had destroyed the loom.


  • The main issue though is the economic system, not the technology.

    My hope is that it shakes things up fast enough that they can’t boil the frog, and something actually changes.

    Having capable AI is a more blatantly valid excuse to demand a change in economic balance and redistribution. The only alternative would be to destroy all technology and return to monkey. I’d rather we just fix the system, so that technological advancements stop seeming negative simply because the wealthy have hoarded the gains of every new technology for the past handful of decades.

    Such power is discreetly weaponized through propaganda, influence, and economic reorganizing to ensure the equilibrium holds until the world is burned to ash, in sacrifice to the lifestyle of the confidently selfish.

    I mean, we could have just rejected the loom. I don’t think we’d actually be better off, but I believe some of the technological gain should have been less hoardable by the existing elite. Almost like they used wealth to prevent any gains from slipping away to the poor. Fixing the issue before it got this bad was the proper answer. Now people don’t even want to consider that option, or they say it’s too difficult so we should just destroy the loom.

    There is a Markov blanket around the perpetuating lifestyle of modern aristocrats, one that has obviously survived every perturbation. Every gain we have made as a society has made that reality more entrenched, entirely because of where new power gets distributed. People are afraid of AI turning into a paperclip maximizer, but that’s already what happened to our abstracted social reality. Maximums being maximized and minimums being minimized in a chaotic system of billions of people leads to the inevitable accumulation of power and wealth wherever it has already been gathered. Unless we can dissolve the political and social barriers maintaining this trend, we will be stuck with our suffering whether or not we develop new technology.
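    The “accumulation wherever it has already been gathered” claim can be illustrated with a toy sketch: the “yard-sale” exchange model from econophysics. This is not a model of any real economy, and all parameters here (100 agents, 20,000 trades, 10% stakes) are arbitrary choices for illustration; the point is only that even perfectly fair coin-flip trades condense wealth.

```python
import random

def gini(wealth):
    # Gini coefficient: 0 = perfect equality, approaching 1 = total concentration
    s = sorted(wealth)
    n = len(s)
    cum = sum((i + 1) * x for i, x in enumerate(s))
    return (2 * cum) / (n * sum(s)) - (n + 1) / n

def yard_sale(n_agents=100, rounds=20000, stake=0.1, seed=0):
    # Everyone starts with equal wealth; each round, two random agents
    # bet a fraction of the *poorer* party's wealth on a fair coin flip.
    rng = random.Random(seed)
    w = [1.0] * n_agents
    for _ in range(rounds):
        i, j = rng.sample(range(n_agents), 2)
        bet = stake * min(w[i], w[j])
        if rng.random() < 0.5:
            w[i] += bet
            w[j] -= bet
        else:
            w[i] -= bet
            w[j] += bet
    return w

w = yard_sale()
print("gini before:", round(gini([1.0] * 100), 3))  # 0.0 -- perfect equality
print("gini after: ", round(gini(w), 3))
```

    No one cheats in this toy, yet inequality climbs anyway, because whoever is ahead can absorb losses that cripple whoever is behind.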

    Although it doesn’t really matter where you are or what system you’re in right now. Odds are there is a set of rich assholes working as hard as possible to keep you from any piece of the pie that would destabilize the status quo.

    I’m hoping AI is drastic enough that the actual problem isn’t ignored.




  • The real solution is to solve the power imbalance. What percentage of creative media is controlled by the already obscenely wealthy? We don’t want “non-infringing proprietary models” to be the only legal models, because then the only ones with access to such powerful tools are the ones who can afford the Adobe art tax.

    We need to hold our governments accountable for holding the oligarchs accountable for tilting the balance of power to an unethical degree. The common people have received no benefit from the productivity gains of technological improvement in the past 50 years, and this will only get worse until it is fixed in drastic fashion.

    The common people need a GUARANTEE to benefit from productivity increases. Unions are also good, but nothing is being done about unethical anti-union campaigning by those who already hold imbalanced amounts of power and influence.

    Yadda yadda. Going after open source models ain’t gonna help. I’m fine pushing for special forgiveness for open models, but don’t just put the ball into the hands of the people who can afford proprietary datasets.


  • give us a way to fix the issue without relying on the idiots at the top being decent human beings.

    if you could fix that issue, we wouldn’t have so much of a problem.

    i’d expect AI to help through information processing for research and engineering. current AI tools are already useful to many as co-pilot tools. not everyone is creative enough to get use out of AI, but we are moving towards being able to dictate and gesture in natural language to do things that would otherwise have taken a lot more time. it’s also valuable for certain efforts in optimization and engineering. does everyone hate alphafold now too?

    i think a lot of the AI hate right now comes from the fact that it takes thought and creative use to get the most out of the available tools. as we all learned, if it isn’t already “AGI” it’s 100% useless for everything forever.



  • bitcoin never had a use other than “will become valuable?”

    many (myself included) believe this technology may be the only one developing fast enough to actually help with the climate crisis.

    optimizing research and academia as well as environmental work through information processing. people are excitedly talking about automated proof-checking and context finders that can sift through hundreds of papers while you make your coffee. this stuff is good for science, and science is good for environmentalism. maybe go after the politicians and companies that have no chance of being a benefit in the struggle against environmental collapse instead.

    why do people keep relating it to bitcoin? because it uses GPUs? that’s literally the only connection.

    somehow people have associated it with crypto and NFTs as if they are even mildly related. perhaps because those things are easier to hate, so why not associate them.


  • hey, that’s a better critique or commentary than in the onion article.

    while i don’t doubt people are trying to shove AI into a lot of places where it’s not optimal yet (which is entirely fair and reasonable to point out), i don’t think that’s a fair reason to poo-poo any use of, or positivity about, AI in any context.

    rather, it’s become a really big fad to hate on AI and insult anyone who uses it. i mean, the technology is still young, but the stuff it’s already doing was “impossible” and “never going to happen” just a few years ago. now we are developing things like text-to-3d, which makes me excited for a future where you can dictate design and animation for entire animated experiences/movies.

    independent creatives will have a blast with it. salty onion article writer will be angrily yelling at his computer.


  • the sentiment being any positive opinion on AI? yes, like i said, i’d forgive it if it were funny or clever.

    it is literally just “people who like this thing are bad and dumb and useless and the world hates them.”

    really top quality satire. they sure did show how useless AI is and how dumb the fans are.

    maybe they could at least target the failure use-cases? some bad business AI ideas that are doomed to fail?

    nope, just reddit comment quality insults.


  • Salty writer fears being made obsolete by beep boop. Insults every AI enthusiast, as well as successful engineers and scientists.

    i hate how popular it’s become to hate on AI amongst people who know little to nothing about it.

    I’d forgive it if it were clever or funny, but this is really just obviously salty ad hominem strawmanning by someone who doesn’t understand or appreciate the technology.

    Guess what fam, we are in the copilot tool phase. You can learn how to use these new tools AND learn how to be creative. Maybe then you could ask it to critique the humour in your satire article. Perhaps it would be more clever than “people who like this thing I don’t like are dumb, and can’t be creative or better than me in any way, because I’m cooler than AI will ever be!!! You nerds are stooooopid!!”

    Because that’s how it read.


  • I conflate these things because they come from the same intentional source. I associate the copyright-chasing lawyers with the brands that own them; it is just a more generalized example.

    Also, an intern who can give you a song’s lyrics was trained on that data. Any sufficiently advanced future system is largely the same, unless it is just accessing a database or index, like web searching.

    Copyright itself is already a terrible mess that largely serves brands who can afford lawyers to harass or contest infringements. This is especially apparent after companies like Disney have all but murdered the public domain as a concept. See the Mickey Mouse Protection Act, as well as other related legislation.

    This snowballs into an economy where the Disney company, and similarly benefited brands, can hold on to ancient copyrights and use their standing value to own and control the development and markets of new intellectual properties.

    Now, a neural net trained on copyrighted material can reference that memory at least as accurately as an intern pulling from memory, unless it is accessing a database to pull the information. To me, suing on that basis ultimately follows logic that would dictate we have copyrighted material removed from our own stochastic memory, since we would have established that high-dimensional informational storage is a form of copyright infringement whenever anyone instigates the effort to draw on that information.

    Ultimately, I believe our current system of copyright is entirely incompatible with future technologies, and could lead to some scary arguments and actions from the overbearing oligarchy. To argue in favour of these actions is to argue never to let artificial intelligence learn as humans do. Given our need for this technology to survive the near future as a species, or at least to minimize excessive human suffering, I think the ultimate cost of pandering to these companies may be indescribably horrid.


  • Music publishers, sue-happy in the face of any new technological development? You don’t say.

    If an intern gives you some song lyrics on demand, do they sue the parents?

    Do we develop all future A.I. technology only when it can completely eschew copyrighted material from its comprehension?

    “I am sorry, I’m not allowed to refer to the brand name you are brandishing. Please buy our brand allowance package #35 for any action or communication regarding this brand content.”

    I dream of a future when we think of the benefit of humanity over the maintenance of our owners’ authoritarian control.



  • So you also equate French nobility with nazi victims?

    Nobody is being called out for their race, their sexual preferences, or the body they were given at birth. The people being targeted here are defined only as the ones who have plundered the world to fill their pockets regardless of the cost. It is the very act of power-hungry and despotic rule that leads to this call. Remember that every family member and friend we lose to the meat grinder could have lived happily if not for the ones who deem us unworthy of human treatment. Every home burnt to the ground as the fires worsen, because they refuse to let environmental care affect their overflowing coffers.

    How many more people should die and suffer for their wants?

    This is not a comparable situation, you silly person.


  • Can we talk more about deceptive patterns? I had my computer go down recently, and I was reminded just how bad mobile is.

    Pick a game at random and I’ll show you a dozen direct and intentional manipulations designed to get you into the habits and environment they want, for optimal resource extraction. I miss when games were an art form rather than a professionally designed set of habit-adjusting manipulations that can annoy you into the right mindset to spend money. Not to mention the advertisements, which range from absolute fabrication to actual scam.




  • Might have to edit this after I’ve actually slept.

    Human emotion and human-style intelligence are not the whole of the realm of emotion and intelligence. I define intelligence and sentience on different scales: I consider intelligence the extent of capable utility and function, and emotion just a different set of utilities and functions within a larger intelligent system. Human-style intelligence requires human-style emotion. I consider GPT an intelligence, a calculator an intelligence, and a stomach an intelligence. I believe intelligence can be preconscious or unconscious; that is, a part of cognition independent from a functional system complex enough for emergent qualia and sentience. Emotions are one part of this system, exclusive to adaptation within the historic human evolutionary environment. I think you might be underestimating the alien nature of abstract intelligences.

    I’m not sure why you are so confident in this statement. You still haven’t given any actual reason for this belief. You are presenting it as consensus, so there should be a very clear reason why no successful, considerably intelligent function exists without human-style emotion.

    You have also not defined your interpretation of what intelligence is; you’ve only denied that any function untied to human emotion could be an intelligent system.

    If we had a system that could flawlessly complete François Chollet’s Abstraction and Reasoning Corpus (ARC), would you suggest it is connected to specifically human emotional traits due to its success? Or is that still not intelligence if it still lacks emotion?

    You said neural function is not intelligence. But would you also exclude non-neural informational systems, such as collectives of cooperating cells?

    Are you suggesting the real-time ability to preserve contextual information is tied to emotion? Sense interpretation? Spatial mapping with attention? You have me at a loss.

    Even though your stomach cells interacting is an advanced function, it’s completely devoid of any intelligent behaviour? Then shouldn’t the cells fail to cooperate and dissolve into a non-functioning system? Again, are we only including higher introspective cognitive function? Though you can have emotionally reactive systems without that. At what evolutionary stage do you switch from an environmental reaction to an intelligent system? The moment you start calling it emotion? Qualia?

    I’m lacking the entire basis of your conviction. You still have not made any reference to any aspect of neuroscience, psychology, or even philosophy that explains your reasoning. I’ve seen the opinion out there, but not in strict form or in consensus as you seem to suggest.

    You still have not shown why any functional system capable of addressing complex tasks is distinct from intelligence without human-style emotion. Do you not believe in swarm intelligence? Or, again, do you define intelligence by fully conscious, sentient, and emotional experience? At that point you’re just defining intelligence as emotional experience, completely independent from the ability to solve complex problems, complete tasks, or make decisions whose outcomes reduce prediction error. At which point we could have completely unintelligent robots capable of doing science and completing complex tasks beyond human capability.

    At which point, I see no use in your interpretation of intelligence.


  • What aspect of intelligence? The calculative intelligence in a calculator? The basic environmental response we see in amoeba? Are you saying that every single piece of evidence shows a causal relationship between every neuronal function and our exact human emotional experience? Are you suggesting gpt has emotions because it is capable of certain intelligent tasks? Are you specifically tying emotion to abstraction and reasoning beyond gpt?

    I’ve not seen any evidence suggesting what you are suggesting, and I do not understand what you are referencing or how you are defining the causal relationship between intelligence and emotion.

    I also did not say that the system will have nothing resembling the abstract notion of emotion; I’m just noting the specific reasons human emotions developed as they have, and I would consider individual emotions a unique form of intelligence serving its own function.

    There is no reason to assume the anthropomorphic emotional inclinations that you are assuming. I also do not agree with your assertions of consensus that all intelligent function is tied specifically to the human emotional experience.

    TLDR: what?