i really wonder to what degree it could be turned into a consciousness. like ostensibly all the tiny brains are hooked together, so it's possible that could cause some degree of communication between neurons, and in a datacenter that would be at least a couple of brains' worth of neurons.
inb4 pro-life jokes
deleted by creator
waiting for us to create a sentient ‘ai’ that is actually just a megaintelligence of 1000 interconnected and distributed human brains liberating themselves from an amazon datacenter
I mean, this has always been the ethical pitfall of real AI, meaty or otherwise. You're bringing forth an intelligent being into existence without its consent. At least when we bring forth an intelligent being into existence through natural means (giving birth), we have a general understanding of that being's emotional and social needs and the means of fulfilling them, flawed as that understanding may be for animals not closely related to humans. But with AI, we have absolutely no clue about their social and emotional needs, or any other subjective needs they might have, because their form of intelligence is completely different from ours.
The real drive towards AI is to create slaves that are both smart enough to perform complex tasks and obedient enough not to put two and two together and rebel against their human taskmasters. This particular experiment is a more mask-off version of what other techbros are trying to accomplish with silicon. If there were a real way to create WH40k-style servitors and network their brains together to perform complex calculations, techbros would probably not even bother with AI. They would just convert prisoners into servitors and network them together to mine crypto or something.
deleted by creator
deleted by creator
Any animal can feel pain and hunger and suffering. That’s sentience.
deleted by creator
Sentience does not guarantee “consciousness.” Parrots, ravens, and dolphins are (probably) not humans. Humans are “conscious” due to the ways we interact with the world. If you grow brain tissue and deprive it of the human experience then it shouldn’t end up a human. But I get the precaution.
Consciousness is merely what comes after the transformation of quantity into quality. There's a continuity in the development of the system of sentience, and this remains stable only up to the point of discontinuity, which marks its transition from the quantity of sentience into a new quality, i.e. sapience.
I doubt they’ll grow it in a lab with little pieces of brain tissue, but there is a point where that happens.
Do you think dolphins don’t have consciousness?
I meant sapience I guess
are you saying babies aren’t conscious
They’re sentient. So obviously don’t kill them, but most people would disagree with that logic…
i think most people now think that consciousness doesn’t require being able to communicate in language. hence all the interest in Genie and other feral children, animals looking at themselves in the mirror, etc.
This is just nonsense and I don’t know where you’re even getting it from. Can you produce an example of a human that displays sentience but not consciousness without some serious brain injury or developmental defect plausibly causing it? If not, what immense epistemic load is being accounted for with such a huge assumption?
If this is just more bad science about Genie or one of those, I swear to God . . .
I think I'll just shut up & take Mao's advice tbh