Consciousness is a product of self-referential data gathering. Data that looks upon itself, that is sapience. Data that looks upon data, that is consciousness. Animals have this. Plants don’t.
Why should that be my line for life I care about, though? How far is a chicken from a fish, from a worm? There is some line for animate life past which you can’t really argue for any form of consciousness. And, again, my discriminator isn’t even necessarily consciousness.
Mine is suffering. A thing needs a neural network to suffer.
The issue with that is distinguishing how humans interpret suffering from “emulated” suffering. Maybe a lobster has instinctual or chemical reactions to things, but does it actually interpret the suffering, or does it just react to the nerves firing? We can’t really know for certain without communication. And even if we do communicate, what if it just mimics human suffering the way a deep learning NN could? ChatGPT cannot suffer, but it can convince some inexperienced people that it can.
But it is also entirely fair to say: even if we don’t entirely understand whether a dog is actually suffering, it looks like it is and acts like it is, so I will just be cautious and assume it is rather than cause undue harm.