The issue with that is distinguishing how humans interpret suffering from "emulated" suffering. A lobster, for instance, may have instinctual or chemical reactions to stimuli, but does it actually interpret the suffering, or just react to nerves firing? We can't really know without communication. And even with communication, what if it just mimics human suffering, the way a deep-learning neural network could? ChatGPT cannot suffer, but it can convince some inexperienced people that it can.
But it is also entirely fair to say: even if we don't fully know whether a dog is actually suffering, it looks and acts like it is, so I will err on the side of caution and assume it is, to avoid causing undue harm.