How exactly do we come to the conclusion that a system feels pain? Is it because it told us so?
In very cold weather, my car tells me the tires need air. The warning, like the one for an oil change, is bright yellow and flashes when I start the car. Is my car in pain? Is it unethical to drive my car when it is cold, since I'm hurting it? Would the answer change if, in addition to a warning light, a voice were to say, "Your tires are low and it hurts me"?
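For illustration only, a minimal Python sketch (hypothetical names and threshold, not any real vehicle's logic) of what such a voice would amount to: the "it hurts me" is just a string returned by a pressure check, with nothing behind it.

    # Hypothetical illustration: a scripted "pain" report is just a conditional string.
    MIN_PRESSURE_PSI = 30  # assumed threshold, not from any real vehicle spec

    def tire_warning(pressure_psi):
        """Return a warning message if pressure is low; there is no inner state here."""
        if pressure_psi < MIN_PRESSURE_PSI:
            return "Your tires are low and it hurts me"
        return None

    print(tire_warning(27.5))  # prints the "pain" message

The question, of course, is whether anything a system says about its own states is different in kind from this.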
In my opinion, we have no ethical obligation to any non-living system. I think we certainly have a stronger ethical duty of care with respect to the shared resources we consume than we do to any AI system powered by those resources.
There's a fun short story about this topic in a book called "The Mind's I" [1] edited by Hofstadter and Dennett. Unfortunately the Internet Archive copy is locked and the cat is sitting on my lap so I can't grab my copy right now, but I think it's possibly "The Soul of Mark III Beast" by Terrel Miedaner.
[1] https://en.wikipedia.org/wiki/The_Mind%27s_I
Update: Found a PDF: http://people.whitman.edu/~herbrawt/classes/339/Mark.pdf
Cool I will check it out.
Be warned that it's not deep philosophy, just a bit of fun!
Edit: Reading it again now, I think the story stands up well, aside from its obvious 1970s-isms. If the story has any philosophical value today, it's that pretty soon we will actually build machines that behave like this (if it hasn't been done already). And some of their owners will definitely treat them as sentient, even though they obviously are not. And at some point, as the machines get better and better at this mimicry, there'll be people demanding that laws be passed to protect them.
The best short stories are just that!
"Pain" is a poor word to use in this context. Pain is what you feel when you stub your toe. AI does not experience that.
I think the question relates to various ideas of mental distress. You might get better answers asking if AI feels rejection, loss, embarrassment etc. Personally I still think the answer is no.
"You might get better answers asking if AI feels rejection, loss, embarrassment etc."
In what sense would its answers constitute evidence of the actual state of things?
Sorry, I meant "If AIs can feel x, what is our responsibility?" where x isn't pain.
As I say, I think the answer is still no to any of it.
None. Not everything that "can feel pain" is our responsibility.
What's our responsibility and what isn't is based on made-up morals, which in turn rest on evolutionary benefits and dangers combined with random historical developments.
https://archive.ph/zpY3d
Maybe let's not program them to feel pain then? </bigbrain>
Humans already subject other humans and animals to so much pain and suffering; why would they care about AI?
I don't think pain can be felt without the ability to have emotions, and no emotions are possible without personality (that "I" feeling). Until AIs can feel real emotions and have a personality, they won't ever be able to feel pain.
I guess this raises the question of a "Turing test for pain".