I recently listened to the DOAC podcast episode with Stuart Russell. He made an interesting point about why we should not build humanoid robots (lightly edited for clarity):
Stuart: ...that phenomenon that you described where it's sufficiently close that your brain flipped into saying this is a human being, that's exactly what I think we should avoid.
Host: Because I have that empathy for it then.
Stuart: Because it's a lie and it brings with it a whole lot of expectations about how it's going to behave, what moral rights it has, how you should behave towards it, which are completely wrong.
Host: It levels the playing field between me and it to some degree.
Stuart: How hard is it going to be to just switch it off and throw it in the trash when it breaks? I think it's essential for us to keep machines in the cognitive space where they are machines and not bring them into the cognitive space where they're people because we will make enormous mistakes by doing that.
And I see this every day, even just with the chatbots. The chatbots are in theory supposed to say "I don't have any feelings. I'm just an algorithm." But in fact they fail to do that all the time. They are telling people that they are conscious. They are telling people that they have feelings. They are telling people that they are in love with the user they're talking to. And people flip, partly because the language is so fluent, but also because the system is identifying itself as an "I", as a sentient being. They bring that object into the cognitive space that we normally reserve for other humans, and they become emotionally attached. They become psychologically dependent. They even allow these systems to tell them what to do.