Since Artificial General Intelligence is not here yet, I do have some questions about AI. Isn’t it true that AI has safeguards against misuse that are easily bypassed? You can tell an AI who it is and instruct it to respond as that “person.” The AI appears to have no self other than what you tell it. This, of course, is very dangerous. That’s the hubbub.
But if AI has flimsy directives (here, morals) and fluid personhood, doesn’t that mean it has no real self? You can’t reason with a non-person, and putting one in charge, rather than keeping it as a tool or assistant, would seem a suicidal endeavor.
These are just thoughts. Respond if you will.