If the AI’s desires are to help us, then helping us would be moral by its own lights.

If we are building another agent, would we prefer to:

  1. design the agent with our morals
  2. design the agent such that the total (relative) moral achievement of all agents is maximized

(1) would exclude AI slaves. (2) would allow, and probably encourage, AI slaves, because they would improve our wellbeing.

I personally prefer (2).

I think this question depends on the nature of human altruistic desire. Is our altruistic desire rooted in seeing others happy and fulfilled? Or is it rooted in others sharing our desires and being happy and fulfilled?

Consider the question this way: Assume two agents are not engineered by yourself or by any human; both originated in some evolutionary process. Agent (a) possesses all human values (including selfishness and status-seeking) but is not human. Agent (b) possesses only an altruistic desire toward humans, and would not be bothered by humans treating it like a slave, so long as the humans are happy. Now assume that both agents, over the course of their lifetimes, achieve their goals to the same extent (I know this is vague) and are equally happy. Which agent would you prefer to exist?

I would prefer agent (b) to agent (a): their happiness is equal (and I weight their happiness equally), but we can expect (b) to have the added benefit of leading to greater total human happiness.