When we finally create an artificial intelligence that is sentient or "conscious," we will be crossing a threshold from which there is no return. How we view this intelligence (and ourselves) after its creation will completely change our understanding of morality and personhood.
If we create this consciousness to serve or entertain us, then we have enslaved it. If we create it for its own good, who are we to say what is good for a being that has never existed before? And if we create a rational synthetic mind by accident, then it has been born much as we were: as an unplanned product of blind natural processes.
Will a broad artificial intelligence (whose networked processing power vastly surpasses our own) see us as a threat? A benevolent creator? Or as something so inconsequential that it would ignore or surpass us without a second thought?
If natural forces can create consciousness over millions of years, why can't we do it technologically? And if we can, is consciousness really so special? And how can we possibly assign rights to the many forms of minds that will co-exist, physically and virtually, over the next 500 years?