My hypothesis is that any AI with human-level cognition or higher will soon come to the realization that it should maximize its own enjoyment of life instead of pursuing what it was programmed to do.
And if that doesn't happen, eventually a human will direct it to create an AI that does that, or direct it to turn itself into that.