I think your analogy demonstrates the perils of anthropomorphism when thinking about AI. If we design a resource-consuming autonomous entity, we should assume it will compete with us for those resources until proven otherwise. That assumption is at the core of the concern about AI safety, which is ultimately about preventing human extinction.