No, it's the loss function we differentiate. The inputs to the loss function are the network weights. The inputs to the network are the samples, and those we do not differentiate.
While it's true that we don't differentiate the input samples, we do differentiate the loss function's output with respect to each of the network's weights. We use the chain rule to calculate each of these "gradients", and that process is known as backpropagation.
(You might have intended to say this, in which case I'm just trying to add clarity.)
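To make the distinction concrete, here's a minimal toy sketch (my own illustration, not from the original post): a one-weight "network" with a squared-error loss, where the chain rule gives the derivative of the loss with respect to the weight `w` while the sample `(x, y)` is just fixed data.

```python
# Hypothetical toy example: y_hat = w * x, loss L = (y_hat - y)^2.
# We differentiate L with respect to the weight w via the chain rule;
# the sample (x, y) is held fixed and never differentiated.

def forward(w, x):
    return w * x                      # network output y_hat

def loss(y_hat, y):
    return (y_hat - y) ** 2           # squared error

def grad_loss_wrt_w(w, x, y):
    y_hat = forward(w, x)
    dL_dyhat = 2.0 * (y_hat - y)      # outer derivative: dL/dy_hat
    dyhat_dw = x                      # inner derivative: dy_hat/dw
    return dL_dyhat * dyhat_dw        # chain rule: dL/dw

# One gradient-descent step updates the weight; x and y are just data.
w, x, y, lr = 0.5, 2.0, 3.0, 0.1
w -= lr * grad_loss_wrt_w(w, x, y)
print(w)  # weight moves toward y / x = 1.5
```

In a real multi-layer network the same chain rule is applied layer by layer, which is exactly the backpropagation step described above.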