SageMaker has three parts: hosted notebooks (similar to case #2), API access to "optimized" ML algorithms, and deployment by exposing trained models behind an API.
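For the deployment part, the flow with the SageMaker Python SDK looks roughly like this. This is a minimal sketch; the S3 path, IAM role, and entry-point script are hypothetical placeholders, not anything from the post:

```python
# Minimal sketch: deploying a scikit-learn model as a hosted SageMaker endpoint.
# The model artifact path, IAM role, and inference script are placeholders.
from sagemaker.sklearn.model import SKLearnModel

model = SKLearnModel(
    model_data="s3://my-bucket/model.tar.gz",             # trained model artifact (placeholder)
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # execution role (placeholder)
    entry_point="inference.py",                           # defines model_fn/predict_fn (placeholder)
    framework_version="1.2-1",
)

# deploy() provisions an instance and puts the model behind an HTTPS endpoint.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.predict([[5.1, 3.5, 1.4, 0.2]]))
```

The resulting endpoint can then be invoked from other services, which is the "API access to trained models" part.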
I would agree that most data scientists are not required to deploy their own models; that's probably only the case at startups or small companies. I also introduce how Docker can be used for development. I think it's a big enough trend in CS that any data scientist should know some basics, similar to data science from the command line.
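For development, even a bare-bones image gives you a reproducible environment. A minimal sketch, with the base image tag and package list purely illustrative:

```dockerfile
# Minimal sketch of a data science development image.
# Base image and packages are illustrative; pin versions in a real project.
FROM python:3.11-slim

WORKDIR /app

# Install a basic analysis stack.
RUN pip install --no-cache-dir jupyter pandas scikit-learn

# Run a Jupyter server reachable from the host.
EXPOSE 8888
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--no-browser", "--allow-root"]
```

Build and run with `docker build -t ds-dev .` and `docker run -p 8888:8888 ds-dev`.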
It seems like the baseline DS skill set is all over the place, depending on where you started off.
I would have thought most data scientists wouldn't be interested in learning Docker just to deploy. But then again, I could be wrong. What made you write the post? Was it because you saw lots of data scientists wanting to deploy, or something else?