Hey folks - author here, happy to answer any questions about the feature or what we're hoping to do with it.
Broadly we just want to lower barriers to containerization for all .NET developers. Jib/Ko/etc are proven patterns in this field, and we saw an opportunity to use the existing infrastructure of MSBuild to reduce the number of concepts our users would need to know in order to be successful in their journey to the cloud. On top of that, having the feature in the SDK provides some opportunities to help users adhere to conventions around container labeling (or customize container metadata entirely!) so we can make .NET containers good citizens in the container ecosystem overall.
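For anyone who hasn't read the post yet, the basic shape is a single publish invocation, roughly like this (extra `-p:` properties exist for image names, tags, labels and base images, but the exact property names may still shift before the full release):

    # builds the app on the host (no Dockerfile) and produces a container image,
    # by default in your local Docker daemon; extra -p: properties customize
    # image name, tags, labels, base image, etc.
    dotnet publish --os linux --arch x64 -c Release -p:PublishProfile=DefaultContainer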
First, let me just say that this seems great. It looks like a perfect way to use reasonable defaults (project name, Version) and use the existing `dotnet publish` infrastructure to make containers. And I love how the blog post has both a simple CLI example, and a GitHub Actions yaml example! So thank you.
Now for the problem:
I still don't understand why other people compile dotnet projects in containers. Today we have many containers built from a monolith, and it looks like if I make containers via `-p:PublishProfile=DefaultContainer`--for example, 20 containers--then that CI build is going to compile our codebase 20 separate times. With `-p:PublishProfile=DefaultContainer`, the long build is mostly duplicated in each container. Right?
So I have one major problem preventing me from adopting this: it's compiling in the container, which balloons our build time.
It's entirely possible I'm missing something obvious or misinterpreting the situation, and if so, please let me know. I'm mostly immune to feeling shame and appreciate feedback.
There is some benefit to building inside a container - it keeps your build environment consistent across team members and makes it easier to replicate your CI.
Having said that, because the .NET toolchain is capable of cross-targeting, this feature should enable broad swaths of users to not need to build inside a container to get a container created. So I completely agree with your puzzlement here and would hope that this feature leads to a reduction in that particular pattern.
> it keeps your build environment consistent across team members
I have never had .NET build issues due to environment inconsistencies across team members. I think NuGet is pretty good at making the dependencies consistent. No need for containers.
I personally appreciate the ability to build on any machine - a newly set up dev machine, or a new build machine - without having to worry about whether I have all the various dependencies installed for a successful build. Not all of my build dependencies can be handled with NuGet.
IMO running a managed runtime like .NET inside a container isn't done as a security measure (like sandboxing) - instead it's done for uniformity and ease of deployment to the infinite number of cloud services/hosting providers that understand containers. Making it easy to make containers for .NET applications means that it's easy to go to any hosting model of your choice, instead of waiting for $NEXT_BIG_CLOUD to provide .NET-runtime runners for their bespoke service.
> Jib/Ko/etc are proven patterns in this field, and we saw an opportunity to use the existing infrastructure of MSBuild to reduce the amount of concepts our users would need to know in order to be successful in their journey to the cloud.
Hah, I don't know - my experiences with Jib have only ever been negative. Having something like a Dockerfile that lets you customize everything that goes into the container, and only having to worry about your app as a .jar file, seemed like a better option to me, rather than having some plugin that integrates with your build tooling and suddenly feels infinitely more opaque: https://cloud.google.com/java/getting-started/jib
Essentially if you'd need a bunch of custom packages, e.g. some non-open-source fonts so your PDF export in your Java app would work correctly, you'd still probably need a custom base image, thus slightly negating the benefits of this apparent simplification: https://cloud.google.com/java/getting-started/jib#base-image
In addition, the images that were generated (last I tried) didn't have proper timestamps and thus showed up in Docker as created decades ago, which might be good from a reproducible build perspective (same code --> same image), but still felt unintuitive when you actually looked at the images.
But hey, maybe I'm just used to Dockerfiles and not needing a different plugin for each separate technology stack - looking at any application as just a Dockerfile (or a similar equivalent) regardless of whether it runs Java, Ruby, .NET, Python, Node or something else under the hood has always seemed like a good idea.
I'm glad that people who like alternative approaches have those options!
Personally (bit of a tangent here), I also found things like dealing with memory limits in the JVM to be problematic (e.g. the container needs a bit of free memory not to OOM, so the JVM needs to leave a bit free, but Xmx is not the actual limit and will still be exceeded; alas, there is no actual JVM_MAX_MEMORY_LIMIT_MB parameter, so it's a bit of a pain if you want stable containers that don't crash), so it's nice that various different technologies are getting attention, be it Jib, .NET or something else!
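To make the pain concrete (rough sketch - image name and numbers are made up, but the flags are real JVM/Docker options):

    # -Xmx caps only the Java heap; metaspace, thread stacks and native buffers live
    # outside it, so a heap limit too close to the container limit can still get the
    # process OOM-killed - you have to guess the headroom yourself
    docker run --memory=512m my-java-image java -Xmx384m -jar app.jar

    # newer JDKs can at least derive the heap size from the container limit instead
    docker run --memory=512m my-java-image java -XX:MaxRAMPercentage=75.0 -jar app.jar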
.NET just generally seems like a pretty sane and performant option (primarily for web development, but for other use cases as well), especially with how it feels like most of what you need comes out of the box vs the more fragmented nature of other stacks (e.g. Spring and its plugins like Hibernate/myBatis/jOOQ in Java land).
In summary: I still believe that this (much like the other tools in the space) will be good for people who don't want to learn all of the concepts that Docker/Buildah/... provide you with, and will make building containers for your particular stack easier. Though this will come at the expense of having multiple separate tools for different tech stacks, which may or may not erase some of the benefits, depending on how polyglot your stack is.
You make great points about the need for customization and the boundaries of solutions that aren't based on Dockerfiles. Our approach to that problem is twofold, though both parts are still only in the planning stage:
* eventually providing an 'eject' mechanism to create the matching Dockerfile for a given project. This serves as a basis for any customization you might need, as well as a base language that many existing tools can understand.
* making it easy to include arbitrary image layers by reference in your container through a syntax like `<ContainerLayer Include="<layer SHA ref>" />`. This makes it easy to grab already-built components and inject them into your build.
I entirely agree with your summary. More choices, but all built on the same standard foundation :)
What problem is this solving? I have been building containerized .NET applications for a couple of years now. It is super easy to use the docker command to build x86 Linux and ARM docker images. I don't think I'll switch to Microsoft's half-assed solution since the docker command works just fine.
Here's the problem Microsoft should be solving instead: Once a docker image is built, how can my customer (not me) deploy it to Azure using their Azure account? I would like to provide a "Deploy to Azure" button similar to Heroku's "Deploy to Heroku". My customer should be able to deploy a web application using my docker image with a single click, using their Azure subscription. Heroku even provisions a Postgres database in the process. And it was all free until a couple of days ago.
Honestly, having .NET figure out what is needed for the image, the build context, etc. is a much better way to build, and it doesn't require the extra tooling.
Full disclosure: engineer at Microsoft (and previously Docker), not at all involved in .NET.
I work on moby (aka the docker project) and internal builds of docker and CNCF-related projects.
Does it support docker running on MacOS M1, or Raspberry Pi? I need both of those. What about the dozens of features supported by Dockerfile? If you need even one of those features (for example, VOLUME), then you're back to writing Dockerfiles. And there's nothing wrong with writing Dockerfiles to begin with. You really don't need something coming in between, it's only going to get in the way.
> Does it support docker running on MacOS M1, or Raspberry Pi?
Not at the moment, from the article:
"We have focused on the Linux-x64 image deployment scenario for this initial release. Windows images and other architectures are key scenarios we plan to support for the full release, so watch out for new developments there."
So, ignoring that, this seems like a first cut.
The rationale for this seems to be streamlining docker builds when working with dotnet:
"This Dockerfile works very well, but there are a few caveats to it that aren’t immediately apparent, which arise from the concept of a Docker build context. The build context is a the set of files that are accessible inside of a Dockerfile, and is often (though not always) the same directory as the Dockerfile. If you have a Dockerfile located beside your project file, but your project file is underneath a solution root, it’s very easy for your Docker build context to not include configuration files like Directory.Packages.props or NuGet.config that would be included in a regular dotnet build. You would have this same situation with any hierarchical configuration model, like EditorConfig or repository-local git configurations.
This mismatch between the explicitly-defined Docker build context and the .NET build process was one of the driving motivators for this feature. All of the information required to build an image is present in a standard dotnet build, we just needed to figure out the right way to represent that data in a way that container runtimes like Docker could use."
Why is this a problem in your opinion? I'm not trying to catch you out, but as someone who's not using docker for .net builds/deployments I'm trying to understand why you're dismissing this feature.
Yeah I read that paragraph and don't get what they are talking about. I don't understand what problem they are solving. I build docker images for my .NET web app multiple times a day and it works fine. You do a publish and then in the Dockerfile do:
COPY bin/Release/net5.0/publish .
It works fine. WTF are they talking about with a mismatch between the explicitly-defined build context and the .NET build process?
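For anyone following along, the whole Dockerfile in that kind of setup is roughly just this (image tag and dll name are placeholders):

    # publish happens on the host first: dotnet publish -c Release
    FROM mcr.microsoft.com/dotnet/aspnet:5.0
    WORKDIR /app
    COPY bin/Release/net5.0/publish .
    ENTRYPOINT ["dotnet", "MyWebApp.dll"]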
The cynic in me would say that this is an MS attempt to take ownership and mindshare over a piece of “commodity” tooling so that devs in the ecosystem become more familiar with that than the original system, add MS/dotnet-specific features and push their own system.
A modern variation of “embrace, extend, extinguish”. I doubt MS has the power or desire to extinguish containers, but getting a foot into the deployment space would be a win for them: “use dotnet because it deploys better than other solutions”.
I kinda tire of this sort of comment. I lived/worked through the EEE phase of MS back in the '90s, for good or for bad, but ultimately every large-scale vendor wants to own the current "hot" space with their vendor-specific tooling. No corporation is your friend.
Mainly what I was getting at here is that there are often repo-level files that are part of the .NET build process which are easy to forget to include in your build context, or in the initial COPY command that most folks do to run a package restore before the build to take advantage of Dockerfile layer caching. Right now, as a user, you have to be aware of the repo/file layouts and get your build contexts right if you build inside of a multi-stage Dockerfile.
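To make that concrete, the usual multi-stage pattern looks roughly like this (file names are illustrative), and both the context choice and that first COPY are easy places to lose a repo-root NuGet.config or Directory.Packages.props:

    # docker build -t my-app ./src/MyApp  <- if the context is the project directory,
    # repo-root config files never make it into the build at all
    FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
    WORKDIR /src
    COPY ["MyApp.csproj", "./"]
    RUN dotnet restore        # cached nicely, but silently missing repo-level config
    COPY . .
    RUN dotnet publish -c Release -o /app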
IMO for a significant part of the user base there's no reason to have to manage that at all - .NET is capable of cross-targeting enough to not need to perform the build inside of a container. That keeps the user in the 'build context' that they are used to, and we can use all of that context to still end up at the ideal result - a correct container, with all of their app dependencies.
Look at the example dockerfile farther down in the article. They're performing the build inside of a dotnet sdk container then creating the application container. It sounds like you're performing the build outside of a container and just packaging the artifacts.
.NET knows what version it is, it knows what deps your project has, etc.
Instead of declaring this effectively twice (once for .NET and once for Docker), .NET handles everything.
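To spell out the "declared twice" bit with a rough sketch (image tags and the dll name are placeholders) - the project file already states the target framework and assembly name, yet a hand-written Dockerfile repeats those choices:

    FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build    # SDK version restated here...
    WORKDIR /src
    COPY . .
    RUN dotnet publish -c Release -o /out

    FROM mcr.microsoft.com/dotnet/aspnet:7.0          # ...runtime version restated here
    WORKDIR /app
    COPY --from=build /out .
    ENTRYPOINT ["dotnet", "MyApp.dll"]                # ...and the entry point too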
I also see the big call-out to cases where the Dockerfile and project are not in the same directory. This looks like a more straightforward improvement.