#Understanding Docker: An analogy

Remember the last time you set up your notebook? In preparation for installing your operating system, you probably downloaded a disk image and either burned it onto a DVD or copied it onto a USB drive. When booting your notebook from it, you could launch an installation wizard to install the operating system onto your hard drive.

Ubuntu even allows you to launch a minified version of the operating system directly from this disk image. This lets you try out the operating system before installing it. But it also has a downside: Unless you applied special settings when creating the Ubuntu image, you don't have persistent storage available. Every time you boot your notebook from the image, you start again from the initial state of this minified operating system. If, for example, your image contains a web browser, it will be available in your demo session. If not, you may install one during your session, but it's lost when you turn off your notebook, so you'd have to reinstall it the next time you launch your image.

[Image: The Ubuntu live wizard. Running a Docker image is not so different from trying out an OS using a live disk. Image taken from tutorials.ubuntu.com.]

To recap: When installing your operating system, you bundle files together to create an image, which you can run when booting your notebook. Whatever is part of this image is available every time you launch it, but everything you install during a session is gone after turning off your PC.

A Docker image is actually a pretty similar thing. When creating the image, you decide which programs, settings, etc. should be part of it. But you don't have to burn your Docker image onto a DVD and reboot your notebook to launch it. Instead, you just use the command line - docker run IMAGE - to run a container based on the image. Running a container is similar to launching one session of your demo operating system: The initial state of your container (which programs are available, etc.) is fully determined by how you've created the image. You may add some additional stuff, but once you remove the container, everything is gone.
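
You can try this out on the command line. The following terminal session is just a sketch (it uses the official ubuntu image as an example, and the container name demo is made up), but any image behaves the same way:

```sh
# Start an interactive container based on the ubuntu image:
docker run -it --name demo ubuntu bash

# Inside the container: install something that is NOT part of the image.
apt-get update && apt-get install -y curl
exit

# Back on the host: remove the container, and the curl installation with it.
docker rm demo

# A new container starts from the image's initial state again, without curl:
docker run -it --rm ubuntu bash
```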

But unlike the live session of your operating system, Docker allows you to run as many containers as you want at the same time. Every container is fully isolated: Changes in one container aren't reflected in any of your other containers.
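
Again, a small sketch to illustrate this isolation (the container names first and second are made up):

```sh
# Run two containers based on the same image:
docker run -d --name first ubuntu sleep infinity
docker run -d --name second ubuntu sleep infinity

# Create a file in the first container...
docker exec first touch /tmp/hello

# ...and note that the second container doesn't see it:
docker exec second ls /tmp

# Clean up both containers:
docker rm -f first second
```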

As you can see, it matters how you build your image, because everything built into the image is directly available in every container you run from it. So how do you tell Docker what to include in your image? This is where the mystical Dockerfile comes in. A Dockerfile is just a plain text file containing all the instructions needed to build your image. Whatever you want your image to look like, you tell Docker in your Dockerfile.
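
To give you a first impression, a Dockerfile could look like this. This is a purely hypothetical example (the file config.txt and the package curl are just placeholders), unrelated to what we'll build below:

```dockerfile
# Start from an existing base image:
FROM ubuntu:18.04

# Install a program so it becomes part of the image (and of every container):
RUN apt-get update && apt-get install -y --no-install-recommends curl

# Copy a file from your machine into the image:
COPY config.txt /etc/myapp/config.txt
```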

#Setting up VSCode Remote

Now that we know the very basics about images, containers and Dockerfiles, let’s use this to set up a very simple development environment with VSCode Remote.

##Prerequisites

Before following this tutorial, make sure you've installed:

  • Docker
  • Visual Studio Code
  • The Remote - Containers extension for VSCode

You might want to check out the official VSCode Remote installation guide for more info.

##Create the Dockerfile

A great thing about Dockerfiles is that you don't always have to start from scratch. Instead, you can tell Docker that your image is based on another, existing image. Docker then downloads this base image, and you can tell it which changes to apply on top. And as so often in the developer community, there are plenty of awesome Docker images to start from!

In our case, we will use one of the official Node.js Docker images as our base image. We do this by creating a file called Dockerfile with the following content:

```dockerfile
# Use the official node image as base image. Installs Node.js v10.
FROM node:10
```

The node:10 image itself is based on Debian. It includes some useful Debian packages and sets up a Linux user called node, which you should use instead of root when working with this image.
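
If you want to see for yourself what the base image ships with, you can run it directly with Docker. This is an optional sanity check, assuming Docker is already installed and running:

```sh
# Print the Node.js version bundled with the base image:
docker run --rm node:10 node --version

# Run a command as the non-root user "node" mentioned above:
docker run --rm -u node node:10 whoami
```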

VSCode needs to know about our Dockerfile to build an image and/or run a container. To configure our workspace as a VSCode Remote workspace, we first create a folder called .devcontainer in our workspace's root directory. Next, we move our Dockerfile into this directory and create a devcontainer.json.

```
.
├── .devcontainer
│   ├── devcontainer.json
│   └── Dockerfile
```
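
If you prefer the command line, you can create this structure with a few commands (a sketch assuming a Unix-like shell and that your Dockerfile currently sits in the workspace root):

```sh
mkdir .devcontainer
mv Dockerfile .devcontainer/
touch .devcontainer/devcontainer.json
```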

For now, let's keep our devcontainer.json simple. We just tell VSCode the (relative) location of our Dockerfile and that it should run the container as the user node (instead of root, which would be Docker's default).

```json
{
  "name": "my-node-app",
  "dockerFile": "Dockerfile",
  "runArgs": ["-u", "node"]
}
```
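
Conceptually, this configuration boils down to something like the following plain Docker commands. This is a simplified sketch of what VSCode does for us, not its actual implementation (VSCode additionally mounts your workspace, and the image tag my-node-app is made up):

```sh
# Build an image from the Dockerfile inside .devcontainer:
docker build -t my-node-app .devcontainer

# Run a container based on it, as user "node" instead of root:
docker run -it -u node my-node-app bash
```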

And that's basically it. From now on, every time we start VSCode and open our workspace, VSCode tells us that it found our container configuration. We just have to click on "Reopen in Container" to tell VSCode to first build an image based on our Dockerfile and then run a container.

[Image: Yay, VSCode found our configuration!]

If you have never built this image before, the build may take a while: Since our image is based on another image, Docker first needs to download that base image before it can execute our additional instructions (if there were any). But this only applies to the first time you build the image: Thanks to Docker's awesome layer concept, Docker doesn't need to rebuild everything when you add instructions to your Dockerfile. Instead, Docker is able to (kind of) stack your changes on top of previously built layers. Check out the linked article about layers for more info.
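
To get a feeling for this, consider a hypothetical extension of our Dockerfile (the vim package is just an example):

```dockerfile
# Cached after the first build: Docker won't download the base image again.
FROM node:10

# Adding a new instruction only builds this one additional layer;
# the layers of the base image are reused from the local cache.
RUN apt-get update && apt-get install -y --no-install-recommends vim
```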

While VSCode builds our image and runs a container based on it, I recommend watching the log output. It shows you the instruction Docker is currently performing, its progress, and potential errors.

[Image: Click on "details" to watch the logs.]

These logs can also help you track down issues in your container, so I recommend always opening them.

##Begin hacking

Congrats, you are now ready to build awesome stuff in your isolated development environment! Some things to note:

  • All your files have been mounted into your container automatically by VSCode. This means your container can access all your files in your project folder. We’ll learn more about mounts in one of the following articles.
  • If you open a shell in VSCode, it's now attached to your container. This way it's super simple to manage your container directly from VSCode (see the quick check after this list).
  • If you change your Dockerfile, you need to rebuild your image to reflect your changes in your development container. To do so, just open the VSCode command palette (Ctrl + Shift + P on Windows) and choose “Remote-Containers: Rebuild Container”. This is also a great option if you run into bugs and just want to start over with a fresh development environment.
  • To get started with your new development environment, check out my follow-up article. There I set up a simple Express server application, make it accessible from the host machine, and share some more useful tips about VSCode Remote.
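
As a quick sanity check that your VSCode shell really runs inside the container, you can try the following (the outputs in the comments are roughly what you'd expect with our node:10 based setup):

```sh
whoami               # -> node (the user we configured in devcontainer.json)
node --version       # -> v10.x, provided by the base image
cat /etc/os-release  # -> Debian, which the node:10 image is based on
```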