by Adam Brett

Docker Patterns - Run User

There are two variations on the Run User pattern, Run User Dev and Run User Prod.

They both overcome the same problem -- that docker runs as root -- but in different ways, and for different reasons.

Run User Prod

Run User Prod is the simpler of the two patterns. It solves the problem of docker running as root on your production server. Left as-is, a compromised container potentially has the ability to run any command it likes as root.

The container itself would have to be compromised first, of course, and then there would need to be a second exploit to break out of the "jail" the container runs in. Still, better safe than sorry.

There are two ways to implement this: at runtime, and at build time. I think build time is the better place. In fact, at work we bake this into our internal [base images][base-image-hierarchy], so it's already done by the time the other development teams inherit the images.

FROM alpine

RUN addgroup app \
    && adduser -D -G app app

USER app

ADD ./build/app /app

CMD ["/app"]

Setting USER in your image ensures that the process in CMD runs with the uid of the user you created in the image's /etc/passwd. That's a bit weird to get your head around at first, so let's dig into it a little.
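To see that mapping concretely: a passwd entry is just colon-separated fields, with the numeric uid in the third field. For example, pulling root's uid out of any /etc/passwd:

```shell
# /etc/passwd maps names to numeric ids; the kernel only ever sees the number.
# Print the uid (3rd colon-separated field) for the root user.
awk -F: '$1 == "root" { print $3 }' /etc/passwd
```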

Start by opening a tmux session. We'll be needing a couple of panes, one inside our container, and one on our host:

tmux

This will open a single tmux pane, and we want two, so hit Ctrl+b followed by ", which should create a new horizontal split for you. Now press Ctrl+b ↑ to return to the top pane. It should look something like this:


Now in the current pane, run the following:

mkdir -p /tmp/docker-uid-test
cd /tmp/docker-uid-test
docker run --rm -it -v ${PWD}:${PWD} -w ${PWD} alpine sh

This will drop you at a shell prompt inside an Alpine container. The -it lets us interact with the container (-i for interactive, -t for a tty), and --rm means docker will clean the container up automatically when we're done with it. The volume (-v) is using the Dev Volume pattern to ensure the current working directory is available inside the container, and -w sets the working directory to that same path.

Now we're here, run the following:

addgroup app \
    && adduser -D -G app app

Just like in our build process, this creates a user and group inside the container, but not on the host. We can verify this by running:

cat /etc/passwd

Then drop down to the second pane (Ctrl+b ↓) and run it again:

cat /etc/passwd

You will see there is no "app" user created on your host. If you're on Docker for Mac or Docker for Windows, you'll need to look inside the VM that's currently running docker to confirm this, or just take my word for it.

[screenshot: docker etc passwd]
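If you'd rather not eyeball two full passwd files, a quick sketch of the same check (point it at whichever /etc/passwd you're comparing):

```shell
# Print the "app" entry if one exists; otherwise say so.
grep '^app:' /etc/passwd || echo "no app user here"
```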

This can lead to some odd behaviour, and we'll take a look at that now. Go back to your top pane (Ctrl+b ↑), which is your container, then run:

touch uid-root.txt

Now go back to your host pane (Ctrl+b ↓), cd to the temp directory we're using as a Dev Volume inside the container, and list the directory contents:

cd /tmp/docker-uid-test
ls -lla

[screenshot: docker created file]

If you're on Docker for Mac or Docker for Windows, you should see that this file has been created by your user account. This is because the Linux Virtual Machine that docker creates to run itself is owned and run by your user account, and not root.

On Linux, docker doesn't need a VM and runs natively, so you should see whatever user account matches the UID of the "app" user inside your container. Here's what that looks like:

[screenshot: docker created file]
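You can also do the lookup in the other direction on the host: give getent a numeric uid and it tells you which local account, if any, owns it. Uid 0 here is a stand-in; substitute whatever uid you saw in the ls -lla output:

```shell
# Resolve a numeric uid to a local account name, if one exists.
getent passwd 0 | cut -d: -f1
```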

Next, go back to your container pane (Ctrl+b ↑) and create a new pane with Ctrl+b %. In this new pane, we want to drop into our already-running container as our newly created user.

Run docker ps and find the ID for your container, then run:

docker exec -it --user app 98d8dd7b8086 sh

Where 98d8dd7b8086 is the container ID you found by running docker ps. This drops us into a shell in an already-running container, but this time as the app user we created earlier. You can check that by running whoami. Now create another test file:

touch uid-app.txt

Now go back to your host pane (Ctrl+b ↓) again and list the directory contents:

ls -lla

This time we should have two files: one created earlier by root, and another created by whatever user account matches the UID of the app account from inside our container, which in my example is vagrant:

[screenshot: docker created file]

We can then verify this by comparing the contents of /etc/passwd from earlier:

[screenshot: docker etc passwd]

Run User Prod - Runtime

Baking a consistent user and group into your images can cause issues if, for whatever reason, that UID has been given special permissions on the host. Ideally, the user and group you create in the container should map to a matching user and group on your host.

This isn't great practice, however. Do you really want all of your apps running as the same user? Largely it depends on your goals. If you only want to stop privilege escalation in the case of a breach, you're probably covered.

In our case, we never use volumes as all of our services are stateless, so we never run into issues with files created by containers.

If you do though, you may want to look into the second part of this pattern - setting users at runtime. In this instance, you can create a group for all of your apps, e.g. "apps", and then create a user-account for each app, which you pass in to the container at runtime. Start by creating the groups on the host with a known GID:

addgroup -g 5000 apps

Then let's add a bunch of users, one for each of our apps (still on the host), again with known UIDs:

adduser -D -G apps -u 5010 fooapp
adduser -D -G apps -u 5020 barapp
adduser -D -G apps -u 5030 bazservice

Now we could add these to our Dockerfile at build time, so this is already taken care of for us:

FROM alpine

RUN addgroup -g 5000 apps \
    && adduser -u 5010 -D -G apps fooapp

USER fooapp

ADD ./build/app /app

CMD ["/app"]

But that's a fair amount of overhead. Instead, we can pass this at runtime as part of the docker run command:

docker run -d --user fooapp:apps fooapp

This means we don't actually need to know the UIDs or GIDs in advance, and they can differ between hosts without much issue. Note that --user takes user:group, and numeric uid:gid values work even if the image has no matching /etc/passwd entry. Whichever method you choose (or a combination of the two) will depend on your situation, but you should always ensure you're not letting your container run as root.
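A middle ground between baking the ids in and passing them at runtime is to make them build arguments. This is a sketch, not something from the setup above: APP_UID, APP_GID, and APP_USER are hypothetical arg names, overridable per image with --build-arg:

```dockerfile
FROM alpine

# Hypothetical build args; override with e.g. --build-arg APP_UID=5020
ARG APP_UID=5010
ARG APP_GID=5000
ARG APP_USER=fooapp

RUN addgroup -g ${APP_GID} apps \
    && adduser -u ${APP_UID} -D -G apps ${APP_USER}

USER ${APP_USER}

ADD ./build/app /app

CMD ["/app"]
```

One Dockerfile can then produce fooapp, barapp, and bazservice images with their own known UIDs.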

Run User Dev

Run User Dev is a variation on this pattern for development. There can be issues with files created on your local machine or build server when using the Dev Volume or Cache Volume patterns. This is far simpler to avoid than when dealing with production systems.

Whenever mounting a Dev Volume or Cache Volume, always pass in the ownership details of the current directory, or the current user:

docker run -it -v ${PWD}:${PWD} --user $(stat -c "%u:%g" .) alpine sh

or:
docker run -it -v ${PWD}:${PWD} --user $(id -u) alpine sh
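If you're curious what those two expressions expand to, you can run them on their own on the host, no container required:

```shell
# The ownership details we hand to --user:
id -u                 # current user's numeric uid
stat -c "%u:%g" .     # uid:gid that own the current directory
                      # (GNU/BusyBox stat; on macOS the flag is -f, not -c)
```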

This ensures that any files created are owned by the same uid:gid as the current directory (the stat version), or by the current user id. This is also possible with docker-compose, but you have to pass the value in as a variable:

alpine:
  image: alpine
  volumes:
    - ${PWD}:${PWD}
  working_dir: ${PWD}
  user: ${USER_ID}

You'll notice here that I'm passing in USER_ID, not USER. USER is a variable that's already set in certain shells, but it doesn't get passed through to docker-compose the way PWD does. I've not really done any investigation to find out why. You can now run this like so:

USER_ID=$(id -u) docker-compose run --rm alpine sh

or:
USER_ID=$(stat -c "%u:%g" .) docker-compose run --rm alpine sh

In my day job I actually wrap this up in a Makefile that makes this even more transparent. Here's an example using the above docker-compose.yml to create a [Tinker Container].

# Variables
SHELL := /bin/bash

USER_ID := $(shell stat -c "%u:%g" .)

# Applications
DOCKER_COMPOSE ?= USER_ID=${USER_ID} docker-compose

# Helpers
tinker:
	${DOCKER_COMPOSE} run --rm alpine sh

.PHONY: tinker

For exclusive content, including screen-casts, videos, and early beta access to my projects, subscribe to my email list below.

I love discussion, but not blog comments. If you want to comment on what's written above, head over to twitter.