Tiny and Fast Docker image for Rust Application

Create a tiny Docker image with a very fast build time for a Rust application

The steps below use the Rocket "hello world" application as a demo.

The basic configuration would be:

# Dockerfile.plain

ARG BASE_IMAGE=rust:1.52.1-slim-buster

FROM $BASE_IMAGE
WORKDIR app
COPY . .
RUN cargo build --release
CMD ["./target/release/hello"]

Let's build the image:

$ time docker build -f Dockerfile.plain -t hello:0.1.0 .

It produces a 1.38GB image with a 12-minute build time. The second and the third builds also need a similar amount of time.

# second build
real    12m13.412s
user    0m0.135s
sys     0m0.095s

# third build
real    12m51.086s
user    0m0.118s
sys     0m0.062s
REPOSITORY    TAG                  IMAGE ID       CREATED          SIZE
hello         0.1.0                ac4e1a72ba05   2 minutes ago    1.38GB
rust          1.52.1-slim-buster   61cb3c65a6ba   3 weeks ago      621MB
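One thing to watch with COPY . .: a local target/ directory or .git history would also land in the build context and slow every build down. A .dockerignore file (my suggestion; the article does not show one) keeps the context small:

```
# .dockerignore (suggested addition, not part of the original setup)
target/
.git/
```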

I test the output image size and build duration three times. The first build is not counted; I treat it as a warm-up. Before the second and third runs, I change the source code from "Hello, world!" to "Hello, world!1", and so on, to see whether the current setup has a caching mechanism. The tests are run on an otherwise idle machine.
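The source bump between runs can be scripted. This is only a sketch with a hypothetical file layout; the real runs use the docker build command shown above.

```shell
# Simulate the between-build edit: bump a counter in the greeting so
# each rebuild sees changed source. Paths here are hypothetical.
mkdir -p demo/src
printf 'fn main() { println!("Hello, world!"); }\n' > demo/src/main.rs
for i in 1 2; do
  sed -i "s/Hello, world![0-9]*/Hello, world!$i/" demo/src/main.rs
  # a real run would now invoke:
  # time docker build -f Dockerfile.plain -t hello:0.1.0 .
done
grep 'Hello, world!' demo/src/main.rs
```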

Let's use a multi-stage build to make the size smaller.

# Dockerfile.multistage

ARG BASE_IMAGE=rust:1.52.1-slim-buster

FROM $BASE_IMAGE as builder
WORKDIR app
COPY . .
RUN cargo build --release

FROM $BASE_IMAGE
COPY --from=builder /app/target/release/hello /
CMD ["./hello"]

Now our image size is 628MB, down from 1.38GB.

# second test
real    11m42.625s
user    0m0.108s
sys     0m0.104s

# third test
real    11m38.789s
user    0m0.127s
sys     0m0.075s
REPOSITORY  TAG                  IMAGE ID       CREATED          SIZE
hello       0.1.0                e1fa9144e345   10 minutes ago   628MB

The second and the third builds run for about the same duration as the first.

I heard that cargo-chef is able to speed up Rust Docker builds, thanks to Docker layer caching. Let's use it.

# Dockerfile.chef

ARG BASE_IMAGE=rust:1.52.1-slim-buster

FROM $BASE_IMAGE as planner
WORKDIR app
RUN cargo install cargo-chef --version 0.1.20
COPY . .
RUN cargo chef prepare  --recipe-path recipe.json

FROM $BASE_IMAGE as cacher
WORKDIR app
RUN cargo install cargo-chef --version 0.1.20
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json

FROM $BASE_IMAGE as builder
WORKDIR app
COPY . .
# Copy over the cached dependencies
COPY --from=cacher /app/target target
COPY --from=cacher $CARGO_HOME $CARGO_HOME
RUN  cargo build --release

FROM $BASE_IMAGE
COPY --from=builder /app/target/release/hello /
CMD ["./hello"]

It produces a 628MB image, and the last build takes under two minutes.

# first test omitted
# second test
real    19m32.836s
user    0m0.188s
sys     0m0.088s

# third test
real    1m52.982s
user    0m0.060s
sys     0m0.088s
REPOSITORY   TAG    IMAGE ID       CREATED              SIZE
hello        0.1.0  ca1a9e1e5948   About a minute ago   628MB

The last build was far faster than the previous ones. This would not happen without the help of cargo-chef: using only a multi-stage build, every rebuild still does a lot of work, such as fetching and compiling all the crates. With cargo-chef, dependency compilation lives in its own layer, which Docker reuses as long as Cargo.toml and Cargo.lock are unchanged.

The build would be even faster if we could avoid compiling cargo-chef from source. Most of my CI builds fetch a pre-built binary and copy it to a directory on the executable path, such as ~/.cargo/bin. This trick helped reduce one of my projects' builds from ~30 minutes down to ~14 minutes.

A glimpse of the steps I described above:

# use pre-built trunk
wget -qO- https://github.com/thedodd/trunk/releases/download/v0.10.0/trunk-x86_64-unknown-linux-gnu.tar.gz | tar -xzf -
chmod +x trunk
cp trunk ~/.cargo/bin/
trunk --version
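In principle, the same trick could replace the cargo install cargo-chef steps in the planner and cacher stages. The release URL and asset name below are hypothetical; check cargo-chef's releases page for the actual pre-built artifacts, if any exist for your target.

```
# Hypothetical: fetch a pre-built cargo-chef instead of compiling it.
# The release URL and asset name are illustrative only.
wget -qO- https://github.com/LukeMathWalker/cargo-chef/releases/download/v0.1.20/cargo-chef-x86_64-unknown-linux-gnu.tar.gz | tar -xzf -
cp cargo-chef ~/.cargo/bin/
cargo chef --version
```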

Now we have a very fast build, but a huge image. We can improve the situation in one of two ways: use a distroless image, or use scratch. The latter requires changing the toolchain target to musl.

Let's try using the scratch image first.

# Dockerfile.musl

ARG BASE_IMAGE=rust:1.52.1-slim-buster

FROM $BASE_IMAGE as planner
WORKDIR app
RUN cargo install cargo-chef --version 0.1.20
COPY . .
RUN cargo chef prepare  --recipe-path recipe.json

FROM $BASE_IMAGE as cacher
WORKDIR app
RUN cargo install cargo-chef --version 0.1.20
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json

FROM $BASE_IMAGE as builder
WORKDIR app
COPY . .
# Copy over the cached dependencies
COPY --from=cacher /app/target target
COPY --from=cacher $CARGO_HOME $CARGO_HOME
# We need static linking for musl
RUN rustup target add x86_64-unknown-linux-musl
# `cargo build` didn't produce a usable static binary here; `cargo install` did
RUN cargo install --target x86_64-unknown-linux-musl --path .

FROM scratch
COPY --from=builder /usr/local/cargo/bin/hello .
CMD ["./hello"]

# first test omitted
# second test
real    9m22.049s
user    0m0.113s
sys     0m0.103s

# third test
real    9m46.035s
user    0m0.120s
sys     0m0.085s
REPOSITORY   TAG    IMAGE ID       CREATED          SIZE
hello        0.1.0  332ce3b4f717   30 seconds ago   8.38MB

Now, let's try the distroless way.

# Dockerfile.distroless

ARG BASE_IMAGE=rust:1.52.1-slim-buster

FROM $BASE_IMAGE as planner
WORKDIR app
RUN cargo install cargo-chef --version 0.1.20
COPY . .
RUN cargo chef prepare  --recipe-path recipe.json

FROM $BASE_IMAGE as cacher
WORKDIR app
RUN cargo install cargo-chef --version 0.1.20
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json

FROM $BASE_IMAGE as builder
WORKDIR app
COPY . .
# Copy over the cached dependencies
COPY --from=cacher /app/target target
COPY --from=cacher $CARGO_HOME $CARGO_HOME
RUN cargo build --release

FROM gcr.io/distroless/cc-debian10
COPY --from=builder /app/target/release/hello /
CMD ["./hello"]

# first test omitted
# second test
real    1m57.033s
user    0m0.045s
sys     0m0.038s

# third test
real    2m12.585s
user    0m0.093s
sys     0m0.096s
REPOSITORY   TAG    IMAGE ID       CREATED          SIZE
hello        0.1.0  118da37bdfe7   25 seconds ago   29MB

Using musl and scratch produces the smallest image, 8.38MB, compared to 29MB for distroless. But the rebuild is faster on the distroless side: about 2 minutes, compared to about 9 minutes for musl.

I made a repo containing each of the Dockerfiles under test for you to play with.

Conclusion

The first try produced a 1.38GB image with a 12-minute build time. Now we can produce a very small image with a very fast build.

The choice is in your hands. If you are OK with the musl target, go with the scratch image. Otherwise, choose distroless.

Notes

Thanks to Mikail Bagishov for telling me about the distroless image.

On 2021-06-10, Pieter S. van N told me about docker-slim. I tried it right away. I needed to set --http-probe=false for both the scratch and the distroless images; otherwise, it fails. The scratch image cannot be optimized any further, but the distroless image can be minified by 2.46X.
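A sketch of the invocation, assuming the image names from the table below; only the --http-probe=false flag is essential here:

```
# Disable the HTTP probe explicitly; otherwise docker-slim fails with a
# misleading "image not found" error for these images.
docker-slim build --http-probe=false hello/distroless
docker-slim build --http-probe=false hello/musl
```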

I tested the minified image, and it works well for this simple app. I have no idea whether minification has side effects for a large and complex app. The distroless image now has a size comparable to the scratch image, without requiring the target change to musl. Thanks to docker-slim.

REPOSITORY               TAG      IMAGE ID       CREATED         SIZE
hello/musl.slim          latest   54832fa90141   4 hours ago     8.53MB
hello/musl               latest   a7f6ff9598b6   10 hours ago    8.53MB
hello/distroless.slim    latest   312e081e8f63   27 seconds ago  11.9MB
hello/distroless         latest   0e434919836a   10 hours ago    29.2MB

My previous attempt to use docker-slim with a tagged image failed. I thought docker-slim couldn't play well with tags; it turns out it was the http-probe issue. I think the error message is misleading: it says the image was not found, so I assumed it was a tag issue.

cmd=build info=target.image.error status='image.not.found' image='hello/musl:1.0.1' message='make sure the target image already exists locally'

If you liked this article, please support my work. It would definitely be rewarding and motivating. Thanks for the support!

Comments

Pieter S. van N · Jun 14, 2021

Thanks for giving it a try and for the mention! ❤️ Really cool example. Thanks for the feedback, too -- it really helps as we make updates.


Azzam Syawqi Aziz · Jun 14, 2021

Updated, thanks for telling me about docker-slim. azzamsa.com/n/rust-docker/…


DockerSlim · Jun 12, 2021

The tag problem looks a bit strange because it's usually something that works. This is the first time someone reported that there's a problem with tags. Need to investigate One of the new enhancements makes it possible to use image names without any tags …


Azzam Syawqi Aziz · Jun 12, 2021

No, it works now even with a tag. Turns out it was the --http-probe flag. I need to set it to false explicitly. Otherwise, I got `image not found`. That error is misleading, so I thought it was about the tag at first.


DockerSlim · Jun 11, 2021

there's also a new flag to disable http probing (--http-probe-off) that may feel a bit more natural (will be included in the next release or you can build latest to get it): docker-slim build --http-probe-off hello/musl


Azzam Syawqi Aziz · Jun 11, 2021

Thanks, I managed to optimized the distroless image by 2.46X. Here is the conclusion gist.github.com/azzamsa/077998…


DockerSlim · Jun 11, 2021

The last command didn't really do what you expected because by default the http probe is enabled even if you don't specify the --http-probe flag. To disable it you need to set the flag to false explicitly like this: docker-slim build --http-probe=false he…


Azzam Syawqi Aziz · Jun 11, 2021

Hi. I never heard about docker-slim before. But upon trials, I never get it working. gist.github.com/azzamsa/077998…


Pieter S. van N · Jun 10, 2021

Great results and nice write-up. Curious if you tried @DockerSlim for this example? We've been looking for a few Rust examples for the livestream we do.