Docker Compose is a good tool for isolating services in development, both for backend and frontend applications. On systems with proper cgroups, Docker can also be used to limit resources per running container. So there are a few concepts to know in order to work with Docker easily and effectively.
There are quite a few guides about it out there, but take a special look at a few things: volumes, mounts, and networks.
With volumes it's quite simple, you have just two options: named volumes, which Docker manages and stores somewhere under `/var` where you aren't supposed to dig into, and bind mounts, which map a path from the host into the container.

I use the first approach for its designed purpose: persisting data between container restarts. The data can vary: postgres data, installed node_modules, etc.

The second approach is good when you want to mount your source code into the container and track its changes for hot reload. Once I also had a case where I wrote an SQL script too big to copy/paste into the terminal, so I just mounted it into the container and executed it there.
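As a minimal sketch, both approaches in one `docker-compose.yml` could look like this (the service and volume names are just illustrative):

```yaml
services:
  postgres:
    image: postgres:16
    volumes:
      - pg_data:/var/lib/postgresql/data   # named volume: data survives container restarts

  backend:
    build: .
    volumes:
      - .:/app                             # bind mount: source code from the host, hot reload works
      - node_modules:/app/node_modules     # named volume: keep installed node_modules in docker

volumes:
  pg_data:
  node_modules:
```

The extra `node_modules` volume on top of the bind mount is the usual trick to keep dependencies installed inside the container instead of shadowing them with whatever is on the host.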
Networks let you limit or share access between containers. You can use internal Docker Compose networks (created automatically) or external ones (managed manually). These are my typical use cases:

I have a `metabase` service that builds analytical dashboards on top of the application's `postgres`, so I create an `analytics_network` and attach it to both the `metabase` and `postgres` services to expose `postgres` to `metabase`.

When I need to share a network between separate projects, I create `projectname_network` manually with `docker network create projectname_network` and use it as an external network in both projects.

It's also useful to understand how hostname resolution works. In general, it's not very complex: you can use the service name (but take care to use valid domain names as service names, avoid underscores), or you can manually define the service `hostname`.
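As a sketch of the networking part (the image tags, `app_network` and the `backend-api` hostname are just assumptions for the example): `analytics_network` connects only `postgres` and `metabase`, while `projectname_network` is declared as external because it was created by hand.

```yaml
services:
  postgres:
    image: postgres:16
    networks:
      - app_network
      - analytics_network          # exposed to metabase through this network only

  metabase:
    image: metabase/metabase
    networks:
      - analytics_network          # resolves postgres by its service name

  backend:
    build: .
    hostname: backend-api          # manually defined hostname instead of the service name
    networks:
      - app_network
      - projectname_network        # shared with another compose project

networks:
  app_network: {}
  analytics_network: {}
  projectname_network:
    external: true                 # created by hand: docker network create projectname_network
```

The second project declares `projectname_network` the same way with `external: true`, so containers from both projects end up on the same shared network.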
Now here is a list of commands I use often with Docker:

- `docker ps`. List running containers.
- `docker inspect [container id]`. This is the base: you can see env variables, the hostname, attached networks and a lot of other useful information here.
- `docker logs -f [container id]`. You want to check why your container fails, don't you?
- `docker-compose run --rm servicename /bin/bash`. If you need to run a new container instance with a specific command like `bash`, `sh` or something even more specific like `npm install`, you can do it like this.
- `docker-compose exec servicename /bin/bash`. This is an alternative to `run`. With `exec`, instead of spinning up a new container, you start a process in an already running one. Useful for running migrations, `npm i` or anything else when you don't need to interact with the existing process in the main container.
- `docker-compose run --service-ports backend bash`. You know you can stop your container and then manually run another instance and get right into a shell? This way you can run your scripts, debug output, jump into breakpoints and so on. Very nice for development. Note the `--service-ports` flag: it opens the ports you've declared in `docker-compose.yml`, otherwise you can't reach your service from the host machine.
- `docker run -it --rm --network=project_network -v .:/app python:3.12.4-slim /bin/bash`. This one is an example of how to run any specific image in the `project_network` and mount the current directory to `/app` inside the container. Want to test commands before writing a `Dockerfile` to build your app with some lightweight `alpine` image? That's what you need for debugging.
- `docker run --memory="512m" --cpus="1.0" -v .:/app -w /app node:latest npm run test`. Surprised that when you start your frontend tests, you can't work on your laptop anymore? You can limit CPU and memory usage for a Docker container. I used it in one big project, and it was so nice. Note that this needs cgroups; if you are on a Mac, I'm not sure you even can.
- `docker-compose down --volumes`. Stops Docker Compose and destroys all internal volumes. Need a fresh start, huh?

The idea of this article is to encourage you to adopt Docker for development, even if you work on the frontend. So don't take it as a guide, but rather as a list of use cases to get some inspiration and dig deeper.