While “It works on my computer” might give you an excuse to close a bug ticket in the backlog, it doesn’t hold up when you’re working on a team.
You need to ensure that your application runs regardless of what hardware your teammates use or which libraries they have installed.
Docker solves this problem by collecting everything your application needs to run into a container.
Containers aren’t just for source code. They can hold configuration, scripts, and even their own filesystem. A container is similar to a virtual machine, but without bringing a full operating system along for the ride.
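To make that concrete, here’s a minimal sketch of a Dockerfile for a Node.js backend. The base image, port, and `server.js` entry point are assumptions for illustration, not details from the course itself:

```dockerfile
# Hypothetical Dockerfile for a Node.js backend
FROM node:18-alpine        # base image: a small Linux filesystem with Node installed

WORKDIR /app               # working directory inside the container

COPY package*.json ./      # copy dependency manifests first to leverage layer caching
RUN npm ci                 # install dependencies into the container's own filesystem

COPY . .                   # copy the application source code

EXPOSE 3000                # document the port the app listens on
CMD ["node", "server.js"]  # command run when the container starts
```

Building this file with `docker build` packages the code, its dependencies, and its filesystem into a single image that runs the same way on any machine with Docker installed.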
Because containers run in isolation, some configuration is needed to persist files and to let containers communicate with one another.
One of the most useful ways to get your head around Docker is to see it in action.
Follow along with Joel Lord as he prepares an application to run in separate frontend and backend containers. The application searches for GIFs and re-encodes them with a caption supplied by the user.
You’ll see how to configure the separate containers to communicate with one another, work with environment variables, and persist data to your local machine. Along the way, you’ll pick up some tips on useful commands and bash scripting. Finally, you’ll learn how to use Docker Compose to make it easy to run multiple containers simultaneously.
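A Docker Compose file ties these ideas together in one place. The sketch below is a hypothetical layout for the frontend/backend app described above; the service names, directory structure, ports, and the `GIPHY_API_KEY` variable are all assumptions for illustration:

```yaml
# docker-compose.yml — hypothetical services for the gif app
services:
  backend:
    build: ./backend              # build from the Dockerfile in ./backend
    environment:
      - GIPHY_API_KEY=${GIPHY_API_KEY}   # environment variable passed in from the host
    volumes:
      - ./data:/app/data          # volume: persist generated gifs to the local machine
    ports:
      - "3000:3000"

  frontend:
    build: ./frontend
    ports:
      - "8080:80"
    depends_on:
      - backend                   # start the backend before the frontend
```

Compose puts both services on a shared default network, so the frontend can reach the backend by its service name (e.g. `http://backend:3000`), and a single `docker compose up` starts both containers simultaneously.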
What you’ll learn
- Running existing Docker containers
- Creating Docker containers for an existing application
- Passing environment variables
- Executing bash scripts as part of container building
- Configuring networking between containers
- Setting up volumes for persisting data
- Publishing containers to a public registry
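The list above maps onto a handful of Docker CLI commands. These are illustrative only; the image names, network name, and environment variable are placeholders, and the commands require a running Docker daemon:

```shell
# Run an existing container from a public image
docker run --rm -p 8080:80 nginx

# Build an image for an existing application from its Dockerfile
docker build -t myuser/gif-backend .

# Create a network so containers can reach each other by name
docker network create gif-net

# Run the image with an environment variable and a volume for persisting data
docker run -d --network gif-net --name backend \
  -e GIPHY_API_KEY="$GIPHY_API_KEY" \
  -v "$(pwd)/data:/app/data" \
  myuser/gif-backend

# Publish the image to a public registry (Docker Hub by default)
docker push myuser/gif-backend
```

Other containers on `gif-net` can reach this one at the hostname `backend`, which is how the frontend and backend are typically wired together.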