Before diving into AWS container services, let's revisit the fundamentals of containers and Docker. This lesson provides a focused refresher on the concepts you'll need as we explore how AWS runs containers at scale.
A container is a lightweight, portable unit that packages an application together with everything it needs to run — code, runtime, libraries, system tools, and configuration files. Unlike virtual machines, containers share the host operating system's kernel, making them significantly smaller and faster to start.
+--------------------------------------------+
| Container Stack |
|--------------------------------------------|
| App A | App B | App C |
| Bins/Libs | Bins/Libs | Bins/Libs |
|--------------------------------------------|
| Container Runtime (Docker) |
|--------------------------------------------|
| Host OS / Hardware |
+--------------------------------------------+
| Property | Description |
|---|---|
| Isolated | Each container has its own filesystem, network stack, and process tree |
| Portable | A container image built on your laptop runs identically in the cloud |
| Ephemeral | Containers are designed to be created and destroyed quickly |
| Immutable | Images are read-only; changes create new images rather than modifying existing ones |
| Lightweight | Containers share the host kernel and typically weigh megabytes, not gigabytes |
Docker uses a client-server architecture:
+------------------+ +-------------------+
| Docker CLI | REST | Docker Daemon |
| (docker) | -------> | (dockerd) |
+------------------+ API +-------------------+
|
+---------+---------+
| |
+----------+ +----------+
| Images | |Containers|
+----------+ +----------+
|
+----------+
| Registry |
+----------+
A Docker image is built from a Dockerfile — a text file containing step-by-step instructions:
# Start from a minimal Node.js base image
FROM node:20-alpine
# All subsequent instructions run relative to /app
WORKDIR /app
# Copy dependency manifests first so this layer caches until they change
COPY package*.json ./
# Install production dependencies only (--omit=dev replaces the deprecated --production flag)
RUN npm ci --omit=dev
# Copy the rest of the application source
COPY . .
# Document the port the application listens on
EXPOSE 3000
# Default command when the container starts
CMD ["node", "server.js"]
Each instruction creates a layer. Layers are cached and shared between images, which speeds up builds and reduces storage.
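You can see layer caching in action with `docker history`, which lists the layers of an image. The sketch below assumes the `my-app:v1` image from the Dockerfile above has been built locally:

```shell
# List the layers that make up the image, newest first,
# with the Dockerfile instruction that created each one.
docker history my-app:v1

# Rebuild without changing package*.json: the dependency-install layer
# is served from cache, and only layers after the changed COPY rebuild.
docker build -t my-app:v1 .
```

This is why the Dockerfile copies `package*.json` before the rest of the source: dependency installation, the slowest step, only reruns when the manifests actually change.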
registry/repository:tag
# Examples:
docker.io/library/nginx:1.25 # Docker Hub official image
123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:v1.2.3 # AWS ECR image
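To push a local image to a private registry such as ECR, you re-tag it with the full `registry/repository:tag` name first. A sketch, using the placeholder account ID and repository from the example above:

```shell
# Authenticate Docker to the ECR registry (account ID and region are placeholders)
aws ecr get-login-password --region eu-west-2 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-2.amazonaws.com

# Give the local image its fully qualified registry name, then push it
docker tag my-app:v1 123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:v1.2.3
docker push 123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:v1.2.3
```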
| Command | Purpose |
|---|---|
| `docker build -t my-app:v1 .` | Build an image from a Dockerfile |
| `docker run -d -p 8080:3000 my-app:v1` | Run a container in detached mode with port mapping |
| `docker ps` | List running containers |
| `docker logs <container>` | View container stdout/stderr |
| `docker exec -it <container> /bin/sh` | Open a shell inside a running container |
| `docker stop <container>` | Gracefully stop a container (SIGTERM) |
| `docker rm <container>` | Remove a stopped container |
| `docker images` | List local images |
| `docker push <image>` | Push an image to a registry |
| `docker pull <image>` | Pull an image from a registry |
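The commands above chain into a typical build-run-inspect-clean-up workflow. A sketch, assuming the `my-app:v1` image from earlier and an illustrative container name `web`:

```shell
docker build -t my-app:v1 .                        # build the image from the Dockerfile
docker run -d -p 8080:3000 --name web my-app:v1    # run detached; host 8080 -> container 3000
docker ps                                          # confirm the container is running
docker logs web                                    # check its stdout/stderr
docker exec -it web /bin/sh                        # open a shell inside it (type 'exit' to leave)
docker stop web                                    # graceful shutdown (SIGTERM, then SIGKILL)
docker rm web                                      # remove the stopped container
```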
Containers are ephemeral — when they are removed, any data written inside the container is lost. Volumes provide persistent storage:
# Named volume
docker run -v my-data:/app/data my-app:v1
# Bind mount (host directory)
docker run -v /host/path:/container/path my-app:v1
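A quick way to convince yourself that named volumes persist: write a file from one container, remove it, and read the file back from a fresh container. The volume name `my-data` and the `alpine` image are illustrative choices:

```shell
# Create a named volume and write to it from a throwaway container
docker volume create my-data
docker run --rm -v my-data:/data alpine sh -c 'echo hello > /data/greeting'

# The first container is gone, but a new one sees the same data
docker run --rm -v my-data:/data alpine cat /data/greeting
```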
Docker provides several network drivers:
| Driver | Description |
|---|---|
| bridge | Default driver — containers on the same user-defined bridge network can communicate by name (the default bridge does not provide name resolution) |
| host | Container shares the host's network stack directly |
| none | No networking — complete isolation |
| overlay | Multi-host networking for Docker Swarm or distributed setups |
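Name-based discovery on a user-defined bridge looks like this in practice. The network name `app-net` and container names are illustrative:

```shell
# Create a user-defined bridge network
docker network create app-net

# Attach two containers to it; Docker's embedded DNS resolves their names
docker run -d --name api --network app-net my-app:v1
docker run --rm --network app-net alpine ping -c 1 api
```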
Docker Compose lets you define and run multi-container applications using a YAML file:
version: '3.8'
services:
  web:
    build: .
    ports:
      - "8080:3000"
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/mydb
  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: pass
volumes:
  pgdata:
Run everything with a single command:
docker compose up -d
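Compose manages the whole stack's lifecycle, not just startup. A few common follow-up commands for the file above:

```shell
docker compose ps            # list the stack's services and their state
docker compose logs -f web   # follow logs from the web service
docker compose down          # stop and remove containers and the default network
docker compose down -v       # same, but also remove named volumes (here, pgdata)
```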
Running Docker on a single machine works for development, but production workloads need more:

- Scheduling containers across a fleet of hosts
- Scaling up and down with demand
- Health checks and automatic replacement of failed containers
- Load balancing and service discovery
- Rolling deployments without downtime
- Secure handling of secrets and configuration
AWS provides managed services that handle all of these concerns so you can focus on your application code rather than infrastructure management.
AWS offers several container services, each targeting a different level of abstraction:
| Service | What It Does |
|---|---|
| Amazon ECR | A managed container image registry — your private Docker Hub on AWS |
| Amazon ECS | A fully managed container orchestration service — runs and manages your containers |
| AWS Fargate | A serverless compute engine for ECS (and EKS) — no EC2 instances to manage |
| Amazon EKS | Managed Kubernetes — for teams already invested in the Kubernetes ecosystem |
Over the remaining lessons, we will explore each of these in depth.
As you move into AWS container services, keep these Docker best practices in mind:
- Use minimal base images (e.g. alpine, distroless) to reduce attack surface and image size
- Pin explicit version tags; avoid the mutable latest tag in production