Docker & AWS — Ship Production Apps Like FAANG
Build, containerize, and deploy a real Node.js API to AWS using Docker, ECS, and GitHub Actions. You'll go from zero Docker knowledge to running a production-grade containerized service on AWS with a full CI/CD pipeline — the same container-based workflow used at companies like Amazon, Google, and Meta.
Build it yourself, get guided when you are stuck, and leave with proof you can actually show.
What you learn by building this
- Containerize any application using Docker and write production-quality Dockerfiles
- Configure AWS IAM with least-privilege access policies across multiple services
- Deploy containerized services to AWS ECS using Fargate with load balancing
- Build a complete CI/CD pipeline that auto-deploys on every git push using GitHub Actions
- Secure cloud infrastructure using VPC, security groups, and AWS Secrets Manager
Challenge
Think first, then write
You need to run this Node.js script on a machine that does NOT have Node.js installed:
console.log("Hello, world!");
console.log(process.version);
Your laptop, the CI server, the production box — none of them have Node. You can't install it. What do you do?
Take 60 seconds. Write down your answer or just think it through before scrolling.
Most people hit a wall here. The instinct is "install Node" — but what if you can't? What if the machine is locked down? What if you need Node 18 on one project and Node 20 on another, on the same machine?
That wall is exactly the problem Docker solves. Keep reading.
A container is a lightweight, isolated box that bundles your code together with everything it needs to run — the runtime, the libraries, the config. It runs on your machine without touching anything outside that box.
It is NOT a virtual machine. No separate OS, no 10-second boot. A container starts in milliseconds and shares the host's kernel. Think of it as a shipping container: same standard box, loads onto any ship (any machine), contents stay exactly as packed.
Docker is the tool that builds, runs, and manages these containers. Install it now — you'll have it running in under 2 minutes.
Tasks
Setup: Install Docker Desktop
Go to https://www.docker.com/products/docker-desktop and install Docker Desktop for your OS. Once installed, open it and wait for the Docker engine to start (the whale icon in your system tray should stop animating).
Verify it's working:
docker --version
You should see something like Docker version 27.x.x.
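Another optional smoke test uses hello-world, Docker's official test image. If this runs, the whole chain works: the daemon is up, images can be pulled, and containers can start.

```shell
# Pull and run Docker's tiny test image; it prints a confirmation
# message and exits. --rm removes the container afterward.
docker run --rm hello-world
```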
Create your project folder
mkdir items-api
cd items-api
Everything you build in this module lives inside items-api/. Keep this terminal open — you'll use it throughout the module.
Run Node without installing Node
Now for the moment from the challenge. Run this:
docker run --rm node:20-alpine node -e "console.log('Hello from inside a container!')"
The first time you run this, Docker will download the node:20-alpine image. You'll see pull progress. After that, it runs instantly.
You should see:
Hello from inside a container!
That's Node running inside a container. You didn't install Node on your machine.
What those flags mean:
- --rm — delete the container after it exits (keeps things clean)
- node:20-alpine — the image to use (Node 20 on Alpine Linux, a tiny distro)
- node -e "..." — the command to run inside the container
If stuck: If you get "Cannot connect to the Docker daemon", make sure Docker Desktop is fully started — look for the whale icon in your taskbar/menu bar.
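Once the one-liner works, you can also step inside the container and look around. This is an optional experiment: -i and -t are standard docker run flags (interactive stdin plus a terminal), and sh is the shell that Alpine images ship with.

```shell
# Open an interactive shell inside the container.
# -i keeps stdin open, -t allocates a terminal; type `exit` to leave.
docker run --rm -it node:20-alpine sh

# Once inside, try a few commands:
#   node --version        -> the Node that lives in the image
#   cat /etc/os-release   -> Alpine Linux, regardless of your host OS
```

Everything you see in there belongs to the container; exiting (plus --rm) leaves your machine untouched.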
Predict
What will happen?
Before running this command, predict the output. What Node version do you think you'll see?
docker run --rm node:20-alpine node -e "console.log(process.version)"
Write your prediction, then run it.
Did it match? Now change node:20-alpine to node:18-alpine and run again:
docker run --rm node:18-alpine node -e "console.log(process.version)"
Two different Node versions. Same machine. No version manager, no conflicts. This is the core value: the container carries its own runtime.
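The side-by-side comparison can be scripted in one go. This is just a sketch that repeats the two commands above in a loop:

```shell
# Same one-liner, two runtimes: the image tag picks the Node version
for tag in 18-alpine 20-alpine; do
  echo "node:$tag says:"
  docker run --rm "node:$tag" node -e "console.log(process.version)"
done
```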
Two terms you'll use constantly — keep them straight:
- Image — the blueprint. A read-only snapshot: the OS, runtime, your code, all baked in. Like a class definition.
- Container — a running instance of an image. Like an object created from that class. You can run 10 containers from the same image simultaneously.
node:20-alpine is an image. When you ran docker run, Docker spun up a container from that image, executed your command, and (because of --rm) deleted the container. The image stays on disk so next time it starts instantly.
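You can see the class/object analogy directly by starting two containers from the same image at once. Here demo-a and demo-b are arbitrary names chosen for this sketch, and sleep 60 simply keeps each container alive for a minute:

```shell
# Two containers, one image: each is an independent running instance.
# -d runs them in the background; --name lets us refer to them later.
docker run -d --name demo-a node:20-alpine sleep 60
docker run -d --name demo-b node:20-alpine sleep 60

docker ps                    # both appear, with the same IMAGE column
docker rm -f demo-a demo-b   # stop and remove them when done
```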
Tasks
See what's on your machine
Run this to see currently running containers:
docker ps
Nothing is running right now (we used --rm, so each container cleaned up after itself). That's expected.
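If you want proof those containers existed at all, docker ps has an -a flag that lists exited containers too. Because every run used --rm, even that list should come back empty here:

```shell
# -a includes stopped/exited containers, not just running ones
docker ps -a
```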
Now see what images you've downloaded:
docker images
You should see node listed twice: once with the 20-alpine tag and once with 18-alpine.
Checkpoint — before moving on, confirm:
- docker --version returns a version number
- docker images shows node with at least 20-alpine
- You ran a Node script without having Node installed on your machine
If all three are true, you understand what Docker does. Not theoretically — you've seen it work. That's the foundation everything else builds on.
Quick check: Without looking back, finish this sentence: "An image is ___, a container is ___."
How this build unfolds
Docker — Containerize Your First App
AWS Foundations — IAM, S3, and EC2
ECR + ECS — Containers in Production
CI/CD Pipeline — Automate Everything
Learn by building your own version.
Remix this public project to open the workspace, follow the guided build, and let the AI mentor teach you through the work instead of doing it for you.