docker client(cli) <- command
|
docker server <- checks the local image cache; if the image is not there, reaches out to docker hub, downloads the image, then creates a container instance from it
|
docker hub
hello-world
redis
busybox
other imageA
other imageB
image = single file with all the deps and config required to run a program
- file system snapshot; placed into the container's isolated section of the hard drive
- startup command; used to kick off the container's primary process
Container = a running instance of an image
- a process (or group of processes) with an isolated, limited set of resources assigned to it
in the busybox image, the ls and echo programs exist inside the busybox file system snapshot, which is why they can run in the container
processes and the kernel
processes running on a computer cannot interact with the hardware (CPU, memory, hard disk) directly; the kernel sits in between, and processes talk to it through system calls (function invocations exposed by the kernel)
linux-specific features
|
namespacing: isolating resources per process, not only hardware but also software: processes, hard drive, network, users, hostnames, inter-process communication
control groups (cgroups): limit the amount of resources a process can use: memory, CPU usage, hard drive input/output (i/o), network bandwidth
command processors:
mac: bash, zsh
windows: powershell
linux: sh, bash
some commands
docker ps --all: list all containers, including stopped ones
docker run = docker create + docker start
create -> set up the container's file system from the image snapshot
start -> run the startup command
docker start -a [id]: start the container and attach, showing its output
docker system prune: delete all stopped containers and the build cache
docker logs [id]: retrieve all logs the container has emitted
stop
- stop container: sends SIGTERM; the process gets the message, can shut down on its own time and clean up
- kill container: sends SIGKILL; the process is stopped immediately
- if docker stop does not finish within 10 seconds, docker falls back to docker kill
execute an additional command in a running container
- docker exec -it [container id] [command]
1. docker - reference the docker cli
2. exec - run another command inside the container
3. -it - allow us to provide input to the container (-it = -i keep stdin open + -t format the output nicely)
4. [container id] - which container to run it in
5. [command] - the command to execute
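a minimal walkthrough of these commands, assuming the redis image from the diagram above; container ids are placeholders:
  docker create redis                       # create a container (set up its file system) from the image
  docker start -a <container id>            # start it and attach to its output
  docker logs <container id>                # retrieve the logs emitted so far
  docker exec -it <container id> redis-cli  # run an extra process (redis-cli) inside the running container
  docker stop <container id>                # SIGTERM; falls back to SIGKILL after 10 seconds
  docker system prune                       # remove stopped containers and the build cache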
Build custom image
- dockerfile: the configuration file that defines how the image is built
steps: specify a base image; run some commands to install additional programs; specify the command to run on container startup
- alpine is used as the base image for redis here, the initial starting point
- the docker client hands the Dockerfile to the docker server, which builds the image
- apk (the alpine package manager) is what a RUN step uses to install additional programs
- the three main instructions: FROM, RUN, CMD
docker build . <- we hand the Dockerfile to the docker client, which generates an image; '.' is the build context, the set of files and folders belonging to our project that we want wrapped into the image
each step takes the image generated by the previous step, makes a temporary container from it, executes the command or applies the file system change, takes a file system snapshot, and saves that as the input to the next instruction; the image from the last step is the final image; CMD records the startup command (see the sketch below)
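a minimal sketch of the FROM / RUN / CMD flow described above (the redis-on-alpine example):
  # Dockerfile
  FROM alpine                  # base image: a small starting file system
  RUN apk add --update redis   # apk installs redis in a temporary container; the snapshot is saved as a new layer
  CMD ["redis-server"]         # startup command recorded in the final image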
making changes in the Dockerfile
- steps whose instruction and inputs have not changed reuse the cached result from a previous build, so only the steps after the change are re-run
give the image a name (tag):
docker build -t [docker ID]/[repo or project name]:[version] .   then   docker run [docker ID]/[repo or project name]   works just like   docker run [imageID]
':[version]' can be left off; the default is latest
-> the version part is the tag; [docker ID]/[repo or project name] is really the repository/project name
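a concrete sketch of the tagging flow; mydockerid is a placeholder Docker Hub id:
  docker build -t mydockerid/redis:latest .   # build and tag in one step
  docker run mydockerid/redis                 # run by tag; :latest is assumed when the tag is omitted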
alpine is a small image: only about 5MB; it does not contain npm, ruby, etc.
Dockerfile
FROM node:[tag name]
e.g. node:alpine
alpine -> as small and compact as possible
files on the local machine are isolated from the container, even if the Dockerfile sits in the same directory as those files; they are not available inside the image unless an instruction in the Dockerfile copies them in
|
COPY ./ ./   [first path: location on the local machine, relative to the build context] [second path: destination inside the container]
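a sketch of a node-project Dockerfile using COPY; WORKDIR /app is an assumption here (it keeps the app out of the container root):
  FROM node:alpine      # small node base image
  WORKDIR /app          # run the following steps inside /app
  COPY ./ ./            # copy the build context into the container
  RUN npm install       # install dependencies inside the image
  CMD ["npm", "start"]  # default startup command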
the container has its own isolated set of ports
port mapping =
incoming request -> our browser makes a request to a port on localhost and docker routes it to a port inside the container
the container itself can make requests to the outside world without any mapping
docker run with port mapping:
docker run -p 8080:8080 [image id/name]   [first 8080: route incoming requests on this localhost port] : [second 8080: to this port inside the container]
if anything changes in a source file (e.g. the index file), not in the dependencies, the image still has to be rebuilt and npm install re-runs, because a single COPY step invalidates the cache for every step after it; the usual fix is shown in the sketch below
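a sketch of the cache-friendly ordering: copy only the dependency list before npm install, so source edits no longer trigger a reinstall:
  FROM node:alpine
  WORKDIR /app
  COPY package.json ./   # copy only the dependency list first
  RUN npm install        # this layer stays cached unless package.json changes
  COPY ./ ./             # source changes only invalidate the layers from here down
  CMD ["npm", "start"]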
two ways to run an extra command:
1. create/start the container, then run the command inside it with docker exec
2. pass the command when starting the container (docker run [image] [command]); this replaces the default startup command
two containers are isolated and do not share their file systems; no data is shared between them by default
to set up networking functionality between two different containers:
1. use the docker cli's network features (a handful of extra commands every time)
2. use the docker compose tool: a separate cli that lets us run multiple docker commands from one config file
in the compose file, each entry under services defines a type of container
docker run [image] ~= docker-compose up
to rebuild the images referenced in the docker-compose file, run docker-compose up --build
defining different containers in one compose file puts them on the same network automatically (see the sketch below)
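a minimal docker-compose.yml sketch with two services; the service names and ports are placeholders:
  version: '3'
  services:
    redis-server:           # container from a public image
      image: 'redis'
    node-app:               # container built from our own Dockerfile
      build: .
      ports:
        - "4001:8081"       # local port : container port
  # inside node-app, redis is reachable at host name 'redis-server'; compose creates the shared network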
docker run -d [image] <- start a new container in the background, so the terminal stays free and the container keeps running
launch in background:
docker-compose up -d
list current running:
docker ps
stop all running containers:
docker-compose down
restart policies
'no' - do not restart if the container stops, for any reason [needs quotes in yaml, to keep it distinct from the boolean value false]
always - if the container stops for any reason, always restart it
on-failure - only restart if the container stops with an error code
unless-stopped - always restart unless we forcibly stop it
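a sketch of where a restart policy goes in the compose file; the service name is a placeholder:
  services:
    node-app:
      restart: always   # or 'no' (quoted), on-failure, unless-stopped
      build: .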
docker-compose ps <- only lists containers for the current directory; it needs to find the docker-compose.yml file there to identify the containers
separate Dockerfiles for development mode and production mode:
- Dockerfile.dev
- Dockerfile
build from a custom-named docker file [by default docker looks for a file named Dockerfile]
-> docker build -f Dockerfile.dev .   [-f points at the custom-named dockerfile]
any change to a local file won't show up in the server running inside the container unless we 1. rebuild the image or 2. use another way to solve it (volumes)
docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app [image id]
- -p 3000:3000: map the local port to the container port
- -v $(pwd):/app: map the current directory outside the container onto the /app directory inside it
- -v /app/node_modules: no colon, so this folder is 'bookmarked' and not overwritten by the mapping
this command gets long -> use docker-compose instead (see the sketch below)
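a docker-compose.yml sketch equivalent to the long command above; the service name 'web' is a placeholder:
  version: '3'
  services:
    web:
      build:
        context: .
        dockerfile: Dockerfile.dev
      ports:
        - "3000:3000"
      volumes:
        - /app/node_modules   # bookmark: keep the node_modules installed in the image
        - .:/app              # map the current directory onto /app in the container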
docker exec -it [container id] npm run test <- run the test suite inside the running container
docker attach <- forwards our terminal input to a specific container; it always connects to the stdin of the container's primary process
in the test container the primary process is npm, which starts the test script as a child process, so attach talks to npm rather than to the test runner itself
dev server: processes all the js and serves it up to the browser by providing index.html and main.js
vs.
prod server: there is no dev server in production; a plain server just responds with the static html and js files
nginx: takes incoming traffic, does routing, and responds with static files
an nginx server can also do some routing based on the request
multi-step docker build
2 base images:
1.node
2.nginx
build phase (node base) -> install dependencies and produce the build files (npm run build)
run phase (nginx base) -> copy over the build files, start nginx (see the sketch below)
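a sketch of the two-phase build; /app and the nginx default html directory are the usual conventions:
  # build phase
  FROM node:alpine AS builder
  WORKDIR /app
  COPY package.json ./
  RUN npm install
  COPY ./ ./
  RUN npm run build                # static output lands in /app/build

  # run phase
  FROM nginx
  COPY --from=builder /app/build /usr/share/nginx/html   # nginx's default static directory
  # the nginx image's default startup command serves these files; no CMD needed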
default.conf
|
nginx -> /    -> react server [port 4000]
        /api -> express server [port 3000]
we route by path ('/' vs '/api') rather than by port, because ports change easily between environments while the paths stay stable
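a default.conf sketch of the routing above; the upstream names are placeholders and the ports follow the notes:
  upstream client {
    server client:4000;             # react server
  }
  upstream api {
    server api:3000;                # express server
  }
  server {
    listen 80;
    location / {
      proxy_pass http://client;
    }
    location /api {
      rewrite /api/(.*) /$1 break;  # strip the /api prefix before forwarding
      proxy_pass http://api;
    }
  }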
websocket: the react dev server also keeps a websocket connection open to the browser (for hot reload), which needs to be routed through nginx as well
set up multi-container
push code to github
|
travis automatically pulls repo
|
travis builds a test image, tests code
|
travis builds prod images
|
travis pushes built prod images to Docker hub
need to log in first -> echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
set the environment variables in travis; the username and password have to match the Docker Hub account
|
travis pushes project to AWS EB
|
EB pulls images from Docker Hub, deploys
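a .travis.yml sketch of this flow; the image names, EB app/env, and bucket are placeholders to fill in:
  sudo: required
  services:
    - docker
  before_install:
    - docker build -t myid/client-test -f ./client/Dockerfile.dev ./client
  script:
    - docker run -e CI=true myid/client-test npm run test
  after_success:
    - docker build -t myid/multi-client ./client
    - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
    - docker push myid/multi-client
  deploy:
    provider: elasticbeanstalk
    region: us-east-1
    app: <EB application name>
    env: <EB environment name>
    bucket_name: <the S3 bucket EB created>
    access_key_id: $AWS_ACCESS_KEY
    secret_access_key: $AWS_SECRET_KEY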
set up docker-compose yaml file
docker-compose.yml example:
services:                      # each entry under services = a container
  container1:
    image: [public image from docker hub]
  container2:                  # custom image
    build:
      dockerfile: Dockerfile.dev
      context: [path to the folder containing the dockerfile]
    volumes:                   # map local files into the container
      - /app/node_modules
      - ./worker:/app
    environment:
      - variablename=value
Travis
connect it with a github account; any update to the repo is pulled into travis, and travis communicates with the deployment platform, such as AWS
needs a .travis.yml file for the configuration
access AWS with an access key and secret key; better to set them as environment variables in travis, not in the yml file
AWS
elastic beanstalk: the client sends a request, a load balancer handles requests and forwards them to a virtual machine running docker, and docker runs the application
AWS ElastiCache: automatically creates and maintains redis instances; handles security; easier to migrate off EB; can be connected to from anywhere
AWS Relational Database Service: automatically creates and maintains Postgres instances; automated backups and rollbacks
Security Group (firewall rules): controls what traffic is allowed to reach the EB instance
benefit of the load balancer: it can smartly split traffic across the multiple identical containers it creates, but we get less control over them
deployed application using docker
workflow within a team
- push changes to a feature branch
git checkout -b [new branch name] <- create a new branch
git push origin [new branch name] - then create a pull request to merge into master
add whatever was edited and wait for approval - merge the pull request
confirm the merge - the code is taken and deployed to aws (the platform)
travis will receive the updated repo, then test, build, and deploy to aws
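the same workflow as a shell sketch; the branch name is a placeholder, and the pull request itself is opened in the github ui:
  git checkout -b feature-branch    # create and switch to the new branch
  git add .
  git commit -m "describe the change"
  git push origin feature-branch    # then open a pull request into master on github
  # after the pull request is approved and merged, travis tests, builds, and deploys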
Cheat sheet
RDS Database Creation
- Go to AWS Management Console and use Find Services to search for RDS
- Click Create database button
- Select PostgreSQL
- Check ‘only enable options eligible for RDS Free Usage Tier’ and click Next button
- Scroll down to Settings Form
- Set DB Instance identifier to multi-docker-postgres
- Set Master Username to postgres
- Set Master Password to postgres and confirm
- Click Next button
- Make sure VPC is set to Default VPC
- Scroll down to Database Options
- Set Database Name to fibvalues
- Scroll down and click Create Database button
ElastiCache Redis Creation
- Go to AWS Management Console and use Find Services to search for ElastiCache
- Click Redis in sidebar
- Click the Create button
- Make sure Redis is set as Cluster Engine
- In Redis Settings form, set Name to multi-docker-redis
- Change Node type to ‘cache.t2.micro’
- Change Number of replicas to 0
- Scroll down to Advanced Redis Settings
- Subnet Group should say “Create New”
- Set Name to redis-group
- VPC should be set to default VPC
- Tick all subnet’s boxes
- Scroll down and click Create button
Creating a Custom Security Group
- Go to AWS Management Console and use Find Services to search for VPC
- Click Security Groups in sidebar
- Click Create Security Group button
- Set Security group name to multi-docker
- Set Description to multi-docker
- Set VPC to default VPC
- Click Create Button
- Click Close
- Manually tick the empty field in the Name column of the new security group and type multi-docker, then click the checkmark icon.
- Scroll down and click Inbound Rules
- Click Edit Rules button
- Click Add Rule
- Set Port Range to 5432-6379
- Click in box next to Custom and start typing ‘sg’ into the box. Select the Security Group you just created, it should look similar to ‘sg-…. | multi-docker’
- Click Save Rules button
- Click Close
Applying Security Groups to ElastiCache
- Go to AWS Management Console and use Find Services to search for ElastiCache
- Click Redis in Sidebar
- Check box next to Redis cluster and click Modify
- Change VPC Security group to the multi-docker group and click Save
- Click Modify
Applying Security Groups to RDS
- Go to AWS Management Console and use Find Services to search for RDS
- Click Databases in Sidebar and check box next to your instance
- Click Modify button
- Scroll down to Network and Security change Security group to multi-docker
- Scroll down and click Continue button
- Click Modify DB instance button
Applying Security Groups to Elastic Beanstalk
- Go to AWS Management Console and use Find Services to search for Elastic Beanstalk
- Click the multi-docker application tile
- Click Configuration link in Sidebar
- Click Modify in Instances card
- Scroll down to EC2 Security Groups and tick box next to multi-docker
- Click Apply and Click Confirm
Setting Environment Variables
- Go to AWS Management Console and use Find Services to search for Elastic Beanstalk
- Click the multi-docker application tile
- Click Configuration link in Sidebar
- Select Modify in the Software tile
- Scroll down to Environment properties
- In another tab Open up ElastiCache, click Redis and check the box next to your cluster. Find the Primary Endpoint and copy that value but omit the :6379
- Set REDIS_HOST key to the primary endpoint listed above, remember to omit :6379
- Set REDIS_PORT to 6379
- Set PGUSER to postgres
- Set PGPASSWORD to postgrespassword
- In another tab, open up RDS dashboard, click databases in sidebar, click your instance and scroll to Connectivity and Security. Copy the endpoint.
- Set the PGHOST key to the endpoint value listed above.
- Set PGDATABASE to fibvalues
- Set PGPORT to 5432
- Click Apply button
IAM Keys for Deployment
- Go to AWS Management Console and use Find Services to search for IAM
- Click Users link in the Sidebar
- Click Add User button
- Set User name to multi-docker-deployer
- Set Access-type to Programmatic Access
- Click Next:Permissions button
- Select Attach existing policies directly button
- Search for ‘beanstalk’ and check all boxes
- Click Next:Review
- Add tag if you want and Click Next:Review
- Click Create User
- Copy Access key ID and secret access key for use later
AWS Keys in Travis
- Open up Travis dashboard and find your multi-docker app
- Click More Options, and select Settings
- Scroll to Environment Variables
- Add AWS_ACCESS_KEY and set to your AWS access key
- Add AWS_SECRET_KEY and set to your AWS secret key
Kubernetes
Define: a system for running many different containers over multiple different machines (one central control managing many containers)
Development -> Minikube
Production -> Managed solutions(GCP->GKE,AWS->EKS)
development
the minikube program sets up a VM that contains multiple containers, i.e. it creates a kubernetes cluster on the local computer
kubectl (also used in production) manages the containers in the VM
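a sketch of the local workflow, assuming minikube and kubectl are installed; the config file name is a placeholder:
  minikube start                    # create the local VM and the kubernetes cluster inside it
  minikube status                   # check the VM and the cluster are running
  kubectl apply -f client-pod.yaml  # hand a config file to the cluster
  kubectl get pods                  # list the running objects
  minikube ip                       # IP of the VM, used to reach exposed services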
GCP vs. AWS
- google created kubernetes
- aws recently got kubernetes support
- good documentation for beginners