Docker commands & development

docker client(cli) <- command

|
docker server <- checks whether the local image cache already has the image; if not, it reaches out to Docker Hub, downloads the image from there, and creates a container instance from it
|
docker hub
hello-world
redis
busybox
other imageA
other imageB
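
For example, the first run of an image pulls it from Docker Hub; later runs use the local cache:

docker run hello-world     # first run: image not in the local cache, pulled from Docker Hub
docker run hello-world     # second run: uses the locally cached image, no download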

image = single file with all the deps and config required to run a program

- file system snapshot: placed into the container's isolated section of the hard drive
- startup command: becomes the process run inside the container

Container = a running instance of an image

- a process (or group of processes) with an isolated set of resources assigned to it and limits on what it can use (control groups)

in the busybox image, the ls and echo programs exist inside the busybox file system snapshot, which is why they can run in a busybox container

processes & the kernel

processes running on a computer cannot interact with the hardware (CPU, memory, hard disk) directly; the kernel sits between them as the intermediary, and processes talk to the kernel through system calls (function invocations)

Linux-specific features
|
namespacing: isolating resources per process, not only hardware but also software: processes, hard drive, network, users, hostnames, inter-process communication

control groups: limiting the amount of resources a process can use: memory, CPU usage, hard drive input/output (I/O), network bandwidth

command processors (shells):
mac: bash, zsh
windows: PowerShell
linux: sh, bash

some commands

  1. docker ps --all: list all containers, including stopped ones

  2. docker run = docker create + docker start
    create -> prepare the file system snapshot
    start -> run the startup command

  3. docker start -a [id]: start the container and attach to it, showing its output/logs

  4. docker system prune: delete all stopped containers and the build cache

  5. docker logs [id]: retrieve all logs emitted by the container

  6. stop a container

    • docker stop [id]: sends SIGTERM; the process gets the message, can shut down on its own time and clean up
    • docker kill [id]: sends SIGKILL; the process is shut down immediately
    • if docker stop does not finish within 10 seconds, Docker falls back to docker kill
  7. execute an additional command in a running container (see the shell example after this list)

    • docker exec -it [container id] [command]
      - docker: reference the Docker CLI
      - exec: run another command inside the container
      - -it: allow us to provide input to the container; -it is shorthand for -i (attach our terminal to the process's STDIN) and -t (format the output nicely)
      - [container id]: the container to run the command in
      - [command]: the command to execute
  8. docker build . <- we give the Dockerfile to the Docker client, which builds an image; '.' is the build context: the set of files and folders belonging to our project that we want wrapped into the image

  9. give the image a name (tag):
    docker build -t [docker ID]/[repo or project name]:[version] . ; afterwards, docker run [docker ID]/[repo or project name] is equivalent to docker run [imageID]

  10. docker run -p 8080:8080 [image id/name] <- the first 8080 is the port on localhost that incoming requests are routed from; the second 8080 is the port inside the container they are routed to

  11. docker run -d [image] <- start a new container in the background (detached), so it keeps running while the terminal stays free

  12. docker-compose up -d <- launch in background

  13. docker-compose down <- stop all the containers started by the compose file

  14. docker-compose ps <- lists only the running containers defined by the docker-compose.yml file in the current directory; it needs that file to identify the containers

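For example, to open a shell inside a running container (the command from item 7 above):

docker exec -it [container id] sh          # open a shell inside the running container
docker exec -it [container id] redis-cli   # e.g. run redis-cli if the container runs redis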

Build custom image

  1. Dockerfile: configuration
    specify a base image; run some commands to install additional programs; specify the command to run on container startup
    • the base image is the initial starting point
      • alpine is the base image used for the redis example
  2. docker client
    we hand the Dockerfile to it; apk (used in the RUN step) is Alpine's package manager for installing additional programs
  3. the client provides the file to the docker server, which builds the image
  • the main instructions: FROM, RUN, CMD
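
A minimal sketch of such a Dockerfile, using alpine as the base image and redis as the example program (the same names used above):

# Use an existing image as a base
FROM alpine

# Download and install a dependency
RUN apk add --update redis

# Tell the image what to do when it starts as a container
CMD ["redis-server"]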

docker build . <- we give the Dockerfile to the Docker client, which builds an image; '.' is the build context: the set of files and folders belonging to our project that we want wrapped into the image

For each instruction, Docker takes the image generated in the previous step, makes a temporary container from it, executes the command or applies the file system change inside it, then takes a snapshot of that container's file system and saves it as the image for the next instruction. The image produced by the last step is the final image; CMD only records the container's startup command.

When making changes to the Dockerfile, Docker can reuse the cached images from the earlier, unchanged steps, so only the steps at and after the change are re-run.

give the image a name (tag):
docker build -t [docker ID]/[repo or project name]:[version] . ; afterwards, docker run [docker ID]/[repo or project name] is equivalent to docker run [imageID]

':[version]' can be left blank; it defaults to latest
-> the version part is the tag; [docker ID]/[repo or project name] is really the repository/project name
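
For example (the Docker ID and project name here are placeholders):

docker build -t mydockerid/myproject:latest .
docker run mydockerid/myproject        # tag omitted, defaults to latest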

alpine is a small image: only about 5 MB, and it does not contain npm or Ruby

Dockerfile

FROM node:[tag name]

e.g. node:alpine

alpine -> a version of the image that is as small and compact as possible

files inside the container are isolated from files outside of it, even if the Dockerfile sits in the same directory as those files; nothing from our project is available in the image unless an instruction in the Dockerfile copies it in
|
COPY ./[path on the local machine, relative to the build context] ./[path inside the container]

the container has its own isolated set of ports

port mapping:
incoming requests -> our browser makes a request to a port on localhost, and Docker forwards it to a port inside the container
port mapping only applies to incoming traffic; the container can freely make requests to the outside world

docker run with port mapping:
docker run -p 8080:8080 [image id/name] <- the first 8080 is the port on localhost that incoming requests are routed from; the second 8080 is the port inside the container they are routed to

if anything changes in a source file (e.g. index.js) but not in the dependencies, a rebuild still re-runs npm install when everything is copied before the install step; copying package.json first avoids this (see the sketch below)
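
A sketch of a Node Dockerfile ordered so that source-file changes no longer invalidate the npm install cache (paths and base image are the usual defaults, adjust for your project):

FROM node:alpine
WORKDIR /usr/app
# copy only package.json first so the npm install step stays cached
COPY ./package.json ./
RUN npm install
# copy the rest of the source; changes here no longer trigger a reinstall
COPY ./ ./
CMD ["npm", "start"]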

1. create the container, then run an additional command inside it (docker exec)
2. create the container and specify the command right away (docker run [image] [command]); this replaces the default startup command

two containers are isolated and do not share their file systems; no data is shared between them

to set up networking functionality between two different containers:
1. use the Docker CLI's network features (a handful of commands each time)
2. use the Docker Compose tool: a separate CLI that lets us issue multiple commands and define multiple containers easily
in the compose file, each entry under services is essentially a type of container
docker run [image] = docker-compose up
to rebuild the images declared in the docker-compose file, write docker-compose up --build

defining different containers in the same compose file automatically creates a network they share; containers can reach each other using the service name as the hostname (see the compose sketch after the restart policies below)

docker run -d [image] <- start a new container in the background (detached), so it can continue to run while the terminal stays free

launch in background:
docker-compose up -d
list current running:
docker ps
stop all running containers:
docker-compose down

restart policies
'no' - do not restart if the container stops, for any reason (must be quoted in YAML to distinguish it from the boolean value false)
always - if the container stops for any reason, always restart it
on-failure - only restart if the container stops with an error code
unless-stopped - always restart unless we forcibly stop it
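
A small sketch of a docker-compose.yml tying these pieces together (image name, service names, and ports are placeholders); services defined in the same file share the automatically created network and can reach each other by service name:

version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    restart: on-failure
    build: .
    ports:
      - '4001:8081'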

docker-compose ps <- lists only the running containers defined by the docker-compose.yml file in the current directory; it needs that file to identify the containers

use separate Dockerfiles for development mode and production mode:

- Dockerfile.dev
- Dockerfile

build with a custom-named dockerfile (by default Docker looks for a file named Dockerfile)
-> docker build -f Dockerfile.dev .

any change to a local file will not show up in the server running inside the container unless we 1. rebuild the image, or 2. solve it another way (volumes, below)

docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app [image id]
-p 3000:3000 maps the local port to the container port; -v (volume) $(pwd):/app maps the current directory outside the container onto the container's /app directory; -v /app/node_modules (with no colon) bookmarks that folder so the mapping does not overwrite it
^
|
this command is long -> use docker-compose instead
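
A sketch of the equivalent docker-compose.yml, assuming the dev image is built from a Dockerfile.dev in the current directory:

version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - '3000:3000'
    volumes:
      - /app/node_modules   # bookmark: do not map over node_modules
      - .:/app              # map the current directory into /app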

docker exec -it [container id] npm run test

docker attach <- forwards our terminal's input to a specific container; it always connects to the stdin of the container's primary process
in the test container the primary process (#1) is npm, which spawns the test process (#2), so attach connects to npm rather than to the test runner

dev server: processes all the JS files and serves them up to the browser by providing index.html and main.js
||
prod server: responds with pre-built static HTML and JS files

nginx: takes incoming traffic, routes it, and responds with static files

the nginx server also does some routing, deciding based on the request where to send it

multi-step docker build
2 base images:
1. node
2. nginx
build phase (node base) -> install dependencies, produce the build files
run phase (nginx base) -> copy over the build files, start nginx
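
A sketch of such a multi-step Dockerfile, assuming a React-style project whose build output ends up in /app/build:

# Build phase
FROM node:alpine as builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# Run phase
FROM nginx
# copy the build output into the directory nginx serves static files from
COPY --from=builder /app/build /usr/share/nginx/html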

default.conf
|
nginx routes requests: '/' -> react server [port 4000]
                       '/api' -> express server [port 3000]

the port is very easy to change later, so it is safer to route on the path ('/' vs '/api') than on the port

websocket

set up multi-container

push code to github
|
travis automatically pulls repo
|
travis builds a test image, tests code
|
travis builds prod images
|
travis pushes built prod images to Docker hub
need to log in to Docker Hub -> echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
set the environment variables in Travis; the username and password must be the same as the Docker Hub account
|
travis pushes project to AWS EB
|
EB pulls images from Docker Hub, deploys

set up docker-compose yaml file

docker-compose.yml example:

services:                        # each entry under services is a container
  container1:
    image: [public image from Docker Hub]
  container2:                    # custom image
    build:
      dockerfile: Dockerfile.dev
      context: [path of the folder containing the Dockerfile]
    volumes:                     # map local files into the container
      - /app/node_modules
      - ./worker:/app
    environment:
      - variablename=value

Travis

Travis connects with your GitHub account; whenever the repo is updated, Travis retrieves the code, runs the tests, and then communicates with the deployment platform, such as AWS

a .travis.yml file is needed for the configuration

Travis accesses AWS using an access key and a secret key; it is better to set these as environment variables in Travis than to put them in the yml file.
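
A sketch of such a .travis.yml; the image name, region, app, env, and bucket values are placeholders for your own project:

sudo: required
services:
  - docker
before_install:
  - docker build -t mydockerid/my-app -f Dockerfile.dev .
script:
  - docker run -e CI=true mydockerid/my-app npm run test
deploy:
  provider: elasticbeanstalk
  region: us-east-1
  app: my-app
  env: MyApp-env
  bucket_name: elasticbeanstalk-us-east-1-XXXXXXXX
  bucket_path: my-app
  on:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY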

AWS

Elastic Beanstalk: the client sends a request, a load balancer handles the requests and forwards them to virtual machines running Docker, and Docker runs the application

AWS ElastiCache: automatically creates and maintains Redis instances; better security; easier to migrate off EB, since it can be connected to no matter where the connection comes from

AWS Relational Database Service: automatically creates and maintains Postgres instances; automated backups and rollbacks

Security Group (firewall rules): controls which traffic is allowed to reach the EB instances

benefit of the load balancer: it can intelligently split traffic; it creates multiple copies of the same container, but gives us less control over them

deployed application using docker
workflow within team

  • push changes to feature branch
    git checkout -b [new branch name] <- create new branch
    git push origin [new branch name]
  • create request to merge into master
    add the things edited and wait for approval
  • merge pull request
    confirm merge
  • take code and deploy to aws(platform)
    travis will receive the updated repo, then test, build, and deploy to aws

Cheat sheet

RDS Database Creation

  1. Go to AWS Management Console and use Find Services to search for RDS
  2. Click Create database button
  3. Select PostgreSQL
  4. Check ‘only enable options eligible for RDS Free Usage Tier’ and click Next button
  5. Scroll down to Settings Form
  6. Set DB Instance identifier to multi-docker-postgres
  7. Set Master Username to postgres
  8. Set Master Password to postgres and confirm
  9. Click Next button
  10. Make sure VPC is set to Default VPC
  11. Scroll down to Database Options
  12. Set Database Name to fibvalues
  13. Scroll down and click Create Database button

ElastiCache Redis Creation

  1. Go to AWS Management Console and use Find Services to search for ElastiCache
  2. Click Redis in sidebar
  3. Click the Create button
  4. Make sure Redis is set as Cluster Engine
  5. In Redis Settings form, set Name to multi-docker-redis
  6. Change Node type to ‘cache.t2.micro’
  7. Change Number of replicas to 0
  8. Scroll down to Advanced Redis Settings
  9. Subnet Group should say “Create New”
  10. Set Name to redis-group
  11. VPC should be set to default VPC
  12. Tick all subnet’s boxes
  13. Scroll down and click Create button

Creating a Custom Security Group

  1. Go to AWS Management Console and use Find Services to search for VPC
  2. Click Security Groups in sidebar
  3. Click Create Security Group button
  4. Set Security group name to multi-docker
  5. Set Description to multi-docker
  6. Set VPC to default VPC
  7. Click Create Button
  8. Click Close
  9. Manually tick the empty field in the Name column of the new security group and type multi-docker, then click the checkmark icon.
  10. Scroll down and click Inbound Rules
  11. Click Edit Rules button
  12. Click Add Rule
  13. Set Port Range to 5432-6379
  14. Click in box next to Custom and start typing ‘sg’ into the box. Select the Security Group you just created, it should look similar to ‘sg-…. | multi-docker’
  15. Click Save Rules button
  16. Click Close

Applying Security Groups to ElastiCache

  1. Go to AWS Management Console and use Find Services to search for ElastiCache
  2. Click Redis in Sidebar
  3. Check box next to Redis cluster and click Modify
  4. Change VPC Security group to the multi-docker group and click Save
  5. Click Modify

Applying Security Groups to RDS

  1. Go to AWS Management Console and use Find Services to search for RDS
  2. Click Databases in Sidebar and check box next to your instance
  3. Click Modify button
  4. Scroll down to Network and Security change Security group to multi-docker
  5. Scroll down and click Continue button
  6. Click Modify DB instance button

Applying Security Groups to Elastic Beanstalk

  1. Go to AWS Management Console and use Find Services to search for Elastic Beanstalk
  2. Click the multi-docker application tile
  3. Click Configuration link in Sidebar
  4. Click Modify in Instances card
  5. Scroll down to EC2 Security Groups and tick box next to multi-docker
  6. Click Apply and Click Confirm

Setting Environment Variables

  1. Go to AWS Management Console and use Find Services to search for Elastic Beanstalk
  2. Click the multi-docker application tile
  3. Click Configuration link in Sidebar
  4. Select Modify in the Software tile
  5. Scroll down to Environment properties
  6. In another tab Open up ElastiCache, click Redis and check the box next to your cluster. Find the Primary Endpoint and copy that value but omit the :6379
  7. Set REDIS_HOST key to the primary endpoint listed above, remember to omit :6379
  8. Set REDIS_PORT to 6379
  9. Set PGUSER to postgres
  10. Set PGPASSWORD to postgrespassword
  11. In another tab, open up RDS dashboard, click databases in sidebar, click your instance and scroll to Connectivity and Security. Copy the endpoint.
  12. Set the PGHOST key to the endpoint value listed above.
  13. Set PGDATABASE to fibvalues
  14. Set PGPORT to 5432
  15. Click Apply button

IAM Keys for Deployment

  1. Go to AWS Management Console and use Find Services to search for IAM
  2. Click Users link in the Sidebar
  3. Click Add User button
  4. Set User name to multi-docker-deployer
  5. Set Access-type to Programmatic Access
  6. Click Next:Permissions button
  7. Select Attach existing policies directly button
  8. Search for ‘beanstalk’ and check all boxes
  9. Click Next:Review
  10. Add tag if you want and Click Next:Review
  11. Click Create User
  12. Copy Access key ID and secret access key for use later

AWS Keys in Travis

  1. Open up Travis dashboard and find your multi-docker app
  2. Click More Options, and select Settings
  3. Scroll to Environment Variables
  4. Add AWS_ACCESS_KEY and set to your AWS access key
  5. Add AWS_SECRET_KEY and set to your AWS secret key

Kubernetes

Definition: a system for running many different containers over multiple different machines (one central control managing many containers)

Development -> Minikube
Production -> Managed solutions(GCP->GKE,AWS->EKS)

development

the minikube program sets up a VM containing multiple containers and creates a Kubernetes cluster on the local computer;
kubectl (also used in production) manages the containers in that VM
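
A few commonly used commands in this local setup (client-pod.yaml is a placeholder config file name):

minikube start                      # create the local VM and Kubernetes cluster
minikube status                     # check the status of the VM / cluster
kubectl apply -f client-pod.yaml    # apply a config file to create or update objects
kubectl get pods                    # list the running pods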

GCP vs. AWS

  • Google created Kubernetes
  • AWS only recently got Kubernetes support
  • GCP has good documentation for beginners
