Docker Basics Summary (Dockerfile, Compose, Swarm)

Docker Basics

Check the Linux kernel version

uname -r      

View detailed Linux release information

cat /etc/*elease      
CentOS Linux release 7.6.1810 (Core) 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

CentOS Linux release 7.6.1810 (Core) 
CentOS Linux release 7.6.1810 (Core)       


The five kinds of container isolation (Linux namespaces)

  • pid: process isolation
  • net: network isolation (its own IP address, gateway, and subnet mask)
  • ipc: inter-process communication isolation
  • mnt: filesystem (mount) isolation
  • uts: hostname and domain name isolation (hostname, domainname); the container has its own machine name
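
These namespaces can be observed from the host. A minimal sketch (the container name some-redis is an assumption; any running container works):

# find the container's PID on the host
PID=$(docker inspect --format '{{.State.Pid}}' some-redis)
# each symlink corresponds to one namespace (pid, net, ipc, mnt, uts, ...)
ls -l /proc/$PID/ns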

Installing Docker on CentOS

Official docs: https://docs.docker.com/install/linux/docker-ce/centos/

  1. Remove old versions
    sudo yum remove docker \
                      docker-client \
                      docker-client-latest \
                      docker-common \
                      docker-latest \
                      docker-latest-logrotate \
                      docker-logrotate \
                      docker-engine      
  2. Install the required packages
    sudo yum install -y yum-utils \
      device-mapper-persistent-data \
      lvm2      
  3. Set up the repository
    sudo yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo      
  4. Install Docker CE
    sudo yum install docker-ce docker-ce-cli containerd.io      
  5. Start Docker and enable it at boot
    systemctl start docker
    systemctl enable docker      

Where Docker is installed

  • Find the Docker client binary: /usr/bin/docker
    find / -name docker      
    /run/docker
    /sys/fs/cgroup/pids/docker
    /sys/fs/cgroup/cpuset/docker
    /sys/fs/cgroup/freezer/docker
    /sys/fs/cgroup/devices/docker
    /sys/fs/cgroup/blkio/docker
    /sys/fs/cgroup/perf_event/docker
    /sys/fs/cgroup/memory/docker
    /sys/fs/cgroup/net_cls,net_prio/docker
    /sys/fs/cgroup/hugetlb/docker
    /sys/fs/cgroup/cpu,cpuacct/docker
    /sys/fs/cgroup/systemd/docker
    /etc/docker
    /var/lib/docker
    /var/lib/docker/overlay2/ec5a827479e221461a396c7d0695226ec60b642544f2f921e2da967426b1853c/diff/docker
    /var/lib/docker/overlay2/cf92e8387d988e9f87dc3656bb21d3a2fefff02e3505e1d282c0d105cb703ab1/merged/docker
    /var/lib/docker/overlay2/df3551b1764d57ad79604ace4c1b75ab1e47cdca2fb6d526940af8b400eee4aa/diff/etc/dpkg/dpkg.cfg.d/docker
    /usr/bin/docker
    /usr/share/bash-completion/completions/docker
    /docker      
  • Find the Docker daemon binary: /usr/bin/dockerd
    find / -name dockerd      
    /etc/alternatives/dockerd
    /var/lib/alternatives/dockerd
    /usr/bin/dockerd      
  • lib + data: /var/lib/docker
  • config: /etc/docker
  • Find the docker.service unit file: /usr/lib/systemd/system/docker.service
    find / -name docker.service      
    [root@localhost ~]# cat /usr/lib/systemd/system/docker.service
    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    BindsTo=containerd.service
    After=network-online.target firewalld.service containerd.service
    Wants=network-online.target
    Requires=docker.socket
    
    [Service]
    Type=notify
    # the default is not to use systemd for cgroups because the delegate issues still
    # exists and systemd currently does not support the cgroup feature set required
    # for containers run by docker
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    ExecReload=/bin/kill -s HUP $MAINPID
    TimeoutSec=0
    RestartSec=2
    Restart=always
    
    # Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
    # Both the old, and new location are accepted by systemd 229 and up, so using the old location
    # to make them work for either version of systemd.
    StartLimitBurst=3
    
    # Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
    # Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
    # this option work for either version of systemd.
    StartLimitInterval=60s
    
    # Having non-zero Limit*s causes performance problems due to accounting overhead
    # in the kernel. We recommend using cgroups to do container-local accounting.
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    
    # Comment TasksMax if your systemd version does not supports it.
    # Only systemd 226 and above support this option.
    TasksMax=infinity
    
    # set delegate yes so that systemd does not reset the cgroups of docker containers
    Delegate=yes
    
    # kill only the docker process, not all processes in the cgroup
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target      

Understanding the dockerd configuration

dockerd reference: https://docs.docker.com/engine/reference/commandline/dockerd/

Mounting a disk

  1. Use fdisk -l to list the disks on the host
    fdisk -l      
    [root@localhost ~]# fdisk -l
    
    Disk /dev/vda: 53.7 GB, 53687091200 bytes, 104857600 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x000b0ebb
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/vda1   *        2048   104856254    52427103+  83  Linux
    
    Disk /dev/vdb: 536.9 GB, 536870912000 bytes, 1048576000 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes      
  2. Format the disk with mkfs.ext4
    # mkfs.ext4 <disk device>
    
    mkfs.ext4   /dev/vdb      
  3. Mount the disk with the mount command
    mount /dev/vdb /boot      
  4. Run df -h to check the current disk layout
    df -h      
    [root@localhost ~]# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/vda1        50G  7.4G   40G  16% /
    devtmpfs        7.8G     0  7.8G   0% /dev
    tmpfs           7.8G     0  7.8G   0% /dev/shm
    tmpfs           7.8G  592K  7.8G   1% /run
    tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
    overlay          50G  7.4G   40G  16% /var/lib/docker/overlay2/c76fb87ef4c263e24c7f6874121fb161ce9b22db572db66ff1992ca6daf5768b/merged
    shm              64M     0   64M   0% /var/lib/docker/containers/afe151311ee560e63904e3e9d3c1053b8bbb6fd5e3b2d4c74001091b132fe3bd/mounts/shm
    overlay          50G  7.4G   40G  16% /var/lib/docker/overlay2/5ca6ed8e1671cb590705f53f89af8f8f5b85a6cdfc8137b3e12e4fec6c76fcea/merged
    shm              64M  4.0K   64M   1% /var/lib/docker/containers/79427c180de09f78e33974278043736fca80b724db8b9bce42e44656d04823b3/mounts/shm
    tmpfs           1.6G     0  1.6G   0% /run/user/0
    /dev/vdb        493G   73M  467G   1% /boot      
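
A mount made this way does not survive a reboot. As a hedged sketch (device and mount point taken from the example above), the mount can be made persistent via /etc/fstab:

# append an fstab entry, then verify it by remounting everything listed in fstab
echo '/dev/vdb  /boot  ext4  defaults  0 0' >> /etc/fstab
mount -a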

Changing the Docker storage location

  1. Create or edit the Docker daemon configuration file
    # create or edit the docker daemon config
    vim /etc/docker/daemon.json
    
    {
     "data-root": "/data/docker"
    }      
  2. Create the new Docker data directory
    # create the docker data directory
    mkdir /data
    mkdir /data/docker      
  3. Stop Docker
    # stop docker
    service docker stop      
  4. Copy the existing data
    # copy the existing storage files
    cp -r /var/lib/docker/* /data/docker/      
  5. Delete the original files
    # delete the original data (better to keep it until everything is verified, then delete)
    # rm -rf /var/lib/docker/      
  6. Verify that the Docker data root has changed
    # verify the docker data root has changed
    docker info      
    Note: it is best to switch the data directory right after installing Docker; otherwise some volumes of containers that were already running may keep using the old location. A quick check is sketched below.
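
A minimal sketch of checking the data root directly (the Go template field below is an assumption about the docker info output; plain docker info works as well):

# print only the data root directory
docker info --format '{{.DockerRootDir}}'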

Registry mirror (image pull accelerator)

sudo mkdir -p /etc/docker
vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://uwxsp1y1.mirror.aliyuncs.com"],
  "data-root": "/data/docker"
}

sudo systemctl daemon-reload
sudo systemctl restart docker      
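
To confirm the mirror is active, docker info lists the configured mirrors (a quick check, not specific to this mirror):

docker info | grep -A 1 "Registry Mirrors"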

Viewing the daemon logs

# edit the daemon configuration to enable debug logging
vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://uwxsp1y1.mirror.aliyuncs.com"],
  "data-root": "/data/docker",
  "debug":true
}


# journalctl shows all logs of a systemd service in one place
journalctl -u docker.service -f       

Connecting to the Docker daemon remotely

  1. Edit the docker.service start command
    # edit the docker.service start command
    vim /usr/lib/systemd/system/docker.service      
    # ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock      
  2. Edit daemon.json
    # edit daemon.json
    vim /etc/docker/daemon.json
    
    {
      "registry-mirrors": ["https://uwxsp1y1.mirror.aliyuncs.com"],
      "data-root": "/data/docker",
      "debug":true,
      "hosts": ["192.168.103.240:6381","unix:///var/run/docker.sock"]
    }      
  3. Reload systemd and restart Docker
    # reload systemd and restart docker
    sudo systemctl daemon-reload
    service docker restart      
  4. Check the listening port
    # check the listening port
    netstat -tlnp
    
    [root@localhost docker]# netstat -tlnp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 192.168.103.240:6381    0.0.0.0:*               LISTEN      27825/dockerd       
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
    tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      3743/dnsmasq        
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3122/sshd           
    tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      3109/cupsd          
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      3479/master         
    tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      14503/sshd: root@pt 
    tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd           
    tcp6       0      0 :::22                   :::*                    LISTEN      3122/sshd           
    tcp6       0      0 ::1:631                 :::*                    LISTEN      3109/cupsd          
    tcp6       0      0 ::1:25                  :::*                    LISTEN      3479/master         
    tcp6       0      0 ::1:6010                :::*                    LISTEN      14503/sshd: root@pt       
  5. Test the remote connection
    # test the remote connection
    docker -H 192.168.103.240:6381 ps      
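
Instead of passing -H every time, the DOCKER_HOST environment variable can point the local CLI at the remote daemon (a sketch; the address comes from the daemon.json above):

export DOCKER_HOST=tcp://192.168.103.240:6381
docker ps
# unset it to talk to the local daemon again
unset DOCKER_HOST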

Container basics

Common container control commands

docker run --help      
[root@localhost ~]# docker run --help

Usage:    docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Run a command in a new container

Options:
      --add-host list                  Add a custom host-to-IP mapping (host:ip)
  -a, --attach list                    Attach to STDIN, STDOUT or STDERR
      --blkio-weight uint16            Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
      --blkio-weight-device list       Block IO weight (relative device weight) (default [])
      --cap-add list                   Add Linux capabilities
      --cap-drop list                  Drop Linux capabilities
      --cgroup-parent string           Optional parent cgroup for the container
      --cidfile string                 Write the container ID to the file
      --cpu-period int                 Limit CPU CFS (Completely Fair Scheduler) period
      --cpu-quota int                  Limit CPU CFS (Completely Fair Scheduler) quota
      --cpu-rt-period int              Limit CPU real-time period in microseconds
      --cpu-rt-runtime int             Limit CPU real-time runtime in microseconds
  -c, --cpu-shares int                 CPU shares (relative weight)
      --cpus decimal                   Number of CPUs
      --cpuset-cpus string             CPUs in which to allow execution (0-3, 0,1)
      --cpuset-mems string             MEMs in which to allow execution (0-3, 0,1)
  -d, --detach                         Run container in background and print container ID
      --detach-keys string             Override the key sequence for detaching a container
      --device list                    Add a host device to the container
      --device-cgroup-rule list        Add a rule to the cgroup allowed devices list
      --device-read-bps list           Limit read rate (bytes per second) from a device (default [])
      --device-read-iops list          Limit read rate (IO per second) from a device (default [])
      --device-write-bps list          Limit write rate (bytes per second) to a device (default [])
      --device-write-iops list         Limit write rate (IO per second) to a device (default [])
      --disable-content-trust          Skip image verification (default true)
      --dns list                       Set custom DNS servers
      --dns-option list                Set DNS options
      --dns-search list                Set custom DNS search domains
      --entrypoint string              Overwrite the default ENTRYPOINT of the image
  -e, --env list                       Set environment variables
      --env-file list                  Read in a file of environment variables
      --expose list                    Expose a port or a range of ports
      --group-add list                 Add additional groups to join
      --health-cmd string              Command to run to check health
      --health-interval duration       Time between running the check (ms|s|m|h) (default 0s)
      --health-retries int             Consecutive failures needed to report unhealthy
      --health-start-period duration   Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s)
      --health-timeout duration        Maximum time to allow one check to run (ms|s|m|h) (default 0s)
      --help                           Print usage
  -h, --hostname string                Container host name
      --init                           Run an init inside the container that forwards signals and reaps processes
  -i, --interactive                    Keep STDIN open even if not attached
      --ip string                      IPv4 address (e.g., 172.30.100.104)
      --ip6 string                     IPv6 address (e.g., 2001:db8::33)
      --ipc string                     IPC mode to use
      --isolation string               Container isolation technology
      --kernel-memory bytes            Kernel memory limit
  -l, --label list                     Set meta data on a container
      --label-file list                Read in a line delimited file of labels
      --link list                      Add link to another container
      --link-local-ip list             Container IPv4/IPv6 link-local addresses
      --log-driver string              Logging driver for the container
      --log-opt list                   Log driver options
      --mac-address string             Container MAC address (e.g., 92:d0:c6:0a:29:33)
  -m, --memory bytes                   Memory limit
      --memory-reservation bytes       Memory soft limit
      --memory-swap bytes              Swap limit equal to memory plus swap: '-1' to enable unlimited swap
      --memory-swappiness int          Tune container memory swappiness (0 to 100) (default -1)
      --mount mount                    Attach a filesystem mount to the container
      --name string                    Assign a name to the container
      --network string                 Connect a container to a network (default "default")
      --network-alias list             Add network-scoped alias for the container
      --no-healthcheck                 Disable any container-specified HEALTHCHECK
      --oom-kill-disable               Disable OOM Killer
      --oom-score-adj int              Tune host's OOM preferences (-1000 to 1000)
      --pid string                     PID namespace to use
      --pids-limit int                 Tune container pids limit (set -1 for unlimited)
      --privileged                     Give extended privileges to this container
  -p, --publish list                   Publish a container's port(s) to the host
  -P, --publish-all                    Publish all exposed ports to random ports
      --read-only                      Mount the container's root filesystem as read only
      --restart string                 Restart policy to apply when a container exits (default "no")
      --rm                             Automatically remove the container when it exits
      --runtime string                 Runtime to use for this container
      --security-opt list              Security Options
      --shm-size bytes                 Size of /dev/shm
      --sig-proxy                      Proxy received signals to the process (default true)
      --stop-signal string             Signal to stop a container (default "SIGTERM")
      --stop-timeout int               Timeout (in seconds) to stop a container
      --storage-opt list               Storage driver options for the container
      --sysctl map                     Sysctl options (default map[])
      --tmpfs list                     Mount a tmpfs directory
  -t, --tty                            Allocate a pseudo-TTY
      --ulimit ulimit                  Ulimit options (default [])
  -u, --user string                    Username or UID (format: <name|uid>[:<group|gid>])
      --userns string                  User namespace to use
      --uts string                     UTS namespace to use
  -v, --volume list                    Bind mount a volume
      --volume-driver string           Optional volume driver for the container
      --volumes-from list              Mount volumes from the specified container(s)
  -w, --workdir string                 Working directory inside the container      


docker run, docker exec

docker run instantiates a container from an image; many options can be passed in during instantiation.

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

docker run -d --name some-redis redis: the container cannot be reached from the outside because of network isolation (the default bridge mode).

  • -a stdin: attach to a standard stream; one of STDIN/STDOUT/STDERR
  • -d: run the container in the background and print the container ID
  • -i: keep STDIN open (interactive mode), usually combined with -t
  • -P: publish all exposed container ports to random high ports on the host
  • -p: publish a specific port, in the format host_port:container_port
  • -t: allocate a pseudo-TTY, usually combined with -i
  • --name="nginx-lb": assign a name to the container
  • --dns 8.8.8.8: set the DNS server used by the container (defaults to the host's)
  • --dns-search example.com: set the container's DNS search domain (defaults to the host's)
  • -h "mars": set the container's hostname
  • -e username="ritchie": set an environment variable
    # set the time zone to UTC+8 (Asia/Shanghai)
    docker run -e TZ=Asia/Shanghai -d --name some-redis redis      
  • --env-file=[]: read environment variables from a file
  • --cpuset="0-2" or --cpuset="0,1,2": pin the container to specific CPUs
  • -m: set the maximum amount of memory the container may use
  • --net="bridge": set the container network type; one of bridge/host/none/container:<name|id>
  • --link=[]: add a link to another container
  • --expose=[]: expose a port or a range of ports
  • --volume, -v: bind mount a volume
    docker run -p 16379:6379 -d --name some-redis redis      
  • --add-host: add a custom host-to-IP mapping
    # scenario: Consul health checks need the host machine's IP address
    docker run --add-host machineip:192.168.103.240 -d --name some-redis redis
    
    docker exec -it some-redis bash
    tail /etc/hosts      
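
A hedged example combining several of the options above (name, port, and limits are illustrative):

# detached redis with a name, published port, time zone, memory limit and restart policy
docker run -d --name demo-redis \
  -p 16380:6379 \
  -e TZ=Asia/Shanghai \
  -m 256m \
  --restart always \
  redis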

docker start, docker stop, docker kill

  • docker start: start one or more stopped containers
  • docker stop: stop a running container
  • docker restart: restart a container
  • docker kill: kill a running container

Batch-deleting containers

docker rm -f <container>
docker rm -f `docker ps -a -q`
docker container prune      
# extremely aggressive cleanup, use with caution
# docker system prune      

Container status and monitoring commands

View container logs

docker logs      
[root@localhost ~]# docker logs some-redis
1:C 09 Jul 2019 03:07:03.406 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 09 Jul 2019 03:07:03.406 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 09 Jul 2019 03:07:03.406 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 09 Jul 2019 03:07:03.406 * Running mode=standalone, port=6379.
1:M 09 Jul 2019 03:07:03.406 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 09 Jul 2019 03:07:03.406 # Server initialized
1:M 09 Jul 2019 03:07:03.406 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 09 Jul 2019 03:07:03.406 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 09 Jul 2019 03:07:03.406 * Ready to accept connections      


Container performance metrics

docker stats      
[root@localhost ~]# docker stats

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
aaa8bec01038        some-redis          0.04%               8.375MiB / 1.795GiB   0.46%               656B / 0B           139kB / 0B          4      


Container -> host port mappings

Query the port mappings

Use this when you know the container port but not the host port, or the other way around.

docker port [container]      
[root@localhost ~]# docker port some-redis-2
6379/tcp -> 0.0.0.0:16379      


View the processes running inside a container

docker top [container]      
[root@localhost ~]# docker top some-redis-2
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
polkitd             18356               18338               0                   13:20               pts/0               00:00:00            redis-server *:6379      


Detailed container information

docker inspect [OPTIONS] NAME|ID [NAME|ID...]      
[root@localhost ~]# docker inspect some-redis-2
[
    {
        "Id": "6248c674f0672620d0cd8fd4a573c0db48f5f7c75b61fbd5150072eaac6ed4b2",
        "Created": "2019-07-09T05:20:06.985445479Z",
        "Path": "docker-entrypoint.sh",
        "Args": [
            "redis-server"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 18356,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2019-07-09T05:20:07.255368955Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:bb0ab8a99fe694e832e56e15567c83dee4dcfdd321d0ad8ab9bd64d82d6a6bfb",
        "ResolvConfPath": "/data/docker/containers/6248c674f0672620d0cd8fd4a573c0db48f5f7c75b61fbd5150072eaac6ed4b2/resolv.conf",
        "HostnamePath": "/data/docker/containers/6248c674f0672620d0cd8fd4a573c0db48f5f7c75b61fbd5150072eaac6ed4b2/hostname",
        "HostsPath": "/data/docker/containers/6248c674f0672620d0cd8fd4a573c0db48f5f7c75b61fbd5150072eaac6ed4b2/hosts",
        "LogPath": "/data/docker/containers/6248c674f0672620d0cd8fd4a573c0db48f5f7c75b61fbd5150072eaac6ed4b2/6248c674f0672620d0cd8fd4a573c0db48f5f7c75b61fbd5150072eaac6ed4b2-json.log",
        "Name": "/some-redis-2",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {
                "6379/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "16379"
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/data/docker/overlay2/c7693e58e45a483a6cb66deac7d281a647a56e3c9043722f3379a5dd496646d7-init/diff:/data/docker/overlay2/d26d3067261173cfa34d57bbdc3371b164805203ff05a2d71ce868ddc5b5a2bc/diff:/data/docker/overlay2/6a35d92d8841364ee7443a84e18b42c22f60294a748f552ad4a0852507236c7f/diff:/data/docker/overlay2/5ed2ceb6771535d14cd64f375cc31462a82ff57503bbc3abace0589be3124955/diff:/data/docker/overlay2/9543ee1ade1f2d4341c00cadef3ec384eb3761c35d10726cc6ade4a3bfb99be2/diff:/data/docker/overlay2/86f47cf021b01ddec50356ae4c5387b910f65f75f97298de089336b4a413ce25/diff:/data/docker/overlay2/df3551b1764d57ad79604ace4c1b75ab1e47cdca2fb6d526940af8b400eee4aa/diff",
                "MergedDir": "/data/docker/overlay2/c7693e58e45a483a6cb66deac7d281a647a56e3c9043722f3379a5dd496646d7/merged",
                "UpperDir": "/data/docker/overlay2/c7693e58e45a483a6cb66deac7d281a647a56e3c9043722f3379a5dd496646d7/diff",
                "WorkDir": "/data/docker/overlay2/c7693e58e45a483a6cb66deac7d281a647a56e3c9043722f3379a5dd496646d7/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "volume",
                "Name": "88f774ae0567f3e3f834a9f469c0db377be8948b82d05ee757e6eabe185903e6",
                "Source": "/data/docker/volumes/88f774ae0567f3e3f834a9f469c0db377be8948b82d05ee757e6eabe185903e6/_data",
                "Destination": "/data",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "6248c674f067",
            "Domainname": "",
            "User": "",
            "AttachStdin": true,
            "AttachStdout": true,
            "AttachStderr": true,
            "ExposedPorts": {
                "6379/tcp": {}
            },
            "Tty": true,
            "OpenStdin": true,
            "StdinOnce": true,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "GOSU_VERSION=1.10",
                "REDIS_VERSION=5.0.5",
                "REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-5.0.5.tar.gz",
                "REDIS_DOWNLOAD_SHA=2139009799d21d8ff94fc40b7f36ac46699b9e1254086299f8d3b223ca54a375"
            ],
            "Cmd": [
                "redis-server"
            ],
            "ArgsEscaped": true,
            "Image": "redis",
            "Volumes": {
                "/data": {}
            },
            "WorkingDir": "/data",
            "Entrypoint": [
                "docker-entrypoint.sh"
            ],
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "31f5b2c1c0d59c3f8866fa2b02db2889e4d4d54076cbf88ae7d6057758b3f40a",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "6379/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "16379"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/31f5b2c1c0d5",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "ab4f1a16403dfd415703868b52b33ea0b6d9d28b750e5ce80810d0f9b89f4af1",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.3",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:03",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "80fba7499001738402fe35f0c1bb758ddd5f680abf75f4bd6a0456b3021ee5fe",
                    "EndpointID": "ab4f1a16403dfd415703868b52b33ea0b6d9d28b750e5ce80810d0f9b89f4af1",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:03",
                    "DriverOpts": null
                }
            }
        }
    }
]      


Exporting and importing containers

  • docker export: export a container's filesystem as a tar archive to STDOUT.
    docker export [OPTIONS] CONTAINER
    
    # OPTIONS:
    # -o: write the output to a file
    
    # e.g.:
    # docker export -o /app2/1.tar.gz some-redis      
  • docker import: create an image from a tar archive.
    docker import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]
    
    # OPTIONS:
    # -c: apply Dockerfile instructions while creating the image
    # -m: commit message for the imported image
    
    # e.g.:
    # restore the image
    # docker import /app2/1.tar.gz newredis
    # create a container from it and run the redis-server start command
    # docker run -d --name new-some-redis-2 newredis redis-server      

The docker image commands in detail

docker image      
[root@localhost app2]# docker image

Usage:    docker image COMMAND

Manage images

Commands:
  build       Build an image from a Dockerfile
  history     Show the history of an image
  import      Import the contents from a tarball to create a filesystem image
  inspect     Display detailed information on one or more images
  load        Load an image from a tar archive or STDIN
  ls          List images
  prune       Remove unused images
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rm          Remove one or more images
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

Run 'docker image COMMAND --help' for more information on a command.      


Pulling, removing, and inspecting images

  • docker pull: pull or update an image from a registry
    docker pull [OPTIONS] NAME[:TAG|@DIGEST]
    
    # OPTIONS:
    # -a: pull all tagged images of the repository
    # --disable-content-trust: skip image verification (enabled by default)      
  • docker rmi: remove one or more local images
    docker rmi [OPTIONS] IMAGE [IMAGE...]
    
    # OPTIONS:
    # -f: force removal
    # --no-prune: do not remove untagged parent layers (removed by default)      
  • docker inspect: get metadata for a container or image
    docker inspect [OPTIONS] NAME|ID [NAME|ID...]
    
    # OPTIONS:
    # -f: format the output with a Go template
    # -s: show total file sizes
    # --type: return JSON for the specified type      
  • docker images: list local images
    docker images [OPTIONS] [REPOSITORY[:TAG]]
    
    # OPTIONS:
    # -a: list all local images (including intermediate layers, hidden by default)
    # --digests: show image digests
    # -f: filter the output
    # --format: format the output with a Go template
    # --no-trunc: show full image information
    # -q: show image IDs only      
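
A short example tying these commands together (the tag is illustrative):

docker pull redis:5
docker images redis
docker inspect -f '{{.Id}}' redis:5
docker rmi redis:5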

Exporting, importing, and migrating images

docker export / import package up a container

docker save / load package up an image

  • docker save: save one or more images to a tar archive.
    docker save [OPTIONS] IMAGE [IMAGE...]
    
    # OPTIONS:
    # -o: write to a file
    
    # e.g.:
    # docker save -o /app2/1.tar.gz redis      
  • docker load: load images previously exported with docker save.
    docker load [OPTIONS]
    
    # OPTIONS:
    # -i: read from a tar archive file
    # -q: suppress the load output
    
    # e.g.:
    # docker load -i /app2/1.tar.gz      

docker tag

The point of tagging is to make it easy to push images to your own private repository.

  • docker tag: tag a local image, placing it in a repository namespace.
    docker tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]
    
    # e.g.:
    # docker tag redis:latest 13057686866/redis_1
    # log in
    # docker login
    # push to the remote private repository
    # docker push 13057686866/redis_1      

Building images manually

  • docker build builds an image from a Dockerfile.
    docker build [OPTIONS] PATH | URL | -
    
    # OPTIONS:
    # --build-arg=[]: set build-time variables
    # --cpu-shares: set the CPU weight
    # --cpu-period: limit the CPU CFS period
    # --cpu-quota: limit the CPU CFS quota
    # --cpuset-cpus: CPUs in which to allow execution
    # --cpuset-mems: memory nodes in which to allow execution
    # --disable-content-trust: skip verification (enabled by default)
    # -f: path of the Dockerfile to use
    # --force-rm: always remove intermediate containers
    # --isolation: container isolation technology
    # --label=[]: set metadata on the image
    # -m: memory limit
    # --memory-swap: total memory plus swap; "-1" means unlimited swap
    # --no-cache: do not use the build cache
    # --pull: always attempt to pull a newer version of the base image
    # --quiet, -q: quiet mode, print only the image ID on success
    # --rm: remove intermediate containers after a successful build
    # --shm-size: size of /dev/shm (default 64M)
    # --ulimit: ulimit options
    # --tag, -t: name and optional tag, usually name:tag or just name; multiple tags can be set in one build
    # --network: network mode for RUN instructions during the build (default "default")      

Dockerfile

Building your own image with docker build

Official docs: https://docs.docker.com/engine/reference/builder/

Dockerfile instructions

  • FROM
  • ENV
  • RUN
  • CMD
  • LABEL
  • EXPOSE
  • ADD

    ADD can not only copy files but also download remote URLs.

    If the source is a local compressed tar archive, it is extracted automatically.

  • COPY
  • ENTRYPOINT
  • VOLUME
  • USER
  • WORKDIR
  • ONBUILD
  • STOPSIGNAL
  • HEALTHCHECK
  1. Create an empty project named WebApplication1
  2. Add a Dockerfile
    # 1 - start from a base image
    FROM mcr.microsoft.com/dotnet/core/sdk:2.2
    
    # 2 - copy my files into the /app folder of this image
    COPY . /app
    
    # working directory
    WORKDIR /app
    
    # 3 - publish
    RUN cd /app && dotnet publish "WebApplication1.csproj" -c Release -o /work
    
    # 4 - tell the outside world the app exposes port 80
    EXPOSE 80
    
    # other settings
    ENV TZ Asia/Shanghai
    ENV ASPNETCORE_ENVIRONMENT Production
    
    # author information
    LABEL version="1.0"
    LABEL author="wyt"
    
    # user to run as
    USER root
    
    # set the working directory
    WORKDIR /work
    
    # 5 - start the app
    CMD ["dotnet","WebApplication1.dll"]      
  3. 将 WebApplication1 整個目錄拷貝到遠端伺服器下
    Dokcer基礎使用總結(Dockerfile、Compose、Swarm)
  4. 建構鏡像
    cd /app/WebApplication1
    docker build -t 13057686866/webapp:v1 .      
  5. Run the container
    docker run -d -p 18000:80 --name webapp3 13057686866/webapp:v1      
  6. Verify it works
    curl http://192.168.103.240:18000/
    Hello World!      

Dockerfile optimization strategies

Use .dockerignore to exclude files

Official docs: https://docs.docker.com/engine/reference/builder/#dockerignore-file

**/.dockerignore
**/.env
**/.git
**/.gitignore
**/.vs
**/.vscode
**/*.*proj.user
**/azds.yaml
**/charts
**/bin
**/obj
**/Dockerfile
**/Dockerfile.develop
**/docker-compose.yml
**/docker-compose.*.yml
**/*.dbmdl
**/*.jfm
**/secrets.dev.yaml
**/values.dev.yaml
**/.toolstarget      

You can simply let Visual Studio create the Dockerfile; it generates a .dockerignore automatically.

Use multi-stage builds

In a multi-stage build, each FROM starts a new stage.

Only the final FROM produces the resulting image; the earlier stages exist only to support the final one.

Once the final stage has been built, the intermediate stages are discarded.

FROM build AS publish gives the current stage an alias.

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY ["WebApplication1.csproj", ""]
RUN dotnet restore "WebApplication1.csproj"
COPY . .
WORKDIR "/src/"
RUN dotnet build "WebApplication1.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]       

Remove unnecessary files promptly

# 3-publish
RUN cd /app && dotnet publish "WebApplication1.csproj" -c Release -o /work && rm -rf /app      

Minimize the number of layers

  • Refer to the official Dockerfiles for examples
  • ADD vs COPY: every ADD/COPY instruction adds a layer
  • Merge RUN instructions wherever possible (see the sketch below)
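
A minimal sketch of merging RUN instructions (package names are illustrative):

# one layer instead of three; clean the package cache in the same layer it was created
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*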

Setting up your own private registry

Official docs: https://docs.docker.com/registry/deploying/

Running a registry on your own LAN speeds up pulls and pushes.

  1. Pull the registry image
    docker pull registry:2      
  2. Run the registry container
    # run the local registry container
    docker run -d -p 5000:5000 --restart=always --name registry registry:2      
  3. Pull the alpine image
    # pull the alpine image
    docker pull alpine      
  4. Re-tag it so that it points at the local registry
    # re-tag the image to point at the local registry
    docker tag alpine 192.168.103.240:5000/alpine:s1      
  5. Push it to the local registry
    # push to the local registry
    docker push 192.168.103.240:5000/alpine:s1      

    Problem: http: server gave HTTP response to HTTPS client (the HTTPS client refuses a plain-HTTP response)

    Fix: https://docs.docker.com/registry/insecure/

    # Edit daemon.json; its default location is /etc/docker/daemon.json on Linux or C:\ProgramData\docker\config\daemon.json on Windows Server. On Docker Desktop for Mac or Windows, click the Docker icon, choose Preferences, then Daemon.
    # If daemon.json does not exist, create it. Assuming there are no other settings in the file, it should contain:
    
    {
      "insecure-registries" : ["192.168.103.240:5000"]
    }
    
    # 将不安全系統資料庫的位址替換為示例中的位址。
    
    # 啟用了不安全的系統資料庫後,Docker将執行以下步驟:
    # 1-首先,嘗試使用HTTPS。
    # 2-如果HTTPS可用但證書無效,請忽略有關證書的錯誤。
    # 3-如果HTTPS不可用,請回退到HTTP。
    
    # 重新啟動Docker以使更改生效。
    service docker restart      
  6. Verify the push worked by pulling the image back
    docker pull 192.168.103.240:5000/alpine:s1      
  7. Pull an open-source registry UI image

    Image page: https://hub.docker.com/r/joxit/docker-registry-ui

    # pull the registry-ui image
    docker pull joxit/docker-registry-ui      
  8. Allow cross-origin requests (CORS) on the registry
    # enable CORS, see https://docs.docker.com/registry/configuration/
    # copy the config file out of the container
    docker cp registry:/etc/docker/registry/config.yml /app
    # edit the config file and add the CORS headers
    vim /app/config.yml
    
    version: 0.1
    log:
      fields:
        service: registry
    storage:
      cache:
        blobdescriptor: inmemory
      filesystem:
        rootdirectory: /var/lib/registry
    http:
      addr: :5000
      headers:
        X-Content-Type-Options: [nosniff]
        Access-Control-Allow-Origin: ['*']
        Access-Control-Allow-Methods: ['*']
        Access-Control-Max-Age: [1728000]
    health:
      storagedriver:
        enabled: true
        interval: 10s
        threshold: 3
        
    # recreate the registry container with the new config
    docker rm registry -f
    docker run -d -p 5000:5000 --restart=always --name registry -v /app/config.yml:/etc/docker/registry/config.yml registry:2      
  9. Run the registry-ui container
    # run the container
    docker rm -f registry-ui
    docker run -d -p 8002:80 --name registry-ui joxit/docker-registry-ui      
  10. Open the registry UI in a browser

Using Alibaba Cloud Container Registry

Console: https://cr.console.aliyun.com/cn-hangzhou/instances/repositories

How to use it:

  1. Log in to the Alibaba Cloud Docker Registry
    sudo docker login --username=tb5228628_2012 registry.cn-hangzhou.aliyuncs.com      
    The username is your full Alibaba Cloud account name; the password is the one set when the service was enabled.
  2. Pull an image from the registry
    sudo docker pull registry.cn-hangzhou.aliyuncs.com/wyt_registry/wyt_registry:[image tag]      
  3. Push an image to the registry
    sudo docker tag [ImageId] registry.cn-hangzhou.aliyuncs.com/wyt_registry/wyt_registry:[image tag]
    sudo docker push registry.cn-hangzhou.aliyuncs.com/wyt_registry/wyt_registry:[image tag]      

Volumes and data mounts

There are three ways to keep data outside the container; this reduces the size of the container layer and improves performance (it avoids the container's writable layer).

Volume management

# create a volume
docker volume create redisdata
# use the volume
docker run -d -v redisdata:/data --name some-redis redis      

Advantages:

  • Independent of the host's directory layout, so easier to migrate and back up.
  • Can be managed uniformly with the docker CLI.
  • Volumes work across platforms, so there are no cross-platform path issues.
  • Volume plugins make it easy to use remote storage on cloud platforms such as AWS.
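
A small sketch of managing volumes with the CLI (the volume name redisdata comes from the example above):

docker volume ls
docker volume inspect redisdata
# remove volumes no longer referenced by any container
docker volume prune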

Bind mounts (files and directories)

A host directory is handed to the container at start-up and stays bidirectionally bound to it afterwards.

tmpfs: mounting a container directory in host memory

# without --tmpfs, the files in /tmp inside the container are visible as usual
docker run --rm -it webapp bash
# with --tmpfs, /tmp is backed by host memory and its original contents are hidden
docker run --rm --tmpfs /tmp -it webapp bash      

Networking

Single-host networking

The available network drivers are bridge (the default), overlay, host, macvlan, and none.

The default docker0 bridge on the Docker host

How the default bridge roughly works


When the Docker daemon starts, it creates the default docker0 bridge.

When a container starts, Docker creates a veth pair: one end is attached to the docker0 bridge on the host, the other end is moved into the container and renamed eth0.

Whatever one end of the veth pair receives is automatically forwarded to the other end.

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:a4ff:fe79:a36f  prefixlen 64  scopeid 0x20<link>
        ether 02:42:a4:79:a3:6f  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11  bytes 1439 (1.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


vethfc5e4ce: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::f802:99ff:fe73:34d7  prefixlen 64  scopeid 0x20<link>
        ether fa:02:99:73:34:d7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 17  bytes 1947 (1.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever



[root@localhost ~]#  docker run -it alpine ash
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #       


Limitations of the default bridge

There is no service discovery: containers on the same subnet cannot reach each other by "service name", only by IP address.

User-defined bridge networks

They come with built-in service discovery (containers can resolve each other by name).

# create a bridge network
docker network create my-net 
# create two containers on it
docker run -it --network my-net --name some-redis alpine ash
docker run -it --network my-net --name some-redis2 alpine ash
# from inside some-redis, ping some-redis2 by name
ping some-redis2      

Publishing container ports

To let programs outside the host reach a container on a bridge network, publish ports with -p.

# run a container with a published port
docker run -it --network my-net -p 80:80 --name some-redis-1 alpine ash
# inspect the NAT rules Docker created
iptables -t nat -L -n

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.18.0.4:80      

Multi-host networking

Overlay networks

Overlay networks enable communication across hosts.

  1. Use docker swarm init to set up a Docker cluster network
    # 192.168.103.240
    docker swarm init
    # 192.168.103.226
    docker swarm join --token SWMTKN-1-0g4cs8fcatshczn5koupqx7lulak20fbvu99uzjb5asaddblny-bio99e9kktn023k527y3tjgyv 192.168.103.240:2377      
  2. Create a custom, attachable overlay network that standalone containers can join
    docker network create --driver=overlay --attachable test-net      

    TCP 2377 is used for cluster management traffic (manager nodes).

    TCP 7946 and UDP 7946 are used for communication among nodes.

    UDP 4789 carries the overlay network (VXLAN) data traffic.
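
On CentOS with firewalld, opening these ports could look like the following (a hedged sketch; adjust the zone to your environment):

firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload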

Demo

  1. Start redis on 192.168.103.226
    docker run --network test-net --name some-redis -d redis      
  2. Build the Python app on 192.168.103.240
    mkdir /app
    vim /app/app.py
    vim /app/Dockerfile
    vim /app/requirements.txt      
    app.py
    from flask import Flask
    from redis import Redis, RedisError
    import os
    import socket
    
    # Connect to Redis
    redis = Redis(host="some-redis", db=0, socket_connect_timeout=2, socket_timeout=2)
    
    app = Flask(__name__)
    
    @app.route("/")
    def hello():
        try:
            visits = redis.incr("counter")
        except RedisError:
            visits = "<i>cannot connect to Redis, counter disabled</i>"
    
        html = "<b>Hostname:</b> {hostname}<br/>" \
               "<b>Visits:</b> {visits}"
        return html.format(hostname=socket.gethostname(), visits=visits)
    
    if __name__ == "__main__":
        app.run(host='0.0.0.0', port=80)      


    Dockerfile

    FROM python:2.7-slim
    
    WORKDIR /app
    
    COPY . .
    
    EXPOSE 80
    
    RUN pip install --trusted-host pypi.python.org -r requirements.txt
    
    VOLUME [ "/app" ]
    
    CMD [ "python", "app.py" ]      


    requirements.txt

    Flask
    Redis      
    # build the image
    docker build -t pyweb:v1 .
    # run the container
    docker run -d --network test-net -p 80:80 -v /app:/app --name pyapp pyweb:v1      
    Accessing the app shows the hostname and the visit counter.

Host mode

In this mode there is no network isolation from the host; the container uses the host's network stack directly.

It is the simplest and most brute-force approach.

Overlay networking is more complex but powerful, and harder to control.

docker-compose

What is docker-compose? One-command deployment of an application stack (or of a standalone program); docker-compose manages the whole life cycle of your application stack.

Download

Official docs: https://docs.docker.com/compose/install/

# download the current stable release of Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# https://github.com/docker/compose/releases/download/1.24.1/docker-compose-Linux-x86_64
# if the download is slow, fetch the binary with a download manager first and rename it
# make the binary executable
sudo chmod +x /usr/local/bin/docker-compose
# test the installation
docker-compose --version      

A simple example

  1. Create an empty web project WebApplication1 and add NLog and Redis package support
    Install-Package NLog.Targets.ElasticSearch
    Install-Package StackExchange.Redis      
  2. Modify Program.cs to listen on port 80
    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseUrls("http://*:80")
            .UseStartup<Startup>();      
  3. Modify Startup.cs to add logging and Redis
    public Logger logger = LogManager.GetCurrentClassLogger();
    public ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("redis");
    
    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
    
        app.Run(async (context) =>
        {
            var count = await redis.GetDatabase(0).StringIncrementAsync("counter");
            var info= $"you have been seen {count} times !";
            logger.Info(info);
    
            await context.Response.WriteAsync(info);
        });
    }      
  4. Add an nlog.config file
    <?xml version="1.0" encoding="utf-8" ?>
    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          autoReload="true"
          internalLogLevel="Warn">
    
        <extensions>
            <add assembly="NLog.Targets.ElasticSearch"/>
        </extensions>
    
        <targets>
            <target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000" >
                <target xsi:type="ElasticSearch" uri="http://elasticsearch:9200" documentType="web.app"/>
            </target>
        </targets>
    
        <rules>
            <logger name="*" minlevel="Trace" writeTo="ElasticSearch" />
        </rules>
    </nlog>      
  5. Add a Dockerfile
    FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim AS base
    
    WORKDIR /data
    COPY . .
    
    EXPOSE 80
    
    ENTRYPOINT ["dotnet", "WebApplication1.dll"]      
  6. Add a docker-compose.yml file
    version: '3.0'
    
    services:
    
      webapp: 
        build: 
          context: .
          dockerfile: Dockerfile
        ports: 
          - 80:80
        depends_on: 
          - redis
        networks: 
          - netapp
    
      redis: 
        image: redis
        networks: 
          - netapp
    
      elasticsearch: 
        image: elasticsearch:5.6.14
        networks: 
          - netapp
    
      kibana: 
        image: kibana:5.6.14
        ports: 
          - 5601:5601
        networks: 
          - netapp
    
    networks: 
      netapp:      
  7. Publish the project and copy the output to the /app folder on the remote server
  8. Run docker-compose
    cd /app
    docker-compose up --build      
  9. Check the result

    Open the site at http://192.168.103.240/

    Open Kibana at http://192.168.103.240:5601 to view the logs

Common docker-compose commands

  • Control commands
    docker-compose ps
    docker-compose images
    docker-compose kill webapp
    docker-compose build
    docker-compose run      -> docker exec
    docker-compose scale
    docker-compose up       -> docker run
    docker-compose down      
  • Status commands
    docker-compose logs
    docker-compose ps
    docker-compose top
    docker-compose port 
    docker-compose config      

Compose file reference

Official docs: https://docs.docker.com/compose/compose-file/

Common top-level keys in the yml file

version      3.7 
services
configs   (swarm)
secrets   (swarm)
volumes     
networks        

App stack additions

Update the docker-compose.yml in the WebApplication1 project
version: '3.0'

services:

  webapp: 
    build: 
      context: .
      dockerfile: Dockerfile
    image: wyt/webapp
    container_name: webapplication
    restart: always
    ports: 
      - 80:80
    depends_on: 
      - redis
    networks: 
      - netapp

  redis: 
    image: redis
    networks: 
      - netapp

  elasticsearch: 
    image: elasticsearch:5.6.14
    networks: 
      - netapp
    volumes:
      - "esdata:/usr/share/elasticsearch/data"

  kibana: 
    image: kibana:5.6.14
    ports: 
      - 5601:5601
    networks: 
      - netapp

volumes:
  esdata:

networks: 
  netapp:      

Some docker-compose script examples: https://download.csdn.net/download/qq_25153485/11324352

Some docker-compose usage principles

Deploy with multiple compose files

  • In production the code is baked into the image; in the test environment the code is bind-mounted instead (see the override sketch after this list)
    test:   docker-compose -f  docker-compose.yml  -f test.yml   up 
    prd:   docker-compose -f  docker-compose.yml  -f prd.yml   up       
  • In production, bind the application's default port; on test machines bind a different port to avoid conflicts.
  • In production, configure restart: always so a container is restarted after it dies.
  • Add log aggregation, shipping logs to Elasticsearch.
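
A minimal, hedged sketch of what a test override file might contain (test.yml matches the file name used in the commands above; values are illustrative):

# test.yml - overrides applied on top of docker-compose.yml for the test environment
version: '3.0'

services:
  webapp:
    ports:
      - 8080:80            # non-default port to avoid conflicts on the test box
    volumes:
      - ./publish:/data    # bind-mount the published code instead of baking it in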

Build only what you need

# build only the image of the webapp service (its dependencies are also built)
docker-compose build webapp
# rebuild and start only the webapp service, without starting its dependencies
docker-compose up --no-deps --build -d webapp      

Variable interpolation

  1. Set environment variables on the host
    # set an environment variable
    export ASPNETCORE_ENVIRONMENT=Production
    # read it back
    echo $ASPNETCORE_ENVIRONMENT
    # the host IP (NIC address) can be injected the same way, so it is easy to pick up
    # the image version tag can be handled the same way      
  2. Reference the environment variable in docker-compose.yml
    environment:
      ASPNETCORE_ENVIRONMENT: ${ASPNETCORE_ENVIRONMENT}      

Docker visualization with Portainer

Installation guide: https://www.cnblogs.com/wyt007/p/11104253.html

yml file

portainer:
  image: portainer/portainer
  ports:
    - 9000:9000
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  restart: always
  networks: 
    - netapp      

Accessing Docker remotely with Python and C#

  1. Open a TCP port for remote access

    Edit docker.service and change ExecStart

    vim /usr/lib/systemd/system/docker.service
    
    # ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock      
    Configure daemon.json
    vim /etc/docker/daemon.json
    
    "hosts": ["192.168.103.240:18080","unix:///var/run/docker.sock"]      
  2. Reload the configuration and restart Docker
    systemctl daemon-reload
    systemctl restart docker      
  3. Check that the Docker daemon is listening
    netstat -ano | grep 18080
    
    tcp        0      0 192.168.103.240:18080   0.0.0.0:*               LISTEN      off (0.00/0/0)      
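
With the port open, the remote Engine API can be exercised directly with curl before reaching for an SDK (a sketch; the address and port come from the daemon.json above):

# list running containers over the remote TCP endpoint
curl http://192.168.103.240:18080/containers/json
# engine and API version information
curl http://192.168.103.240:18080/version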

Accessing Docker from Python

Official docs: https://docs.docker.com/develop/sdk/examples/

Accessing Docker from C#

Community project: https://github.com/microsoft/Docker.DotNet

class Program
{
    static  async Task Main(string[] args)
    {
        DockerClient client = new DockerClientConfiguration(
                new Uri("http://192.168.103.240:18080"))
            .CreateClient();
        IList<ContainerListResponse> containers = await client.Containers.ListContainersAsync(
            new ContainersListParameters()
            {
                Limit = 10,
            });
        Console.WriteLine("Hello World!");
    }
}      

Cluster volumes

Open-source distributed file system: https://www.gluster.org/

  1. Preparation: edit /etc/hosts and add the following entries

    (on both machines)

    vim /etc/hosts
    
    192.168.103.240 fs1
    192.168.103.226 fs2      
  2. Install GlusterFS (on both nodes)
    yum install -y centos-release-gluster
    yum install -y glusterfs-server
    systemctl start glusterd
    systemctl enable glusterd      
  3. 将fs2加入到叢集中
    # 在fs1中執行
    # 将fs2加入叢集節點中
    gluster peer probe fs2
    # 檢視叢集狀态
    gluster peer status
    # 檢視叢集清單
    gluster pool list
    # 檢視所有指令
    gluster help global      
  4. Create the volume
    # create the brick directory (on both nodes)
    mkdir -p /data/glusterfs/glustervolume
    # create a replicated volume (replica 2 = two synchronized copies); force is required; run on fs1
    gluster volume create glusterfsvolumne replica 2 fs1:/data/glusterfs/glustervolume fs2:/data/glusterfs/glustervolume force
    # start the volume so it can be used
    gluster volume start glusterfsvolumne      
    Now both machines effectively share the glusterfsvolumne volume.
  5. Create a local directory and mount the volume
    # create on each node
    mkdir /app
    # cross-mount
    # fs1
    mount -t glusterfs fs2:/glusterfsvolumne /app
    # fs2
    mount -t glusterfs fs1:/glusterfsvolumne /app      
    [root@localhost app]# df -h
    檔案系統                 容量  已用  可用 已用% 挂載點
    /dev/mapper/centos-root   17G   12G  5.8G   67% /
    devtmpfs                 903M     0  903M    0% /dev
    tmpfs                    920M     0  920M    0% /dev/shm
    tmpfs                    920M   90M  830M   10% /run
    tmpfs                    920M     0  920M    0% /sys/fs/cgroup
    /dev/sda1               1014M  232M  783M   23% /boot
    tmpfs                    184M   12K  184M    1% /run/user/42
    tmpfs                    184M     0  184M    0% /run/user/0
    overlay                   17G   12G  5.8G   67% /data/docker/overlay2/46ed811c8b335a3a59cae93a77133599390c4a6bf2767a690b01b8b2999eb1e3/merged
    shm                       64M     0   64M    0% /data/docker/containers/f7044f3d2b744f97f60a2fd004402300a8f4d1c1494f86dfd0852a89d4626efd/mounts/shm
    fs2:/glusterfsvolumne     17G   12G  5.7G   68% /app
    overlay                   17G   12G  5.8G   67% /data/docker/overlay2/b681972965562fe4f608f0724430906078130a65d3dbe9031cb9ab40ce29698f/merged
    shm                       64M     0   64M    0% /data/docker/containers/d43a7653a61a9a6d6ad89cb178b9567d99b5b0c6976ece90bd7b92f8cc2ebcaf/mounts/shm      
    [root@localhost app]# df -h
    檔案系統                 容量  已用  可用 已用% 挂載點
    /dev/mapper/centos-root   17G  8.2G  8.9G   48% /
    devtmpfs                 903M     0  903M    0% /dev
    tmpfs                    920M     0  920M    0% /dev/shm
    tmpfs                    920M   90M  830M   10% /run
    tmpfs                    920M     0  920M    0% /sys/fs/cgroup
    /dev/sda1               1014M  232M  783M   23% /boot
    tmpfs                    184M  4.0K  184M    1% /run/user/42
    tmpfs                    184M   36K  184M    1% /run/user/0
    overlay                   17G  8.2G  8.9G   48% /data/docker/overlay2/20ae619da7d4578d9571a5ab9598478bce496423254833c110c67641e9f2d817/merged
    shm                       64M     0   64M    0% /data/docker/containers/fc31990633d41fd4bf21a8b0601db1cfb7cf9b2d5920bf1a13cf696e111d91e2/mounts/shm
    fs1:/glusterfsvolumne     17G   12G  5.7G   67% /app      


    Create a file on fs1, then check that it appears on fs2 (the original screenshots are omitted); a minimal check is sketched below.
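    A minimal replication check (a sketch; it assumes the /app mounts from step 5 are in place):
    # On fs1: create a test file through the mounted volume
    touch /app/hello.txt
    # On fs2: the same file should show up immediately
    ls /app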
  6. Deploy containers
    # fs1 fs2
    # The data directory is shared between the two hosts
    docker run --name some-redis -p 6379:6379 -v /app/data:/data -d  redis
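    Optionally (not in the original post), the /app mounts can be made persistent across reboots with an /etc/fstab entry on each node; the _netdev option defers mounting until the network is up:
    # On fs1 (use fs1:/glusterfsvolumne in the entry on fs2)
    echo 'fs2:/glusterfsvolumne /app glusterfs defaults,_netdev 0 0' >> /etc/fstab
    mount -a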

Building your own Docker Swarm cluster


Setting up the cluster

  1. Prepare three servers
    192.168.103.240 manager1
    192.168.103.226 node1
    192.168.103.227 node2      
  2. Initialize the swarm
    # 192.168.103.240 manager1
    docker swarm init      
    [root@localhost ~]# docker swarm init
    Swarm initialized: current node (ryi7o7xcww2c9e4j1lotygfbu) is now a manager.
    
    To add a worker to this swarm, run the following command:
    
        docker swarm join --token SWMTKN-1-10bndgdxqph4nqmjn0g4oqse83tdgx9cbb50pcgmf0tn7yhlno-6mako3nf0a0504tiopu9jefxc 192.168.103.240:2377
    
    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.      
  3. Join the worker nodes
    # 192.168.103.226 node1
    # 192.168.103.227 node2
    docker swarm join --token SWMTKN-1-10bndgdxqph4nqmjn0g4oqse83tdgx9cbb50pcgmf0tn7yhlno-6mako3nf0a0504tiopu9jefxc 192.168.103.240:2377      
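    To verify the cluster after joining (a quick check, run on manager1):
    # List all nodes with their status and role
    docker node ls
    # Confirm this node's swarm state
    docker info --format '{{.Swarm.LocalNodeState}}'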

 Key terms

  • manager node 

    Manages the cluster (manager + worker).

    Dispatches tasks to worker nodes for execution.

  • worker node

    Executes the tasks handed down by the manager.

    Reports task execution status and statistics back to the manager.

  • service: the service you deploy (the desired state)
  • task: a single container instance of a service
  • overlay: the cross-host network

Basic swarm commands

  • docker swarm 
    docker swarm init
    docker swarm join
    docker swarm join-token
    docker swarm leave      
  • docker node 
    docker node demote / promote 
    docker node ls / ps      
  • docker service 
    docker service create
    docker service update
    docker service scale
    docker service ls
    docker service ps
    docker service rm      
    # Create a service with one replica on a random node
    docker service create --name redis redis:3.0.6
    # Create a redis instance on every node (global mode)
    docker service create --mode global --name redis redis:3.0.6
    # Create 5 redis replicas spread across random nodes
    docker service create --name redis --replicas=5 redis:3.0.6
    # Create 3 replicas with a published port
    docker service create --name my_web --replicas 3 -p 6379:6379 redis:3.0.6
    # Update the service, scaling up to 5 replicas
    docker service update --replicas=5 redis
    # Scale the service down to 2 replicas
    docker service scale redis=2
    # Remove the service
    docker service rm redis

Customizing the swarm deployment with compose.yml

Official docs: https://docs.docker.com/compose/compose-file/#deploy

All distributed deployments use the deploy key in the compose file to control how services are placed on nodes.

  1. Prepare four servers
    192.168.103.240 manager1
    192.168.103.228 manager2
    192.168.103.226 node1
    192.168.103.227 node2      
  2. Write the docker-compose.yml file
    vim /app/docker-compose.yml
    
    version: '3.7'
    services:
      webapp:
        image: nginx
        ports:
          - 80:80
        deploy:
          replicas: 5      
  3. Deploy the stack
    # Unlike docker-compose up, this is based on the stack deploy concept
    docker stack deploy -c ./docker-compose.yml nginx      
  4. Inspect the stack
    # List all stacks
    docker stack ls
    # List the tasks of the stack named nginx
    docker stack ps nginx
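    Two more views that can help here (both are standard swarm commands):
    # List the services in the nginx stack with their replica counts
    docker stack services nginx
    # The same information across all services
    docker service ls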

帶狀态的容器進行限制

placement:
  constraints:
    - xxxxxx      
  1. Using the node's built-in attributes

    https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints---constraint

    node.id / node.hostname / node.role

    node.id           Node ID                     e.g. node.id==2ivku8v2gvtg4
    node.hostname     Node hostname               e.g. node.hostname!=node-2
    node.role         Node role                   e.g. node.role==manager
    node.labels       user-defined node labels    e.g. node.labels.security==high
    engine.labels     Docker Engine's labels
  2. Using custom node labels (more flexibility)

    node.labels / node.labels.country==china

Place all 5 tasks on the node1 node

  1. Write the docker-compose.yml file (see below for how to look up the node ID used in the constraint)
    vim /app/docker-compose.yml
    
    version: '3.7'
    services:
      webapp:
        image: nginx
        ports:
          - 80:80
        deploy:
          replicas: 5
          placement:
            constraints:
              - node.id == icyia3s2mavepwebkyr0tqxly      
  2. Deploy the stack
    # Remove the old stack, redeploy, wait 5 seconds, then show the tasks
    docker stack rm nginx &&  docker stack deploy -c ./docker-compose.yml nginx && sleep 5 && docker stack ps nginx
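    To look up the node ID used in the constraint above (run on a manager):
    # List node IDs and hostnames
    docker node ls
    # Or print only the ID of a specific node
    docker node inspect --format '{{.ID}}' node1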

Run all 5 tasks in the east region

  1. Label the nodes
    docker node update --label-add region=east --label-add country=china  0pbg8ynn3wfimr3q631t4b01s
    docker node update --label-add region=west --label-add country=china  icyia3s2mavepwebkyr0tqxly
    docker node update --label-add region=east --label-add country=usa  27vlmifw8bwyc19tpo0tbgt3e      
  2. Write the docker-compose.yml file
    vim /app/docker-compose.yml
    
    version: '3.7'
    services:
      webapp:
        image: nginx
        ports:
          - 80:80
        deploy:
          replicas: 5
          placement:
            constraints:
              - node.labels.region == east      
  3. Deploy the stack
    # Remove the old stack, redeploy, wait 5 seconds, then show the tasks
    docker stack rm nginx &&  docker stack deploy -c ./docker-compose.yml nginx && sleep 5 && docker stack ps nginx

Run all 5 tasks in the China east region

deploy:
  replicas: 5
  placement:
    constraints:
      - node.labels.region == east
      - node.labels.country == china      

Spreading tasks evenly

Currently the only strategy is spread, which distributes tasks evenly across the values of the specified node label.

placement:
  preferences:
    - spread: node.labels.zone      

Spread 8 tasks evenly across regions

  1. Write the docker-compose.yml file
    vim /app/docker-compose.yml
    
    version: '3.7'
    services:
      webapp:
        image: nginx
        ports:
          - 80:80
        deploy:
          replicas: 8
          placement:
            constraints:
              - node.id != ryi7o7xcww2c9e4j1lotygfbu
            preferences:
              - spread: node.labels.region      
  2. Deploy the stack
    # Remove the old stack, redeploy, wait 5 seconds, then show the tasks
    docker stack rm nginx &&  docker stack deploy -c ./docker-compose.yml nginx && sleep 5 && docker stack ps nginx

Restart policy

deploy:
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 3
    window: 120s      

The default condition is any (i.e. always restart). Be aware of the difference from on-failure: with any a task is restarted even when you stop its container manually, while with on-failure it is not; a quick check is sketched after the example below.

version: '3.7'
services:
  webapp:
    image: nginx
    ports:
      - 80:80
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
        delay: 5s
      placement:
        constraints:
          - node.role == worker      
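A quick way to observe the difference (a sketch, not in the original post; it assumes the stack above was deployed as nginx, so task containers are named nginx_webapp.*):
# On a node running a task: stop one task container manually
docker stop $(docker ps -q -f name=nginx_webapp | head -n 1)
# With condition: any the task should come back; with on-failure it stays down
docker service ps nginx_webapp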

Other deploy properties

endpoint_mode: vip puts a virtual IP in front of the service (comparable to keepalive, a router protocol)

labels: metadata labels for the service

mode: replicated or global

resources: limit the resources available to the service

update_config: the rolling-update strategy (a rough CLI mapping is sketched below)
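As a rough CLI counterpart for resources and update_config (a sketch; the service name my_web is taken from the earlier examples):
# Limit CPU and memory for an existing service (resources)
docker service update --limit-cpu 0.5 --limit-memory 256M my_web
# Roll out updates two tasks at a time with a 10s delay (update_config)
docker service update --update-parallelism 2 --update-delay 10s my_web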

Moving the earlier single-host application into a distributed environment

Modify the docker-compose.yml file

version: '3.0'

services:

  webapp:
    image: registry.cn-hangzhou.aliyuncs.com/wyt_registry/wyt_registry
    ports:
      - 80:80
    depends_on:
      - redis
    networks:
      - netapp
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.id == ryi7o7xcww2c9e4j1lotygfbu

  redis:
    image: redis
    networks:
      - netapp
    deploy:
      placement:
        constraints:
          - node.role == worker

  elasticsearch:
    image: elasticsearch:5.6.14
    networks:
      - netapp
    deploy:
      placement:
        constraints:
          - node.role == worker

  kibana:
    image: kibana:5.6.14
    ports:
      - 5601:5601
    networks:
      - netapp
    deploy:
      placement:
        constraints:
          - node.role == worker
networks:
  netapp:      

When pulling from a private registry, remember to add this flag, otherwise you will get a "no such image" error.

docker stack deploy -c ./docker-compose.yml nginx --with-registry-auth
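For this to work, the manager needs to be logged in to the registry first (a small reminder, using the Aliyun registry from the compose file above):
# Log in on the manager so that --with-registry-auth can forward the credentials to the agents
docker login registry.cn-hangzhou.aliyuncs.com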

New Docker features

Using config for cluster-wide mounts

  1. Create the config
    vim /app/nlog.config
    
    <?xml version="1.0" encoding="utf-8" ?>
    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          autoReload="true"
          internalLogLevel="Warn">
    
        <extensions>
            <add assembly="NLog.Targets.ElasticSearch"/>
        </extensions>
    
        <targets>
            <target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000" >
                <target xsi:type="ElasticSearch" uri="http://elasticsearch:9200" documentType="web.app"/>
            </target>
        </targets>
    
        <rules>
            <logger name="*" minlevel="Trace" writeTo="ElasticSearch" />
        </rules>
    </nlog>      
    # Create a config named nlog
    docker config create nlog /app/nlog.config      
  2. Inspect the config; the data is base64-encoded by default
    docker config inspect nlog
    
    [
        {
            "ID": "1zwa2o8f71i6zm6ie47ws987n",
            "Version": {
                "Index": 393
            },
            "CreatedAt": "2019-07-11T10:30:58.255006156Z",
            "UpdatedAt": "2019-07-11T10:30:58.255006156Z",
            "Spec": {
                "Name": "nlog",
                "Labels": {},
                "Data": "PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiID8+CjxubG9nIHhtbG5zPSJodHRwOi8vd3d3Lm5sb2ctcHJvamVjdC5vcmcvc2NoZW1hcy9OTG9nLnhzZCIKICAgICAgeG1sbnM6eHNpPSJodHRwOi8vd3d3LnczLm9yZy8yMDAxL1hNTFNjaGVtYS1pbnN0YW5jZSIKICAgICAgYXV0b1JlbG9hZD0idHJ1ZSIKICAgICAgaW50ZXJuYWxMb2dMZXZlbD0iV2FybiI+CgogICAgPGV4dGVuc2lvbnM+CiAgICAgICAgPGFkZCBhc3NlbWJseT0iTkxvZy5UYXJnZXRzLkVsYXN0aWNTZWFyY2giLz4KICAgIDwvZXh0ZW5zaW9ucz4KCiAgICA8dGFyZ2V0cz4KICAgICAgICA8dGFyZ2V0IG5hbWU9IkVsYXN0aWNTZWFyY2giIHhzaTp0eXBlPSJCdWZmZXJpbmdXcmFwcGVyIiBmbHVzaFRpbWVvdXQ9IjUwMDAiID4KICAgICAgICAgICAgPHRhcmdldCB4c2k6dHlwZT0iRWxhc3RpY1NlYXJjaCIgdXJpPSJodHRwOi8vZWxhc3RpY3NlYXJjaDo5MjAwIiBkb2N1bWVudFR5cGU9IndlYi5hcHAiLz4KICAgICAgICA8L3RhcmdldD4KICAgIDwvdGFyZ2V0cz4KCiAgICA8cnVsZXM+CiAgICAgICAgPGxvZ2dlciBuYW1lPSIqIiBtaW5sZXZlbD0iVHJhY2UiIHdyaXRlVG89IkVsYXN0aWNTZWFyY2giIC8+CiAgICA8L3J1bGVzPgo8L25sb2c+Cg=="
            }
        }
    ]
    
    
    # Decoded
    <?xml version="1.0" encoding="utf-8" ?>
    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          autoReload="true"
          internalLogLevel="Warn">
    
        <extensions>
            <add assembly="NLog.Targets.ElasticSearch"/>
        </extensions>
    
        <targets>
            <target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000" >
                <target xsi:type="ElasticSearch" uri="http://elasticsearch:9200" documentType="web.app"/>
            </target>
        </targets>
    
        <rules>
            <logger name="*" minlevel="Trace" writeTo="ElasticSearch" />
        </rules>
    </nlog>      
  3. Attach the config to a service; a file named nlog appears in the container's root directory
    docker service create --name redis --replicas 3 --config nlog redis      
    [root@localhost app]# docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS               NAMES
    e5f7b18e8377        redis:latest        "docker-entrypoint.s…"   About a minute ago   Up About a minute   6379/tcp            redis.3.usqs8c5mucee16mokib7143aa
    [root@localhost app]# docker exec -it e5f7b18e8377 bash
    root@e5f7b18e8377:/data# cd /
    root@e5f7b18e8377:/# ls
    bin  boot  data  dev  etc  home  lib  lib64  media  mnt  nlog  opt  proc  root    run  sbin  srv    sys  tmp  usr  var
    root@e5f7b18e8377:/# cd nlog 
    bash: cd: nlog: Not a directory
    root@e5f7b18e8377:/# cat nlog 
    <?xml version="1.0" encoding="utf-8" ?>
    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          autoReload="true"
          internalLogLevel="Warn">
    
        <extensions>
            <add assembly="NLog.Targets.ElasticSearch"/>
        </extensions>
    
        <targets>
            <target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000" >
                <target xsi:type="ElasticSearch" uri="http://elasticsearch:9200" documentType="web.app"/>
            </target>
        </targets>
    
        <rules>
            <logger name="*" minlevel="Trace" writeTo="ElasticSearch" />
        </rules>
    </nlog>      
  4. The same thing with docker-compose
    vim /app/docker-compose.yml
    
    version: "3.7"
    services:
      redis:
        image: redis:latest
        deploy:
          replicas: 3
        configs:
          - nlog2
    configs:
      nlog2:
        file: ./nlog.config      
  5. Deploy
    docker stack deploy -c docker-compose.yml redis --with-registry-auth      
  6. Mount the config to a specific path (here it is mounted as /root/nlog2 inside the container); a quick check is sketched after the compose file
    vim /app/docker-compose.yml
    
    version: "3.7"
    services:
      redis:
        image: redis:latest
        deploy:
          replicas: 1
        configs:
          - source: nlog2
            target: /root/nlog2
    configs:
      nlog2:
        file: ./nlog.config      
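    A quick check that the config landed at the requested path (a sketch; it assumes the stack above was deployed as redis, so task containers match redis_redis):
    # Inspect the mounted config inside one of the task containers
    docker exec $(docker ps -q -f name=redis_redis | head -n 1) cat /root/nlog2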

Mounting sensitive data with secret

If you need to mount sensitive configuration into a swarm service, consider using a secret, for example:

  1. usernames and passwords
  2. production database connection strings   

Usage is the same as config; the secret is mounted inside the container at /run/secrets/<secret_name>.
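A minimal sketch of the CLI usage (the secret name db_password is just an example):
# Create a secret from stdin
echo "S3cretP@ssw0rd" | docker secret create db_password -
# Attach it to a service; it appears inside the container at /run/secrets/db_password
docker service create --name some-db --secret db_password redis
# In a compose file, a top-level "secrets:" key mirrors the "configs:" examples above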
