
Operations on a Worker Node

Add a worker node to the cluster.

Obtain the DemoVideoEngine.tar.gz software package of Video Stream Engine and upload it to the /home/k8s directory on the server.

  1. Configure container storage isolation and storage size. Perform this step again after the server is restarted.
    cd /home/k8s
    tar -xvf DemoVideoEngine.tar.gz k8s/
    cd /home/k8s/k8s/DevicesPlugin
    chmod +x storage_manager.sh
    ./storage_manager.sh $ACTION $STORAGE_START_INDEX $STORAGE_END_INDEX $STORAGE_SIZE_GB $IMG_BASE
    

    Table 1 describes the command parameters.

    Table 1 Parameters for configuring container storage isolation and storage size

    ACTION: Specifies the action. The value can be create or delete.

    STORAGE_START_INDEX: Specifies the start index of the data volumes to be created or deleted.

    STORAGE_END_INDEX: Specifies the end index of the data volumes to be created or deleted. The end index must be greater than or equal to the start index.

    STORAGE_SIZE_GB: Specifies the storage size of each data volume, in GB. This parameter is unavailable when the action is delete.

    IMG_BASE: Specifies the image file of the base data volume. If there is no base data volume, this parameter can be left blank. For details about how to create an image file of the base data volume, see Creating a Base Data Volume. This parameter is unavailable when the action is delete. Specify either STORAGE_SIZE_GB or IMG_BASE, not both.

    For example:

    • Create 100 isolated data volumes, video1 to video100, with a storage size of 32 GB each:
      ./storage_manager.sh create 1 100 32
      
    • Add 20 isolated data volumes, video101 to video120, with a storage size of 32 GB each:
      ./storage_manager.sh create 101 120 32
      
    • Delete data volumes video1 to video100.
      ./storage_manager.sh delete 1 100
      
    • Delete the remaining 20 data volumes (video101 to video120).
      ./storage_manager.sh delete 101 120
      
    • Use videobase.img to create data volumes video1 to video100.
      ./storage_manager.sh create 1 100 /home/mount/img/videobase.img
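    The parameter rules in Table 1 can be checked before the script is called. A minimal sketch (the helper `check_args` is hypothetical and not part of the package; it only mirrors the documented constraints):

    ```shell
    #!/bin/sh
    # Hypothetical pre-flight check mirroring Table 1: ACTION must be create
    # or delete, the end index must be >= the start index, and create needs
    # either STORAGE_SIZE_GB or IMG_BASE as the fourth argument.
    check_args() {  # usage: check_args ACTION START END [SIZE_GB_or_IMG_BASE]
        case "$1" in
            create|delete) ;;
            *) echo "ACTION must be create or delete" >&2; return 1 ;;
        esac
        [ "$3" -ge "$2" ] || { echo "end index must be >= start index" >&2; return 1; }
        if [ "$1" = "create" ] && [ -z "$4" ]; then
            echo "create needs STORAGE_SIZE_GB or IMG_BASE" >&2
            return 1
        fi
    }

    check_args create 1 100 32 && echo "arguments OK"   # prints "arguments OK"
    ```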
      

    If you have run the commands in this step and want to change the storage size of a data volume (for example, video1), delete the data volume and then create it again with the required size.
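    Because the size cannot be changed in place, a resize is a delete followed by a create, and the volume's contents are lost, so back them up first if needed. A minimal sketch (the helper name and the 64 GB target are illustrative; it prints the commands rather than running them, to stay side-effect free):

    ```shell
    #!/bin/sh
    # Sketch: print the two storage_manager.sh invocations that resize
    # volumes videoSTART..videoEND to a new size, per the note above.
    resize_cmds() {  # usage: resize_cmds START END NEW_SIZE_GB
        printf '%s\n' "./storage_manager.sh delete $1 $2"
        printf '%s\n' "./storage_manager.sh create $1 $2 $3"
    }

    resize_cmds 1 100 64
    ```

    Piping the output to sh from the DevicesPlugin directory would perform the actual resize.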

  2. On the worker node, run the join command with the token to join the cluster. (The command is displayed when the master node is initialized successfully; see the red box in Figure 1.)

    For example:

    kubeadm join xx.xx.xx.xx:xxxx --token 7h0hpd.1av4cdcb4fb0on5x \
    --discovery-token-ca-cert-hash sha256:357c6d1dbefe6f7adf3c80987a90d3765965b1c43e1757b655ea8586c8ade10a
    
    • After a worker node is restarted and added to the cluster, ensure that the worker node can run the video stream cloud phone.
    • xx.xx.xx.xx indicates the IP address of the master node, and xxxx indicates the mapped port.
    • If the token commands for joining the cluster are invalid, run the following command on the master node to generate a new token:
      kubeadm token create --print-join-command
      
  3. Check the cluster status.
    1. Check the status on the master node.
      kubectl get nodes -o wide
      

      It is expected that the STATUS column of this worker node is Ready and the CONTAINER-RUNTIME column is containerd://x.x.x.

    2. Check the Pod status on the master node.
      kubectl get pod -A -o wide
      

      It is expected that the STATUS column of all Pods on the worker node is Running.

    3. Check the container status on the worker node.
      crictl ps
      

      It is expected that the STATE column of all containers is Running.
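    The node check above can also be scripted. A minimal sketch, assuming the documented column layout of `kubectl get nodes` output (the helper name and the sample line are illustrative):

    ```shell
    #!/bin/sh
    # Sketch: given one line of `kubectl get nodes --no-headers` output,
    # report whether the node's STATUS column (field 2) is Ready.
    node_is_ready() {  # usage: node_is_ready "<one output line>"
        printf '%s\n' "$1" | awk '{ exit ($2 == "Ready") ? 0 : 1 }'
    }

    # Illustrative sample line in the documented format:
    line="worker-1   Ready   <none>   5m   v1.28.2"
    node_is_ready "$line" && echo "node is Ready"   # prints "node is Ready"
    ```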

  4. (Optional) Configure NUMA affinity. To ensure performance, configure NUMA affinity between CPUs and DaoCloud devices.
    1. Stop kubelet.
      systemctl stop kubelet
      
    2. Delete the old CPU manager state file. The default path to the file is /var/lib/kubelet/cpu_manager_state.
      rm -rf /var/lib/kubelet/cpu_manager_state
      
    3. Modify the kubelet configuration file config.yaml.
      vi /var/lib/kubelet/config.yaml
    4. Press i to enter insert mode, and add the following content to the end of the file (Table 2 describes the parameters):
      cpuManagerPolicy: "static"
      topologyManagerPolicy: "single-numa-node"
      reservedSystemCPUs: "0,1,32,33,64,65,96,97"
      Table 2 Parameters in the kubelet configuration file

      cpuManagerPolicy: CPU manager policy. The value can be none or static.
      • none (default): Containers are not bound to cores.
      • static: Containers are bound to cores so that the bound CPU cores are exclusively used by those containers.

      topologyManagerPolicy: Topology manager policy. You are advised to set this parameter to single-numa-node to prevent cross-NUMA access.

      reservedSystemCPUs: Reserved CPUs. Reserved CPUs can be bound to system processes, for example, network interrupt handling, to prevent system processes from affecting services. Configure reserved CPUs based on service requirements.

    5. Press Esc, type :wq!, and press Enter to save the file and exit.
    6. Restart kubelet and check the kubelet status.
      systemctl start kubelet
      systemctl status kubelet
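      After the restart, kubelet regenerates /var/lib/kubelet/cpu_manager_state and records the active policy in it, which gives a quick way to confirm that the static policy took effect. A minimal sketch (the grep pattern assumes kubelet's JSON state-file layout):

      ```shell
      #!/bin/sh
      # Sketch: check that the regenerated CPU manager state file records the
      # static policy. The file is JSON; "policyName" is the key kubelet writes.
      STATE_FILE="${STATE_FILE:-/var/lib/kubelet/cpu_manager_state}"
      if grep -q '"policyName":"static"' "$STATE_FILE" 2>/dev/null; then
          echo "CPU manager policy: static"
      else
          echo "static policy not found in $STATE_FILE" >&2
      fi
      ```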