
Integrating Ceph Block Storage with OpenStack

Installing the Ceph Software Package

During the integration, install the Ceph software package on the OpenStack controller and compute nodes so that they can act as Ceph clients.

  1. Configure the Ceph Yum repository on the controller and compute nodes:
    vim /etc/yum.repos.d/ceph.repo
    

    Add the following content to the file:

    [Ceph]
    name=Ceph packages for $basearch
    baseurl=http://download.ceph.com/rpm-nautilus/el7/$basearch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
    priority=1
    
    [Ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://download.ceph.com/rpm-nautilus/el7/noarch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
    priority=1
    
    [Ceph-source]
    name=Ceph source packages
    baseurl=http://download.ceph.com/rpm-nautilus/el7/SRPMS
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
    priority=1
    
  2. Update the Yum cache on the controller and compute nodes:
    yum clean all && yum makecache
    
  3. Install the Ceph software package on the controller and compute nodes.
    yum -y install ceph ceph-radosgw
    

Configuring the Ceph Environment

  1. On ceph1, create the required storage pools, specifying 32 placement groups (PGs) for each:
    ceph osd pool create volumes 32
    ceph osd pool create images 32
    ceph osd pool create backups 32
    ceph osd pool create vms 32
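    As a sanity check, 32 PGs per pool is consistent with the common Ceph heuristic of roughly 100 PGs per OSD, divided across replicas and pools, rounded up to a power of two. A minimal sketch, assuming a small test cluster of 3 OSDs with 3-way replication and the 4 pools above:

    ```shell
    # PG sizing heuristic from the Ceph documentation: ~100 PGs per OSD,
    # divided by the replica count and the number of pools, rounded up to
    # a power of two. The cluster size below is an assumption for a small
    # test deployment, not a value taken from this guide.
    osds=3
    replicas=3
    pools=4
    target=$(( osds * 100 / replicas / pools ))   # 25 PGs per pool
    pg=1
    while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
    echo "$pg"   # rounds up to 32, matching the value used above
    ```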
    
  2. View the created storage pools.
    ceph osd pool ls
    

  3. Distribute the Ceph configuration file and admin keyring from ceph1 to the controller and compute nodes:
    cd /etc/ceph
    ceph-deploy --overwrite-conf admin ceph1 controller compute
    
  4. On ceph1, create keyrings for the cinder, glance, and cinder-backup users so that they can access the Ceph storage pools:
    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
    ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
    ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups'
    ceph auth get-or-create client.glance | ssh controller tee /etc/ceph/ceph.client.glance.keyring
    ssh controller chown glance:glance /etc/ceph/ceph.client.glance.keyring
    ceph auth get-or-create client.cinder | ssh compute tee /etc/ceph/ceph.client.cinder.keyring
    ssh compute chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
    ceph auth get-or-create client.cinder-backup | ssh compute tee /etc/ceph/ceph.client.cinder-backup.keyring
    ssh compute chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
    ceph auth get-key client.cinder | ssh compute tee client.cinder.key
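    The four get-or-create/tee/chown command pairs above all follow one pattern. As an optional convenience, they can be wrapped in a small helper (distribute_keyring is a hypothetical function, not an official tool; passwordless SSH from ceph1 to the target node is assumed):

    ```shell
    # Hypothetical helper: push a Ceph client keyring to a node over SSH and
    # set its owner, generalizing the repeated pattern in the step above.
    distribute_keyring() {
      local client="$1" node="$2" owner="$3"
      ceph auth get-or-create "client.${client}" \
        | ssh "$node" tee "/etc/ceph/ceph.client.${client}.keyring" > /dev/null
      ssh "$node" chown "${owner}:${owner}" "/etc/ceph/ceph.client.${client}.keyring"
    }

    # Usage, mirroring the commands above:
    # distribute_keyring glance controller glance
    # distribute_keyring cinder compute cinder
    ```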
    
  5. On the compute node, add the client.cinder key to libvirt as a secret.
    UUID=$(uuidgen)
    cat > secret.xml <<EOF
    
    <secret ephemeral='no' private='no'>
      <uuid>${UUID}</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF
    
    virsh secret-define --file secret.xml
    virsh secret-set-value --secret ${UUID} --base64 $(awk '/key/ { print $3 }' /etc/ceph/ceph.client.cinder.keyring)
    

    Save the generated UUID, which will be used in subsequent Cinder and Nova configurations. In this example, the UUID is b3d5fee6-839c-482e-b244-668bad7128a9.
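    The key-extraction pipeline passed to virsh secret-set-value above can be tried locally against a sample keyring file; the key below is a fabricated placeholder, not a real Ceph key:

    ```shell
    # Local sketch of the key extraction used with virsh secret-set-value.
    # The keyring content is a sample; on a real node the input file is
    # /etc/ceph/ceph.client.cinder.keyring.
    tmp=$(mktemp -d)
    cat > "$tmp/ceph.client.cinder.keyring" <<'EOF'
    [client.cinder]
        key = AQBSdFhkAAAAABAAexamplekeyexamplekey==
    EOF
    # The keyring line has the form "key = <base64>", so the key is field 3.
    key=$(awk '/key/ { print $3 }' "$tmp/ceph.client.cinder.keyring")
    echo "$key"
    ```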

Configuring Glance to Integrate Ceph

  1. Modify the Glance configuration file on the controller node.
    vim /etc/glance/glance-api.conf
    

    Add the following content to the configuration file:

    [DEFAULT]
    ...
    # enable COW cloning of images
    show_image_direct_url = True
    ...
    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_chunk_size = 8
    
  2. Add the following content to the /etc/glance/glance-api.conf file to disable Glance cache management, which prevents images from being cached in the /var/lib/glance/image-cache directory (use flavor = keystone rather than keystone+cachemanagement):
    [paste_deploy]
    flavor = keystone
    
  3. Restart the glance-api service on the controller node.
    systemctl restart openstack-glance-api.service
    

Configuring Cinder to Integrate Ceph

  1. On the Cinder node (compute), modify the /etc/cinder/cinder.conf configuration file.
    vim /etc/cinder/cinder.conf
    

    Add the following content to the file:

    [DEFAULT]
    ...
    #enabled_backends = lvm     #Comment out the lvm configuration of Cinder.
    enabled_backends = ceph
    
    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1
    glance_api_version = 2
    rbd_user = cinder
    rbd_secret_uuid = b3d5fee6-839c-482e-b244-668bad7128a9
    

    The value of rbd_secret_uuid is the UUID generated in step 5 of Configuring the Ceph Environment.

  2. On the Cinder node (compute), restart the cinder-volume service.
    systemctl restart openstack-cinder-volume.service
    

Configuring cinder-backup to Integrate Ceph

  1. On the Cinder node (compute), modify the /etc/cinder/cinder.conf configuration file.
    vim /etc/cinder/cinder.conf
    

    Add the following content to the file:

    [DEFAULT]
    ...
    #backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    ...
    backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
    backup_ceph_conf = /etc/ceph/ceph.conf
    backup_ceph_user = cinder-backup
    backup_ceph_chunk_size = 4194304
    backup_ceph_pool = backups
    backup_ceph_stripe_unit = 0
    backup_ceph_stripe_count = 0
    restore_discard_excess_bytes = true
    

    Comment out any other backup_driver settings when configuring backup_driver.
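    Note that backup_ceph_chunk_size is specified in bytes; the value 4194304 above is 4 MiB:

    ```shell
    # backup_ceph_chunk_size is given in bytes: 4 * 1024 * 1024 = 4194304 (4 MiB).
    chunk=$(( 4 * 1024 * 1024 ))
    echo "$chunk"
    ```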

  2. Restart the cinder-backup service.
    systemctl restart openstack-cinder-backup.service
    

Configuring Nova to Integrate Ceph

  1. On the Nova node (compute), modify the /etc/nova/nova.conf configuration file.
    vim /etc/nova/nova.conf
    

    Add the following content to the file:

    [DEFAULT]
    ...
    live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
    
    [libvirt]
    ...
    virt_type = kvm
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    disk_cachemodes="network=writeback"
    rbd_user = cinder
    rbd_secret_uuid = b3d5fee6-839c-482e-b244-668bad7128a9
    

    The value of rbd_secret_uuid is the UUID generated in step 5 of Configuring the Ceph Environment; it must match the value configured in /etc/cinder/cinder.conf.

  2. On the Nova node (compute), restart the nova-compute service.
    systemctl restart openstack-nova-compute.service
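Because Cinder and Nova must reference the same libvirt secret, it is worth cross-checking rbd_secret_uuid in the two configuration files. This sketch uses sample files in a temporary directory in place of the real /etc/cinder/cinder.conf and /etc/nova/nova.conf; point the paths at the real files on the nodes:

```shell
# Sketch: confirm that rbd_secret_uuid is identical in cinder.conf and
# nova.conf. The sample files below stand in for the real configs.
tmp=$(mktemp -d)
cat > "$tmp/cinder.conf" <<'EOF'
[ceph]
rbd_secret_uuid = b3d5fee6-839c-482e-b244-668bad7128a9
EOF
cat > "$tmp/nova.conf" <<'EOF'
[libvirt]
rbd_secret_uuid = b3d5fee6-839c-482e-b244-668bad7128a9
EOF
cinder_uuid=$(awk -F' = ' '/^rbd_secret_uuid/ { print $2 }' "$tmp/cinder.conf")
nova_uuid=$(awk -F' = ' '/^rbd_secret_uuid/ { print $2 }' "$tmp/nova.conf")
[ "$cinder_uuid" = "$nova_uuid" ] && echo "rbd_secret_uuid matches"
```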