
This page describes how to configure and use Third Party Block Storage discovered through OpenStack Cinder as the storage provider.

Introduction

CoprHD integrates with OpenStack using the Cinder component. In CoprHD terminology, this integration is known as "CoprHD's southbound integration with OpenStack".

In prior releases, Third Party Block Storage integration in Cinder was consumed as standalone storage, meaning that any back-ends configured in Cinder were treated as independent storage systems. With this release, a new CoprHD feature enables supported storage systems to be used as VPLEX back-ends. See the latest VPLEX Support Matrix for a list of storage systems that can be used as VPLEX back-ends.

NOTE: Cinder has drivers for more than 78 storage systems. However, only those that appear in the VPLEX Support Matrix can be used as VPLEX back-ends.

Support Matrix

VPLEX: Latest VPLEX Support Matrix

CoprHD        OpenStack
Darth         Kilo
Darth SP1     Kilo

 

Prerequisites

  1. EMC VPLEX storage system must be up and running.
  2. The intended back-end storage system (for example, HP3PAR) must be up and running.
  3. The physical SAN connectivity between the EMC VPLEX and back-end storage system must be operational.
  4. If Brocade devices are used, Brocade CMCNE must be installed and configured to discover the fabrics/switches used for connectivity.
  5. If Cisco devices are used, Cisco MDS must be installed and configured.

Configuration

VPLEX

From the VPLEX point of view, you need to ensure that  physical connectivity exists between the back-end storage system and the VPLEX. A suggested method follows:

  1. Provision  a test LUN to VPLEX using the storage system management console.
  2. When connectivity is proven, we recommend that you delete the test LUN on the VPLEX.  
  3. Clean up the storage system management console for any VPLEX entries, such as host entries, mask entries, etc.

OpenStack

White paper

Go here for a white paper describing how to set up OpenStack Cinder as a storage provider for CoprHD. A summary is provided below.

The following tasks are required to prepare OpenStack as a storage provider for CoprHD: 

  1. Install the required OpenStack components.
  2. Configure cinder.conf for the storage systems that are used as VPLEX back-ends.
  3. Create volume types for each of the configured back-ends.

1-Install the required OpenStack components

Keystone and Cinder are mandatory components: Keystone provides authentication and authorization, and Cinder provides the core block storage services.

There are various ways one could create this configuration. Three options are: 

  • Option #1: Deploy using DevStack (www.devstack.org).
  • Option #2: Perform a standard installation by following the installation instructions at www.openstack.org.
  • Option #3: Use the OVA specifically built for the OpenStack and CoprHD integration. This is a click-through installation on VMware. For more details on the OVA configuration, go here.

The deployment method to use depends on whether the configuration is for development, testing or production purposes. 

  • For development activity, we recommend using Option #1. That option installs all of the core components of OpenStack on a single node and is the easiest option to use (a minimal DevStack local.conf sketch follows this list).
  • Do not use Option #1 for production workloads.
  • For POC/test and production workloads, we highly recommend using Options #2 or #3,  preferably Option #2.
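
For reference, a minimal DevStack local.conf for a single-node test deployment might look like the following sketch. The variable names are standard DevStack settings; the values are placeholders to adjust for your environment. Keystone and Cinder are enabled by default in DevStack:

[[local|localrc]]
# Placeholder passwords for a disposable single-node test deployment
ADMIN_PASSWORD=devstack
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD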

2-Configure cinder.conf for the storage systems that are used as VPLEX back-ends

The white paper referenced above covers the details of cinder.conf by showing an example of one back-end storage system.

Each storage system is considered a separate back-end in Cinder. The settings in the cinder.conf file differ from storage system to storage system. To see the required settings for your storage system, go here. Those guidelines apply when CoprHD uses the storage systems as standalone storage systems, that is, when they are not used as back-ends to VPLEX. To use these storage systems as VPLEX back-ends in CoprHD, you must follow a required convention for naming the back-end titles.

The requirement is that the back-end title name in cinder.conf must include the serial number of the storage system as shown on the VPLEX management console. For example, if the storage system's serial number on the VPLEX management console is "2800776e" and it is an HP 3PAR storage system, the back-end name in cinder.conf can be any of the following values:

  "hp3par-2800776e", "2800776e-hp3par" or "hp3par-2800776e-fcbackend"

To find the serial number on the VPLEX console:

  1. Log in to the VPLEX management console:  https://<VPLEX Management Console IP>
  2. Go to Provision Storage -> Storage Arrays.  This  page shows the list of arrays/storage systems connected to VPLEX.
  3. Make note of the serial number. Usually, it is the number after the last hyphen ("-"); see the example below.
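
For example, if the array is displayed as "HP3PAR-2800776e" (a hypothetical display name), the serial number is the portion after the last hyphen:

# Hypothetical display name; the serial is everything after the last hyphen
array_name = "HP3PAR-2800776e"
serial = array_name.rsplit("-", 1)[-1]   # -> "2800776e"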


Note

If your storage system is not shown in the list, you may have to provision a test LUN from the storage system management console and ensure that the provisioning succeeds. In general, at least one LUN must be exported/provisioned from a storage system to the VPLEX for the storage system to appear in the list.

 

As another example, the cinder.conf entries for a Fujitsu ETERNUS storage system with VPLEX serial number 28000000280e07, configured as three back-ends, would look like this:

[ETERNUS_DXL-28000000280e07]
volume_driver=cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
cinder_eternus_config_file=/etc/cinder/eternus.xml
volume_backend_name=ETERNUS_DXL-28000000280e07

[ETERNUS_DXL-28000000280e07-2]
volume_driver=cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
cinder_eternus_config_file=/etc/cinder/eternus-2.xml
volume_backend_name=ETERNUS_DXL-28000000280e07-2

[ETERNUS_DXL-28000000280e07-3]
volume_driver=cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
cinder_eternus_config_file=/etc/cinder/eternus-raid.xml
volume_backend_name=ETERNUS_DXL-28000000280e07-raid5
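
After editing cinder.conf, restart the cinder-volume service so the new back-end stanzas take effect. The exact command depends on your distribution; for example, on a typical RHEL/CentOS installation:

service openstack-cinder-volume restart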

 

Note

CoprHD relies on the volume_driver or volume_backend_name values to deduce the communication protocol; most driver names contain the protocol they support. For example, the value "volume_driver=cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver" contains "fc". If the volume_driver attribute does not contain the protocol type, make sure that the volume_backend_name attribute does.
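
In effect, this is a substring check. A minimal Python sketch of the idea (illustrative only, not CoprHD's actual implementation):

def deduce_protocol(volume_driver, volume_backend_name):
    # Illustrative only: look for a protocol hint in either attribute
    for value in (volume_driver.lower(), volume_backend_name.lower()):
        if "fc" in value:
            return "FC"
        if "iscsi" in value:
            return "iSCSI"
    return None  # no hint found; add the protocol to volume_backend_name

# The Fujitsu FC driver name above contains "fc", so this returns "FC"
print(deduce_protocol(
    "cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver",
    "ETERNUS_DXL-28000000280e07"))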

 

3-Create volume types for each of the configured back-ends

The third and last task required to configure OpenStack as a storage provider for CoprHD is to create volume types for each of the configured back-ends. See the white paper for details.
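
For reference, volume types are typically created and bound to back-ends with the cinder client; for example, using the first ETERNUS back-end from the sample above:

cinder type-create ETERNUS_DXL-28000000280e07
cinder type-key ETERNUS_DXL-28000000280e07 set volume_backend_name=ETERNUS_DXL-28000000280e07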

CoprHD

To provision, you need to discover all of the physical assets and create virtual entities from them.

  1. Discover storage systems:
    1. EMC VPLEX
    2. OpenStack Cinder as a storage provider
  2. Add/update FC storage ports.
  3. Discover the Fabric Manager.
  4. Create virtual entities.
  5. Create a project and use the service catalog to provision a volume.

Storage Systems Discovery

Add the VPLEX Storage Provider as follows:

  1. Launch the ViPR console.
  2. Go to Physical Assets -> Storage Providers -> Add.
  3. For Type,  select "VPLEX".
  4. Enter the VPLEX access details. 
  5. Click Save to trigger the discovery.
  6. After successful discovery, there should be a VPLEX storage system under Physical Assets -> Storage Systems.


Add Third Party Block Storage Provider (discover the Cinder storage provider)

  1. Launch the ViPR console.
  2. Go to Physical Assets -> Storage Providers -> Add.
  3. For Type, select "Third Party Block". 
  4. Enter the Cinder node access details. 
  5. Click Save to trigger the discovery.
  6. After successful discovery, all configured Cinder back-ends are discovered as storage systems under Physical Assets -> Storage Systems.

 

Add/Update Fibre Channel Storage Ports

EMC VPLEX and back-end storage systems are usually connected through a Fibre Channel SAN. To provision a volume from the back-end storage systems to VPLEX, CoprHD needs information about the connected Fibre Channel storage ports. The OpenStack API does not provide the storage port World Wide Port Name (WWPN) for Fibre Channel connected storage systems, so CoprHD cannot retrieve the storage port WWPNs during discovery. The WWPNs must be added and updated manually using the CoprHD UI or CLI.

When CoprHD discovers a third-party block storage array, a default storage port is created for the storage system. The default port appears on the Storage Port page, with the storage port identifier and the name Default, as follows: Openstack+<storagesystemserialnumber>+Port+Default

Prerequisites

These storage systems will be VPLEX back-ends and therefore should conform to the following VPLEX best practices regarding the number of storage ports:  

  1. Each director should have at least two paths to the back-end storage system.
  2. Each director should not have more than four paths to the back-end storage system. 

Before you proceed:

  1. Make sure that each back-end storage system has at least two FC storage ports with valid identifiers.
  2. There should be a maximum of four FC storage ports on each back-end storage system to fully adhere to the VPLEX best practices.
  3. Locate the FC storage ports for each back-end storage system and make a note of the WWN for each one.

To add or modify the FC storage port using the UI:  

  1. Go to the storage ports page for a storage system.    If no new ports are added, there is a single default storage port. 
  2. To modify a port: 
    1. Click  the available storage port name to navigate to the Edit Storage Port view.
    2. Enter the appropriate WWN, and click Save.
  3. To add new storage port instance: 
    1.  Click Add to navigate to the Create Storage Port  view.
    2. Enter the name of the storage port.
    3. For Port Type,  select  Fibre Channel.
    4. For Port Network ID, enter the valid FC WWN.
    5. Click Save.

 

To modify the FC storage port using the CLI:

  1. Get the last three digits of the storage system serial number from the list of storage systems.
    C:\Users\<username>>viprcli storagesystem list
    NAME                                    PROVIDER_NAME SYSTEM_TYPE SERIAL_NUMBER
    3parfc-776e_FC-r1_2_hp_3par_fc+11111111234 myProviderName      openstack   11111111234

     

  2. Get the port network ID for the Default storage port. The storage port network ID (PORT_NETWORK_ID) will be an invalid value.
    C:\Users\<username>>viprcli storageport list -t openstack -sn 234
      PORT_NAME TRANSPORT_TYPE    NETWORK_NAME           PORT_NETWORK_ID    REGISTRATION_STATUS
      DEFAULT       FC          FABRIC_name-fabric     <some invalid value>   REGISTERED

     

  3. Add the WWPN (50:01:02:34:05:06:FE:07 in this example) to the storage port.
    C:\Users\<username>>viprcli storageport update -t openstack -sn 234 -pn DEFAULT -tt FC -pnwid "50:01:02:34:05:06:FE:07"

     

  4. Repeat step 2 to validate that the value was added to the storage port (PORT_NETWORK_ID).
    C:\Users\<username>>viprcli storageport list -t openstack -sn 234
      PORT_NAME TRANSPORT_TYPE    NETWORK_NAME           PORT_NETWORK_ID       REGISTRATION_STATUS
      DEFAULT       FC          FABRIC_name-fabric     50:01:02:34:05:06:FE:07   REGISTERED

     

To add a new FC storage port using the CLI:  

  1. Get the last three digits of the storage system serial number.
    C:\Users\<username>>viprcli storagesystem list
    NAME                                    PROVIDER_NAME SYSTEM_TYPE SERIAL_NUMBER
    3parfc-776e_FC-r1_2_hp_3par_fc+11111111234 myProviderName      openstack   11111111234

     

  2. Create a new storage port.

    C:\Users\<username>>viprcli storageport create -st openstack -sn 234 -pn "New Storage Port 1" -tt FC -pnwid "50:01:02:34:05:06:FE:08"

 

Discover Fabric Manager

 The Fabric Manager must be configured and running before proceeding.

 To discover a fabric manager:  

  1. Go to Physical Assets -> Fabric Managers.
  2. Click Add.
  3. For Type, select Brocade or Cisco.
  4. Enter the access details based on the type selected.
  5. Click Save to trigger the discovery of the Fabric Manager.

If you entered the correct details, the discovery completes successfully and all of the SAN networks are discovered. These networks are listed under Physical Assets -> Networks.

Create Virtual Entities

Create a virtual array and a virtual pool. See the ViPR Controller User Interface Virtual Data Center Configuration Guide which can be found on the ViPR Controller Product Documentation Index.

 

Create Project and Provision Virtual Volume

To create a project, see the ViPR Controller User Interface Virtual Data Center Configuration Guide which can be found on the ViPR Controller Product Documentation Index.

To provision a virtual volume, see the ViPR Controller Service Catalog Reference Guide, which can be found on the ViPR Controller Product Documentation Index.


Additional Information

Storage Pools Size

In CoprHD, for Cinder-discovered storage systems, physical storage pools map to volume types in OpenStack Cinder. The OpenStack API does not provide information about a storage pool's actual available size, so CoprHD must populate a size for each storage pool to make it usable. The size for Cinder/OpenStack storage pools is populated as follows:

  1. By default, each storage pool has 10 TB of available size. This is a CoprHD design decision; the actual storage pool size might be more or less. (This value was 1 TB before Darth.)
  2. When the 10 TB size is 75% exhausted, the storage pool size is automatically doubled (see the sketch below). For example, once volumes created from a particular storage pool have consumed 7.5 TB or more of the available space, the storage pool size is doubled.
  3. Volume creation succeeds as long as Cinder allows it; it fails only if the size/limit is exhausted on the Cinder side, irrespective of the available space/size recorded in CoprHD for the particular storage pool. This is a conscious CoprHD design decision: do not fail unless Cinder fails to create because of a space problem. If it is a space problem, Cinder reports the issue back to CoprHD, so the user is informed that the failure reason is a space constraint.
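
A minimal Python sketch of this sizing behavior, under the rules stated above (10 TB initial size, doubling at 75% consumption); illustrative only, not CoprHD code:

class CinderStoragePool:
    # Illustrative model of CoprHD's pool-size bookkeeping for Cinder pools
    def __init__(self):
        self.total_tb = 10.0      # default size in TB (1 TB before Darth)
        self.consumed_tb = 0.0

    def record_volume(self, size_tb):
        # CoprHD only tracks consumption; the actual creation is delegated
        # to Cinder and fails only if Cinder itself reports a space problem.
        self.consumed_tb += size_tb
        if self.consumed_tb >= 0.75 * self.total_tb:
            self.total_tb *= 2    # double the reported pool size

pool = CinderStoragePool()
pool.record_volume(8.0)           # 8 TB >= 75% of 10 TB
print(pool.total_tb)              # -> 20.0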

 

2 Comments

  1. Hi Parash, please take a look at the CLI commands to add a storage port. I think they need an update: the -pnwid option is no longer in the CLI guide; I think it may have been replaced with the -pid option. Thank you.

  2. Stas:

    I checked the CLI commands; it is still -pnwid. That said, if there is an issue, we can always update the document; you know we are a community now :-)