
Source code for GA https://review.coprhd.org/projects/CH/repos/coprhd-controller/browse?at=refs%2Ftags%2FVIPR-2.3-GA

 

(Work In Progress)

CoprHD Release Notes

Release number 2.3.0.0

July, 2015

 

This document contains supplemental information about CoprHD.

 

About

CoprHD is a software-defined storage platform that abstracts, pools, and automates a data center's underlying physical storage infrastructure. It provides data center administrators with a single control plane for heterogeneous storage systems.

CoprHD enables software-defined data centers by providing the following features:

  • Storage automation capabilities for multi-vendor block and file storage environments (control plane, or CoprHD Controller)
  • Management of multiple data centers in different locations with single sign-on data access from any data center
  • Migration of non-CoprHD volumes into the CoprHD environment through the CoprHD Migration Services Host Migration Utility, which leverages the capabilities of EMC PowerPath Migration Enabler
  • Integration with block arrays (non-native) through OpenStack via the CoprHD Third-Party Block Storage Provider, which enables CoprHD to discover any third-party block array that an OpenStack block storage (Cinder) driver supports

CoprHD 2.3.0 new features and changes

New features and changes introduced in CoprHD 2.3 are described in the following sections. 

Database Consistency Check

The database consistency check allows System Administrators to check whether the CoprHD database is synchronized across all of the nodes. If the nodes are not synchronized, a database repair is run to synchronize the database across the nodes as of the point in time the check was run. The Database Consistency Status is displayed on the CoprHD UI Dashboard. The Last Completed Database Check indicates the last time the check was run. The database check runs automatically every 5 days, or after a CoprHD node recovery. The frequency at which the check is run is defined in CoprHD and cannot be changed.

Support for tenant mapping and role assignment based on LDAP groups

In addition to AD groups, tenant mapping and role assignment can now be configured with LDAP groups. LDAP group support is based on the Group Object Classes and Group Member Attributes defined in the Authentication Provider.

  • Enables CoprHD to use LDAP groups in tenant user mapping, role and ACL assignments.
  • Supports both common, and custom LDAP schemas, based on user-defined Group Object Class and Group Member Attribute.
  • Supports nested hierarchy of LDAP groups.

To support LDAP groups, Group Object Classes and Group Member Attributes have been added to the CoprHD Authentication Provider. CoprHD uses the Group Object Classes and Group Member Attributes, along with the Group Attribute, to search group membership in LDAP.

Note: When using the CLI, the groupobjectclasses and groupmemberattributes entries must be present in the configuration file, but with blank values, when using Active Directory. If upgrading from an earlier version of CoprHD, existing authentication providers can be modified to enable LDAP group support. When CoprHD is configured with VDCs in a geo-federation, all instances must be upgraded to CoprHD 2.3 before LDAP groups can be added. LDAP groups can be added, mapped, and assigned from the CoprHD REST API, CLI, or UI.
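
The following minimal sketch illustrates what enabling LDAP group support on an authentication provider could look like through the CoprHD REST API, using Python and the requests library. The endpoint path, the JSON field names (including group_object_classes and group_member_attributes), and all server names and credentials are illustrative assumptions; consult the CoprHD REST API reference for the exact payload.

    import requests

    COPRHD = "https://coprhd.example.com:4443"   # hypothetical CoprHD virtual IP
    TOKEN = "<X-SDS-AUTH-TOKEN>"                 # token obtained from a prior login

    # Assumed request body for creating an LDAP authentication provider.
    provider = {
        "mode": "ldap",
        "server_urls": ["ldap://ldap.example.com:389"],
        "manager_dn": "cn=Manager,dc=example,dc=com",
        "manager_password": "secret",
        "search_base": "dc=example,dc=com",
        "search_filter": "uid=%u",
        "domains": ["example.com"],
        # New in 2.3: LDAP group support. For Active Directory providers these
        # entries are left blank, as described in the note above.
        "group_object_classes": ["groupOfNames", "posixGroup"],
        "group_member_attributes": ["member", "memberUid"],
    }

    resp = requests.post(
        f"{COPRHD}/vdc/admin/authnproviders",    # assumed endpoint path
        json=provider,
        headers={"X-SDS-AUTH-TOKEN": TOKEN, "Accept": "application/json"},
        verify=False,                            # self-signed certificate assumed
    )
    resp.raise_for_status()
    print(resp.json())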

Support for tenant mapping and role assignment based on local user groups

CoprHD user groups can be created based on AD or LDAP user attributes. Once created, a CoprHD user group can be:

  • Used in tenant user mapping.
  • Assigned a virtual data center (VDC) role, or an ACL. In role or ACL assignment, use of user groups allows collective assignment of users based on their attributes. In a geo-federated environment, user groups can only be implemented after all instances have been upgraded to CoprHD 2.3.

CoprHD user groups can be created, mapped, and assigned from the CoprHD REST API, CLI, or UI.

Changes to Security Administrator and Tenant Administrator privileges

Only the Security Administrator can perform the following functions in CoprHD:

  • See the tenants that exist in the VDC.
  • Create, modify, and delete tenants.
  • Assign the Tenant Quota.
  • Assign User Mapping Rules to the Tenant.

The Tenant Administrator can perform the following operations for the tenants assigned to them:

  • View the tenants and tenant attributes.
  • Assign roles to the users in the tenant.
  • Modify the tenant name and description.
  • Perform administration tasks for the tenant, such as creating a project or editing the service catalog.

Additional file storage support for EMC Isilon

For EMC Isilon file storage systems the following Access Control List (ACL) functionality is supported:

  • Use CoprHD to add, modify, and delete permissions for a user or group on a CIFS share.
  • These operations can be performed from the CoprHD API, CLI, or the UI Resource pages for File Systems and File System Snapshots (a rough API sketch follows this list).
  • ACLs are now discovered and ingested during discovery and ingestion of unmanaged file systems.
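
As a rough illustration of the ACL support described above, the following sketch updates the ACL of a CIFS share through the CoprHD REST API with Python and requests. The endpoint path, the payload shape, the file system URN, the share name, and the user names are assumptions made for this example rather than the documented API; check the CoprHD REST API reference for the actual request format.

    import requests

    COPRHD = "https://coprhd.example.com:4443"    # hypothetical CoprHD virtual IP
    TOKEN = "<X-SDS-AUTH-TOKEN>"
    FS_ID = "urn:storageos:FileShare:example"     # hypothetical file system URN
    SHARE = "engineering_share"                   # hypothetical CIFS share name

    # Assumed payload: grant one user full control and remove another user's entry.
    acl_update = {
        "add": [
            {"user": "jdoe", "domain": "example.com", "permission": "FullControl"},
        ],
        "delete": [
            {"user": "olduser", "domain": "example.com"},
        ],
    }

    resp = requests.put(
        f"{COPRHD}/file/filesystems/{FS_ID}/shares/{SHARE}/acl",   # assumed path
        json=acl_update,
        headers={"X-SDS-AUTH-TOKEN": TOKEN, "Accept": "application/json"},
        verify=False,
    )
    resp.raise_for_status()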


Limitation for Data Domain, VNX for File, and VNXe:

  • Default access permissions are enforced when creating CIFS shares from the CoprHD Controller.
  • Access permissions for CIFS shares must be configured using Access Control Lists on the storage system that provides the file system.
  • ACLs are not discovered and ingested with discovery of unmanaged file systems.

Increased functionality for file system shares

Enhancements to file system share functionality include:

  • File system shares can be created at the subdirectory level.
  • Complete access control on CIFS/SMB shares.
  • Unmanaged file system discovery and ingestion of shares for all file platforms.
  • Ingestion of file systems and all shares belonging to those file systems.

VCE Vblock™ System support

CoprHD support for Vblock systems now includes:

  • Updating Service Profile Templates (uSPT)
  • Vblock ingestion

CoprHD can now discover the compute element UUID when an ESX host or cluster is discovered in CoprHD. Prior to CoprHD 2.3, if an ESX host or cluster that resided on a compute element of a compute system discovered by CoprHD was decommissioned, the compute element was left in an unavailable state, because CoprHD did not discover the linkage between the host or cluster and the compute element. Now CoprHD can identify the compute element as available when an associated ESX host or cluster is decommissioned.

ESX or ESXi host discovery

ESX and ESXi hosts can now be added to CoprHD individually as hosts. They can also be added through a vCenter, but a vCenter is no longer required.

Ingestion of VMAX3 volumes

CoprHD can ingest VMAX3 volumes, including SRDF R1 and R2 volumes containing replicas operating in synchronous and asynchronous mode. 

Ingestion of snapshots, full copies, and continuous copies

CoprHD can ingest snapshots, full copies, and continuous copies that are on VMAX, VMAX3, and VNX volumes. This includes the following functionality:

  • Detached clones are ingested as independent volumes.
  • VNX continuous copies in a fractured or split state are ingested as differential full copies.
  • Snapshots linked to targets are ingested as block snapshots. VMAX3 SnapVX sessions that are not linked to any targets cannot be ingested.
  • After ingestion, CoprHD can perform these operations:
    • Full copies: restore, resynchronize, detach, and delete
    • Snapshots: restore, delete, export, and unexport
    • Continuous copies: pause, resume, stop, and delete 

Ingestion of SRDF/A protected volumes without consistency group (CG) provisioning

CoprHD no longer requires that SRDF/A protected volumes provide CG provisioning to be ingested. SRDF/A protected volumes in device groups, composite groups, and consistency groups on the VMAX array cannot be ingested.

SRDF enhancements

These enhancements include:

  • SMI-S Provider 8.0.3 support for both VMAX and VMAX3 arrays
  • SRDF/S and SRDF/A replication between VMAX and VMAX2 arrays
  • SRDF/S and SRDF/A replication between VMAX3 and VMAX arrays
  • SRDF/S and SRDF/A replication between VMAX3 and VMAX3 arrays
  • Provision SRDF/A volumes through CoprHD without consistency groups
  • Ability to perform most SRDF operations on VMAX and VMAX3 arrays operating in synchronous and asynchronous mode with and without consistency groups
  • Ability to create clones, snaps and mirrors for SRDF/S and SRDF/A volumes created without consistency groups
  • Ability to create clones and snaps for SRDF/S and SRDF/A volumes created with consistency groups 

Full copy enhancements: incremental restore and resynchronization

These enhancements include:

  • Platform support for VMAX, VMAX3, VNX, and HDS.
  • Consistency group support for VMAX, VMAX3, and VNX
  • No automatic detach on full copy creation
  • New block protection catalog services:
    • The Restore From Full Copies service restores a source volume with the latest data from a full copy
    • The Resynchronize Full Copies service copies the latest data from a source volume to a full copy
    • The Detach Full Copies service removes the source and target relationship of a copy session 

VPLEX enhancements

These enhancements include:

  • One VPLEX can manage multiple RecoverPoint clusters. This provides the ability to provision more than the 128 consistency groups that RecoverPoint provides for a single RecoverPoint cluster. When a consistency group is created in this configuration, CoprHD balances the consistency group load across the multiple protection systems.
  • Non-disruptively change the protection of an existing volume from VPLEX + RecoverPoint CRR to MetroPoint CRR, using a new virtual pool operation in the Change RecoverPoint to MetroPoint service.
  • Support for a MetroPoint two-site configuration with XtremIO and VPLEX Local at each site, and RecoverPoint CDP to the same XtremIO array.

RecoverPoint enhancements

These enhancements include:

  • A RecoverPoint copy can have its journal volumes provisioned on a different virtual array and virtual pool from the copy itself. This applies to the RecoverPoint source as well as RecoverPoint target volumes.
  • Support for configurations where there are two RecoverPoint systems with three physical sites and RecoverPoint CRR. The shared site between the two systems can be a source or a target for protection.
  • RecoverPoint consistency group balancing across RPAs.

Ingestion of XtremIO volumes

These enhancements include:

  • Discovery of XtremIO volumes and snapshots provisioned outside of CoprHD.
  • Ingestion of exported and unexported XtremIO volumes and snapshots into CoprHD.

New VMware Services added to the Service Catalog

The following two services have been added to the CoprHD Service Catalog, Block Services for VMware:

  • Export Volume for VMware — This service is used to export the volumes to a vCenter host or cluster, and then rescan the HBAs on the vCenter host or cluster after the volume is exported.
  • Unexport Volume from VMware — This service is used to unexport the volumes from a vCenter host or cluster, and then rescan the HBAs on the vCenter host or cluster after the volume is unexported.
  • Using the Export Volume for VMware service puts some limitations on the use of the following two services:
  • Delete Datastore and Volume — If you use the Delete Datastore and Volume service from CoprHD after using the Export Volume for VMware service to export the volume to the datastore, it will result in dead paths on other hosts or clusters that have access to this datastore and volume. To delete the datastore and volume:
    1. Use the Unexport Volume from VMware service on all the hosts or clusters, except for one.
    2. Use the Delete Datastore and Volume service to delete the datastore and volume from the remaining host.

  • Extend Datastore — A HostConfig error occurs when you use the Extend Datastore service from CoprHD after using the Export Volume for VMware service to export the volume to the datastore, and the datastore was exported to multiple VMware hosts or clusters. To use CoprHD to successfully extend the datastore (a minimal scripted sketch of this sequence follows this list):
    1. Use the Unexport Volume from VMware service on all the hosts or clusters, except for one.
    2. Use the Extend Datastore service to extend the datastore while it is still connected to the single host or cluster.
    3. Use the Export Volume for VMware service to export the volumes back to the hosts.
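
The sketch below shows one way the Extend Datastore workaround could be scripted in the order described above, assuming the orders are placed through the service catalog REST API. The order_service helper, the /catalog/orders endpoint and payload, the service identifiers, and the parameter names are all illustrative assumptions rather than the documented interface.

    import requests

    COPRHD = "https://coprhd.example.com:4443"   # hypothetical CoprHD virtual IP
    TOKEN = "<X-SDS-AUTH-TOKEN>"
    HEADERS = {"X-SDS-AUTH-TOKEN": TOKEN, "Accept": "application/json"}

    def order_service(service_id, parameters):
        """Place a service catalog order (assumed endpoint and payload shape)."""
        payload = {
            "catalog_service": service_id,
            "parameters": [{"label": k, "value": v} for k, v in parameters.items()],
        }
        resp = requests.post(f"{COPRHD}/catalog/orders", json=payload,
                             headers=HEADERS, verify=False)
        resp.raise_for_status()
        return resp.json()

    hosts = ["host-a", "host-b", "host-c"]   # hosts the datastore is exported to
    keep = hosts[0]                          # the one host that stays connected

    # 1. Unexport the volume from every host or cluster except one.
    for host in hosts[1:]:
        order_service("UnexportVolumeForVMware", {"host": host, "volume": "vol-1"})

    # 2. Extend the datastore while only the remaining host is connected.
    order_service("ExtendDatastore", {"host": keep, "datastore": "ds-1", "size_gb": "500"})

    # 3. Export the volume back to the other hosts or clusters.
    for host in hosts[1:]:
        order_service("ExportVolumeForVMware", {"host": host, "volume": "vol-1"})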

 

Enhancement to Service Catalog, VMware services

CoprHD now allows you to use the Create Block Volume and VMware Datastore service to create multiple datastores and volumes in the same order. While in the service order, the user enters the datastore name, volume name, and volume size for each datastore being created.

You can now enable Storage I/O Control on a VMFS or NFS datastore while using the following services from the Service Catalog:

  • Create VMware Datastore
  • Create Volume and VMware Datastore
  • Create VMware NFS Datastore
  • Create Filesystem and NFS Datastore

 

Table 4 CoprHD block storage integration component version requirements

  • CoprHD Third-Party Block Storage Provider (plug-in version 2.2). Required software: OpenStack block storage (Cinder) driver. A list of block arrays that OpenStack Cinder drivers support is provided at: https://wiki.openstack.org/wiki/CinderSupportMatrix
  • CoprHD Migration Services Host Migration Utility (plug-in version 1.0). Required software: EMC PowerPath. For prerequisite information, see the CoprHD Migration Services Host Migration Utility 1.0 Installation and Configuration Guide.

 

CoprHD SMI-S provider upgrade requirements for VMAX systems

Review the following to understand the CoprHD upgrade requirements for VMAX storage systems.

Note: The CoprHD Support Matrix has the most recent version requirements for all systems supported or required by CoprHD. For specific version requirements of the SMI-S provider, review the CoprHD Support Matrix before taking any action to upgrade or install the SMI-S provider for use with CoprHD.

  • For VMAX storage systems, use either SMI-S provider 4.6.2 or SMI-S provider 8.0.3, but not both versions. However, you must use 8.0.3 to use the new features provided with CoprHD 2.3.
  • For VMAX3 storage systems, always use SMI-S provider 8.0.3.
  • CoprHD 2.3 requires SMI-S Provider 4.6.2 for all VNX storage systems. Plan accordingly if you are using both VMAX and VNX storage systems in your environment.
  • When upgrading, you must upgrade CoprHD to version 2.3 before you upgrade the SMI-S provider to 8.0.3.
  • To upgrade to SMI-S Provider 8.0.3, you must contact EMC Customer Support.

Table 5 Upgrade requirements for VMAX storage systems

  Upgrade from:                        To:
  CoprHD      SMI-S provider           CoprHD      SMI-S provider
  2.x         4.6.2                    2.3         8.0.3
  2.2         8.0.1                    2.3         8.0.3

 

 
