
Article
Release 3.0
New Features and Changes for CoprHD 3.0
New Feature Description
Restructure of the CoprHD UI menus and options
Enhancements to Isilon Storage System support
Replication Copy and Disaster Recovery of Isilon File Systems
Schedule snapshots of Isilon file systems
Set the smart quota on an Isilon file system
Ingestion of Isilon file system Access Control Lists (ACLs)
A single Virtual NAS (vNAS) can be associated with multiple projects
Application services
Discovery and management of HP-UX hosts
Service Catalog improvements
New features in Block Storage services
Enhancements to EMC Elastic Cloud Storage support
Replication as a quality of service in Object Virtual Pools
Assign User Access Control to a bucket
Namespace discovered with EMC Elastic Cloud Storage (ECS) systems
Support for ECS User Secret Key
Inventory delete only from CoprHD for Buckets
Mobility group migration
Ingestion of RecoverPoint consistency groups
Additional RecoverPoint support
Resynchronize Block Snapshot for VMAX2 and XIO
Support for SnapVX for VMAX3 storage systems
Additional SRDF support
SRDF Metro support for VMAX3 storage systems
Support for VMAX meta and VMAX3 devices
Storage Orchestration for OpenStack (SOFO) and OVF for OpenStack deployment
Use CoprHD to create gatekeeper volumes on VMAX storage systems
Additional XtremIO support
Support for restore operation directly from the UI
Option to change backup interval and number of backup copies from UI
Support for node-to-node IPsec encryption
Improved query filter for Audit Log

New Feature Description

This article lists and describes the new features and changes introduced in CoprHD 3.0.
Unless otherwise noted, all of the CoprHD operations to support the enhancements provided in this version of CoprHD can be performed from the CoprHD UI, REST API, and CLI.

Restructure of the CoprHD UI menus and options

The CoprHD UI menu structure has been restructured as follows.

CoprHD UI menus

The menus displayed in the UI are dependent on the user role assigned to the logged in user. See the User Role Requirements in the CoprHD UI online help for more details.

    1. CoprHD UI menu and options changes

    Settings

    This menu has been removed from the CoprHD UI. The options previously in this menu have been moved to the Dashboards or System menus.

    Dashboards

  • Overview (previously Dashboard)
  • Health (previously System Health)
  • Database Housekeeping Status

    Physical

    Previously located in the Physical Assets menu:

  • Storage Systems
  • Storage Providers
  • Data Protection Systems
  • Fabric Managers
  • Networks
  • Compute Images
  • Compute Image Servers
  • Vblock Compute Systems
  • Hosts
  • Clusters
  • vCenters
  • Controller Config

    Virtual

    Previously located in the Virtual Assets menu:

  • Virtual Arrays
  • Block Virtual Pools
  • File Virtual Pools
  • Object Virtual Pools
  • Compute Virtual Pools
  • Mobility Groups (new menu option)
  • Virtual Data Centers

    Catalog

    Previously located in the Service Catalog menu:

  • Recently Used
  • View Catalog
  • Edit Catalog
  • My Orders
  • All Orders
  • Scheduled Orders

    Resources

  • Applications (new menu option)
  • Volumes
  • Block Snapshots
  • Snap Sessions (new menu option)
  • Export Groups
  • File Systems
  • File Snapshots
  • vNAS Servers
  • Buckets
  • Tasks

    Tenants

  • Tenants
  • Projects
  • Schedule Policies (new menu option)
  • Consistency Groups
  • Execution Windows
  • Approval Settings

    Security

  • VDC Role Assignments
  • Authentication Providers
  • User Groups
  • Local Passwords
  • Keystore
  • Trusted Certificates
  • IPsec (new menu option)

    System

    Previously located in the Settings menu:

  • General Configuration
  • Network Configuration (only appears with Hyper-V or VMware non-vApp deployments)

    Other System menu options:

  • Data Backup and Restore
  • Node Recovery (only appears with Hyper-V or VMware non-vApp deployments)
  • System Disaster Recovery (new menu option)
  • Upgrade
  • License
  • Support Request
  • Logs
  • Audit Logs
Changes to the Download System Logs page

    The default values for the Log Level and Orders options on the CoprHD UI System Logs > Download > Download System Logs page have changed:
  • Log Level — the default value is now Debug. When Debug is set, all of the CoprHD logs are downloaded.
  • Orders — the default value has been changed from None to All Orders.

Enhancements to Isilon Storage System support

CoprHD can now be used to perform the following operations on Isilon storage systems.

Replication Copy and Disaster Recovery of Isilon File Systems

CoprHD can be used to create replication copies of critical file system data, which are available for disaster recovery at any given point in time. File replication and disaster recovery are only supported on Isilon file systems enabled with a SyncIQ license.
When replication is enabled in the file virtual pool's data protection attributes, CoprHD replicates the data on a source file system by creating a replication copy (target) of that file system. A replication copy can be created with local mirror protection or remote mirror protection. With local mirror protection, the local replication copy is created from the same source virtual pool, on the same storage system, from which the source file system was provisioned. With remote mirror protection, the remote replication copy is created from a different target virtual pool, and a different storage system, than where the source file system was provisioned. Once replication copies are created, you can use CoprHD to fail over from the source to the target file system.
To use CoprHD to create replication copies of Isilon file systems, and to fail over to target devices, perform the following operations:


    1. Discover the Isilon storage systems in CoprHD.
    2. Create a file virtual pool, and set the Data Protection, Replication attributes.
    3. Use the file provisioning services to create the source and target file systems from the replication enabled virtual pool.
    4. Use the file protection catalog services, file replication copy service to create replication copies for existing file systems.
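Unless otherwise noted these operations can also be driven through the CoprHD REST API. The sketch below shows how the replication attributes from step 2 might be assembled as a request body; the field names (`file_replication_params`, `replication_type`, and so on) are illustrative assumptions, not the documented CoprHD API.

```python
def replication_vpool_protection(copy_mode="ASYNC", rpo_minutes=30,
                                 remote=False, target_vpool=None):
    """Build the data-protection section of a file virtual pool request.

    Field names are hypothetical. Isilon SyncIQ replication is
    asynchronous only, so ASYNC is the default copy mode.
    """
    protection = {
        # Local mirror: same virtual pool and storage system as the source.
        # Remote mirror: different target virtual pool and storage system.
        "replication_type": "REMOTE" if remote else "LOCAL",
        "copy_mode": copy_mode,
        "rpo_value": rpo_minutes,
        "rpo_type": "MINUTES",
    }
    if remote and target_vpool:
        protection["target_virtual_pool"] = target_vpool
    return {"file_replication_params": protection}
```

For example, `replication_vpool_protection(remote=True, target_vpool="vp-dr")` would describe a remote mirror whose target is provisioned from the hypothetical virtual pool `vp-dr`.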
Information and requirements to enable replication copies of Isilon file systems


Be aware of the following before enabling replication on Isilon file systems:

  • Isilon storage systems must be licensed and enabled with SyncIQ.
  • Only asynchronous replication is supported.
  • Local or remote replication is supported for file systems.
  • Replication is only supported between storage devices of the same type.
  • Full copy (clone) of Isilon file systems is not supported.
  • Synchronizing from an older version of Isilon file systems to a newer version is supported; however, synchronizing from a newer version of Isilon file systems to an older version is not supported.
  • CoprHD can only be used to create one target copy of a source file system. Creating multiple targets for one source file system, and cascading replication, are not supported.
  • You can only move file systems from an unprotected virtual pool to a protected virtual pool. All other options must be configured the same in both the virtual pool from which the file system is being moved and the virtual pool to which it is being moved.
  • When the target file system is created in CoprHD, it is write enabled until the file replication copy operation is run using it as a target file system. Once it is established as a target by the file replication copy order, any data that was previously written to the target file system is lost.

Schedule snapshots of Isilon file systems

You can use CoprHD to create a file snapshot schedule policy, which defines:

  • Regularly scheduled intervals when CoprHD will create snapshots of an Isilon file system.
  • The retention period for how long the snapshot will be retained before it is deleted.

The steps to create and assign schedule polices are:


    1. Discover storage systems.
    2. Create a file virtual pool with the schedule snapshot option enabled.
    3. Create one or more snapshot schedule policies in CoprHD.
    4. Create file systems from file virtual pools with snapshot scheduling enabled.
    5. Assign one or more snapshot policies to the file system.
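The policy created in step 3 bundles a schedule frequency and a retention period, as described above. A minimal sketch of such a policy as a request body follows; the field names are assumptions for illustration, not the documented CoprHD schedule-policy API.

```python
def snapshot_schedule_policy(name, frequency="days", repeat=1,
                             snapshot_expire_type="days",
                             snapshot_expire_value=7):
    """Assemble a hypothetical file snapshot schedule-policy body.

    The policy defines (1) the interval at which snapshots are taken and
    (2) how long each snapshot is retained before deletion.
    """
    if snapshot_expire_type not in ("hours", "days", "weeks", "months"):
        raise ValueError("unsupported retention unit")
    return {
        "policy_type": "file_snapshot",
        "policy_name": name,
        "policy_schedule": {
            "schedule_frequency": frequency,  # e.g. take a snapshot daily
            "schedule_repeat": repeat,
        },
        "snapshot_expire": {
            "expire_type": snapshot_expire_type,
            "expire_value": snapshot_expire_value,
        },
    }
```

Because one policy can be reused across file systems, the same body could be assigned to several file systems in step 5.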
Information and requirements to schedule snapshots of Isilon file systems


Be aware of the following before scheduling snapshots on Isilon file systems:

  • Only Tenant Administrators can configure schedule policies.
  • Schedule policies are only supported for local snapshots on Isilon storage systems with SnapshotIQ enabled.
  • Snapshot scheduling must be enabled on the virtual pool.
  • Schedule policies cannot be created on ingested file systems, or file systems created in CoprHD prior to this release.
  • The snapshot policy can be reused for different file systems.
  • One file system can be assigned one or more schedule policies.

Set the smart quota on an Isilon file system

You can use CoprHD to set smart quota limits at the file system, and quota directory level of Isilon storage systems managed by CoprHD.
Smart quota limits are set from CoprHD and sent to the storage system. CoprHD displays a warning when limits are exceeded; the limits themselves are enforced by, and notifications are sent from, the storage system.

Information and requirements to set smart quota limits


  • Smart Quota limits can only be set on Isilon storage systems which are configured with a SmartQuota license.
  • CoprHD detects whether the storage system is configured with a SmartQuota license at the time of provisioning. Provisioning fails if you have entered smart quota values on a service for:
      • Isilon storage systems that are not enabled with a SmartQuota license.
      • Any storage system other than an Isilon system enabled with a SmartQuota license.
  • When SmartQuota limits are set on the file system, the QuotaDirectories under the file system inherit those limits, unless different SmartQuota limits are set on the QuotaDirectories when they are created.
  • Once you have set the SmartQuota limits from CoprHD, you cannot change the SmartQuota values on the file system, or QuotaDirectories from the CoprHD UI. You must use the CoprHD CLI or CoprHD REST API to change the SmartQuota limits.
  • CoprHD will only enforce smart quota limits set on the file system by CoprHD.
  • For troubleshooting, refer to the apisvc.log, and the controllersvc.log log files.
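The limits above can be sketched as the smart-quota section of a provisioning order. The parameter names below are assumptions for illustration, not the documented CoprHD service fields.

```python
def smartquota_params(quota_gb, soft_limit_pct=None, soft_grace_days=None,
                      notification_pct=None):
    """Build a hypothetical smart-quota section for a file system order.

    CoprHD sends these values to the Isilon system, which enforces them;
    CoprHD only warns when limits are exceeded.
    """
    params = {"hard_limit_gb": quota_gb}
    if soft_limit_pct is not None:
        if not 0 < soft_limit_pct <= 100:
            raise ValueError("soft limit must be a percentage of the hard limit")
        params["soft_limit_pct"] = soft_limit_pct
        if soft_grace_days is not None:
            params["soft_grace_days"] = soft_grace_days
    if notification_pct is not None:
        params["notification_pct"] = notification_pct
    return params
```

Note that, per the requirements above, these values can only be changed later via the CoprHD CLI or REST API, not the UI.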


Ingestion of Isilon file system Access Control Lists (ACLs)

When discovering and ingesting Isilon unmanaged file systems with the NFSv4 protocol, CoprHD also discovers and ingests any access controls set on the file system.
Additionally, CoprHD discovers and ingests access controls set on the sub-directories of NFSv4-enabled Isilon file systems. Once ingested, CoprHD allows you to edit the permissions of ingested access controls, add more access controls to the Access Control List (ACL), and delete access controls from the list.

A single Virtual NAS (vNAS) can be associated with multiple projects

A single vNAS can be associated with multiple projects in CoprHD.
Steps to configure CoprHD to share a vNAS with multiple projects are:


    1. Discover the storage system.
    2. Set the Controller Configuration to allow a vNAS to be shared with multiple projects.
    3. Map a vNAS to multiple projects.
Information and requirements to associate a vNAS with multiple projects


  • You must have both the CoprHD System Administrator and Tenant Administrator roles to associate a vNAS server with projects.
  • The vNAS domain must match the tenant domain.

Application services

An application is a logical grouping of volumes determined by the customer. With application services, you can create, restore, resynchronize, detach, or delete full copies or snapshots of the volumes that are grouped by application.
A single CoprHD block consistency group represents consistency groups on all related storage and protection systems including RecoverPoint, VPLEX, and block storage arrays (such as VMAX and VNX). In previous releases, a single consistency group was limited, at most, to one consistency group on any one storage system. This prevented the creation of full copies or snapshots of subsets of RecoverPoint or VPLEX consistency groups. Now you can use Application services to create and manage sub groups of volumes in order to overcome this limitation.

Discovery and management of HP-UX hosts

HP-UX hosts, and host initiators can now be discovered and managed by CoprHD.
For the HP-UX versions supported by CoprHD, see the Support Matrix (https://community.emc.com/docs/DOC-38014).

CoprHD UI options for HP-UX


HP-UX hosts and host initiators are added, discovered, registered, and provisioned with storage from the following areas of the CoprHD UI.

    1. UI options for HP-UX hosts

    Physical > Hosts > HP-UX

    In addition to setting the HP-UX flag for VMAX hosts, as in previous releases, CoprHD now also discovers and registers the HP-UX host and host initiators when HP-UX is selected.

    Catalog > View Catalog > Block Storage Services > Block Services for HP-UX

    Provides services to:

  • Create and mount block volumes
  • Unmount a volume from HP-UX
  • Unmount and delete a volume from HP-UX
  • Mount an existing volume on HP-UX
  • Expand a volume on HP-UX

Service Catalog improvements

In addition to the Service Catalog updates supporting the CoprHD 3.0 features described in this document, the following improvements have been made to the service catalog.
  • The CoprHD UI Service Catalog menu option has been renamed Catalog.
  • New features have been added to the Block Storage services.
  • CoprHD now pauses the continuous copy mirror as part of the Remove Continuous Copy service operation. In previous releases, you had to manually pause the mirrors prior to running the operation.
  • The Format Volume option in the following services is now disabled by default (in previous releases it was enabled by default):
      • Mount Existing Volume on AIX
      • Mount Existing Volume on HP-UX
      • Mount Existing Volume on Linux
      • Mount Existing Volume on Windows

New features in Block Storage services

You can unexport multiple volumes or remove a volume snapshot from an export. The Remove Block Volumes feature now supports an Inventory Only deletion type.

Enhancements to EMC Elastic Cloud Storage support

CoprHD now supports the following EMC Elastic Cloud Storage (ECS) operations.

Replication as a quality of service in Object Virtual Pools

When creating an object virtual pool in CoprHD, you can now set the Replication value required to include a replication group in an object virtual pool.

Assign User Access Control to a bucket

CoprHD can be used to assign access control to buckets. Access can be given to an individual user, a group of users, or a custom group of users.

Information and requirements to assign access control to buckets


The user, group, or custom group must have been configured for the ECS prior to assigning them to buckets from CoprHD.
When adding access control to buckets, you cannot use spaces in the user, group, or custom group names.
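A small sketch of an ACL entry with the no-spaces rule enforced up front follows. The entry shape (`user`/`group`/`customgroup` keys and a `permissions` list) is an assumption for illustration, not the documented CoprHD bucket ACL format.

```python
def bucket_acl_entry(name, kind, permissions):
    """Build a hypothetical bucket ACL entry for one grantee.

    Access can be given to an individual user, a group, or a custom
    group; names with spaces are rejected, per the requirement above.
    """
    if kind not in ("user", "group", "customgroup"):
        raise ValueError("kind must be user, group, or customgroup")
    if " " in name:
        raise ValueError("user, group, and custom group names cannot contain spaces")
    return {kind: name, "permissions": list(permissions)}
```

The grantee itself must already exist on the ECS before the entry is assigned from CoprHD.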

Namespace discovered with EMC Elastic Cloud Storage (ECS) systems

CoprHD discovers all namespaces configured on an ECS system while discovering the ECS.
Discovering the ECS namespaces along with the ECS makes mapping ECS namespaces to CoprHD tenants more user-friendly, by allowing users to select from a list of namespaces rather than having to type them in manually, as in previous versions of CoprHD.

Support for ECS User Secret Key

You can use CoprHD to generate ECS user secret keys using the CoprHD REST API, or CLI.
This option is not available in the CoprHD UI.
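Since the UI does not expose this operation, a REST call is required. The sketch below shows the shape such a call might take; the endpoint path and header name are assumptions for illustration, not the documented CoprHD REST API.

```python
def ecs_secret_key_request(base_url, auth_token, user_id):
    """Assemble a hypothetical REST request to generate an ECS user
    secret key from CoprHD. Path and headers are assumed, not documented.
    """
    return {
        "method": "POST",
        # Hypothetical endpoint; consult the CoprHD REST API reference.
        "url": f"{base_url.rstrip('/')}/object/user-secret-keys/{user_id}",
        "headers": {
            "X-SDS-AUTH-TOKEN": auth_token,  # token from a prior login call
            "Content-Type": "application/json",
        },
        "body": {},
    }
```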

Inventory delete only from CoprHD for Buckets

CoprHD provides the Inventory Only option to delete a bucket from the CoprHD database only.
If a bucket needs to be deleted from both the CoprHD database and the ECS, use a full delete. If the full delete fails because CoprHD did not detect the bucket on the ECS, you can use the Inventory Only delete to remove the bucket from the CoprHD database.
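The two deletion types can be sketched as alternate request bodies for a single deactivate call. The path and type values below are assumptions for illustration, not the documented CoprHD REST API.

```python
def bucket_delete_request(bucket_id, inventory_only=False):
    """Build a hypothetical bucket deletion request.

    A full delete removes the bucket from both CoprHD and the ECS;
    Inventory Only removes it from the CoprHD database alone.
    """
    return {
        "method": "POST",
        # Hypothetical endpoint; consult the CoprHD REST API reference.
        "path": f"/object/buckets/{bucket_id}/deactivate",
        "body": {"type": "VIPR_ONLY" if inventory_only else "FULL"},
    }
```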

New or updated CoprHD UI menus and pages

Resources > Buckets > (bucket name)

The Inventory Only option has been added to the Delete Buckets menu.

Mobility group migration

Mobility groups enable the migration of multiple VPLEX volumes with one order. Group volumes by host, cluster, or by an explicit list of volumes.

Ingestion of RecoverPoint consistency groups

Block Storage Services have been updated to support ingestion of RecoverPoint consistency groups.

Use cases for ingestion of RecoverPoint consistency groups


  • New deployment of CoprHD where you want to discover and ingest RecoverPoint consistency groups. Follow the instructions in the CoprHD Ingest Services for Existing Environments to discover and ingest the RecoverPoint consistency groups.
  • Existing deployment where resources were provisioned in CoprHD and also used in RecoverPoint configurations outside of CoprHD. In order to do end-to-end provisioning from CoprHD, these resources must be deleted using the Inventory-Only Delete option before they can be ingested into CoprHD.


For existing deployments, you must assess your backend arrays to determine whether additional cleanup steps are required before running the Inventory-Only Delete option in CoprHD. First, determine if your environment includes:

      • RecoverPoint plus VPLEX or MetroPoint volumes provisioned in a CoprHD BlockConsistencyGroup on VMAX/VNX arrays.
      • Existing CoprHD-provisioned VPLEX volumes (Local or Metro) that have more than one volume in a CoprHD BlockConsistencyGroup on VMAX.

If you have volumes, snapshots, or full copies that fit this description and were created using a pre-3.0 version of CoprHD, refer to Knowledge Base Article: 000482649 for instructions before you try to ingest RecoverPoint consistency groups. This assessment is required in order to prevent the loss of CoprHD provisioned volumes that have been placed into back-end array Replication Groups (RGs) during provisioning.

Additional RecoverPoint support

The following RecoverPoint support has been added to CoprHD.

  • The Change Virtual Pool service has been enhanced so that you can change the replication mode (link policies) on a RecoverPoint consistency group.
  • The Failover Block Volume service has been updated to allow you to specify a point in time to be used for failover of RecoverPoint protected volumes.


Resynchronize Block Snapshot for VMAX2 and XIO

The Resynchronize Block Snapshot service has been added to the CoprHD service catalog. It allows you to resynchronize a snapshot of a block volume or consistency group, and is only available for VMAX2 and XtremIO (XIO) storage systems.
This operation adds a point-in-time copy to one or more snapshots of the selected volume or consistency group.

Support for SnapVX for VMAX3 storage systems

CoprHD provides support for SnapVX functionality, and for ingestion of SnapVX devices, on VMAX3 storage systems.
SnapVX is also supported on the following storage systems and configurations when VMAX3 is used for the backend:

  • VPLEX Local
  • VPLEX Metro
  • VPLEX + RecoverPoint
  • VMAX3 to VMAX3 SRDF

Refer to VMAX3 documentation for more information about TimeFinder SnapVX functionality.

SnapVX operations supported by CoprHD


CoprHD uses snapshot sessions to manage SnapVX sessions and devices. The following TimeFinder SnapVX operations are supported by CoprHD with VMAX3 storage systems. All operations support volumes in a consistency group.

    1. SnapVX operations in CoprHD

    Create — performed with Create Block Snapshot Session. Creates a snapshot session.

    Link — performed with Link Block Snapshot. Links the target volume with the SnapVX snapshot session.

    Deactivate — performed with Remove Snapshot Session. Deletes the SnapVX snapshot session. You cannot delete a snapshot session that is linked to any targets.

    Unlink — performed with Unlink without delete. Unlinks the target volume from the snapshot session, while allowing you to continue to use CoprHD to manage the target volume as an individual volume.

    Unlink — performed with Unlink with delete. Unlinks the target volume from the snapshot session and also deletes the target volume.

    Relink — performed with Link Block Snapshots or Relink Snap Session. Relinks the target volume with the snapshot session. You can relink the target to the same snapshot session or a different snapshot session.

    Restore — performed with Restore Block Snapshot. Restore can be used for either of the following SnapVX functions:
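The session rules described above — in particular that a session with linked targets cannot be deleted, and that unlinking may keep or delete the target volume — can be modeled as a small toy state machine. This is an illustrative sketch, not CoprHD code.

```python
class SnapVXSession:
    """Toy model of the SnapVX snapshot-session rules listed above."""

    def __init__(self, name):
        self.name = name
        self.targets = set()
        self.active = True

    def link(self, target):
        """Link a target volume with this snapshot session."""
        self.targets.add(target)

    def unlink(self, target, delete_target=False):
        """Unlink a target; with delete_target=False the volume remains
        individually manageable and is returned, otherwise it is dropped."""
        self.targets.discard(target)
        return None if delete_target else target

    def relink(self, target, other_session=None):
        """Relink a target to this session or to a different session."""
        (other_session or self).link(target)

    def deactivate(self):
        """Remove the session; forbidden while any target is still linked."""
        if self.targets:
            raise RuntimeError("cannot delete a snapshot session with linked targets")
        self.active = False
```

In this model, Remove Snapshot Session only succeeds once every target has been unlinked, mirroring the Deactivate rule in the table.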

SRDF Metro support for VMAX3 storage systems

CoprHD now supports SRDF Metro for VMAX3 storage systems. SRDF Metro is established on the block virtual pool by setting the SRDF copy mode to Active. The Active mode is then used by CoprHD to select only the storage pools that are SRDF Metro enabled to add to the block virtual pool.
CoprHD supports export functionality for SRDF Metro, as well as SRDF Metro replication functionality. Replication is supported with the following CoprHD functionality:

  • Snapshots
  • Full Copies
  • Continuous Copies
Information and requirements to support SRDF Metro


For SMI-S version requirements, refer to the Support Matrix (https://community.emc.com/docs/DOC-38014).

  • CoprHD only supports SRDF Metro between two VMAX3 storage systems.
  • VMAX3 storage systems must be enabled with an SRDF Metro license.
  • CoprHD does not support ingestion of SRDF Metro devices.
  • You do not need to discover the "witness" storage system in CoprHD along with the SRDF Metro-enabled storage systems.
  • SRDF Metro operations are supported with volumes in a consistency group.
  • CoprHD does not support Swap and Failover operations. When a new pair needs to be created or added to the same CoprHD project (SRDF group), CoprHD suspends the existing pairs and adds the new pairs.
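The pool-selection rule described above — with the SRDF copy mode set to Active, only SRDF Metro-enabled storage pools are added to the block virtual pool — can be sketched as a simple filter. The `srdf_metro_enabled` attribute name is an assumption for illustration.

```python
def metro_eligible_pools(storage_pools, srdf_copy_mode):
    """Return the storage pools eligible for a block virtual pool given
    the SRDF copy mode. With Active (SRDF Metro), only Metro-enabled
    pools qualify; other copy modes leave the candidate list unchanged.
    """
    if srdf_copy_mode.upper() == "ACTIVE":
        return [p for p in storage_pools if p.get("srdf_metro_enabled")]
    return list(storage_pools)
```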

Support for VMAX meta and VMAX3 devices

CoprHD now supports the following SRDF configurations between VMAX and VMAX3 devices.

  • Creation and management of SRDF relationships where a VMAX meta volume is the source and a VMAX3 thin device is the target.
  • Creation and management of SRDF relationships where a VMAX meta volume is the target and a VMAX3 thin device is the source.
  • Using the Change Virtual Pool > Add SRDF Protection option on existing SRDF devices to swap the target to a:
      • VMAX3 thin device, where a VMAX meta volume is the source.
      • VMAX meta volume, where a VMAX3 thin device is the source.

Swap may not work if the VMAX meta volume is composed of more than twice as many cylinders as the VMAX3 device.

Storage Orchestration for OpenStack (SOFO) and OVF for OpenStack deployment

CoprHD support for OpenStack has been enhanced with storage orchestration for OpenStack, and now includes the OpenStack Cinder OVF for CoprHD for deployment of southbound integration with OpenStack Liberty.

Storage Orchestration for OpenStack (SOFO)


Use CoprHD to manage block storage in an OpenStack ecosystem. An orchestration is a series of functions performed in a specific order that accomplishes a requested task.
OpenStack is open source software designed to build public and private clouds. The OpenStack project called "Cinder" is used to provision and manage block storage, and defines a set of standard REST APIs for enterprise storage management.
CoprHD is a software-defined storage (SDS) controller designed to bring cloud-like benefits to enterprise storage management. Benefits include automatic provisioning, a self-service portal, policy-based storage profile definitions, and a single pane of glass for managing multi-vendor storage systems. The SOFO implementation uses a Java implementation of the Cinder REST APIs to enable CoprHD to manage block storage in OpenStack deployments.

See the CoprHD User Guide: Storage Orchestration for Openstack here: https://coprhd.atlassian.net/wiki/display/COP/User+Guide+%3A+Storage+Orchestration+For+OpenStack

    1. UI options to support storage orchestration for OpenStack

    Security > Authentication Providers > Add > Create Authentication Provider page

    The following options have been added to the Create Authentication Provider page to support storage orchestration for OpenStack.

  • Keystone has been added to the Type list, to register CoprHD as the block storage service in OpenStack (Keystone).
  • The Automatic: Registration and Tenant Mapping check box has been added to:
      • Import all tenants which are present in OpenStack into CoprHD.
      • Import all projects which are present in OpenStack into CoprHD.
      • Tag CoprHD tenants and projects with appropriate tenant and project IDs to create a logical mapping.

    You can also perform these steps manually, as described in the CoprHD User Guide: Storage Orchestration for OpenStack.

    Security > Authentication Providers > (select OpenStack authentication provider)

    Use this page to edit the properties of the Keystone (OpenStack) authentication provider.

OpenStack Cinder OVF for CoprHD

The OpenStack Cinder OVF for CoprHD is available for deploying a CoprHD southbound integration with OpenStack. You can still deploy OpenStack through the OpenStack downloads, or you can now use the OpenStack Cinder OVF for CoprHD instead.
For more information, see the CoprHD 3.0 Release Notes.

If you are already running CoprHD with OpenStack, you do not need to re-install OpenStack.

Use CoprHD to create gatekeeper volumes on VMAX storage systems

You can use CoprHD to create gatekeeper volumes, which are less than 1 GB, on VMAX and VMAX3 storage systems.
You cannot use CoprHD to expand volumes that are less than 1 GB.

Additional XtremIO support

The following XtremIO support has been added to CoprHD:

  • Ingestion of XtremIO consistency groups.

Support for restore operation directly from the UI

You can now do a Restore operation directly from the CoprHD UI.
The Data Backup and Restore page can be accessed from the System > Data Backup and Restore menu.

    1. Data Backup and Restore: Local Backups Tab

      Column name

      Description

      Action

      Click Upload to upload the backup file to the FTP site configured for CoprHD backups. The external FTP site is configured from the Backup tab of the System > General Configuration page.
      Automatically scheduled backups are automatically uploaded to the FTP site if the FTP site has been configured in CoprHD. If the FTP site has not been configured, you can upload the files after configuring it.
      Click Restore to go to the Restore page, where you can input the current root password to start the restore process. Remote backups are downloaded first node by node before the restore.



    2. Data Backup and Restore: Remote Backups Tab

      Column name

      Description

      Action

      Click Restore to go to the Restore page, where you can input the current root password to start the restore process. Remote backups are downloaded first node by node before the restore.


Option to change backup interval and number of backup copies from UI

You can now use the CoprHD UI to change the backup interval and the maximum number of backup copies to save.
Select System > General Configuration > Backup.

Backup Time

The time (hh:mm) that the scheduled backup starts, based on the local time zone.

Number of Backups per Day

Choose 1 or 2 backups per day at the scheduled Backup Time.

Backup Max Copies (scheduled)

The maximum number of scheduled backup copies (0-5) to save on the CoprHD nodes. Once this number is reached, older scheduled backup copies are deleted from the nodes so that newer ones can be saved.

Backup Max Copies (manual)

The maximum number of manually created backup copies (0-5) to save on the CoprHD nodes. Once this number is reached, no additional copies can be created until you manually delete the older manually created copies from the nodes.
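The documented ranges for these settings (hh:mm start time, 1 or 2 backups per day, 0-5 copies) can be validated before submission, as in this sketch. The function and key names are illustrative, not part of the CoprHD API.

```python
def backup_schedule_settings(backup_time, per_day, max_scheduled, max_manual):
    """Validate backup settings against the documented UI ranges and
    return them as a plain dict.
    """
    hh, mm = backup_time.split(":")
    if not (0 <= int(hh) <= 23 and 0 <= int(mm) <= 59):
        raise ValueError("Backup Time must be hh:mm in the local time zone")
    if per_day not in (1, 2):
        raise ValueError("Number of Backups per Day must be 1 or 2")
    for copies in (max_scheduled, max_manual):
        if not 0 <= copies <= 5:
            raise ValueError("Backup Max Copies must be between 0 and 5")
    return {
        "backup_time": backup_time,
        "backups_per_day": per_day,
        "max_copies_scheduled": max_scheduled,
        "max_copies_manual": max_manual,
    }
```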



Support for node-to-node IPsec encryption

Internet Protocol Security (IPsec) is a protocol suite that secures Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a communication session. IPsec enables secure communication between nodes in a CoprHD cluster and between CoprHD clusters, and uses an IPsec key to secure the communication.
IPsec is managed from the Security > IPsec page.
The IPsec page provides current information on IPsec Status and IPsec Configuration. The page is refreshed once per minute.

  • IPsec Status: This area indicates IPsec stability. Possible values are:
      • Stable: IPsec connections between the nodes are good.
      • Disabled: The IPsec feature is turned off.
      • Degraded: Some IPsec connections are broken. In most situations, CoprHD should be able to resolve the issue, and user action will not be required.

Information is also provided on the time interval since the last update.

  • IPsec Configuration: This area provides the date and time at which the current IPsec key was generated. Click Rotate IPsec Key to generate a new key. IPsec communication between the nodes in the cluster stops for a minute or so while the IPsec service restarts.

Improved query filter for Audit Log

The Audit Log query filter has been improved.
The System > Audit Log page displays the recorded activities performed by administrative users for a defined period of time.
The Audit Log table displays the Time at which the activity occurred, the Service Type (for example, vdc or tenant), the User who performed the activity, the Result of the operation, and a Description of the operation.

Filtering the Audit Log Display



    1. Select System > Audit Log. The Audit Log table defaults to displaying activities from the current hour on the current day and with a Result Status of ALL STATUS (both SUCCESS and FAILURE).
    2. To filter the Audit Log table, click Filter.
    3. In the Filter System Logs dialog box, you can specify the following filters:
      • Result Status: Specify ALL STATUS (the default), SUCCESS, or FAILURE.
      • Start Time: To display the audit log for a longer time span, use the calendar control to select the Date from which you want to see the logs, and use the Hour control to select the hour of day from which you want to display the audit log.
      • Service Type: Specify a Service Type (for example, vdc or tenant).
      • User: Specify the user who performed the activity.
      • Keyword: Specify a keyword term to filter the Audit Log even further.
    4. Select Update to display the filtered Audit Log.
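The filter choices above map naturally onto a query of the audit-log service. The sketch below assembles them as a parameter dict; the parameter names are assumptions for illustration, not the documented CoprHD REST API.

```python
def audit_log_filter(result_status="ALL STATUS", start_date=None,
                     start_hour=None, service_type=None, user=None,
                     keyword=None):
    """Build a hypothetical audit-log query from the UI filter fields.

    Only the filters the user actually set are included; Result Status
    defaults to ALL STATUS, matching the dialog's default.
    """
    if result_status not in ("ALL STATUS", "SUCCESS", "FAILURE"):
        raise ValueError("Result Status must be ALL STATUS, SUCCESS, or FAILURE")
    query = {"result_status": result_status}
    for key, value in (("start_date", start_date),
                       ("start_hour", start_hour),
                       ("service_type", service_type),
                       ("user", user),
                       ("keyword", keyword)):
        if value is not None:
            query[key] = value
    return query
```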
Downloading Audit Logs



    1. Select System > Audit Log. The Audit Log table defaults to displaying activities from the current hour on the current day and with a Result Status of ALL STATUS (both SUCCESS and FAILURE).
    2. To download audit logs, click Download.
    3. In the Download System Logs dialog box, you can specify the following filters:
      • Result Status: Specify ALL STATUS (the default), SUCCESS, or FAILURE.
      • Start Time: Use the calendar control to select the Date from which you want to see the logs, and use the Hour control to select the hour of day from which you want to display the audit log.
      • End Time: Use the calendar control to select the Date to which you want to see the logs, and use the Hour control to select the hour of day to which you want to display the audit log. Check Current Time to use the current time of day.
      • Service Type: Specify a Service Type (for example, vdc or tenant).
      • User: Specify the user who performed the activity.
      • Keyword: Specify a keyword term to filter the downloaded system logs even further.
    4. Select Download.