CoprHD can be downloaded or built as an appliance for easy deployment and run as a virtual machine. Currently, two different appliances are released:

  • CoprHD Controller (run-time): where CoprHD can be deployed and accessed after booting
  • CoprHD DevKit (developer environment): where storageos can be downloaded, built, installed, uninstalled, etc., so the CoprHD Controller can be validated or new appliances developed

Step-by-step guide (initial appliances)

Choose either A or B to install the appliance that best applies:

A. How to set up a CoprHD Controller appliance (run-time):

  1. vSphere
    1. Download the ovf and disk1.vmdk files from "build.coprhd.org"
    2. Deploy the ovf image on vSphere and add the <hostname>, <search domain>, <Network IP>, <Gateway>, <Netmask>, <DNS> and <VIP node 1> when prompted, as they will be used to set up the controller
    3. Default credentials to access the host are: svcuser / ChangeMe
    4. Wait for the services to be online by calling: "sudo /opt/ADG/conf/configure.sh waitStorageOS"
    5. Access the UI
  2. VMware Workstation/Player/Fusion
    1. Download the vmx and vmdk files from "build.coprhd.org"
    2. Open the vmx image on VMware Workstation/Player/Fusion
    3. Default credentials to access the host are: root / ChangeMe
      1. Copy the ovf.properties example file by calling: "cp /opt/ADG/conf/ovf.properties.example /opt/ADG/conf/ovf.properties"
      2. Edit /opt/ADG/conf/ovf.properties with the new network properties, as shown in the #Examples:/opt/ADG/conf/ovf.properties section below
      3. Run the following command: "/opt/ADG/bin/setNetwork 0"
      4. Run the following command: "service network restart" to allocate a static IP with the desired network properties
      5. Start an SSH session to the configured static IP using the same credentials (the following steps will fail if run directly on the VM's console): "ssh root@<ip>"
      6. Create the network configuration by calling: "/opt/ADG/conf/configure.sh installNetworkConfigurationFile"
      7. Edit /etc/ovfenv.properties with the desired network properties, as shown in the #Examples:/etc/ovfenv.properties section below
      8. Enable the CoprHD services by calling: "/opt/ADG/conf/configure.sh enableStorageOS"
      9. Wait for the services to be online by calling: "/opt/ADG/conf/configure.sh waitStorageOS"
      10. You will notice some "Warning: apisvc unavailable. Waiting..." then "Warning: service unavailable. Waiting..." messages until the services are up and running.
      11. Access the UI
  3. VirtualBox

    You will notice a green screen while the image boots for the first time; it should disappear after a few minutes. If you would like to read the boot messages, press Alt+1.

    1. Download ovf and disk1.vmdk files from "build.coprhd.org"
    2. Import the ovf on VirtualBox
    3. Edit the VM settings, allocate at least 2048 MB (2 GB) of RAM, and power it on
    4. Default credentials to access the host are: root / ChangeMe
      1. Copy the ovf.properties example file by calling: "cp /opt/ADG/conf/ovf.properties.example /opt/ADG/conf/ovf.properties"
      2. Edit /opt/ADG/conf/ovf.properties with the new network properties, as shown in the #Examples:/opt/ADG/conf/ovf.properties section below
      3. Run the following command: "/opt/ADG/bin/setNetwork 0"
      4. Run the following command: "service network restart" to allocate a static IP with the desired network properties
      5. Start an SSH session to the configured static IP using the same credentials (the following steps will fail if run directly on the VM's console): "ssh root@<ip>"
      6. Create the network configuration by calling: "/opt/ADG/conf/configure.sh installNetworkConfigurationFile"
      7. Edit /etc/ovfenv.properties with the desired network properties, as shown in the #Examples:/etc/ovfenv.properties section below
      8. Enable the CoprHD services by calling: "/opt/ADG/conf/configure.sh enableStorageOS"
      9. Wait for the services to be online by calling: "/opt/ADG/conf/configure.sh waitStorageOS"
      10. You will notice some "Warning: apisvc unavailable. Waiting..." then "Warning: service unavailable. Waiting..." messages until the services are up and running.
      11. Access the UI
  4. Vagrant with VirtualBox as provider

    You can download the Vagrantfile as an example to get started, but ultimately you should customize your own Vagrantfile to match your environment (private networks, proxy settings, new scripts to configure the services, etc.)

    1. Download the box and Vagrantfile files from "build.coprhd.org"
    2. Edit the network settings in the downloaded Vagrantfile and set the RAM to at least 2048 MB (2 GB)
    3. Run "vagrant up" after editing the environment and wait for the image to be ready
    4. You will notice some "Warning: apisvc unavailable. Waiting..." then "Warning: service unavailable. Waiting..." messages until the services are up and running.
    5. Default credentials to access the host are: svcuser / ChangeMe
      1. Login or access the UI

  5. OpenStack

    1. Download and deploy the qcow2 file from "build.coprhd.org"
  6. Docker 

    The recommended host configuration is 8 GB of RAM and 1 CPU. However, it can be increased to run 3, 5, or more containers (for example, 16 GB of RAM and 4 CPUs).

    1. Download the tgz file from "build.coprhd.org"
    2. Import the image with the command: cat CoprHD.x86_64-3.0.0.0.*.tgz | docker import - coprhd:latest
    3. If you are running a CoprHD DevKit, the script will be found at /opt/ADG/conf/configure.sh
    4. If your host does not have the script yet, copy it from "configure.sh"
    5. Run: "bash configure.sh installDockerEnv"
    6. Wait for the services to be online by calling "docker exec -it vipr1 /opt/ADG/conf/configure.sh waitStorageOS"
    7. You will notice some "Warning: coordinatorsvc unavailable. Waiting...", "Warning: apisvc unavailable. Waiting...", then "Warning: service unavailable. Waiting..." messages until the services are up and running.
    8. Login or access the UI (note that the UI is reached at your host's IP, not the internal IP that Docker reports)
    9. To cleanup the containers run "bash configure.sh uninstallDockerEnv"

      You can customize steps 5 and 9 to run more than one container, assuming you have the required hardware to run multiple containers on the same host: "bash configure.sh installDockerEnv $PWD n" or "bash configure.sh uninstallDockerEnv $PWD n", where n is the number of containers to run (3, 5, etc.).
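
      As a concrete illustration, here is a minimal shell sketch of the multi-container variant with three containers. The vipr1/vipr2/vipr3 container names and the arguments to the uninstall step are assumptions, not confirmed by this page; adjust them to what configure.sh actually creates on your host.

      # import the appliance image
      cat CoprHD.x86_64-3.0.0.0.*.tgz | docker import - coprhd:latest
      # start three containers instead of one (roughly 16 GB of RAM / 4 CPUs recommended on the host)
      bash configure.sh installDockerEnv $PWD 3
      # wait for the StorageOS services in each container (container names assumed)
      for node in vipr1 vipr2 vipr3; do
          docker exec -it $node /opt/ADG/conf/configure.sh waitStorageOS
      done
      # tear the containers down again when finished
      bash configure.sh uninstallDockerEnv $PWD 3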

Visit http://coprhd.github.io/download/ to find the major release versions of this appliance.

The storageos rpm version found in the appliance can be downloaded from: build.coprhd.org

Proxy Configuration

When behind a proxy, make sure you configure it first. Check this page for details: How to Download and Build CoprHD#IfyouarebuildingbehindaProxy.

Also check the HTTP_PROXY and HTTPS_PROXY values in /etc/sysconfig/proxy; an example is shown below.
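
A minimal /etc/sysconfig/proxy sketch for reference; the proxy host, port, and exclusion list are placeholders, so adjust them to your environment:

# /etc/sysconfig/proxy (example values only)
PROXY_ENABLED="yes"
HTTP_PROXY="http://proxy.example.com:3128"
HTTPS_PROXY="http://proxy.example.com:3128"
NO_PROXY="localhost, 127.0.0.1"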

B. How to set up a CoprHD DevKit appliance (development environment):

  1. vSphere
    1. Download the ovf and disk1.vmdk files from "build.coprhd.org"
    2. Deploy the ovf image on vSphere and add the <hostname>, <search domain>, <Network IP>, <Gateway>, <Netmask>, <DNS> and <VIP node 1> when prompted, as they will be used to set up the controller
    3. Default credentials to access the host are: root / ChangeMe
  2. VMware Workstation/Player/Fusion
    1. Download the vmx and vmdk files from "build.coprhd.org"
    2. Open the vmx image on VMware Workstation/Player/Fusion
    3. Default credentials to access the host are: root / ChangeMe
      1. Copy the ovf.properties example file by calling: "cp /opt/ADG/conf/ovf.properties.example /opt/ADG/conf/ovf.properties"
      2. Edit /opt/ADG/conf/ovf.properties with the new network properties, as shown in the #Examples:/opt/ADG/conf/ovf.properties section below
      3. Run the following command: "/opt/ADG/bin/setNetwork 0"
      4. Run the following command: "service network restart" to allocate a static IP with the desired network properties
      5. Start an SSH session to the configured static IP using the same credentials (the following steps will fail if run directly on the VM's console): "ssh root@<ip>"
      6. Create the network configuration by calling: "/opt/ADG/conf/configure.sh installNetworkConfigurationFile"
      7. Edit /etc/ovfenv.properties with the desired network properties, as shown in the #Examples:/etc/ovfenv.properties section below
  3. VirtualBox

    You will notice a green screen while the image boots for the first time; it should disappear after a few minutes. If you would like to read the boot messages, press Alt+1.

    1. Download the ovf and disk1.vmdk files from "build.coprhd.org"
    2. Import the ovf on VirtualBox
    3. Edit the VM settings, allocate at least 2048 MB (2 GB) of RAM, and power it on
    4. Default credentials to access the host are: root / ChangeMe
      1. Copy the ovf.properties example file by calling: "cp /opt/ADG/conf/ovf.properties.example /opt/ADG/conf/ovf.properties"
      2. Edit /opt/ADG/conf/ovf.properties with the new network properties, as shown in the #Examples:/opt/ADG/conf/ovf.properties section below
      3. Run the following command: "/opt/ADG/bin/setNetwork 0"
      4. Run the following command: "service network restart" to allocate a static IP with the desired network properties
      5. Start an SSH session to the configured static IP using the same credentials (the following steps will fail if run directly on the VM's console): "ssh root@<ip>"
      6. Create the network configuration by calling: "/opt/ADG/conf/configure.sh installNetworkConfigurationFile"
      7. Edit /etc/ovfenv.properties with the desired network properties, as shown in the #Examples:/etc/ovfenv.properties section below
  4. Vagrant with VirtualBox as provider

    You can download the Vagrantfile as an example to get started, but ultimately you should customize your own Vagrantfile to match your environment (private networks, proxy settings, new scripts to configure the services, etc.)

    1. Download the box and Vagrantfile files from "build.coprhd.org"
    2. Edit the network settings in the downloaded Vagrantfile and set the RAM to at least 2048 MB (2 GB)
    3. Run "vagrant up" after editing the environment
    4. Default credentials to access the host are: root / ChangeMe (on the second IP) or vagrant / vagrant (on the first IP)
      1. Login
      2. Edit /etc/ovfenv.properties with the desired network properties, as shown in the #Examples:/etc/ovfenv.properties section below
  5. OpenStack
    1. Download and deploy the qcow2 files from "build.coprhd.org".
  6. Docker
    1. Download the tbz file from "build.coprhd.org"
    2. Import the image with the command: cat CoprHDDevKit.x86_64-3.0.0.0.*.tbz | docker import - coprhddevkit:latest
    3. Run the container with the command: docker run --rm --privileged -it --net=host coprhddevkit:latest /bin/bash
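
      If you plan to build inside the DevKit container, a bind-mounted workspace keeps the checked-out sources on the host. This is an optional sketch, not part of the official steps, and the /workspace path is just an example:

      # same run command as above, plus a host workspace mounted into the container
      docker run --rm --privileged -it --net=host -v $PWD/workspace:/workspace coprhddevkit:latest /bin/bash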

Step-by-step guide (building your own appliances)

Choose either C or D to build the appliance that best applies (check the #Caching artifacts to a local filesystem section to run these builds faster):

C. Check out and build the CoprHD Controller appliance (requires the openSUSE ISO or a previously deployed CoprHD DevKit appliance):

  1. Login (default credentials for DevKit are root / ChangeMe)
  2. Download the openSUSE ISO to the following location: /disks/adgbuild/OPENSUSE13.2/openSUSE-13.2-DVD-x86_64.iso
  3. git clone https://review.coprhd.org/scm/ch/coprhd-controller.git
  4. cd coprhd-controller
  5. make (using an option from #Examples:CoprHD Controller build options)
  6. Copy the built appliance from the build folder created by make

D. Check out and build the CoprHD DevKit appliance (requires the openSUSE ISO or a previously deployed CoprHD DevKit appliance):

  1. Login (default credentials for DevKit are root / ChangeMe)
  2. Download the openSUSE ISO to the following location: /disks/adgbuild/OPENSUSE13.2/openSUSE-13.2-DVD-x86_64.iso
  3. git clone https://review.coprhd.org/scm/ch/coprhd-controller.git
  4. cd coprhd-controller
  5. make (using an option from #Examples:CoprHD DevKit build options)
  6. Copy the built appliance from the build folder created by make
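
For orientation, here is a consolidated shell sketch of the C/D workflow on a DevKit. The ISO download source is not shown; fetch it from an openSUSE mirror of your choice.

# place the openSUSE 13.2 DVD ISO where the build expects it
mkdir -p /disks/adgbuild/OPENSUSE13.2
# copy or download openSUSE-13.2-DVD-x86_64.iso into that directory, then:
git clone https://review.coprhd.org/scm/ch/coprhd-controller.git
cd coprhd-controller
# Controller appliance (see the build options in the Examples section):
make BUILD_TYPE=oss PRODUCT_VERSION=3.0.0.0 PRODUCT_RELEASE=999 clobber rpm controller
# or, DevKit appliance:
make BUILD_TYPE=oss PRODUCT_VERSION=3.5.0.0 PRODUCT_RELEASE=999 clobber devkit
# the finished images end up in the build folder created by make; copy them out from there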

Examples

/opt/ADG/conf/ovf.properties
network_DOM_SetupVM="example.localdomain"
network_hostname_SetupVM="devkit.example.localdomain"
network_ipv40_SetupVM="192.168.1.100"
network_ipv4dns_SetupVM="192.168.1.1"
network_ipv4gateway_SetupVM="192.168.1.2"
network_ipv4netmask0_SetupVM="255.255.255.0"
network_vip_SetupVM="192.168.1.100"
system_timezone="US/Eastern"
vm_vmname="SetupVM"
key_vmname="SetupVM"
/etc/ovfenv.properties
network_1_ipaddr6=::0
network_1_ipaddr=192.168.1.100
network_gateway6=::0
network_gateway=192.168.1.2
network_netmask=255.255.255.0
network_prefix_length=64
network_vip6=::0
network_vip=192.168.1.100
node_count=1
node_id=vipr1
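
Putting the two example files above to use, here is a consolidated command sequence for the manual network setup performed in the VMware Workstation/Player/Fusion and VirtualBox steps of sections A and B (use any editor in place of vi; the last two commands apply to the Controller appliance only):

cp /opt/ADG/conf/ovf.properties.example /opt/ADG/conf/ovf.properties
vi /opt/ADG/conf/ovf.properties              # set the values shown above
/opt/ADG/bin/setNetwork 0
service network restart                      # the VM now answers on the static IP
ssh root@192.168.1.100                       # continue over SSH, not on the VM console
/opt/ADG/conf/configure.sh installNetworkConfigurationFile
vi /etc/ovfenv.properties                    # set the values shown above
/opt/ADG/conf/configure.sh enableStorageOS   # Controller appliance only
/opt/ADG/conf/configure.sh waitStorageOS     # Controller appliance only
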
CoprHD Controller build options
# IMPORTANT: the build folder should be "coprhd-controller"
# Example 1: Image builds (all formats)
make BUILD_TYPE=oss PRODUCT_VERSION=3.0.0.0 PRODUCT_RELEASE=999 clobber rpm controller
 
# Example 2: Image builds (all formats via docker containers)
make -f Makefile.docker BUILD_TYPE=oss PRODUCT_VERSION=3.0.0.0 PRODUCT_RELEASE=999 "clobber rpm controller"
 
# Example 3: Cleanup all files created
make destroy
 
# Example 4: Selecting one or more image formats
cd packaging/appliance-images/openSUSE/13.2
make -f CoprHD/Makefile qcow2 ovf vmx box SOURCE_RPM=/workspace/storageos-3.5.0.0.1094-1.x86_64.rpm JOB=999
CoprHD DevKit build options
# IMPORTANT: the build folder should be "coprhd-controller"
# Example 1: Image builds (all formats)
make BUILD_TYPE=oss PRODUCT_VERSION=3.5.0.0 PRODUCT_RELEASE=999 clobber devkit
 
# Example 2: Image builds (all formats via docker containers)
make -f Makefile.docker BUILD_TYPE=oss PRODUCT_VERSION=3.5.0.0 PRODUCT_RELEASE=999 "clobber devkit"

# Example 3: Cleanup all files created
make destroy
# Example 4: Selecting one or more image formats

cd packaging/appliance-images/openSUSE/13.2/CoprHDDevKit
make ovf box JOB=999

Caching artifacts to a local filesystem

You can create a local cache of openSUSE artifacts by running the Makefile from coprhd-build/opensuse-13.2:

Building a local repository for CoprHD
# git clone https://review.coprhd.org/scm/ch/coprhd-build.git
# cd coprhd-build/opensuse-13.2
# make
# ls -1 -d /disks/adgbuild/OPENSUSE13.2/*
/disks/adgbuild/OPENSUSE13.2/openSUSE-13.2-DVD-x86_64.iso
/disks/adgbuild/OPENSUSE13.2/repo

This cache is used to build appliances faster, so RPMs won't be downloaded on each build and the ISO won't have to be downloaded twice.

It also builds RPMs such as nginx, so they won't have to be built from source and can be installed instead.

3 Comments

  1. node_id=vipr1
    We need to stop using ViPR and replace it with CoprHD. I know the node_id is tied into other settings, so you can't just rename it here, but we need to start moving in that direction.
  2. Does the vSphere deploy really require 2 IPs?
    1. Deploy the ovf image on vSphere and add the <hostname>, <search domain>, <Network IP>, <Gateway>, <Netmask>, <DNS> and <VIP node 1> when prompted, as they will be used to set up the controller
  3. Sharing the same IP should be fine on CoprHD; the images are built in standalone mode, so they can't be configured as a cluster. Using two different IPs should also work.