Cluster as a Service on Bright OpenStack 7.
[The following notes explain how to set up Cluster as a Service in Bright Cluster Manager. This means that clusters can be provided on demand, on hardware controlled by Bright Cluster Manager. For environments with trusted users, the document provided here should be sufficient. However, because of the rapid pace of development of this service and the constant upgrades and enhancements, it is strongly advised that Bright Computing is contacted before attempting to deploy this in environments that require user isolation.]
The aim of the Cluster as a Service (CaaS) deployment is to create multiple, fully-isolated clusters that are installed on top of Bright OpenStack.
The users of Bright OpenStack can then spin up virtual clusters completely independently, using the command line or the Horizon dashboard.
CaaS and Bright-managed nodes/instances differ in concept, and are mutually exclusive:
Bright-managed nodes/instances merely have virtual nodes created on top of Bright OpenStack, in order to extend the computational power of the physical cluster.
CaaS, in contrast, has clusters that can be created and removed at will, within a Bright Cluster that is running OpenStack as the integrated layer in between.
The starting point for installing a CaaS system can be a standard Bright 7.1 cluster. For convenience it will be called the physical host cluster, although it does not need to be a physical cluster. The cluster is installed according to the instructions in the Installation Manual. 
OpenStack, and optionally Ceph, are deployed by following the instructions in the OpenStack Deployment Manual. 
If the cluster is to be an HA setup, then the CaaS tools, pxehelper and buildmatic, must be set up and installed after the OpenStack deployment, and before the secondary head node is cloned.
As a prerequisite, jumbo frames must be enabled on the switch that internalnet and vxlanhostnet are attached to. The MTU value of 9000 can be set per VLAN or per port.
A network and component flowchart follows:
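In outline, the components relate as follows. This is a rough text sketch based on the descriptions given in the rest of this document, not a reproduction of the original flowchart:

Physical host cluster head node
  - buildmatic : generates PXE-bootable Bright installer images
  - pxehelper  : redirects PXE-booting head node instances (TCP port 8082)
  - NFS exports: base-distributions, rpm-store, cert-store
        |
        v
Bright OpenStack layer
  - controller node : Horizon dashboard with the CaaS dashboard package
  - network node    : neutron dhcp-agent using the custom dnsmasq configuration
        |
        v
Virtual CaaS clusters (spun up per user/project)
  - virtual head node     : boots an iPXE image, is redirected by pxehelper to a
                            buildmatic installer image, and installs over NFS
  - virtual compute nodes : added with the os_node command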
The buildmatic and pxehelper software is installed on the physical head node:
buildmatic comes in the packages: buildmatic, buildmatic-common
pxehelper comes in the package: cm-openstack-caas
The buildmatic service is a framework that takes an XML configuration file as input and uses it to generate a PXE-bootable Bright Cluster Manager installer image, or a Bright OpenStack installer image. The images that are generated cover the different Bright versions and the variety of Linux distributions that Bright supports. A particular buildmatic image is what is used to install the head node of a Bright cluster.
The pxehelper service is used to dynamically redirect a head node instance that is PXE booting to the correct entries in the buildmatic service.
The cm-openstack-caas and cm-ipxe-caas packages are installed with yum:
# yum install -y cm-openstack-caas cm-ipxe-caas
On the controller node the cm-openstack-caas-dashboard package is installed with yum:
# yum install -y cm-openstack-caas-dashboard
Next, in the file /cm/shared/apps/cm-openstack-caas/bin/Settings.py, the values of “external_dns_server” and “buildmatic_host_ip” should be edited appropriately, and “localhost” should be updated to the external IP address of the cluster.
Some special cases:
If buildmatic will be installed on the head node of the cluster, then the value of “buildmatic_host_ip” is set to the external IP address of the cluster.
If the host cluster is an HA setup, then the values of “buildmatic_host_ip” and “localhost” are set to the shared IP address.
An example of the text structure that may need to be modified in “Settings.py” is:
'external_dns_server': '<INSERT NAME SERVER IP HERE>',
'buildmatic_host_ip': '<INSERT BUILDMATIC SERVER IP HERE>',
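As a purely illustrative example, assuming the head node has the external IP address 203.0.113.10 and the upstream name server is at 203.0.113.1 (both values are hypothetical), the edited entries might look like:
'external_dns_server': '203.0.113.1',
'buildmatic_host_ip': '203.0.113.10',
# the 'localhost' value mentioned above is likewise replaced with 203.0.113.10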
After the modifications are in place, the pxehelper service is started and enabled:
# systemctl start pxehelper
# systemctl enable pxehelper
The pxehelper service uses port 8082. The Shorewall firewall on the head node needs to unblock this port. This can be done by adding the following rule to “/etc/shorewall/rules” and then restarting shorewall:
# -- Allow pxehelper service for automatic head node installation
ACCEPT net fw tcp 8082
# systemctl restart shorewall
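As an optional check that is not part of the original procedure, it can be verified that the pxehelper service is listening on port 8082:
# ss -tlnp | grep 8082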
The OpenStack images can now be created:
# openstack image create --file /cm/local/apps/ipxe/ipxe-plain-net0.img --disk-format=raw --container-format=bare --public iPXE-plain-eth0
# openstack image create --file /cm/local/apps/ipxe/ipxe-plain-net1.img --disk-format=raw --container-format=bare --public iPXE-plain-eth1
# openstack image create --file /cm/local/apps/ipxe/ipxe-caas.img --disk-format=raw --container-format=bare --public ipxe-caas
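As an optional check, the uploaded images can be listed with the standard OpenStack CLI:
# openstack image list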
The dnsmasq utility must now be configured. Its configuration file,
/cm/shared/apps/cm-openstack-caas/etc/dnsmasq.dev1.conf
contains the string
<INSERT EXTERNAL IP OF THE MACHINE RUNNING PXE HELPER HERE>
The string is replaced with the external IP address of the head node(s).
The configuration file also has a string
<INSERT EXTERNAL FQDN OF BUILDMATIC SERVER HERE>,<INSERT EXTERNAL IP OF BUILDMATIC SERVER HERE>
This is replaced with the FQDN of the head node (in the case of an HA setup, the FQDN assigned to the VIP) and with the corresponding IP address.
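As a purely illustrative example, with a hypothetical external IP address of 203.0.113.10 and a hypothetical FQDN of head.example.com, the first placeholder becomes 203.0.113.10 and the second becomes:
head.example.com,203.0.113.10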
The following customizations are then added:
# cmsh -c "configurationoverlay; use openstackhypervisors; customizations; add /etc/neutron/plugins/ml2/openvswitch_agent.ini; entries; add securitygroup enable_security_group=false; add securitygroup firewall_driver=neutron.agent.firewall.NoopFirewallDriver; commit"
# cmsh -c "configurationoverlay; use openstacknetworknodes; customizations; add /etc/neutron/dhcp_agent.ini; entries; add dnsmasq_config_file=/cm/shared/apps/cm-openstack-caas/etc/dnsmasq.dev1.conf; commit"
Buildmatic is now installed and configured. There is a KB article that gives more background on buildmatic installation, and it can be followed up to and including step C. For convenience, however, the same instructions are listed here:
# yum -y install buildmatic-common buildmatic-7.2-stable createrepo dos2unix
The config file is generated and installed:
# /cm/local/apps/buildmatic/common/bin/setupbmatic --createconfig
# cp /cm/local/apps/buildmatic/common/settings.xml /cm/local/apps/buildmatic/7.2-stable/bin
# cp /cm/local/apps/buildmatic/common/nfsparams.xml /cm/local/apps/buildmatic/7.2-stable/bin
# cp /cm/local/apps/buildmatic/common/nfsparams.xml /cm/local/apps/buildmatic/7.2-stable/files
The rpm-store is now populated using a Bright DVD. In the following example the rpm-store is populated with Bright version 7.2, and with CentOS 7.2 as the operating system. To add more supported Linux distributions, this step can be repeated with additional Bright ISOs.
# /cm/local/apps/buildmatic/common/bin/setupbmatic --createrpmdir bright7.2-centos7u2.iso
The XML buildconfig file must now be generated. This is the XML file used by the Bright head node installer to set up the system.
The index used, “000001” here, must be exactly six digits long and unique. It is possible to have a different XML file (e.g. 000001, 000002, and so on) for each OS version.
If the XML file already exists, it is not overwritten. Instead, a new file is created: for instance, if 000001.xml already exists, then 000001-1.xml is generated, and so on.
In the following example a configuration for the 7.2-stable version of Bright, with a Centos 7.2 distribution, is created:
# /cm/local/apps/buildmatic/7.2-stable/bin/genbuildconfig -v 7.2-stable -d CENTOS7u2 -i 000001
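As an optional check, the generated buildconfig file can be listed. The path is the one used by the buildmaster command in the next step:
# ls /cm/local/apps/buildmatic/7.2-stable/config/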
A PXE image is now generated:
# /cm/local/apps/buildmatic/7.2-stable/bin/buildmaster /cm/local/apps/buildmatic/7.2-stable/config/000001.xml
The following exports are added via cmsh, which manages the corresponding lines in “/etc/exports”, so that the directories can be NFS-mounted from the installer. (If export lines are added to “/etc/exports” by hand instead, then <CIDR> is replaced with the public network IP address.)
# cmsh -c "device fsexports master; add /home/bright/base-distributions@externalnet; set hosts externalnet; set path /home/bright/base-distributions; commit"
# cmsh -c "device fsexports master; add /home/bright/rpm-store@externalnet; set hosts externalnet; set path /home/bright/rpm-store; commit"
# cmsh -c "device fsexports master; add /home/bright/cert-store-pc/7.2@externalnet; set hosts externalnet; set path /home/bright/cert-store-pc/7.2; commit"
A symbolic link to the directory containing the license file is created:
# cd /home/bright
# ln -s cert-store cert-store-pc
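The exports can then be verified with the standard exportfs utility, as an optional check that is not part of the original procedure:
# exportfs -v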
The shorewall rules for NFS are now uncommented in the file /etc/shorewall/rules:
# -- Allow NFS traffic from outside to the master
ACCEPT net fw tcp 111 # portmapper
ACCEPT net fw udp 111
ACCEPT net fw tcp 2049 # nfsd
ACCEPT net fw udp 2049
ACCEPT net fw tcp 4000 # statd
ACCEPT net fw udp 4000
ACCEPT net fw tcp 4001 # lockd
ACCEPT net fw udp 4001
ACCEPT net fw udp 4005
ACCEPT net fw tcp 4002 # mountd
ACCEPT net fw udp 4002
ACCEPT net fw tcp 4003 # rquotad
ACCEPT net fw udp 4003
Shorewall is now restarted.
# systemctl restart shorewall
The new dnsmasq configuration is now copied into the openstack software image and also onto the network node.
# cp /cm/shared/apps/cm-openstack-caas/etc/dnsmasq.dev1.conf /cm/images/openstack-image/etc/neutron
# scp /cm/shared/apps/cm-openstack-caas/etc/dnsmasq.dev1.conf <NETWORK NODE IP>:/etc/neutron
A symlink for the images is created:
# ln -s /tftpboot/buildmatic /var/www/html/buildmatic/images
To use cm-openstack-caas, an OpenStack cluster and a cluster user must be added next.
Before adding a user, the synchronization of the LDAP users to OpenStack must be enabled.
# cmsh -c "openstack; use default; settings; users; set automaticallysyncldapuserstokeystone yes; set writeopenstackrcfilesforusers yes; commit"
A user <username> is then added and made a member of the openstackusers group. This means that the user is synced to OpenStack and becomes a member of the project <username>-project. The project <username>-project is created by cmdaemon automatically when a member is defined. The <username> and <password> values are set as follows:
# cmsh -c "user; add <username>; set password <password>; commit"
# cmsh -c "group; add openstackusers; commit"
# cmsh -c "group; append openstackusers groupmembers <username>; commit"
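Once cmdaemon has synchronized the user to Keystone, the result can be checked with the standard OpenStack CLI, run with admin credentials, as an optional check:
# openstack user list
# openstack project list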
The password for logging in to the OpenStack Horizon portal can be retrieved from the “.openstackrc_password” file in the home directory of the user:
$ cat ~/.openstackrc_password
For further details on adding a user, the instructions in Chapter 6 of the Administrator Manual can be followed.
The flavor used as the default for node creation, when no flavor is specified, is now created:
# openstack flavor create --vcpus 1 --ram 1024 --disk 10 --ephemeral 10 --public m1.xsmall
Jumbo frames are now enabled by setting the MTU to a value of 9000 for the internalnet network and for the vxlanhostnet network:
# cmsh -c "network use internalnet; set mtu 9000; commit"
# cmsh -c "network use vxlanhostnet; set mtu 9000; commit"
The configuration is now done and it is possible to start using CaaS.
Log in as a user that is a member of the “openstackusers” group, and run the following command, replacing the string “<REPLACE>” with the username:
# echo "OS_INITIALS=<REPLACE>" >> ~/.openstackrc
The following files are then sourced:
# source .openstackrc
# source .openstackrc_password
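As an optional check, the OpenStack credentials loaded by these files can be inspected:
# env | grep OS_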
To add the cluster, it must be created, and its components launched. This can be done in text mode as follows:
A cluster can be created using the “os_cluster” command. In the following example, the values “CENTOS7u2” and “7.2-stable” are used, because the “7.2-stable” version of Bright and the “CENTOS7u2” distribution were uploaded to buildmatic earlier. The <CLUSTER_NAME> should be changed to a useful name, and the number of compute nodes to be deployed should be set for <NUMBER_OF_NODE>.
$ module load cm-openstack-caas/7.2
$ os_cluster create <CLUSTER_NAME> CENTOS7u2 7.2-stable -n <NUMBER_OF_NODE>
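For example, with a hypothetical cluster name of democluster and two compute nodes:
$ os_cluster create democluster CENTOS7u2 7.2-stable -n 2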
Once the virtual cluster is installed, the “os_node” command can be used to add more nodes to it.
In order to add a node, its object in cmdaemon must first be created. Replace <NODE TO CLONE> with the name of the node used as the base for the cloning, and <NEW NODE NAME> with the name of the node being created.
# cmsh -c "device; clone <NODE TO CLONE> <NEW NODE NAME>; commit"
# cmsh -c "device; clone node001 node003; commit"
Install the new node:
# os_node create <CLUSTER_NAME> -n <NUMBER_OF_NODE>
It is also possible to create nodes using a "range" syntax:
# os_node create <CLUSTER_NAME> -r node001..node003
# os_node create <CLUSTER_NAME> -r node0[01-10]
To list all the virtual head nodes, the command “os_cluster list” can be used. Optionally, a list can be filtered with the -e flag, using a regex.
# os_cluster list
# os_cluster list -e test*
A cluster can be deleted with:
# os_cluster delete <CLUSTER_NAME>
It is also possible to create a head node so that the graphical installer for the Bright Cluster Manager can be run through step by step:
# os_cluster create <CLUSTER_NAME> none none
The graphical installer can then be launched by going to the dashboard on the controller node, at <Controller node hostname> or <Controller node ip address>:
http://<Controller node hostname>:10080/dashboard
http://<Controller node ip address>:10080/dashboard
Then, from the dashboard, the user can go to <PROJECT> --> instance --> console --> select the Bright version from the menu options --> set the distro (e.g. CENTOS7u2),
and then proceed by following the installation instructions.
The available versions of Bright, and the operating system versions available for each, can be seen by pointing a browser at:
It is also possible to use the OpenStack dashboard, Horizon, to spin up a virtual cluster.
The package “cm-openstack-caas-dashboard” must be installed in the software image used by the controller node.
# yum -y install cm-openstack-caas-dashboard.noarch
Reboot the controller node to sync the new image.
A login to the dashboard can be done using the URL:
http://<CaaS Controller Node>:10080/dashboard
and then going to the Bright dashboard. There, every cluster that has been installed can be seen, along with some useful information such as the number of nodes (compute and head node), the floating IP addresses, and so on.
A warning about the “Add cluster” button in the right corner, which can be used to add a cluster: for the current (March 2016) proof of concept, the button works properly only if the user has carried out a login using ssh at least once. This is because the ssh CaaS key is generated by the login. If this login has not been carried out beforehand, then cluster creation will fail due to the missing CaaS key, even though the GUI will report that a cluster has been added, without showing an error.