Kubernetes Cluster Installation

1. To get started, log in to the dashboard, find the Kubernetes Cluster in the Marketplace, and click Install. Note that this clustered solution is available only to billing customers.

2. Choose the type of installation:

  • Clean Cluster with a pre-deployed Hello World example
  • Deploy custom Helm chart or stack via shell commands. Type a list of commands to install the Helm chart or run other commands for a custom application deployment.

By default, you are offered to install the Open Liberty Operator with the following set of commands:

OPERATOR_NAMESPACE=open-liberty
kubectl create namespace "$OPERATOR_NAMESPACE"
kubectl apply -f https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.0/openliberty-app-crd.yaml
curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.0/openliberty-app-cluster-rbac.yaml | sed -e "s/OPEN_LIBERTY_OPERATOR_NAMESPACE/${OPERATOR_NAMESPACE}/" | kubectl apply -f -
curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.0/openliberty-app-operator.yaml | sed -e "s/OPEN_LIBERTY_WATCH_NAMESPACE/${OPERATOR_NAMESPACE}/" | kubectl apply -n "${OPERATOR_NAMESPACE}" -f -
kubectl apply -f https://raw.githubusercontent.com/cloudjiffy-jps/kubernetes/v1.18.10/addons/open-liberty.yaml
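
The two curl-to-sed pipelines above substitute your target namespace into the downloaded manifests before applying them. A minimal local sketch of that substitution, using a sample line of the kind found in the release YAML files (no cluster required):

```shell
#!/bin/sh
# Substitute the operator namespace into a manifest line, exactly as
# the install commands above do with the downloaded YAML files.
OPERATOR_NAMESPACE=open-liberty

# A sample placeholder line as it appears in the RBAC manifest
echo "  namespace: OPEN_LIBERTY_OPERATOR_NAMESPACE" \
  | sed -e "s/OPEN_LIBERTY_OPERATOR_NAMESPACE/${OPERATOR_NAMESPACE}/"
# prints "  namespace: open-liberty"
```

In the real commands the substituted stream is piped straight into kubectl apply -f -, which reads the manifest from stdin.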

3. Next, choose the required topology of the cluster. Two options are available:

  • Development: one master (1) and one scalable worker (1+) - a lightweight version for testing and development purposes
  • Production: multi-master (3) with API balancers (2+) and scalable workers (2+) - a cluster with pre-configured high availability for running applications in production

Where:

    • Multi-master (3) - three master nodes.
    • API balancers (2+) - two or more load balancers that distribute incoming API requests. To increase the number of balancers, scale them horizontally.
    • Scalable workers (2+) - two or more workers (Kubernetes nodes). To increase the number of workers, scale them horizontally.
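
Once the cluster is installed, the chosen topology can be checked from the master node, where kubectl is already configured. A sketch (the role label shown is the one used by Kubernetes releases of the v1.18 era; verify against your cluster's labels):

```shell
# List all cluster nodes with their roles and addresses; in a
# Production topology you should see three master nodes and
# two or more workers.
kubectl get nodes -o wide

# Count the worker nodes only, i.e. nodes without the master role label
kubectl get nodes --selector='!node-role.kubernetes.io/master' --no-headers | wc -l
```

The worker count should grow as you scale the worker layer horizontally via the dashboard.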

4. Attach dedicated NFS Storage with dynamic volume provisioning.

By default, every node has its own filesystem with read-write permissions, but for the data to be accessible from other containers or to persist across redeployments, it should be placed on a dedicated volume.

You can use a custom dynamic volume provisioner by specifying the required settings in your deployment YAML files.

Or, you can keep the pre-configured volume manager and NFS Storage built into the Cloudjiffy Kubernetes cluster. As a result, the physical volumes are provisioned dynamically on demand and connected to the containers. The Storage Node can be accessed and managed using the dashboard's file manager, SFTP, or any NFS client.
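
With the built-in provisioner kept, requesting storage is a matter of creating a PersistentVolumeClaim; the backing volume is then provisioned on the NFS Storage node on demand. A sketch that relies on the cluster's default StorageClass rather than naming one (list the actual classes first to confirm which is default):

```shell
# Show the StorageClasses available in the cluster; the one marked
# "(default)" is used when a claim names no class explicitly.
kubectl get storageclass

# Create a claim; a matching volume is provisioned dynamically.
# The claim name "demo-data" and the 1Gi size are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

# Confirm the claim is bound to a dynamically provisioned volume
kubectl get pvc demo-data
```

Pods then mount the claim by name in their volume definitions, so the same data survives container redeployments.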

5. If necessary, you can install auxiliary software to monitor and troubleshoot the K8s cluster and enable API access with the help of the complementary tools checkboxes:

  • Install Prometheus & Grafana to monitor the K8s cluster and the application's health. This software requires an additional 5 GB of disk space for persistent volumes and consumes about 500 MB of RAM.
  • Install Jaeger tracing tools to ensure effective troubleshooting of distributed services.
  • Enable Remote API Access to provide the ability to manage K8s via the API.
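
With Remote API Access enabled, the cluster can be managed from a local machine. A sketch, assuming you have the API endpoint URL and access token reported after installation (both values below are placeholders, and the context names are illustrative):

```shell
# Placeholders: substitute the endpoint and token provided for
# your cluster after installation.
API_ENDPOINT="https://<your-api-endpoint>"
API_TOKEN="<your-access-token>"

# Query the cluster directly through the Kubernetes REST API
curl -k -H "Authorization: Bearer ${API_TOKEN}" "${API_ENDPOINT}/api/v1/nodes"

# Or configure a local kubectl context against the remote cluster
kubectl config set-cluster cloudjiffy-k8s --server="${API_ENDPOINT}" --insecure-skip-tls-verify=true
kubectl config set-credentials cloudjiffy-user --token="${API_TOKEN}"
kubectl config set-context cloudjiffy --cluster=cloudjiffy-k8s --user=cloudjiffy-user
kubectl config use-context cloudjiffy
kubectl get nodes
```

For production use, replace --insecure-skip-tls-verify with the cluster's CA certificate so the API server's identity is verified.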