Release Notes for Cloudjiffy 6.1.2

In this document, you will find all of the new features, enhancements, and visible changes included in the Cloudjiffy PaaS 6.1.2 release.

 

New Features:

 

1. Topology Wizard Improvements

A major overhaul of the topology builder in the environment wizard was performed in the 6.1 Cloudjiffy release. The main change is the ability to search for the required software stack and add it to any layer. The standard approach recommends the following topology structure from top to bottom (see the sketch after the list):

  • load balancers (green blocks)
  • application servers (blue)
  • databases (orange)
  • extra (gray)
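
For illustration, below is a minimal sketch of how such a layered topology could be described programmatically. It assumes a Jelastic-style JSON structure; the node groups ("bl", "cp", "sqldb", "storage") and stack names are illustrative placeholders, not a confirmed Cloudjiffy API contract:

```python
import json

# Hypothetical, Jelastic-style topology description following the recommended
# top-to-bottom layering; node groups and stack names are placeholders only.
topology = {
    "nodes": [
        {"nodeGroup": "bl", "nodeType": "nginx", "count": 1},         # load balancer (green)
        {"nodeGroup": "cp", "nodeType": "tomcat", "count": 2},        # application servers (blue)
        {"nodeGroup": "sqldb", "nodeType": "mysql", "count": 1},      # database (orange)
        {"nodeGroup": "storage", "nodeType": "storage", "count": 1},  # extra (gray)
    ]
}

print(json.dumps(topology, indent=2))
```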

This change significantly simplifies the configuration of custom topologies. For example, you can easily add Kubernetes nodes to the application servers section in the middle of the wizard.

topology wizard search

Usually, when selecting a stack for a block (layer) in the topology wizard, users are provided with a list of software recommended for that block's role. Now, a new “More…” option in the stack selection drop-down list lets you choose a template from any role. For example, you can easily add a database to the central block, which is commonly reserved for application servers.

To help locate the required stack more quickly, a Search field can be accessed by clicking the current stack name at the top of the list. Start typing to see the relevant results grouped by role. Additionally, a search option was added to the engine/version field in the central part of the wizard when a particular stack is already selected.

Other adjustments to the topology wizard include:

  • renamed the Docker tab to Custom (since different container types are available - Kubernetes, Docker Native, etc.) and adjusted the corresponding icon and the descriptions of the available options
  • added the default Storage block to the .NET tab
  • implemented tag search when working with custom containers based on Docker images in the topology wizard and during container redeploy

 

2. Gluster Native Client Support

Starting with the 6.1 platform release, the Shared Storage Cluster provides support for the Gluster Native Client for distributed shared (cloud) storage. This change allows clients to connect over the FUSE interface (in addition to the standard NFS).

Compared to the NFS protocol, GlusterFS offers greater reliability. It operates with multiple servers and is recommended for cases that require high concurrency, high write performance, and failover recovery in case of emergencies.

Currently, only the Shared Storage Cluster can export data using GlusterFS (i.e. as a Gluster Native server). At the same time, any node (except alpine-based containers) can operate as a client and mount data via the GlusterFS protocol.

Gluster Native client type

When selecting a protocol, as a general rule, choose NFS for better performance and Gluster Native for reliability (see the sketch after the list):

  • NFS - straightforward file system protocol, designed for accelerated processing and high performance
  • Gluster Native (FUSE) - reliable file system protocol with automatic replication of the mounted data, designed for data backup and failover (requires more CPU/RAM than NFS)
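
To make the difference concrete, here is a minimal sketch of how a client container might mount the shared storage over either protocol. The storage host name, export path, and mount point are placeholders; the actual endpoints are provided by the Shared Storage Cluster:

```python
import subprocess

STORAGE_HOST = "storage.example.com"  # placeholder address of the Shared Storage Cluster

def mount_shared_storage(protocol: str, mount_point: str = "/mnt/data") -> None:
    """Mount the shared storage over NFS or the Gluster Native (FUSE) client."""
    if protocol == "nfs":
        # Plain NFS mount: a single server and lower overhead, best raw performance.
        cmd = ["mount", "-t", "nfs", f"{STORAGE_HOST}:/data", mount_point]
    elif protocol == "glusterfs":
        # Gluster Native (FUSE) mount: the client talks to the cluster directly,
        # so the mount survives the failure of a single storage node.
        cmd = ["mount", "-t", "glusterfs", f"{STORAGE_HOST}:/data", mount_point]
    else:
        raise ValueError(f"unknown protocol: {protocol}")
    subprocess.run(cmd, check=True)
```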

You can learn more about the GlusterFS architecture in the official documentation.

 

3. Ubuntu 21 Support

Cloudjiffy PaaS now supports the Ubuntu 21 OS templates on all Cloudjiffy installations, regardless of the platform version. The new release offers an updated kernel, toolchain upgrades, security improvements, and more. For detailed information on Ubuntu 21, refer to the official release notes.

 

Changed Features:

 

1. Auto-Clustering Out of Beta for MongoDB and PostgreSQL

Cloudjiffy’s Auto-Clustering feature helps users automatically set up a production-ready cluster for some of the most popular software stacks. This option has been available for MongoDB and PostgreSQL databases for quite some time and has received positive feedback without any major issues. As a result, starting with the 6.0.5 Cloudjiffy release, the “beta” UI label next to the Auto-Clustering option is removed for these stacks.

 

2. Kubernetes Cluster Domain Length Validation

The total length of a container hostname is limited to 64 characters due to Linux specifics. Cloudjiffy PaaS automatically validates this value during environment creation. However, for Kubernetes instances, the limit is slightly lower - 63 characters. Starting with the platform 6.0.6 release, this nuance is correctly taken into account during Kubernetes Cluster package creation to avoid errors caused by an overly long domain name.
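
As a rough illustration of the rule (not the platform's actual validation code), the check boils down to comparing the full hostname length against the 64- or 63-character limit; the hostname composition below is simplified:

```python
# Illustrative check of the hostname length rule described above: Linux allows
# 64 characters, while Kubernetes-managed containers need to stay within 63.
MAX_HOSTNAME_LINUX = 64
MAX_HOSTNAME_KUBERNETES = 63

def validate_env_domain(env_name: str, platform_domain: str, kubernetes: bool = False) -> None:
    hostname = f"{env_name}.{platform_domain}"  # e.g. "myenv.app.cloudjiffy.net"
    limit = MAX_HOSTNAME_KUBERNETES if kubernetes else MAX_HOSTNAME_LINUX
    if len(hostname) > limit:
        raise ValueError(
            f"'{hostname}' is {len(hostname)} characters long; "
            f"the limit for this environment type is {limit}"
        )

validate_env_domain("my-k8s-cluster", "app.cloudjiffy.net", kubernetes=True)
```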

 

3. Custom Domain Binding Improvements

Cloudjiffy PaaS provides a Domain Binding option that allows configuring custom domains for environments that are accessed via the Shared Load Balancer (i.e. without a public IP). The process is simple - you just need to create the appropriate CNAME or ANAME record for your domain and bind it to the environment via the Cloudjiffy dashboard.

Note: For environments that are accessed through a public IP (recommended for production), you don’t need to bind domains via the dashboard. Just configure an A record in the DNS panel to map a custom domain directly to the required IP address.
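
For illustration, here is a small sketch of how the two DNS setups could be verified from a script, using the third-party dnspython package; all domain names and the IP address are placeholders:

```python
import dns.resolver  # third-party "dnspython" package

def check_custom_domain(domain: str, env_domain: str = None, public_ip: str = None) -> bool:
    """Verify that a custom domain points where the dashboard expects it to.

    Pass env_domain for the Shared Load Balancer case (CNAME/ANAME binding)
    or public_ip for the public IP case (plain A record).
    """
    if env_domain:
        answer = dns.resolver.resolve(domain, "CNAME")
        return answer[0].target.to_text().rstrip(".") == env_domain
    if public_ip:
        answer = dns.resolver.resolve(domain, "A")
        return any(record.to_text() == public_ip for record in answer)
    raise ValueError("pass either env_domain or public_ip")

# Shared Load Balancer binding (CNAME) - placeholder names:
# check_custom_domain("www.example.com", env_domain="myenv.app.cloudjiffy.net")
# Public IP mapping (A record) - placeholder address:
# check_custom_domain("www.example.com", public_ip="203.0.113.10")
```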

To clarify the process for users, the relevant specifics and detailed steps were added to the Custom Domains tab. For further convenience, the exact environment domain that should be used for CNAME or ANAME records is now shown in a separate field with a quick copy button. Additionally, the form explicitly indicates if the current environment does not have any bound domains.

Additionally, the Swap Domains section now shows a list of bound domains. As a result, you can view the lists of the current and target environment domains (in the Domain Binding and Swap Domains subsections, respectively).

 

4. Configuration Backups on Redeploy

Some adjustments were applied to the backups created via the redeploy functionality on the Apache PHP stacks. In addition to the latest backup ({file_name}.backup), the platform now creates and keeps a copy of the required config files for every redeployment to a different tag ({file_name}.{time_stamp}). This improvement allows you to track changes better, simplifying analysis and rollback if necessary.

Also, when redeploying to the same tag, the platform won’t overwrite the existing php.ini file.
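
For example, one way to review what changed between redeployments is to diff the current config against the most recent timestamped copy. The config path below is a placeholder, and the snippet relies only on the naming pattern described above:

```python
import difflib
import glob

CONFIG = "/etc/httpd/conf/httpd.conf"  # placeholder path of a tracked config file

# "<file>.backup" holds the most recent copy, while "<file>.<time_stamp>" copies
# accumulate on every redeploy to a different tag (naming per the notes above).
timestamped = sorted(path for path in glob.glob(CONFIG + ".*") if not path.endswith(".backup"))

if timestamped:
    previous = timestamped[-1]
    with open(previous) as old, open(CONFIG) as new:
        diff = difflib.unified_diff(old.readlines(), new.readlines(),
                                    fromfile=previous, tofile=CONFIG)
    print("".join(diff))
```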

 

5. Default File Permissions Adjustment

Starting with the Cloudjiffy 6.1 release, file permissions for containers after environment creation are adjusted to match the default values required by some of the most popular applications. For example, this change ensures that cPanel can be deployed without additional configuration.

 

6. Java Keytool with Sudo

To allow straightforward use of the Java keytool utility, a small adjustment was made to the Cloudjiffy Java-based containers. Namely, keytool was added to the sudoers file, which allows using it with sudo rights and adjusting the container’s keystore even if it belongs to the root user.
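
As an illustration, importing a certificate into the JVM keystore from inside the container could now look like the following; the keystore path, alias, certificate file, and password are placeholders to adjust for your setup:

```python
import subprocess

# Illustrative only: import a certificate into the JVM's default keystore with sudo.
# The keystore path, alias, certificate file, and password are placeholders.
subprocess.run(
    [
        "sudo", "keytool", "-importcert",
        "-alias", "my-cert",
        "-file", "/tmp/my-cert.crt",
        "-keystore", "/usr/java/latest/lib/security/cacerts",
        "-storepass", "changeit",
        "-noprompt",
    ],
    check=True,
)
```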

 

7. onBeforeInit Event Improvements

OnBeforeInit is a Cloud Scripting event that is triggered before application installation. It is usually used to dynamically configure the installation form based on certain specifics (e.g. account quotas). In the 6.1 Cloudjiffy upgrade, the onBeforeInit event was improved to support placeholders, allowing validation of the collaborator account’s permissions. This change ensures that the JPS installation frame can be correctly customized when installing as a collaborator.

Additionally, a new trigger condition was added for the onBeforeInit event. Now, it is possible to implement custom initialization actions upon clicking a custom button.

 

8. VCS Deployment Error Notifications

In the 6.1 platform version, error notifications for failed VCS deployment operations were reviewed and adjusted to clarify the root cause of the issue or provide pointers for further troubleshooting. The new texts are aimed at helping developers resolve problems related to deployments from Git/SVN repositories more quickly.

 

 

Fixed: 

 

Debian 8 “Jessie” software stack LTS support has officially ended; it will no longer receive any updates or security fixes. In the Cloudjiffy 6.1 release, this version was removed from the list of supported OS templates to ensure that users operate with reliable and secure stacks only. The platform restricts the creation of new Debian 8 containers, but all existing ones remain fully operable. However, we strongly recommend updating such instances to the Debian 9 or 10 releases via the built-in redeploy functionality.

