Release Notes Playbook Structure


The Kubernetes playbook differs from the Data Fabric and App Fabric playbooks in that it is much more involved and has more configurable options. However, for a vanilla on-prem installation it is easier to run than the other two.

Playbook File Structure

The Playbook inside the container will have a structure similar to the following.

  • playbook
    • inventory
      • group_vars
        • all
          • all.yml
          • options.yml
    • local.ini
    • roles
      • common
      • core
      • debug


Before running the playbook, the implementer will need to review and amend the variables inside the options.yml file. These options form the basis of the playbook configuration and the setup of the fraXses platform.

Registry Credentials (Harbor)

These credentials are those supplied by the Platform Development team for the specified client.
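As a sketch, this section of options.yml might look like the following (the key names here are illustrative assumptions, not confirmed against the actual file):

```yaml
# Hypothetical options.yml fragment -- key names are illustrative assumptions.
registry:
  url: harbor.example.com          # Harbor registry endpoint
  username: client-robot-account   # supplied by the Platform Development team
  password: "<supplied password>"
```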


Cluster Details

These fields are very important to the setup of the cluster.


The client field defines the client name and/or the name of the environment for that particular installation.

This name is appended to fraxses to generate the namespace in the cluster, e.g. a value of myclient-dev in the client field maps to a Kubernetes namespace of fraxses-myclient-dev. Namespacing can be used for multi-tenant clusters, so it is important to get this correct at implementation time.


This is the number of nodes available in the Kubernetes cluster on which fraXses is being provisioned. The playbook scales resources based on the number of nodes in the system, so it is important to get this right at the point of provisioning.
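The two cluster-detail fields above might be expressed as follows (the key names `client` and `nodes` are assumptions for illustration):

```yaml
# Hypothetical options.yml fragment -- key names are assumptions.
client: myclient-dev   # produces the Kubernetes namespace fraxses-myclient-dev
nodes: 3               # node count; drives how the playbook scales resources
```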


This enables encryption cluster-wide in fraXses. fraXses employs three-way encryption for all traffic on the messaging backbone (Kafka). Enabling encryption adds roughly 10-15% latency to transaction times. Whether to enable it should be discussed with the client, but as a rule of thumb it should be switched off in dev and UAT environments and switched on in production.
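A minimal sketch of this setting, assuming a boolean key named `encryption` (the key name is an assumption):

```yaml
# Hypothetical options.yml fragment -- the key name is an assumption.
# Rule of thumb: off for dev/UAT, on for production (~10-15% latency cost).
encryption: false
```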


This field holds the client’s part of the secret encryption key.


This section defines which underlying storage the cluster will be using.


This value defines the storage used. For on-prem installations it should be cephrbd.

For managed services, this value should match the native storage of the managed service. In addition to this field, there is another section that requires values specific to the managed-service storage. Please reach out to the Platform Development team for help with managed-service storage.
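A hypothetical shape for the storage section (key names are assumptions; only the cephrbd value comes from the text above):

```yaml
# Hypothetical options.yml fragment -- key names are assumptions.
storage:
  type: cephrbd   # on-prem default; managed services use their native storage value
```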


This section defines the ingress method expected to be used by the cluster. For on-prem installations the default is nginx, but different managed services or different clients may want to use different ingress controllers. Regardless of the controller used, it must support regex and rewrite ingress rules.


This is the type of ingress controller being used, e.g. nginx.


This defines the ingress host on which the gateway will be found.
This may be a defined DNS name (which may have secure certificates) or an IP address and a NodePort.

Some of the fraXses components require connecting back into the gateway and will use this for that connection.
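A sketch of the ingress section under the assumptions above (key names and example values are hypothetical):

```yaml
# Hypothetical options.yml fragment -- key names and values are assumptions.
ingress:
  type: nginx                 # controller must support regex and rewrite rules
  host: fraxses.example.com   # or an IP:NodePort pair such as 10.0.0.10:30443
```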

System (Development) Token

Components in the fraXses platform will sometimes call into the gateway with a defined system token. This is commonly called the development token.


This is the system (development) token defined in the fraXses meta database.
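A minimal sketch of this field (the key name `dev_token` is an assumption; the actual value must come from the fraXses meta database):

```yaml
# Hypothetical options.yml fragment -- the key name is an assumption.
dev_token: "<system token defined in the fraXses meta database>"
```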