Hyperledger Fabric Cluster with Kubernetes

Setting up a Hyperledger Fabric cluster on Kubernetes

About Hyperledger Fabric

Hyperledger Fabric is a permissioned blockchain implementation and one of the Hyperledger projects hosted by The Linux Foundation.

Over the past year, many organizations have been adopting it for the distributed ledger needs of their use cases.

Why Kubernetes for Hyperledger Fabric?

This is a question that usually comes up while thinking about the architecture of a Fabric-based application: ‘Why is there even a need to set it up on a Kubernetes cluster? Why can’t we simply go with a basic Docker image setup on a regular server instance?’

Kubernetes seems ideal for deploying and managing the Fabric components for several reasons:

  1. Fabric deals with containers, and Kubernetes manages containers like a pro: Fabric components are packaged into Docker containers, and even the chaincode (the smart contract) creates a container dynamically. Kubernetes, being a popular and easy-to-use container orchestration platform, greatly simplifies managing these containers in a cluster.
  2. High availability, automated: The replication controller in Kubernetes helps provide high availability of crucial Fabric components such as the ordering service and the Fabric peers. If one container goes down, Kubernetes automatically creates another. In effect, this gives us ZERO DOWNTIME for our Fabric containers.

About this tutorial

Before moving forward, we assume that you’ve read about Kubernetes, Helm and Hyperledger Fabric in general, or are familiar with the terminology.

While there are resources available on the internet, as well as Fabric’s official documentation, for the initial limited setup of the Fabric ecosystem, there are hardly any that explain how exactly to set up a Hyperledger Fabric cluster on Kubernetes according to your needs.

This tutorial walks you through a step-by-step procedure for setting up a dynamic cluster on Kubernetes for any number of organizations, with any number of peers per organization. We will be using a repository of Helm charts along with a Python script to set up the cluster. If you want to skip the nitty-gritty and get straight to the setup, here is the link to the repository.

Cluster Architecture

We will have organization-specific pods in the cluster, and there will be a dedicated volume for each peer pod storing its certs and giving it writable space at runtime.

 

[Figure: Kubernetes cluster architecture]

Each peer pod has a dedicated persistent volume claim (PVC) in which its MSP and TLS certificates are stored. Please note that each organization can run its own instance of the application holding the business logic.

The extra app pod is able to communicate with the peer pods’ PVCs. It also has access to a shared NFS server, which stores the network-config.yaml files needed to install and instantiate chaincode on the Fabric peers via the Node SDK.

Alternatively, you can use CLI pods to install and instantiate the chaincode as well.

The best part is that we can follow either of the above-mentioned approaches to deploy our business logic using the Python script.

Initial Setup

  • The first step is to install the kubectl command-line tool and the Helm server (Tiller).
  • Create a Kubernetes cluster. Here’s the link to set up a cluster on gCloud.
  • Point kubectl on your machine to the newly created cluster and initialize the Helm Tiller (a sketch of these commands follows).
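For reference, a minimal sketch of these steps on Google Cloud could look like the following (the cluster name, zone and node count are placeholders, and Helm v2 with Tiller is assumed):

    # Verify the client-side tools are installed
    kubectl version --client
    helm version --client

    # Create a Kubernetes cluster on GKE (hypothetical name and zone)
    gcloud container clusters create fabric-cluster --zone=us-east1-b --num-nodes=3

    # Point kubectl to the newly created cluster
    gcloud container clusters get-credentials fabric-cluster --zone=us-east1-b

    # Install Tiller, Helm's server-side component (Helm v2)
    helm init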

Configuration Tools

We primarily need the cryptogen and configtxgen tools to create the channel artifacts and certificates for us. You will also need Python 2.7+ installed on your machine.
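Assuming the Fabric binaries are already on your PATH, a quick sanity check could be:

    # Confirm the binaries and Python are available
    cryptogen version
    configtxgen --version
    python --version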

Spinning up the Magic

Clone the repository to get all the Helm charts and the Python script that set up the cluster for us.

Step 1:

Here we utilize the crypto-config.yaml file to set up our cluster requirements. This is the same file that the cryptogen tool uses to create the peers’ and orderers’ certificates. We can modify this file to specify how many organizations we need and how many peers are required in each organization. We may also specify our own unique application running for each organization by passing it in the ExtraPods field. A typical example of the contents of a crypto-config file with two organizations can be found in the link below:

crypto-config.yaml Example
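For orientation, here is a minimal sketch of such a file. The OrdererOrgs/PeerOrgs layout follows the standard cryptogen format, while the ExtraPods entries (with the Chart and Values fields described in Step 4) are specific to this repository; all names, domains and paths below are placeholders:

    OrdererOrgs:
      - Name: Orderer
        Domain: orderers.svc.cluster.local    # placeholder domain
        Specs:
          - Hostname: orderer0

    PeerOrgs:
      - Name: Org1
        Domain: org1.svc.cluster.local        # placeholder domain
        Template:
          Count: 2                            # number of peer pods for Org1
        Users:
          Count: 1
        ExtraPods:                            # repository-specific field
          - Name: org1-app
            Chart: ./charts/example-app       # hypothetical chart path
      - Name: Org2
        Domain: org2.svc.cluster.local
        Template:
          Count: 2
        Users:
          Count: 1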

 

 

As you may notice, we have added the ExtraPods field to run an extra application pod in each org’s namespace.

Step 2:

The next step is to modify the configtx.yaml file in the same fashion. You can find a sample file with two organizations and a channel configuration in the link below:

configtx.yaml Example
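A heavily abbreviated sketch of such a file is shown below. It follows the standard configtx.yaml layout; the MSP IDs, directories and profile names are placeholders that must line up with your crypto-config.yaml and the repository’s scripts:

    Organizations:
      - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/.../msp   # path elided
      - &Org1
        Name: Org1MSP
        ID: Org1MSP
        MSPDir: crypto-config/peerOrganizations/.../msp      # path elided
        AnchorPeers:
          - Host: peer0
            Port: 7051
      - &Org2
        Name: Org2MSP
        ID: Org2MSP
        MSPDir: crypto-config/peerOrganizations/.../msp      # path elided

    Profiles:
      TwoOrgsOrdererGenesis:          # profile used for the genesis block
        Orderer:
          OrdererType: solo
          Organizations:
            - *OrdererOrg
        Consortiums:
          SampleConsortium:
            Organizations:
              - *Org1
              - *Org2
      TwoOrgsChannel:                 # profile used for the channel
        Consortium: SampleConsortium
        Application:
          Organizations:
            - *Org1
            - *Org2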

 

Step 3:

Use the command make fiber-up to set up the Fabric components in our cluster. This command invokes the init_orderers.py and init_peers.py scripts, which generate the pods according to the modified files.

The script does the following tasks in chronological order:

  1. Creates the crypto-config folder containing the peer and orderer certificates using the cryptogen tool
  2. Creates the channel-artifact folder containing the genesis block for the Fabric network
  3. Spins up the persistent volume claims (PVCs) for the orderer pods and copies the required certificates into their respective PVCs via a test pod
  4. Deletes the test pod and creates the orderer pods
  5. Spins up the persistent volume claims (PVCs) for all peer pods and copies the required certificates into their respective PVCs
  6. Deletes the test pod and initializes the peer pods for all organizations
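Once the script finishes, a quick way to confirm that everything came up is to list the pods (the namespace names below assume the peers/orderers namespaces used elsewhere in this tutorial):

    kubectl get pods --namespace=orderers
    kubectl get pods --namespace=peers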

Step 4 (Optional):

A: Updating crypto-config.yaml and adding a Helm chart for your app

– Set up the extra pods that you need to run per organization. You can list these in the ExtraPods field for each entry under PeerOrgs in crypto-config.yaml.

– A sample entry for a simple app would look like this:
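As a rough sketch (the app name and chart path are placeholders, and the exact schema of the Values field may differ in the repository):

    ExtraPods:
      - Name: org1-app                  # placeholder app name
        Chart: ./charts/example-app     # path to your app's Helm chart
        Values:                         # optional overrides for the chart
          image.tag: "latest"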

 

– In the Chart field, you need to pass the path of your app’s Helm chart. You can also pass values to override in the Helm chart in the Values field.

 B: Setting up NFS storage

– If your extra apps will be interacting with the Fabric components using the SDK, they will need the network-config.yaml files, which store the channel information and the paths to the peers’ public certs.

NOTE: If your extra app doesn’t need nodeSDK or network-config.yaml, you can skip Step 4.B

– To add the NFS server, we must first add a persistent disk to your project. To add a disk from the Cloud SDK, run the command:

 gcloud compute disks create --size=10GB --zone=us-east1-b nfs-disk

You can also go to the gcloud console and create it using the UI dashboard.

– In the file /public-certs-pvc/public-certs-nfs-service.yaml, update the value of gcePersistentDisk.pdName to the name of your persistent disk.
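The relevant portion of that file should look roughly like this (only the gcePersistentDisk section is shown; the surrounding fields are omitted):

    gcePersistentDisk:
      pdName: nfs-disk    # must match the disk created above
      fsType: ext4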

– Run the command make nfs-up to create the shared NFS storage and generate the network-config.yaml files for each organization.

C: Setting up Extra App Pods

– Check if all the Fabric pods are up with the command: kubectl get po --namespace=peers

– Once all the pods are in the Running state, run the command: make extras-up

Testing the chaincode

The script automatically creates a default channel between the two organizations and joins all the peer pods to the channel. After that, you can install a chaincode in two ways:

  1. Via your extra app pods using the Node SDK
  2. By using the CLI pod of each organization.

Here is how you can do it using CLI pods:

  1. Install the chaincode

– Enter the bash of the Org 1 CLI pod:
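For example (the pod name is a placeholder; look it up with kubectl get po --namespace=peers):

    kubectl exec -it <ORG_1_CLI_POD> --namespace=peers -- bash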

 

 In the terminal, install the chaincode on both of the org’s peer pods. The command to install it on one peer is given below:
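A sketch of the install command for Peer0 (the chaincode name, version, path and peer address are placeholders that must match your setup):

    # target a specific peer by overriding its address for this command
    CORE_PEER_ADDRESS=<PEER0_ORG1_ADDRESS>:7051 peer chaincode install \
        -n mycc -v 1.0 -p <chaincode_path_under_gopath>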

 

– Do the same for Peer1 in ORG_1_CLI_POD

– Repeat the same steps in ORG_2_CLI_POD as well.

  2. Instantiate the chaincode

– Enter the bash of one of the org CLI pods and instantiate the chaincode with the following command:
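A sketch of the instantiate command (channel name, chaincode name, orderer address, init arguments and endorsement policy are placeholders):

    peer chaincode instantiate -o <ORDERER_ADDRESS>:7050 -C <CHANNEL_NAME> \
        -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' \
        -P "OR('Org1MSP.member','Org2MSP.member')"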

 

 

  3. Query the chaincode

– From the other ORG_CLI_POD, query the instantiated chaincode
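For example (again, the channel and chaincode names are placeholders):

    peer chaincode query -C <CHANNEL_NAME> -n mycc -c '{"Args":["query","a"]}'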

 

 

If all goes well, you should see the value that was passed for the key.

Bringing the Cluster down

If we want to bring down the cluster we set up, we can simply run make down. Alternatively, if you wish to remove or recreate only a portion of the cluster, make use of the following commands:

make peer-down : to tear down only the peer pods and other organizational pods

make orderer-down : to tear down only orderer pods and namespace.

For more details about these commands, check the Makefile in the repository.

Caveats and Challenges

A Kubernetes cluster comes with a handful of challenges that we have addressed in our approach. We believe it is important to go through these in order to understand the inner workings of our script:

  1. Dynamic number of peers in every organization

A simple cluster with 2 organizations and 1 peer per organization sounds good for getting started with an HLF network. However, in a production environment, you might have a dynamic number of organizations involved. Each organization may decide to have a different number of peer pods, and each of them might have different instances of our application running alongside.

To handle this dynamicity, we use the crypto-config.yaml file, which is also used by the cryptogen tool to create certificates. Our Python script parses the peer organization hierarchy and creates namespaces and pods dynamically according to it.

  2. Pre-populating Volume claims in Kubernetes
    1. For a peer pod to be up and running successfully, a few certificates must be pre-populated in its volumes. The same goes for the orderer service pods: they require the genesis block in their volumes before the pods start.
    2. While this is quite achievable with usual Docker volumes, it is not the same with Kubernetes PVCs. The reason is that in a Kubernetes cluster a volume does not live on a single instance; instead, it can be spread across different servers (nodes, in Kubernetes terms). If a file a.txt is present in a PVC, we cannot be sure of its actual storage location in the cluster.
    3. This makes it a challenge to pre-populate the PVC before its pod is up. We do this by the following method (a sketch of the commands follows this list):
      • First, we create our PVC with a test pod attached to it.
      • Then we copy the required certs using the kubectl cp command.
      • Then we delete the test pod and bring up the target peer pod.
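A minimal sketch of that sequence for one orderer PVC (the manifest names, paths and namespace are placeholders; the Python script automates all of this):

    # 1. Create the PVC and a throwaway pod that mounts it
    kubectl apply -f orderer-pvc.yaml
    kubectl apply -f test-pod.yaml

    # 2. Copy the pre-generated certs into the mounted volume
    kubectl cp ./crypto-config/ordererOrganizations test-pod:/mnt/certs --namespace=orderers

    # 3. Delete the test pod and bring up the real orderer pod
    kubectl delete pod test-pod --namespace=orderers
    kubectl apply -f orderer-deployment.yaml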

 

NOTE:

•        An Init Container is the usual way to pre-populate files in Kubernetes, but it doesn’t work in our case: the files we need to copy into the PVC (the certificates) already exist on our local machine and can’t be hardcoded into the deployment.yaml.

•        hostPath is a volume type that resides on a single node only, but since our cluster can expand across servers, it should not be used here.

  3. Dynamic chaincode container creation and recognition:

When a peer in Fabric instantiates a chaincode, it creates a Docker container in which the chaincode runs. The Docker API endpoint it invokes to create the container is unix:///var/run/docker.sock.

This mechanism works well as long as the peer container and the chaincode container are managed by the same Docker engine. However, in Kubernetes, the chaincode container is created by the peer without notifying Kubernetes. Hence the chaincode and peer pods cannot connect to each other, which results in a failure when instantiating the chaincode.

To work around this problem, we have to pass the following environment variables to our peer and orderer pods:

CORE_VM_ENDPOINT: “unix:///host/var/run/docker.sock”

CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE: “bridge”

GODEBUG: “netdns=go”

These environment variables make sure that containers created outside the Kubernetes flow are also recognized by the peer pods.
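In the peer deployment, this roughly translates to the snippet below (container name and image tag are placeholders). The hostPath volume mounts the node’s Docker socket at /host/var/run/docker.sock, which is exactly what the CORE_VM_ENDPOINT value above points at:

    spec:
      containers:
        - name: peer
          image: hyperledger/fabric-peer:1.1.0    # example tag
          env:
            - name: CORE_VM_ENDPOINT
              value: "unix:///host/var/run/docker.sock"
            - name: CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE
              value: "bridge"
            - name: GODEBUG
              value: "netdns=go"
          volumeMounts:
            - name: docker-sock
              mountPath: /host/var/run/docker.sock
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock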

  4. Writable Disk Space for Extra Pods

Any extra pod using the Node SDK to interact with the Fabric components requires a writable space that is accessible to it. Since an organization may have many such extra pods running, they all need the same shared space to write files into.

Gcloud persistent disks don’t support the ReadWriteMany access mode, i.e. gcloud only allows a PVC to be writable by a single node (ReadWriteOnce) or readable by many (ReadOnlyMany). To overcome this challenge, we set up an independent NFS server pod mounted on top of a gcloud persistent disk. As a result, each organization’s pods have access to a specific sub-directory of the Network File System.

  5. Network-config.yaml file for Pods using the Node SDK

Any extra pod using the Node SDK to interact with the Fabric components requires a network-config.yaml file that holds information about the channel and the certificates required for it. Our solution currently generates a network-config.yaml file for a two-organization channel for each of the organizations and places it on the NFS. Each network-config file goes into the respective organization’s subfolder.
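For orientation, a heavily trimmed sketch of what such a file typically contains is shown below. It follows the common connection profile layout used by the Node SDK; the channel name, service addresses and certificate paths generated by the script may differ:

    name: org1-network
    version: "1.0"
    channels:
      mychannel:                       # placeholder channel name
        orderers:
          - orderer0
        peers:
          peer0-org1:
            endorsingPeer: true
    organizations:
      Org1:
        mspid: Org1MSP
        peers:
          - peer0-org1
    orderers:
      orderer0:
        url: grpcs://orderer0.orderers:7050      # placeholder address
        tlsCACerts:
          path: /shared/org1/orderer-tls-ca.pem  # placeholder path
    peers:
      peer0-org1:
        url: grpcs://peer0.org1.peers:7051       # placeholder address
        tlsCACerts:
          path: /shared/org1/peer0-tls-ca.pem    # placeholder path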

End Result and Conclusion

The end result is a properly working Hyperledger Fabric use-case setup on the Kubernetes cluster.

  • The unique point of this cluster is that no component’s Kubernetes setup is hardcoded in the deployments; everything is created dynamically as per each organization’s requirements.
  • Using the repository code and following this approach, we can set up a Hyperledger Fabric use case in minutes by compiling the requirements into crypto-config.yaml.

In the end, we would like to conclude by pointing out a few improvements that can be added to the discussed architecture.

In a large project, each organization might have a separate cluster for its resources. In that case, we would go with inter-cluster communication methods in Kubernetes, such as the Federation service. We will be publishing a blog explaining that architecture shortly.

Your feedback and pull requests are welcome on the Github Repository.