Hyperledger Fabric Cluster with Kubernetes

Setting up a Hyperledger Fabric cluster on Kubernetes

About Hyperledger Fabric

Hyperledger Fabric is a permissioned blockchain implementation and one of the Hyperledger projects hosted by The Linux Foundation.

Over the past year, many organizations have been adopting it for the distributed ledger needs of their use cases.

Why Kubernetes for Hyperledger Fabric?

This question usually comes up while thinking about the architecture of a Fabric-based application: ‘Why is there even a need to set it up on a Kubernetes cluster? Why can’t we simply go with a basic Docker setup on a regular server instance?’

Kubernetes seems ideal for deploying and managing the Fabric components for several reasons:

  1. Fabric deals with containers, and Kubernetes manages containers like a pro: Fabric components are packaged into Docker containers, and even the chaincode (the smart contracts) creates a container dynamically. Kubernetes, a popular and easy-to-use container orchestration platform, greatly simplifies managing these containers in a cluster.
  2. High availability, automated: The replication controller in Kubernetes helps provide high availability of crucial Fabric components like the ordering service and Fabric peers. If one container goes down, Kubernetes automatically creates another. In effect, this gives us near-zero downtime for our Fabric containers.

About this tutorial

Before moving forward, we assume that you’ve read about Kubernetes, Helm and Hyperledger Fabric in general or are familiar with the terminologies.

While there are resources available on the internet, as well as Fabric’s official documentation, covering an initial limited setup of the Fabric ecosystem, there are hardly any that explain exactly how to set up a Hyperledger Fabric cluster on Kubernetes according to your needs.

This tutorial walks you through a step-by-step procedure for setting up a dynamic cluster on Kubernetes for any number of organizations, with any number of peers per organization. We will create a repository of Helm charts plus a python script to set up the cluster. If you want to skip the nitty-gritty and get straight to the setup, here is the link to the repository.

Cluster Architecture

We will have organization-specific pods in the cluster, and there will be a dedicated volume for each peer pod storing its certs and giving it runtime-writable space.

[Figure: Kubernetes cluster architecture]

Each of the peer pods has a dedicated persistent volume claim (PVC) in which its respective MSP and TLS certificates are present. Note that each organization can also run its own instance of an application containing the business logic.

The extra app can communicate with the peer pods’ PVCs. It also has access to a shared NFS server. The NFS also stores the network-config.yaml files that are needed to install and instantiate chaincode on fabric peers via nodeSDK.

Alternatively, you can use CLI pods to install and instantiate the chaincode as well.

The best part is that we can follow either of the above-mentioned approaches to deploy our business logic using the python script.

Initial Setup

  •        The first step is to install the kubectl command-line tool and the Helm server.
  •        Create a Kubernetes cluster. Here’s the link to set up a cluster on gCloud.
  •        Point kubectl on your machine to the newly created cluster and initialize the Helm Tiller, as sketched below.
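A minimal sketch of these steps, assuming a GKE cluster named fabric-cluster in zone us-east1-b (both names are illustrative):

    # point kubectl at the newly created cluster
    gcloud container clusters get-credentials fabric-cluster --zone us-east1-b
    kubectl get nodes   # sanity check: the cluster nodes should be listed
    # install Tiller, the in-cluster component of Helm v2
    helm init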

Configuration Tools

We primarily need the cryptogen and configtxgen tools to create the channel artifacts and certificates for us. You will also need Python 2.7+ installed on your machine.
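One way to obtain the tool binaries is Fabric’s bootstrap script (the URL and version pin below are assumptions; check the Fabric documentation for the current ones):

    # download the Fabric platform binaries, including cryptogen and configtxgen
    curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s 1.2.0
    export PATH=$PWD/bin:$PATH   # make the tools available on the PATH
    cryptogen version            # verify the installation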

Spinning up the Magic

Clone the repository to get all the Helm charts and the python script that sets up the cluster for us.

Step 1:

Here we utilize the crypto-config.yaml file to specify our cluster requirements. This is the same file used by the cryptogen tool to create the peers’ and orderers’ certificates. We can modify this file to specify how many organizations we need and how many peers are required in each organization. We may also specify our own application running for each organization by passing it in the ExtraPods field. A typical example of a crypto-config file with two organizations can be found in the link below:

crypto-config.yaml Example
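For reference, a minimal sketch of such a file (domains and counts are illustrative; ExtraPods is this repository’s custom field, not part of the standard cryptogen schema):

    OrdererOrgs:
      - Name: Orderer
        Domain: example.com
        Specs:
          - Hostname: orderer
    PeerOrgs:
      - Name: Org1
        Domain: org1.example.com
        Template:
          Count: 2            # number of peer pods for this org
        Users:
          Count: 1
        ExtraPods:            # custom field parsed by the python script
          - Name: app
            Chart: ./charts/app
      - Name: Org2
        Domain: org2.example.com
        Template:
          Count: 2
        Users:
          Count: 1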


As you may notice, we have added an ExtraPods field to run an extra application pod in each org’s namespace.

Step 2:

The next step is to modify the configtx.yaml file in the same fashion. You can find a sample file with two organizations and a channel configuration in the link below:

configtx.yaml Example
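A minimal sketch of the relevant sections (names, ports, and the solo orderer type are illustrative; the MSPDir paths must match the cryptogen output):

    Organizations:
      - &OrdererOrg
        Name: OrdererMSP
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/example.com/msp
      - &Org1
        Name: Org1MSP
        ID: Org1MSP
        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
        AnchorPeers:
          - Host: peer0.org1.example.com
            Port: 7051
      - &Org2
        Name: Org2MSP
        ID: Org2MSP
        MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
        AnchorPeers:
          - Host: peer0.org2.example.com
            Port: 7051

    Orderer: &OrdererDefaults
      OrdererType: solo
      Addresses:
        - orderer.example.com:7050

    Profiles:
      TwoOrgsOrdererGenesis:
        Orderer:
          <<: *OrdererDefaults
          Organizations:
            - *OrdererOrg
        Consortiums:
          SampleConsortium:
            Organizations:
              - *Org1
              - *Org2
      TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
          Organizations:
            - *Org1
            - *Org2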


Step 3:

Use the command make fiber-up to set up the Fabric components in our cluster. This command invokes the init_orderers.py and init_peers.py scripts, which generate the pods according to the modified files.

The script performs the following tasks, in order:

  1. Creates a crypto-config folder containing the peer and orderer certificates, using the cryptogen tool
  2. Creates a channel-artifact folder containing the genesis block for the fabric
  3. Spins up the persistent volume claims (PVCs) for the orderer pods and copies the required certificates into their respective PVCs via a test pod
  4. Deletes the test pod and creates the orderer pods
  5. Spins up the PVCs for all peer pods and copies the required certificates into their respective PVCs
  6. Deletes the test pod and initializes the peer pods for all organizations

Step 4 (Optional):

A: Updating crypto-config.yaml and adding helm chart for your app

– Set up the extra pods that you need to run per organization. You can mention these in the ExtraPods field of each PeerOrg in crypto-config.yaml.

– A sample entry for a simple app would look like the sketch below (names, chart path, and the Values layout are illustrative):
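    ExtraPods:
      - Name: simple-app             # name for the extra application pod
        Chart: ./charts/simple-app   # path to your app's helm chart
        Values:                      # overrides passed to the chart
          image.repository: myorg/simple-app
          image.tag: latest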


– In the Chart field, you need to pass the path of your app’s Helm chart. You can also pass values to override in the Helm chart via the Values field.

 B: Setting up NFS storage

– If your extra apps will interact with Fabric components using the SDK, they will need network-config.yaml files, which store the channel information and the paths of the peers’ public certs.

NOTE: If your extra app doesn’t need nodeSDK or network-config.yaml, you can skip Step 4.B

– To add the NFS server, we must first add a persistent disk to your project. To add a disk from the cloud SDK, run the command:

gcloud compute disks create --size=10GB --zone=us-east1-b nfs-disk

You can also go to the gCloud console and create it using the UI dashboard.

– In the file /public-certs-pvc/public-certs-nfs-service.yaml, update the value of gcePersistentDisk.pdName to the name of your persistent disk.
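The relevant fragment would look roughly like this (a sketch; only pdName needs to change):

    # excerpt: the volume definition backing the NFS server pod
    gcePersistentDisk:
      pdName: nfs-disk   # the disk created in the previous step
      fsType: ext4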

– Run the command make nfs-up to create the shared NFS storage and generate the network-config.yaml files for each organization.

C: Setting up Extra App Pods

– Check whether all the fiber pods are up with the command: kubectl get po --namespace=peers

– Once all the pods are in the Running state, run the command: make extras-up

Testing the chaincode

The script automatically creates a default channel between the two organizations and joins all the peer pods to the channel. After that, you can install a chaincode in two ways:

  1. Via your Extra app pods using nodeSDK
  2. By using a CLI pod for each organization.

Here is how you can do it using CLI pods:

  1. Install the chaincode

– Enter the bash of the Org 1 CLI pod:
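A hedged example (the pod name and namespace depend on your cluster):

    # find the CLI pod's name, then open a shell inside it
    kubectl get po --namespace=peers | grep cli
    kubectl exec -it <ORG_1_CLI_POD> --namespace=peers -- bash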


In the terminal, install the chaincode on both of the org’s peer pods. The command to install it on one peer is given below:
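A sketch of the install command (chaincode name, path, and peer address are illustrative):

    # inside the CLI pod: target peer0, then install the node.js chaincode
    export CORE_PEER_ADDRESS=peer0.org1:7051
    peer chaincode install -n mycc -v 1.0 -l node -p /opt/chaincode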


– Do the same for Peer1 in ORG_1_CLI_POD

– Repeat the same steps in ORG_2_CLI_POD as well.

  2. Instantiate the chaincode

– Enter the bash of one of the org CLI pods and instantiate the chaincode with the following command:
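An illustrative instantiation (adjust the orderer address, channel name, and endorsement policy to your setup):

    peer chaincode instantiate -o orderer0.orderers:7050 -C mychannel -n mycc -v 1.0 \
      -c '{"Args":["init","a","100"]}' -P "OR('Org1MSP.member','Org2MSP.member')"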


  3. Query the chaincode

– From the other ORG_CLI_POD, query the instantiated chaincode:
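For example (this should return the value set at instantiation, "100" in the sketch above):

    peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'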


If all goes well, you should see the passed value of the key.

Bringing the Cluster down

If we want to bring down the cluster we set up, we can simply run make down. Alternatively, if you wish to remove or recreate only a portion of the cluster, make use of the following commands:

make peer-down: to tear down only the peer pods and other organizational pods

make orderer-down: to tear down only the orderer pods and their namespace.

For more details about these commands, check the Makefile in the repository.

Caveats and Challenges

A Kubernetes cluster comes with a handful of challenges that we have addressed in our approach. We believe it is important to go through these in order to understand the inner workings of our script:

  1. Dynamic number of peers in every organization

A simple cluster with 2 organizations and 1 peer per organization sounds good for starting out with an HLF network. However, in a production environment, you might have a dynamic number of organizations involved. Each organization may decide to have a different number of peer pods, and each of them might run different instances of our application alongside.

To handle this dynamicity, we use the crypto-config.yaml file, which is also used by the cryptogen tool to create certificates. Our python script parses the peer organization hierarchy and creates namespaces and pods dynamically according to it.

  2. Pre-populating Volume Claims in Kubernetes
    1. For a peer pod to be up and running successfully, a few certificates must be pre-populated in its volumes. The same goes for the orderer service pods: they require the genesis block in their volumes before the pods start.
    2. While this is quite achievable with usual Docker volumes, it is not the same with Kubernetes PVCs. The reason is that in a Kubernetes cluster, a volume does not live on a single instance; it is spread across different servers (nodes, in Kubernetes terms). If a file a.txt is present in a PVC, we can’t be sure of its actual storage location in the cluster.
    3. This makes it a challenge to pre-populate a PVC before its pod is up. We do it by the following method, sketched in code after the list:
      • First, we create our PVC with a test pod attached to it.
      • Then we copy the required certs by using kubectl cp command.
      • Then we delete the test pod and bring up the target Peer pod.
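A minimal sketch of the trick (manifest names and paths are illustrative):

    # 1) create the PVC together with a throwaway test pod that mounts it
    kubectl apply -f test-pod-with-pvc.yaml --namespace=peers
    # 2) copy the pre-generated certs into the mounted volume
    kubectl cp ./crypto-config/peerOrganizations/org1 peers/test-pod:/certs
    # 3) delete the test pod (the PVC and its data survive) and start the real peer
    kubectl delete pod test-pod --namespace=peers
    kubectl apply -f peer0-org1-deployment.yaml --namespace=peers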


NOTE:

•        An Init Container is the ideal way to pre-populate files in Kubernetes. But it doesn’t work in our case, because we have existing files (certificates) on our local machine that we need to copy into the PVC, and they can’t be hardcoded in the deployment.yaml.

•        HostPath is a volume type that resides on only a single server in the cluster; since our cluster can expand across servers, it should not be used here.

  3. Chaincode dynamic container creation and recognition:

When a peer in Fabric instantiates a chaincode, it creates a Docker container in which the chaincode runs. The Docker API endpoint it invokes to create the container is unix:///var/run/docker.sock.

This mechanism works well as long as the peer container and the chaincode container are managed by the same Docker engine. However, in Kubernetes, the chaincode container is created by the peer without notifying Kubernetes. Hence the chaincode and peer pods cannot connect to each other, which results in failure when instantiating the chaincode.

To work around this problem, we have to pass the following environment variables to our peer and orderer pods:

CORE_VM_ENDPOINT: "unix:///host/var/run/docker.sock"

CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE: "bridge"

GODEBUG: "netdns=go"

These environment variables make sure that containers created outside the Kubernetes flow are also recognized by the peer pods.
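In the peer deployment, this amounts to mounting the node's Docker socket into the pod and pointing the variables at it. A sketch of the relevant part of the container spec (names and paths are illustrative):

    containers:
      - name: peer
        env:
          - name: CORE_VM_ENDPOINT
            value: "unix:///host/var/run/docker.sock"
          - name: CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE
            value: "bridge"
          - name: GODEBUG
            value: "netdns=go"
        volumeMounts:
          - name: docker-sock
            mountPath: /host/var/run/docker.sock
    volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock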

  4. Writable Disk Space for Extra Pods

Any extra pod using NodeSDK to interact with Fabric components requires a writable space accessible to it. Since an organization may have many such extra pods running, they all need the same shared space to write files into.

GCloud persistent disks don’t support the ReadWriteMany access mode; i.e., gCloud only allows a PVC to be writable by a single pod (ReadWriteOnce) or readable by multiple pods (ReadOnlyMany). To overcome this challenge, we set up an independent NFS server and pod mounted on top of a gCloud persistent disk. As a result, each organization’s pods have access to a specific sub-directory in the network file system.
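A sketch of the NFS-backed volume that makes ReadWriteMany possible (server address and size are illustrative):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-public-certs
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany   # possible because NFS, not the GCE disk, is the access layer
      nfs:
        server: nfs-server.default.svc.cluster.local
        path: "/"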

  5. Network-config.yaml file for Pods using NodeSDK

Any extra pod using NodeSDK to interact with Fabric components requires a network-config.yaml file that has info about the channel and the certificates required for it. Our solution currently generates a network-config.yaml file for a two-organization channel for each of the organizations and populates it in the NFS. Each network-config file goes into the respective organization’s subfolder.

End Result and Conclusion

The end result is a properly working Hyperledger Fabric use-case setup on the Kubernetes cluster.

  • The unique point of this cluster is that no component’s setup was hardcoded in the Kubernetes deployments; everything is created dynamically as per each organization’s requirements.
  • Using the repository code and following this approach, we can set up a Hyperledger Fabric use case in minutes by compiling the requirements in crypto-config.yaml.

In the end, we would like to conclude by pointing out a few improvements that can be added to the discussed architecture.

In a large-scale project, each organization might have a different cluster for its resources. In that case, we would go with inter-cluster communication methods in Kubernetes, such as the Federation service. We will be publishing a blog explaining that architecture shortly.

Your feedback and pull requests are welcome on the Github Repository.

ERC20 Tokens on Hyperledger

In this blog we’ll discuss the methodology for creating an ERC20-based token chaincode in Hyperledger, using Node.js. ERC20 is a widely tested and accepted standard in Ethereum, and incorporating it in Hyperledger makes it easy to write a secure and scalable chaincode for any Hyperledger-based token.

We will refer to the following open source repository during our tutorial.

You can also go through the README of the repository.

NOTE: We assume that you’re already familiar with Hyperledger Fabric, and that your system is equipped with the prerequisites to kick-start development on Hyperledger. If not, refer to the prerequisites, key concepts and tutorials in the Hyperledger documentation.

Getting Started

The code in this repository has been tested in the following environment:

  • Node: v8.9.3 and v8.11.4
  • Hyperledger fabric: v1.2
  • Docker: 18.06.1-ce
  • Python: 2.7.12
  • Go: go1.9.3 linux/amd64
  • Curl: 7.47.0

We recommend using the same versions when adapting our code.

After making sure the prerequisites are installed properly, follow these steps:

Once you are in the network folder, you can create our Hyperledger network environment. It will create 2 organizations for you (Org1 and Org2), each with 2 peers, plus an orderer.

Housekeeping

If it’s your second time running this tutorial, or you have successfully run any other Hyperledger Fabric based code, then we suggest you first run the following commands:

It will ask for a confirmation:

Press Y and continue.

Note: You can always check how many Docker containers or volumes are up and running by using the following commands:

  • docker ps
  • docker volume ls

If you struggle to shut down containers and volumes using the script, try running the following commands:

  • docker network prune
  • docker volume prune
  • docker rm -f $(docker ps -aq)

Token Network Setup

Once you’re done with the Housekeeping, you are ready to start your network by making use of the following commands:

It may take some time to execute (usually between 90 and 120 seconds). However, if you see the following log in your terminal, it means the command executed successfully and your network is ready to use.

It created the required certificates for each entity of Hyperledger using the crypto-config.yaml file, in a folder named crypto-config within your network directory. Check it out!
It also created channel.tx, genesis.block, Org1MSPanchors.tx and Org2MSPanchors.tx.

Note: We cannot cover everything in this README; to understand the intricacies behind the process in detail, go through this tutorial.

It also created docker containers and volumes for:

  • peer0 and peer1 of Org1
  • peer0 and peer1 of Org2
  • orderer
  • cli
  • chaincode

Check them using docker ps and docker volume ls. We also created a channel named mychannel between Org1 and Org2; both peers of each org are part of this channel. We then installed our chaincode on peer0 of each org and instantiated it, naming it mycc. You can see the logs of the respective peer/chaincode using:
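For example (container names are illustrative; list yours with docker ps):

    docker logs peer0.org1.techracers.com                  # peer logs
    docker logs dev-peer0.org1.techracers.com-mycc-1.0     # chaincode container logs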


Note: For debugging, you can access your chaincode and peer logs using docker logs <press TAB to see options>. If you don’t see a container for the chaincode (dev-peer0.org1.techracers.com-mycc-1.0), then there was a problem instantiating our token chaincode.

Let’s play with our token

Now that our chaincode is up and running, let’s try some getter and setter functions to understand it better. For that, we need to enter the CLI container which we have created.
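A hedged way in, assuming the CLI container is named cli as in most Fabric samples:

    docker exec -it cli bash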

Now, you’ll see something like this:

Getter functions

Once you’re in the CLI, you can call the getter functions provided in our SimpleToken. We’ll discuss each of them one by one:

getOwner

This function will return the owner of the token contract. Initially it is the MSPID which instantiated the contract; you can see it here.
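For example, from the CLI (channel and chaincode names as created above; the function name follows the repo's chaincode):

    peer chaincode query -C mychannel -n mycc -c '{"Args":["getOwner"]}'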

Here mychannel is our channel name and mycc is the name of our chaincode; as you can see, Org1MSP is the current owner of our chaincode.

getName

This function will return the name of our token contract. It was set to Simple Token while instantiating the contract; you can see it here.

As you can see, Simple Token is our current token name.

getSymbol

This function will return the symbol for our token contract. It was set to SMT while instantiating the contract; you can see it here.

As you can see, SMT is our current token symbol.

getTotalSupply

This function will return the total supply for our token contract. It defaults to 0 until it is set. You can find the required logic here.

As you can see, 0 is our current total supply.

isMintingAllowed

This getter returns the value of the isMintingAllowed boolean stored on Hyperledger. It defaults to undefined until it is set for the first time. You can find the required logic here.

As you can see, isMintingAllowed is currently undefined. It will return true or false once set later.

getAllowance

This getter returns the value of the allowance set by a token owner for a spender MSPID. It takes the token owner’s MSPID as the first argument and the spender’s MSPID as the second. It defaults to 0 until it is set. You can find the required logic here.
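For example (owner MSPID first, spender MSPID second):

    peer chaincode query -C mychannel -n mycc -c '{"Args":["getAllowance","Org1MSP","Org2MSP"]}'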

As you can see, getAllowance is now 0. It will return a float once set later. Let’s also check the other combination we have and see if it returns 0.

getBalanceOf

Our last getter is the getBalanceOf function; it returns the token balance of any MSPID we enter. It defaults to 0 if the MSPID doesn’t have any token balance.
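For example:

    peer chaincode query -C mychannel -n mycc -c '{"Args":["getBalanceOf","Org1MSP"]}'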

You can check out the required code here.

Setter functions

Once you’re done with the getter calls, let’s explore the setter functions provided in our SimpleToken. Remember, you will need to satisfy the endorsement policy before you can make these transactions happen; on that account you will see some extra fields here. A setter also takes some time the first time it is called on a specific peer; later calls return results almost instantaneously. Also, right now the CLI’s configuration is set to peer0 of Org1, which you can check using:

You can change to peer0 of Org2 by running the following commands:
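A hedged sketch (the exact MSP paths depend on how the CLI container mounts the generated crypto material):

    export CORE_PEER_ADDRESS=peer0.org2.techracers.com:7051
    export CORE_PEER_LOCALMSPID=Org2MSP
    export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.techracers.com/users/Admin@org2.techracers.com/msp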

Use a similar strategy for other peers.

updateMintingState

We assume your config is set to peer0 of Org1; otherwise, set it using the following commands:

Now let’s try to update our minting state to true. We need to specify the Orderer and the peers to satisfy our endorsement policy.
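A hedged example of the invocation (the orderer address is an assumption, and TLS flags are omitted; --peerAddresses targets a peer from each org to satisfy the endorsement policy):

    peer chaincode invoke -o orderer.techracers.com:7050 -C mychannel -n mycc \
      --peerAddresses peer0.org1.techracers.com:7051 \
      --peerAddresses peer0.org2.techracers.com:7051 \
      -c '{"Args":["updateMintingState","true"]}'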

Note: If you’re following this tutorial, this will be your first invocation so it might take some time.

Now run the getter to see if it actually changed:

Note: If you call it using peer0 of Org2, it will fail with the following result:

You can open another Terminal and check the error logs as follows:

Note: You can enquire about other errors in a similar fashion, just be sure you are hitting the right peer. To know more about other validations, you can check the chaincode here.

mint

This function can be used by the token owner to create/mint tokens, but isMintingAllowed should be set to true first. Let’s mint some tokens for Org1MSP. Make sure your config is set to the token owner.
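A sketch of the mint invocation (the argument layout is an assumption; the amount matches the balance referenced below):

    # mints 100.2345 tokens, credited to the invoking MSP (Org1MSP)
    peer chaincode invoke -o orderer.techracers.com:7050 -C mychannel -n mycc \
      --peerAddresses peer0.org1.techracers.com:7051 \
      --peerAddresses peer0.org2.techracers.com:7051 \
      -c '{"Args":["mint","100.2345"]}'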

You can check the balance using our getter:

If you experience errors, troubleshoot them using docker logs; you can find the chaincode here.

transfer

Now, we know that we have 100.2345 tokens registered under Org1MSP. Let’s try to transfer 10 tokens to Org2MSP.
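A sketch of the transfer (argument layout assumed: recipient MSPID, then amount):

    peer chaincode invoke -o orderer.techracers.com:7050 -C mychannel -n mycc \
      --peerAddresses peer0.org1.techracers.com:7051 \
      --peerAddresses peer0.org2.techracers.com:7051 \
      -c '{"Args":["transfer","Org2MSP","10"]}'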

You can check Org2’s balance using:

If you experience errors, troubleshoot them using docker logs; you can find the chaincode here.

updateTokenName

You can update the token name using this setter.

Check it using:

If you experience errors, troubleshoot them using docker logs. Find the chaincode here.

updateTokenSymbol

You can update the token symbol using this setter.

Check it using:

If you experience errors, troubleshoot them using docker logs; you can find the chaincode here.

updateApproval

If you want some other MSPID to spend some tokens on your behalf, you can use this setter.
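A sketch (argument layout assumed: spender MSPID, then allowance amount; the amount is illustrative):

    peer chaincode invoke -o orderer.techracers.com:7050 -C mychannel -n mycc \
      --peerAddresses peer0.org1.techracers.com:7051 \
      --peerAddresses peer0.org2.techracers.com:7051 \
      -c '{"Args":["updateApproval","Org2MSP","20"]}'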

Check it using:

If you experience errors, troubleshoot them using docker logs; you can find the chaincode here.

transferFrom

Once you have approved Org2 to transfer on behalf of Org1, first set the config in the CLI to Org2, so you can call functions on its behalf.

Now let’s transfer a float value to a nonexistent, but valid MSPID.
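A sketch (argument layout assumed: from-MSPID, to-MSPID, amount; Org3MSP is a hypothetical, not-yet-existing MSPID):

    peer chaincode invoke -o orderer.techracers.com:7050 -C mychannel -n mycc \
      --peerAddresses peer0.org1.techracers.com:7051 \
      --peerAddresses peer0.org2.techracers.com:7051 \
      -c '{"Args":["transferFrom","Org1MSP","Org3MSP","5.5"]}'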

Note: Such MSPIDs can be created later and will have tokens preallocated to them, just like Ethereum addresses.

Check it using:

If you experience errors, troubleshoot them using docker logs; you can find the chaincode here.

transferOwnership

Lastly, set your config back to the owner of the token and try transferring token ownership.
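A sketch (argument layout assumed: the new owner's MSPID):

    peer chaincode invoke -o orderer.techracers.com:7050 -C mychannel -n mycc \
      --peerAddresses peer0.org1.techracers.com:7051 \
      --peerAddresses peer0.org2.techracers.com:7051 \
      -c '{"Args":["transferOwnership","Org2MSP"]}'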


Check it using:

If you experience errors, troubleshoot them using docker logs; you can find the chaincode here.

ERC20 Architecture by Zeppelin

We used Zeppelin’s tested Solidity standards to create this ERC20 token version on Hyperledger, which makes it easy to pick up for Solidity developers who are familiar with JS. You can refer to the architectural model of ERC20 here:

  • helpers – Includes validations and checks which must be fulfilled during chaincode invocation or query, and utils for keeping the code DRY.
  • examples – A simple chaincode that demonstrates how to create a simple token using the base chaincodes provided in the repository.
  • tokens – A standard interface for fungible ERC20 tokens on Hyperledger.

Security

Techracers is dedicated to providing secure and simple code, but please use common sense when doing anything that deals with real money! We take no responsibility for your implementation decisions and any security concerns you might experience.

The core development principles and strategies that Techracers is based on include: security in depth, simple and modular code, clarity-driven naming conventions, comprehensive unit testing, pre- and post-condition sanity checks, code consistency, and regular audits. If you need further assistance, please email [email protected]

Note: We welcome recommendations and suggestions from the open source community; if you think you can help us, please raise an issue.

Hyperledger v/s Ethereum

A comparison can reasonably be made between Ethereum and Hyperledger Fabric (one among the many Hyperledger projects).

The most fundamental difference between Ethereum and Hyperledger lies in how they are designed and whom they target. Ethereum, with its EVM, smart contracts and public blockchain, is mostly targeted towards applications that are distributed in nature and meant for mass consumption. A look at Ethereum dapps (distributed applications) seems to suggest the same: https://dapps.ethercasts.com/.

On the other hand, Fabric has a very modular architecture and provides a lot of flexibility in terms of what you want to use and what you don’t. It’s pretty much à la carte and is targeted at businesses wanting to streamline their processes by leveraging blockchain technology.

For example, it is not possible in Ethereum to have a transaction visible to some participants but not to others (a requirement that is very common in business). Fabric allows this and much more.

Another major difference is the consensus algorithm used in Ethereum vs. Fabric. Ethereum uses PoW (Proof of Work), whereas Fabric allows one to choose between No-op (no consensus needed) and PBFT (Practical Byzantine Fault Tolerance). PoW is known to be an energy hog and could really impact the practicality of using Ethereum in the long run. One must mention, however, that Ethereum too is trying to move towards proof of stake in its next release, Casper.

Ethereum has a built-in cryptocurrency (eth) and thus can be a very good match for applications that need one inbuilt. However, this could also be a disadvantage, as there are several use cases where a cryptocurrency is not really needed.

This is not to say that Ethereum cannot be deployed as a private blockchain for a business. The fact that it has a really mature ecosystem and makes the development of smart contracts and business logic really simple is a huge plus. Also, at the moment it is easier to find Ethereum dapp developers than Fabric developers. Fabric, on the other hand, is pretty new on the block and just warming up.

To conclude, we feel that in the future most enterprise apps will tilt towards Fabric, whereas Ethereum will continue to be a hotbed for dapps that are more B2C.