Hyperledger Fabric Cluster with Kubernetes

Setting up a Hyperledger Fabric cluster on Kubernetes

About Hyperledger Fabric

Hyperledger Fabric is a permissioned blockchain implementation and one of the Hyperledger projects hosted by The Linux Foundation.

Over the past year, many organizations have been adopting it for the distributed ledger needs of their use cases.

Why Kubernetes for Hyperledger Fabric?

This is a question that usually comes up while thinking about the architecture of a Fabric-based application: ‘Why is there even a need to set it up on a Kubernetes cluster? Why can’t we simply go with the basic Docker image setup on a regular server instance?’

Kubernetes seems ideal for deploying and managing the Fabric components for several reasons:

  1. Fabric deals with containers, and Kubernetes manages containers like a pro: Fabric components are packaged into Docker containers, and even the chaincode (the smart contracts) creates a container dynamically. Kubernetes, as a popular and easy-to-use container orchestration platform, simplifies managing these containers in a cluster.
  2. High availability, automated: replication controllers in Kubernetes help provide high availability of crucial Fabric components like the ordering service and the Fabric peers. If one container goes down, Kubernetes automatically creates another. In effect, this gives us zero downtime for our Fabric containers.

About this tutorial

Before moving forward, we assume that you’ve read about Kubernetes, Helm, and Hyperledger Fabric in general, or are familiar with the terminology.

While there are resources on the internet, as well as Fabric’s official documentation, covering the initial limited setup of the Fabric ecosystem, there are hardly any that explain how exactly to set up a Hyperledger Fabric cluster on Kubernetes according to your needs.

This tutorial walks you through a step-by-step procedure for setting up a dynamic cluster on Kubernetes for any number of organizations, with any number of peers per organization. We will use a repository of Helm charts with a Python script that sets up the cluster. If you want to skip the nitty-gritty and get straight to the setup, here is the link to the repository.

Cluster Architecture

We will have organization-specific pods in the cluster, and there will be a dedicated volume for each peer pod, storing its certs and giving it writable space at runtime.

 

Kubernetes cluster

Each peer pod has its own dedicated persistent volume claim (PVC) holding its MSP and TLS certificates. Note that each organization can also run its own instance of the application containing the business logic.

The extra app pods can read from the peer pods’ PVCs. They also have access to a shared NFS server, which stores the network-config.yaml files needed to install and instantiate chaincode on the Fabric peers via the Node SDK.

Alternatively, you can use CLI pods to install and instantiate the chaincode as well.

The best part is that the Python script supports either of the above-mentioned approaches for deploying our business logic.

Initial Setup

  •        The first step is to install the kubectl command-line tool and the Helm server.
  •        Create a Kubernetes cluster. Here’s the link to set up a cluster on gCloud.
  •        Point kubectl on your machine to the newly created cluster and initialize the Helm Tiller, as sketched below.
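For reference, a minimal sketch of this initial setup on gCloud (the cluster name, size, and zone are placeholders; helm init installs the Tiller server for Helm v2):

    # create a Kubernetes cluster on gCloud (name/size/zone are placeholders)
    gcloud container clusters create fabric-cluster --num-nodes=3 --zone=us-east1-b

    # point kubectl at the newly created cluster
    gcloud container clusters get-credentials fabric-cluster --zone=us-east1-b

    # initialize the Helm Tiller in the cluster (Helm v2)
    helm init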

Configuration Tools

We primarily need the cryptogen and configtxgen tools to create the channel artifacts and certificates. You will also need Python 2.7+ installed on your machine.
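One common way to fetch the Fabric tool binaries is the official bootstrap script (shown here as an assumption; any Fabric v1.x binary release containing cryptogen and configtxgen will do):

    # download the Fabric platform binaries (cryptogen, configtxgen, ...) into ./bin
    curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s -- 1.2.0
    export PATH=$PWD/bin:$PATH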

Spinning up the Magic

Clone the repository to get all the Helm charts and the Python script that sets up the cluster for us.

Step 1:

Here we use the crypto-config.yaml file to define our cluster requirements. This is the same file used by the cryptogen tool to create the peers’ and orderers’ certificates. We can modify this file to specify how many organizations we need, and how many peers are required in each organization. We may also specify our own unique application to run in each organization by passing it in the ExtraPods field. A typical example of the content of a crypto-config file with two organizations can be found in the link below:

crypto-config.yaml Example
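For reference, a minimal sketch of such a file (the OrdererOrgs/PeerOrgs layout is the standard cryptogen format; ExtraPods is the custom field consumed by our script, and the names and chart path shown are placeholders):

    OrdererOrgs:
      - Name: Orderer
        Domain: example.com
        Specs:
          - Hostname: orderer0
    PeerOrgs:
      - Name: Org1
        Domain: org1.example.com
        Template:
          Count: 2
        Users:
          Count: 1
        ExtraPods:
          - Name: org1-app
            Chart: charts/sample-app
      - Name: Org2
        Domain: org2.example.com
        Template:
          Count: 2
        Users:
          Count: 1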

 

 

As you may notice, we have added the ExtraPods field to run an extra application pod in each org’s namespace.

Step 2:

The next step is to modify the configtx.yaml file in the same fashion. You can find a sample file with two organizations and a channel configuration in the link below:

configtx.yaml Example
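A minimal sketch of the relevant parts (this follows the standard configtx.yaml structure for Fabric v1.x; domains and profile names are placeholders):

    Organizations:
      - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/example.com/msp
      - &Org1
        Name: Org1MSP
        ID: Org1MSP
        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
        AnchorPeers:
          - Host: peer0.org1.example.com
            Port: 7051
      - &Org2
        Name: Org2MSP
        ID: Org2MSP
        MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
        AnchorPeers:
          - Host: peer0.org2.example.com
            Port: 7051
    Profiles:
      TwoOrgsOrdererGenesis:
        Orderer:
          OrdererType: solo
          Organizations:
            - *OrdererOrg
        Consortiums:
          SampleConsortium:
            Organizations:
              - *Org1
              - *Org2
      TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
          Organizations:
            - *Org1
            - *Org2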

 

Step 3:

Use the command make fiber-up to set up the Fabric components in our cluster. This command invokes the init_orderers.py and init_peers.py scripts, which generate the pods according to the modified files.

The script does the following tasks in chronological order:

  1. Creates the crypto-config folder containing the peer and orderer certificates, using the cryptogen tool
  2. Creates the channel-artifacts folder containing the genesis block for the fabric
  3. Spins up the persistent volume claims (PVCs) for the orderer pods and copies the required certificates into each pod’s PVC via a test pod
  4. Deletes the test pod and creates the orderer pods
  5. Spins up the persistent volume claims (PVCs) for all peer pods and copies the required certificates into each pod’s PVC
  6. Deletes the test pod and initializes the peer pods for all organizations

Step 4 (Optional):

A: Updating crypto-config.yaml and adding helm chart for your app

– Set up the extra pods you need to run per organization by listing them in the ExtraPods field of each PeerOrg in crypto-config.yaml.

– A sample entry for a simple app would look like this:
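A hedged sketch (the Name/Chart/Values keys follow the fields described below; the paths are placeholders):

    ExtraPods:
      - Name: sample-app
        Chart: charts/sample-app
        Values: values/sample-app-org1.yaml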

 

– In the Chart field, pass the path of your app’s Helm chart. You can also pass values to override in the Helm chart via the Values field.

 B: Setting up NFS storage

– If your extra apps will interact with Fabric components using the SDK, they need network-config.yaml files, which store the channel information and the paths to the peers’ public certs.

NOTE: If your extra app doesn’t need the Node SDK or network-config.yaml, you can skip Step 4.B.

– To add the NFS server, we must first add a persistent disk to your project. To add a disk from cloud SDK, run the command:

 gcloud compute disks create --size=10GB --zone=us-east1-b nfs-disk

You can also create it from the gCloud console using the UI dashboard.

– In the file /public-certs-pvc/public-certs-nfs-service.yaml , update the value of gcePersistentDisk.pdName to the name of your persistent disk.

– Run the command make nfs-up to create the shared NFS storage and generate the network-config.yaml files for each organization.

C: Setting up Extra App Pods

– Check that all the fabric pods are up with the command: kubectl get po --namespace=peers

– Once all the pods are in the Running state, run the command: make extras-up

Testing the chaincode

The script automatically creates a default channel between the two organizations and joins all the peer pods to the channel. After that, you can install a chaincode in two ways:

  1. Via your extra app pods, using the Node SDK
  2. By using a CLI pod for each organization

Here is how you can do it using CLI pods:

  1. Install the chaincode

– Enter the bash of the Org 1 CLI pod:
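For example (the pod name is a placeholder; look it up with kubectl get po --namespace=peers):

    kubectl exec -it <ORG_1_CLI_POD> --namespace=peers -- bash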

 

 In the terminal, install the chaincode on both of the org’s peer pods. The command to install it on one peer is given below:
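A sketch of the install command (the chaincode name, version, path, and peer address are placeholders; the target peer is selected via CORE_PEER_ADDRESS):

    CORE_PEER_ADDRESS=<peer0-org1-address>:7051 peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/example02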

 

– Do the same for peer1 from the ORG_1_CLI_POD.

– Repeat the same steps in ORG_2_CLI_POD as well.

  2. Instantiate the chaincode

– Enter the bash of one of the org CLI pods and instantiate the chaincode with the following command:
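A sketch of the instantiate command (the orderer address, channel name, chaincode name, init args, and endorsement policy are placeholders):

    peer chaincode instantiate -o <orderer-address>:7050 -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "OR('Org1MSP.member','Org2MSP.member')"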

 

 

  3. Query the chaincode

– From the other org’s CLI pod, query the instantiated chaincode:
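A sketch of the query command (channel and chaincode names as assumed above):

    peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'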

 

 

If all goes well, you should see the value that was passed for the key.

Bringing the Cluster down

If we want to bring down the cluster we set up, we can simply run make down. Alternatively, if you wish to remove or recreate only a portion of the cluster, make use of the following commands:

make peer-down: tears down only the peer pods and other organizational pods

make orderer-down: tears down only the orderer pods and namespace

For more details about these commands, check the Makefile in the repository.

Caveats and Challenges

Running Fabric on a Kubernetes cluster comes with a handful of challenges that our approach addresses. We believe it is important to go through these in order to understand the internal workings of the script:

  1. Dynamic number of peers in every organization

A simple cluster with 2 organizations and 1 peer per organization is fine for getting started with an HLF network. However, in a production environment, you might have a dynamic number of organizations involved. Each organization may decide to have a different number of peer pods, and each of them might run different instances of our application alongside.

To handle this dynamicity, we use the crypto-config.yaml file, which is also used by the cryptogen tool to create certificates. Our Python script parses the peer organization hierarchy and creates namespaces and pods dynamically according to it.

  2. Pre-populating volume claims in Kubernetes
    1. For a peer pod to be up and running successfully, a few certificates must be pre-populated in its volume. The same goes for the orderer service pods, which require the genesis block in their volumes before the pods start.
    2. While this is quite achievable with plain Docker volumes, it is not with Kubernetes PVCs. The reason is that in a Kubernetes cluster the volume is not on a single instance; it is spread across different servers (nodes, in Kubernetes terms). If a file a.txt is present in a PVC, we cannot be sure of its actual storage location in the cluster.
    3. This makes pre-populating a PVC before its pod is up a challenge. We do it with the following method (sketched after this list):
      • First, we create our PVC with a test pod attached to it.
      • Then we copy the required certs using the kubectl cp command.
      • Then we delete the test pod and bring up the target Peer pod.
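A minimal sketch of that flow (file paths, manifests, and pod names are placeholders):

    # 1. create the PVC with a test pod attached to it
    kubectl apply -f test-pod.yaml --namespace=peers

    # 2. copy the required certs into the PVC through the test pod
    kubectl cp crypto-config/peerOrganizations/org1/peers/peer0/msp peers/test-pod:/certs/msp

    # 3. delete the test pod and bring up the target peer pod
    kubectl delete pod test-pod --namespace=peers
    kubectl apply -f peer0-org1.yaml --namespace=peers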

 

NOTE:

•        An init container is the ideal way to pre-populate files in Kubernetes. But it doesn’t work in our case, because the certificates are existing files on our local machine that we need to copy into the PVC, and they can’t be hardcoded in the deployment.yaml.

•        hostPath is a volume type that resides on only a single node; since our cluster can span several nodes, it should not be used here.

  3. Chaincode dynamic container creation and recognition:

When a peer in Fabric instantiates a chaincode, it creates a Docker container in which the chaincode runs. The Docker API endpoint it invokes to create the container is unix:///var/run/docker.sock.

This mechanism works well as long as the peer container and the chaincode container are managed by the same Docker engine. However, in Kubernetes, the chaincode container is created by the peer without notifying Kubernetes. Hence the chaincode and peer pods cannot connect to each other, which results in failure when instantiating the chaincode.

To work around this problem, we pass the following environment variables to our peer and orderer pods:

CORE_VM_ENDPOINT: “unix:///host/var/run/docker.sock”

CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE: “bridge”

GODEBUG: “netdns=go”

These environment variables make sure that containers created outside the Kubernetes flow are still recognized by the peer pods.
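In the pod spec, this looks roughly as follows (a sketch assuming the node’s Docker socket is mounted into the container at /host/var/run/docker.sock via a hostPath volume):

    env:
      - name: CORE_VM_ENDPOINT
        value: "unix:///host/var/run/docker.sock"
      - name: CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE
        value: "bridge"
      - name: GODEBUG
        value: "netdns=go"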

  4. Writable disk space for extra pods

Any extra pod using the Node SDK to interact with Fabric components requires writable space accessible to it. Since an organization may have many such extra pods running, they all need the same shared space to write files into.

gCloud persistent disks don’t support the ReadWriteMany access mode; gCloud only allows a PVC to be writable by a single pod (ReadWriteOnce) or readable by multiple pods (ReadOnlyMany). To overcome this challenge, we set up an independent NFS server and pod mounted on top of a gCloud persistent disk. As a result, each organization’s pods have access to a specific sub-directory of the network file system.

  5. network-config.yaml file for pods using the Node SDK

Any extra pod using the Node SDK to interact with Fabric components requires a network-config.yaml file with information about the channel and the certificates required for it. Our solution currently generates a network-config.yaml file for a two-organization channel for each organization and places it in the NFS; each file goes in the respective organization’s subfolder.

End Result and Conclusion

The end result is a properly working Hyperledger use case setup on the Kubernetes cluster.

  • The unique point of this cluster is that no component’s Kubernetes setup is hardcoded in the deployments; everything is created dynamically as per each organization’s requirements.
  • Using the repository code and following this approach, we can set up a Hyperledger Fabric use case in minutes by capturing the requirements in crypto-config.yaml.

In the end, we would like to conclude by pointing out a few improvements that can be added to the discussed architecture.

In a large project, each organization might have a separate cluster for its resources. In that case, we would go ahead with inter-cluster communication methods in Kubernetes, such as the Federation service. We will publish a blog explaining that architecture shortly.

Your feedback and pull requests are welcome on the Github Repository.

ERC20 Tokens on HyperLedger

In this blog we’ll discuss how to create an ERC20-based token chaincode on Hyperledger, using Node.js. ERC20 is a widely tested and accepted standard in Ethereum, and incorporating it in Hyperledger can make the task of writing a secure and scalable chaincode for any Hyperledger-based token easy.

We will refer to the following open source repository during our tutorial.

You can also go through the README of the repository.

NOTE: We assume that you’re already familiar with HyperLedger Fabric, and your system is equipped with the prerequisites to kick-start development on HyperLedger. If not, refer to prerequisites, key concepts and tutorials in HyperLedger documentation.

Getting Started

The code in this repository has been tested in the following environment:

  • Node: v8.9.3 and v8.11.4
  • Hyperledger fabric: v1.2
  • Docker: 18.06.1-ce
  • Python: 2.7.12
  • Go: go1.9.3 linux/amd64
  • Curl: 7.47.0

We recommend using the same versions when adapting our code.

After making sure the prerequisites are installed properly, follow these steps:

Once you are in the network folder, you can create our Hyperledger network environment. It will create 2 organizations for you (Org1 and Org2), each with 2 peers, along with an orderer.

Housekeeping

If it’s your second time running this tutorial, or you have successfully run any other HyperLedger Fabric based code then we suggest you to first run the following commands:

It will ask for a confirmation:

Press Y and continue.

Note: You can always check how many containers or volumes of docker are up and running by using the following commands:

  • docker ps
  • docker volume ls

If you struggle to shut down containers and volumes using the script, try running the following commands:

  • docker network prune
  • docker volume prune
  • docker rm -f $(docker ps -aq)

Token Network Setup

Once you’re done with the Housekeeping, you are ready to start your network by making use of the following commands:

It may take some time to execute (usually between 90 and 120 seconds). If it executed successfully, you will see a success log in your terminal and your network is ready to use.

It created the required certificates for each Hyperledger entity using the crypto-config.yaml file, in a folder named crypto-config within your network directory. Check it out!
It also created channel.tx, genesis.block, Org1MSPanchors.tx and Org2MSPanchors.tx.

Note: We cannot cover everything in this README, to understand the intricacies behind the process in detail go through this tutorial.

It also created docker containers and volumes for:

  • peer0 and peer1 of Org1
  • peer0 and peer1 of Org2
  • orderer
  • cli
  • chaincode

Check them using docker ps and docker volume ls. We also created a channel named mychannel between Org1 and Org2; both peers of each org are part of this channel. We then installed our chaincode on peer0 of each org and instantiated it, naming it mycc. You can see the logs of the respective peer/chaincode using the commands sketched below:
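For example (container names as created by the network scripts):

    docker logs peer0.org1.techracers.com
    docker logs dev-peer0.org1.techracers.com-mycc-1.0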

 

Note: For debugging, you can access your chaincode and peer logs with docker logs <container name>. If you don’t see a container for the chaincode (dev-peer0.org1.techracers.com-mycc-1.0), there was a problem instantiating our token chaincode.

Let’s play with our token

Now that our chaincode is up and running, let’s try some getter and setter functions to understand it better. For that, we need to enter the CLI container we created.
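A sketch of entering the container (the CLI container is named cli, as listed above):

    docker exec -it cli bash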

Now, you’ll see something like this:

Getter functions

Once you’re in the CLI, you can call the getter functions provided in our SimpleToken. We’ll discuss each one of them accessible to you one by one:

getOwner

This function returns the owner of the token contract. Currently it is the MSPID which instantiated the contract; you can see the logic here.
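A sketch of the call (the other getters follow the same pattern, with the function name and arguments swapped):

    peer chaincode query -C mychannel -n mycc -c '{"Args":["getOwner"]}'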

Here mychannel is our channel name and mycc is the name of our chaincode; as you can see, Org1MSP is the current owner of our chaincode.

getName

This function returns the name of our token contract. It was set to Simple Token while instantiating the contract; you can see the logic here.

As you can see, Simple Token is our current token name.

getSymbol

This function returns the symbol of our token contract. It was set to SMT while instantiating the contract; you can see the logic here.

As you can see, SMT is our current token symbol.

getTotalSupply

This function returns the total supply of our token contract. It defaults to 0 until it is set. You can find the required logic here.

As you can see, 0 is our current total supply.

isMintingAllowed

This getter returns the value of the isMintingAllowed boolean stored on Hyperledger. It defaults to undefined until it is set once. You can find the required logic here.

As you can see, isMintingAllowed is currently undefined. It will return true or false once set later.

getAllowance

This getter returns the allowance set by a token owner for a spender MSPID. It takes the token owner’s MSPID as the first argument and the spender’s MSPID as the second. It defaults to 0 until it is set. You can find the required logic here.
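For example, to check the allowance Org1MSP has granted to Org2MSP:

    peer chaincode query -C mychannel -n mycc -c '{"Args":["getAllowance","Org1MSP","Org2MSP"]}'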

As you can see, getAllowance currently returns 0. It will return a float once set later. Let’s also check the other combination we have and see if it returns 0.

getBalanceOf

Our last getter is the getBalanceOf function; it returns the token balance of any MSPID we enter. It defaults to 0 if the MSPID doesn’t have any token balance.

You can check out the required code here.

Setter functions

Once you’re done with the getter calls, let’s explore the setter functions provided in our SimpleToken. Remember you will need to satisfy the endorsement policy before you can make these transactions happen, on that account you will see some extra fields here. It will also take some time when a setter is called for the first time to a specific peer, later it returns results almost instantaneously. Also right now the CLI’s configuration is set to Org1 peer0, which you can check using:

You can change to peer0 of Org2 by running commands along the lines sketched below:
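A sketch of the idea, modeled on typical Fabric CLI setups (the exact certificate paths depend on how the repository mounts the crypto material):

    export CORE_PEER_LOCALMSPID="Org2MSP"
    export CORE_PEER_ADDRESS=peer0.org2.techracers.com:7051
    export CORE_PEER_MSPCONFIGPATH=<path to Admin@org2.techracers.com msp>
    export CORE_PEER_TLS_ROOTCERT_FILE=<path to peer0.org2.techracers.com tls/ca.crt>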

Use a similar strategy for other peers.

updateMintingState

We assume your config is set to peer0 of Org1; otherwise set it using the same kind of export commands shown above, with the Org1 paths.

Now let’s try to update our minting state to true. We need to specify the Orderer and the peers to satisfy our endorsement policy.

Note: If you’re following this tutorial, this will be your first invocation so it might take some time.

Now run the isMintingAllowed getter to see if it actually changed.

Note: If you call it using peer0 of Org2, it will fail.

You can open another terminal and check the error logs using docker logs on the peer and chaincode containers.

Note: You can enquire about other errors in a similar fashion, just be sure you are hitting the right peer. To know more about other validations, you can check the chaincode here.

mint

This function can be used by the token owner to create (mint) tokens, provided isMintingAllowed is set to true. Let’s mint some tokens for Org1MSP; make sure your config is set to the token owner.
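A sketch of the call (the argument list is an assumption based on the repository’s chaincode; same invoke flags as above):

    peer chaincode invoke -o orderer.techracers.com:7050 -C mychannel -n mycc \
      --peerAddresses peer0.org1.techracers.com:7051 \
      --peerAddresses peer0.org2.techracers.com:7051 \
      -c '{"Args":["mint","Org1MSP","100.2345"]}'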

You can check the balance using our getBalanceOf getter.

If you experience errors, troubleshoot them using docker logs; please find the chaincode here.

transfer

Now we know that we have 100.2345 tokens registered under Org1MSP. Let’s try to transfer 10 tokens to Org2MSP.
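A sketch (again, the argument list is an assumption):

    peer chaincode invoke -o orderer.techracers.com:7050 -C mychannel -n mycc \
      --peerAddresses peer0.org1.techracers.com:7051 \
      --peerAddresses peer0.org2.techracers.com:7051 \
      -c '{"Args":["transfer","Org2MSP","10"]}'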

You can check Org2’s balance using:

If you experience errors, troubleshoot them using docker logs; please find the chaincode here.

updateTokenName

You can update the token name using this setter.

Check it using the getName getter.

If you experience errors, troubleshoot them using docker logs. Find the chaincode here.

updateTokenSymbol

You can update the token symbol using this setter.

Check it using the getSymbol getter.

If you experience errors, troubleshoot them using docker logs; please find the chaincode here.

updateApproval

If you want some other MSPID to be able to spend some tokens on your behalf, you can use this setter.
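A sketch, approving Org2MSP to spend a hypothetical 50 tokens on Org1MSP’s behalf (argument list assumed):

    peer chaincode invoke -o orderer.techracers.com:7050 -C mychannel -n mycc \
      --peerAddresses peer0.org1.techracers.com:7051 \
      --peerAddresses peer0.org2.techracers.com:7051 \
      -c '{"Args":["updateApproval","Org2MSP","50"]}'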

Check it using the getAllowance getter.

If you experience errors, troubleshoot them using docker logs; please find the chaincode here.

transferFrom

Once you have approved Org2 to transfer on behalf of Org1, first set the CLI config to Org2 so you can call functions on its behalf.

Now let’s transfer a float value to a nonexistent, but valid MSPID.

Note: Such MSPIDs can be created later and will have tokens preallocated to them, just like Ethereum addresses.

Check it using the getBalanceOf getter.

If you experience errors, troubleshoot them using docker logs; you can find the chaincode here.

transferOwnership

Lastly, set your config back to the owner of the token and try transferring token ownership, as sketched below.
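A sketch (argument list assumed):

    peer chaincode invoke -o orderer.techracers.com:7050 -C mychannel -n mycc \
      --peerAddresses peer0.org1.techracers.com:7051 \
      --peerAddresses peer0.org2.techracers.com:7051 \
      -c '{"Args":["transferOwnership","Org2MSP"]}'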

 

Check it using the getOwner getter.

If you experience errors, troubleshoot them using docker logs; you can find the chaincode here.

ERC20 Architecture by Zeppelin

We used Zeppelin’s tested Solidity standards as a model to create this ERC20 token version on Hyperledger, which makes it easy for Solidity developers familiar with JS to adapt. You can refer to the architectural model of ERC20 here:

  • helpers – includes validations and checks which must be fulfilled during chaincode invocation or query, and utils for keeping the code DRY.
  • examples – a simple chaincode that demonstrates how to create a simple token using the base chaincodes provided in the repository.
  • tokens – a standard interface for fungible ERC20 tokens on Hyperledger.

Security

Techracers is dedicated to providing secure and simple code, but please use common sense when doing anything that deals with real money! We take no responsibility for your implementation decisions and any security concerns you might experience.

The core development principles and strategies that Techracers is based on include: security in depth, simple and modular code, clarity-driven naming conventions, comprehensive unit testing, pre- and post-condition sanity checks, code consistency, and regular audits. If you need further assistance, please email [email protected]

Note: We welcome recommendations and suggestions from the open source community; if you think you can help us, please raise an issue.

How is Blockchain Disrupting the Fintech Industry?

Fintech is a much-hyped buzzword in the financial services industry these days, and everyone from corporate giants to infant startups is talking about it. Though Fintech is gaining this attention for all the right reasons, its specific meaning often gets diluted along the way. It is proclaimed to be a game-changing, disruptive innovation capable of shaking up traditional financial markets. This blog aims to clearly define what Fintech is and how Blockchain technology is driving disruption in the financial services industry.

What is Fintech?

Fintech, an abbreviation of Financial Technology, describes the evolving intersection of financial services and technology. The term originally referred to technology applied to the back-end of established consumer and trade financial institutions. Since the internet revolution, Fintech has come to stand for technologies that are disrupting traditional financial services, including mobile payments, money transfers, loans, fundraising, and asset management. Although, in its broadest sense, Fintech stands for technologies used and applied in the financial services sector, it also touches every other business the financial services industry deals with.

Every time you go online to see your financial transactions or use tools to manage your spending and investments, you are making use of financial technology, or Fintech.

Another much-hyped term in the financial world today is Blockchain. Blockchain plays a significant role in financial innovation and is the backbone technology driving the Fintech revolution.

Everywhere you look these days, someone has a new solution built on Blockchain technology, and you may be asking yourself: what does Blockchain mean, and how is it relevant to me? If you are an investor, entrepreneur, government worker, teacher, or anyone who collects a paycheck, this is one of the most important advancements to date. It will have significant consequences on our lives and how we do business in the future.

 

Defining Blockchain

A Blockchain is a type of decentralized and distributed ledger for maintaining a permanent and immutable record of transactional data in a chronological order. Blockchain stores transactional data in a continuously growing list of records called blocks. Blockchain uses cryptography to link and secure these blocks. Each block typically contains three elements:

  • A hash pointer – a link to the previous block
  • A timestamp
  • Transaction data

For better understanding, the key features of Blockchain can be categorized as follows :

Decentralized – A Blockchain-enabled decentralized network operates on a peer-to-peer basis, meaning that by storing data across its network, blockchain eliminates the risks that come with data being held centrally.

Distributed ledger – A distributed ledger allows the sharing of a ledger of activity, such as arbitrary data or virtually anything of value, between multiple parties. Each of the computers in the distributed network maintains a copy of the ledger to ensure transparency and to prevent a single point of failure (SPOF), and all copies are updated and validated simultaneously.

Immutable record – By design, blockchains are inherently resistant to modification of data. All blockchain networks adhere to a certain protocol for validating new blocks. Once recorded, the data in any given block cannot be altered without altering all the subsequent blocks, which requires the consensus of the network majority.

Blockchain opens new doors of opportunity for all the stakeholders in the financial world. The future of the financial services industry depends on how these stakeholders capitalize on this technology and how they interact with each other. Let us have a look at the different participants of the Fintech ecosystem and the various challenges they face.

 

The Fintech Ecosystem

The Fintech ecosystem is a very fluid environment and is creating surprising winners and stunning losers in the financial world. According to Jeff Koyen, an active blockchain investor, entrepreneur and journalist:

“It’s a very interesting space to watch. It’s clear that blockchain has the potential to make finance more efficient, but the big players are well-established. And establishments don’t tend to favor innovation. I’d keep an eye on the startups who want to disrupt, but also know how to play nice with the institutions.”

The industry is growing at a rate of 23% year on year. To make sure that you don’t fall behind or get lost, let us understand the roles of the various participants and the challenges they face.

Participants
Governments, financial services companies, and Fintech startups together form an ecosystem. All the participants of this ecosystem face different challenges and opportunities and with the advancement in technology every day, this landscape becomes more dynamic and complex than ever.

Financial Institutions
Traditional financial institutions, also referred to as incumbents, are trying to leverage the best outcome by adding technology to their existing legacy systems, preventing them from becoming obsolete. Holland FinTech (2015) forecasts that approximately $660 billion in revenue may migrate from traditional financial services to Fintech services in the areas of payments, crowdfunding, wealth management, and lending. Banks are investing more heavily in innovation; however, they haven’t yet fully diffused these innovation strategies across all their processes, owing to the threats these might pose to their existing systems and huge clientele.

The Fintech Startups
The value of global investment in Fintech startups has increased from approximately three billion U.S. dollars in 2013 to eight billion U.S. dollars in 2018. While the disruption opportunity for Fintech startups is massive, they will have to find a way to scale out their business while facing increased regulation, higher costs, and larger infrastructures.

Governments
Governments have an important role in the evolution of Fintech, but they need to balance their activities carefully – encouraging innovation without inhibiting evolution. As governments develop policies and programs, there needs to be active engagement with stakeholders, whether through formal feedback mechanisms or ad hoc opportunities and conversations, in order to shape a future that benefits all stakeholders of the Fintech ecosystem.

 

Blockchain Driving Disruption In The Fintech Landscape

There is no doubt that Blockchain is the backbone technology revolutionizing the Fintech industry. And as the financial services industry moves from the exploration phase to the application phase, it is very important for financial institutions and experts to understand the role of Blockchain in Fintech if they want to take advantage of this financial revolution.

Why does Fintech need Blockchain?

The biggest challenge a Fintech company faces is trust: how to make people trust them, and how to build a safe and secure financial product. Banks and financial institutions have huge cash reserves with which they create the secure networks on which banking transactions take place. Fintech companies lack such funds, which restricts them from developing or procuring a high-security system.

Enter Blockchain. Blockchain is cheap to develop on and highly secure, or “trustless” as we call it. Because a Blockchain is a series of immutable blocks, it allows companies to track the complete lifecycle of a financial transaction. Blockchain has opened the opportunity to create secure and safe financial products and bring innovation to the financial sector.

Blockchain has the potential to truly disrupt multiple industries and make the processes more democratic, secure, transparent, and efficient. This leads us to another question:

 

What makes Blockchain so powerful?

The answer to this question lies in the two inherent properties of a Blockchain – Decentralized and Distributed.

For centuries we have trusted third parties to carry out all our transactions. All the data is centrally stored, and these central parties have largely shaped the way economies work. Have you ever wondered what would happen if one or all of these third parties went corrupt? It would create huge chaos in society. With blockchain, the data is decentralized and there is no single authority. The blockchain potentially cuts out the middleman, giving power back to the owner of the assets, whether data or tokens carrying financial value.

The distributed infrastructure gives blockchain the ability to share information securely and to provide for the unalterable transfer of data, ensuring data integrity. This makes blockchain technology an important tool in building trust among businesses and consumers. A distributed ledger can take over many of the functions performed by central third parties. This is particularly relevant for the financial services industry, which relies on these third parties to build trust.

If you are running a business or are a part of leadership, then you must seriously start re-imagining your business model and explore how you can integrate blockchain to remain viable.

Now that you know what blockchain and Fintech are, and how blockchain is driving disruption in the Fintech industry, let us take a look at the current use cases of blockchain technology in financial services. A number of major areas for applying blockchain technologies are emerging, but right now these seem to be the major ones:

 

Blockchain technology use cases in the Financial Services

1: Smart Contracts: A smart contract is computer code running on top of a blockchain, containing a set of rules under which the parties to the contract agree to interact with each other. When these predefined rules are met, the agreement is automatically enforced. Smart contract code can facilitate, verify, and enforce the negotiation or performance of an agreement or transaction.

2: Digital Payments: The transfer of value or assets has always been a slow and expensive process. Imagine you have to send $100 from the USA to a friend in Europe who has an account with a local bank; it takes a number of banks and institutions before your friend finally collects the money. With Blockchain, this process is simplified and faster, at a cost much lower than that of traditional banking institutions.

3: Digital Identity: When identity management is moved to blockchain technology, users are able to choose how they identify themselves and with whom their identity is shared. Users still need to register their identity on the blockchain, of course, but they don’t need a new registration for every service provider, provided those providers are also connected to the blockchain.

4: Share Trading: Buying and selling stocks and shares involves many middlemen, such as brokers and the stock exchange itself. A blockchain is a decentralized and secure ledger that gives every stakeholder a say in the validation of a transaction and eliminates some of the ‘middlemen’ while changing the role of others. Eliminating the middlemen from the share trading process speeds up the settlement process and allows for greater trade accuracy.

 

Future of Blockchain Technology in Fintech

Although the Fintech industry is thrilled about blockchain, the technology will take a few years to become a mainstream financial model. As with any emerging technology, Blockchain poses certain challenges that need to be addressed to fully utilize its potential in the financial services industry.

Even though Blockchain technology is still in its growing phase and its possibilities are still being explored, it is important to research and keep up with new developments in order to make the best use of this technology and transform how we carry out our day-to-day financial processes.

To discuss the Blockchain technology possibilities for your organization, please get in touch with us at [email protected]

Use of Blockchain in Auditing

Traditional accounting is labor-intensive work that carries a huge human resource cost along with low efficiency. The ecosystem also involves a great deal of auditing of orders, delivery notes, invoices, and payment records, which is usually backed by third-party verification.

How auditing currently works:

  • The organization’s management prepares the financial report. It must be prepared in accordance with legal requirements and financial reporting standards.
  • The organization’s directors approve the financial report.
  • Auditors start their examination by gaining an understanding of the organization’s activities and considering the economic and industry issues that might have affected the business during the reporting period.
  • For each major activity listed in the financial report, auditors identify and assess any risks which could have a significant impact on the financial position or financial performance, and also some of the measures (called internal controls) that the organization has put in place to mitigate those risks.
  • Based on the risks and controls identified, auditors consider what management has done to ensure the financial report is accurate, and examine supporting evidence.
  • Auditors then make a judgment as to whether the financial report taken as a whole presents a true and fair view of the financial results and position of the organization and its cash flows, and is in compliance with financial reporting standards and, if applicable, the Corporations Act.
  • Finally, auditors prepare an audit report setting out their opinion, for the organization’s shareholders or members.

How Can Blockchain help in auditing?

– Blockchain can help in maintaining transparency in audits.

One of the most appealing aspects of blockchain technology is the degree of transparency it can provide. The technology allows for the immutable tracking of anything across the audit process. The use of data visualization will allow auditors not only to provide assurance over the systems, but also to assist consulting firms with planning and decision making. Following are some of the key points for the use of blockchain in audit transparency:

  1. The entire audit process can be ported to blockchain technology. In a nutshell, this will provide two benefits:
    • Many organizations hire multiple audit teams. There is a need for a corruption-free ecosystem to synchronize audit reports between the auditing teams. When we use blockchain for the auditing process, auditors will be aware of the audit work done by other audit teams, thereby increasing the genuineness of the system.
    • The organization will be aware of the audit process, and can thereby track it lucidly.
  2. The audit reports will be put on the blockchain. The next auditors can verify if the audit is done right.
  3. The financial reports of the organization can be put on the blockchain; this ensures that the reports are immutable and cannot be denied by the organization later.

– Removes the dependency on the enterprise in auditing.

An audit is often a heavy process that requires a team of professionals to spend a significant amount of time reviewing a large number of transactions and accounts in the client’s books. In this scenario, blockchain technology could play a truly disruptive role.

As blockchain has its foundation in the distributed ledger concept and cryptography, promising transparency, immutability, security, auditability, high cost-efficiency, and ever-available data, an immediate application of blockchain technology in audit verification is connected to external confirmation procedures.

External confirmations are a critical part of all audit processes, as they give the audit team the ability to check external sources of the information that are provided internally by the company. But what if the ledger of such an enterprise is in a decentralized, public blockchain?

In a scenario like that, the auditors would be able to obtain all the information related to the financial transactions of a company without the need to confirm them through an external confirmation procedure, hence saving time and resources.

In an environment where all the ledgers are easily accessible, cross-checks of transactions would still be possible. If, for example, Company A has a liability with Company B, the auditors or any stakeholder could easily verify whether it is correctly recorded by cross-checking the respective public ledgers.

Ready-to-access information will also facilitate the review of bank details, where the external auditors examine all the information pertaining to a company and its commercial banks, including bank accounts, loans, guarantees, and signatory powers. In this way, blockchain can remodel the conventional techniques for invoicing, paperwork, and contracts, as it provides a canonical source of truth.

Reimagining the Possible: Blockchain For Connected Health Ecosystem (IoMT)

The two megatrends IoT and Blockchain are causing a great deal of hype and excitement in the wider business world. Some experts claim that these technological trends are geared up to revolutionize all aspects of our daily lives; others stipulate that there is a huge amount of hot air around both domains, and plenty more is yet to be proven.

If we combine these global trends, in theory the result is an immutable, verifiable, secure, and permanent method of recording healthcare data. A lot of remarkable concepts have already been created around blockchain and IoT devices that are disrupting existing systems.

As data transactions occur between different healthcare institutions, service providers, and HIE platforms, data custodianship gets passed between different parties. By contrast, the transactions made in a blockchain network are transparent by nature, because every event and activity can be tracked and analyzed by the authorities connected to the blockchain.

“Between $.30 and $.40 of every dollar spent on health care is spent on the costs of poor quality. This extraordinary number represents slightly more than a half-trillion dollars a year. A vast amount of money is wasted on overuse, underuse, misuse, duplication, system failures, unnecessary repetition, poor communication, and inefficiency.” Information Source

Current Industry Challenges:

  • As the number of connected devices keeps increasing, the hacking threats surrounding medical devices keep increasing too, which must be taken seriously by OEMs (Original Equipment Manufacturers)
  • The FDA in the States has already emphasized several issues related to OEMs and published a new guidance document which focuses on post-market management of cybersecurity in medical devices
  • Keeping cyber-attacks and patient data theft in consideration, providers must choose cutting-edge technology to combat such threats
  • Because of such privacy and security issues surrounding medical devices, manufacturers have only been able to connect approximately 30% of their devices to the IoT
  • A majority of the global health authorities have already appealed for serialization requirement mandates, which means everyone will have to ensure drug supply chain provenance shortly

Examples of failure with existing Healthcare based systems:

A medical device for medication administration must ensure complete traceability so as to provide accurate data for Health Information Exchange.

At present, healthcare data management systems are inefficient and unreliable when it comes to tracking points of failure and accountability. Medical devices such as infusion pumps often operate on third-party software or IT systems, and in case of any error in medication administration or in the warning systems, the regulatory agencies will find out who is responsible. As a result, providers are charged hefty fines and, in the worst case, are asked to discontinue future commercialization.

In scenarios where an OEM (Original Equipment Manufacturer) or any other party involved wants to prove that it is not responsible for an equipment failure, there is no adequate method to prove such claims.

Blockchain & IoT: An ecosystem that no malignant actor can bypass

Employing blockchain technology for IoT devices can guard against the countless security vulnerabilities that can expose sensitive patient data and confidential reports and lead to cyber-attacks.

With this unique combination of IoT and distributed ledger technology, healthcare organizations, vendors, doctors, and patients can keep an audit trail of all the actions taking place. Since big data analytics makes use of predictive modeling, combining it with blockchain can drastically reduce healthcare costs and enhance the overall patient experience. As a result, blockchain-based IoMT (Internet of Medical Things) will make the existing connected health ecosystem, and everything else based on a similar concept, more reliable and secure.

IoMT might be the panacea that communities need to address an already overburdened healthcare system, which is likely to come under even more stress as the human population continues to grow and new diseases keep surfacing.

To discover more on IoMT and blockchain you can download this Aranca report on the Internet of Medical Things and read our previous blogs on healthcare.

Blockchain Adding Transparency to Coffee Supply Chain

 

Coffee is one of the most widely consumed beverages in the world, and the supply of coffee is gradually increasing day by day. As the 2017 statistics show, Brazil is considered the top coffee-producing country in the world; that year, Brazil produced somewhere around 55 million 60-kilogram bags of coffee, followed by Vietnam as the second-highest producer with about 28.5 million 60-kilogram bags.

The journey of raw coffee beans from the farmer’s hand to the consumer is very lengthy. It is a chain that starts with a farmer and ends with a consumer. Between these two endpoints are many intermediaries, called middlemen, who connect the two ends of the chain and handle the communication between the farmer and the end user that completes the coffee supply chain. In the current traditional coffee supply chain, the involvement of middlemen hits farmers in the pocket: a farmer receives, on average, only about 10% of the total profit from coffee production. This is very unfair to farmers and condemns many of them to a miserable lifestyle.

Apart from farmers, it is also not a win-win situation for the end coffee consumer. The time span between the farmer first picking a coffee cherry and the consumer having their first sip of that coffee is called the lead time, and the lead time of the traditional coffee supply chain is about six months. With time, the taste and quality of coffee get worse, and the quality lost through age can never be recovered, no matter how lavish the coffee machine is. Jokingly, we can say that using six-month-old coffee beans in an expensive coffee machine is like using water-diluted petrol in a Harley-Davidson.

Effect of Blockchain technology at various points in the supply chain

The current coffee supply chain is not only disorganized but also unfair. To overcome the issues the current system is facing, we need a technology that makes the process streamlined and profitable for the ends of the chain while maintaining the performance of the system.


Blockchain is an immutable, decentralized ledger technology used to record transactions of digital assets between two parties in far less time than other transaction systems. Integrating blockchain technology into the supply chain will not only resolve the issues of the current system but also enhance the overall performance of the system. We can think of blockchain as an online marketplace which directly connects the two ends of the chain, helping them transact via a trustless medium.

Here are a few ways blockchain technology is improving the lives of the people involved in the coffee supply chain:

  • Payment: Right now the supply chain industry has to deal with many intermediary bodies and centralized choke points. To complete a single transaction, the third parties charge a good percentage of the total amount; the price of a cup of coffee rises with the number of middlemen involved in the supply chain. Blockchain removes the third parties from the transaction: it carries out the transaction using cryptocurrency and records every transaction on the blockchain.

 

  • Contracts: In the traditional supply chain, the coffee trade still relies on fax or email attachments to send, receive, and update coffee contracts around the world, which is slow and error-prone. Blockchain introduces smart contract applications: a smart contract is quite similar in use to a traditional contract, but it is in digital form and can be represented as a small program stored on the blockchain. The main idea behind using a smart contract is to remove the dependency on third parties and introduce a distributed public ledger between the two parties. The data is encrypted on a distributed ledger, making it practically impossible to lose the information stored on the blocks.

 

  • Logistics: In the current scenario, logistics companies have to manage a lot of paperwork to track the import and export of goods, and exporters and shipping companies suffer from the same inefficiencies in payment and paperwork. Blockchain will resolve both the payment delays and the data transmission security issues: via blockchain, payments become cheaper and faster, and paperwork is streamlined and verified sooner, improving quality while decreasing overall cost.

 

By integrating blockchain into the coffee supply chain, the overall time and cost behind a cup of coffee can be reduced. This technology will be a game changer for both ends of the coffee supply chain. Blockchain will ensure an error-free, fast, and effective process while taking data security into account.

Want to know more about how Techracers is using blockchain to track provenance in the supply chain?

About Techracers

Techracers is an end-to-end Blockchain development studio helping enterprises create new products or integrate Blockchain technology into their existing products and services.

Reach out to us to learn more about how Blockchain technology can impact your business, be it supply chain, healthcare, or any other domain. Specializing in blockchain application development, we are a team of expert developers and consultants who can help you create an impact by applying Blockchain technology in your business.

Upgradeability Improvement Protocol 3— Near Transcendence

“With great power comes great responsibility.” — Uncle Ben

Introduction

We will try to culminate the series by utilizing all the knowledge from the previous UIPs and a few other Solidity features to create the ultimate protocol. In the last part, we tried to use logic and data separation to bring about upgradeability. None of the UIPs have been perfect so far, as we had to trade off some factor or another. We want to be able to upgrade anything at will, without changing the address and without having to relist anything. Convenience is what we are looking for. Let’s go all out!

Additional learning before reading this tutorial

  1. Read the Solidity documentation on delegatecall and storage slots to understand a few nuances in our implementation (https://solidity.readthedocs.io/en/v0.3.1/introduction-to-smart-contracts.html)(http://solidity.readthedocs.io/en/develop/miscellaneous.html).

UIP3. The Pseudo Token

Our approach throughout the UIPs was to remove code from the token contract. But we have already robbed the token contract of all its might, so what next? The crux of the UIP 2.2 limitation lies in the fact that we have pre-defined skeletal functions like transfer and totalSupply, which disallow us from adding any other logic (events, requires, etc.) to a function, or from adding entirely new functions. Let’s remove them (you should have seen this coming)!

Delegate Call

delegatecall is an inbuilt Solidity function and will be the vital component of this architecture. A link to the Solidity documentation on delegatecall was attached at the start of this blog. It looks something like this:
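A sketch of such a line, assuming a contract-type variable to and pre-0.5 Solidity syntax:

    // calls totalSupply() of `to` in the current contract's storage context
    address(to).delegatecall(bytes4(keccak256("totalSupply()")));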

This line calls the function totalSupply that exists in the contract to, in the context of the current contract (the contract in which this line exists). All of this fits into the PseudoToken contract. We use delegatecall with assembly, because that enables us to pass on the return value.

The inner workings

The approach is plain and simple, but it is going to be completely different from UIP 2. We lose the DataCentre, the ControlCentre, the multiple external calls, the extra gas costs, the headache of understanding the call flow, and pretty much all of the code in the token contract. We have just 2 components: a LogicBank and a PseudoToken.

The LogicBank can be thought of as the ControlCentre’s extension. But here, we want to code the LogicBank as if it were the token contract itself. Extending our previous examples, this is roughly how the LogicBank would look: a simple ERC20 token with a few additional functions that facilitate the pseudo implementation.
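A minimal sketch of the idea (not the repository’s exact code; the ERC20 surface is trimmed down to transfer/balanceOf for brevity):

    pragma solidity ^0.4.24;

    contract LogicBank {
        // storage layout must mirror the PseudoToken's (owner in slot 0)
        address public owner;
        uint256 public totalSupply;
        mapping(address => uint256) public balances;

        address public pseudoToken; // set at deployment

        constructor(address _pseudoToken) public {
            owner = msg.sender;
            pseudoToken = _pseudoToken;
        }

        // runs via delegatecall in the PseudoToken's context, once the
        // PseudoToken's ownership has been transferred to this LogicBank
        function initialize(uint256 _supply) public {
            require(totalSupply == 0);
            totalSupply = _supply;
            balances[msg.sender] = _supply;
        }

        function balanceOf(address _who) public view returns (uint256) {
            return balances[_who];
        }

        function transfer(address _to, uint256 _value) public returns (bool) {
            require(balances[msg.sender] >= _value);
            balances[msg.sender] -= _value;
            balances[_to] += _value;
            return true;
        }

        // retire this LogicBank during an upgrade
        function kill(address _newLogicBank) public {
            require(msg.sender == owner);
            selfdestruct(_newLogicBank);
        }
    }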

The kill function has been a common feature of our previous UIPs. The need for the initialize function will be explained later.

The trick here lies in the fact that none of the functions except kill will ever be called in the context of this contract. All of that will happen in the PseudoToken, which contains some high-level gibberish and, again, is practically devoid of any ERC20 functions!
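And a matching sketch of the PseudoToken (this is the standard proxy fallback pattern; again, not the repository’s exact code):

    pragma solidity ^0.4.24;

    contract PseudoToken {
        address public owner; // slot 0: eventually the LogicBank's address

        constructor() public {
            owner = msg.sender;
        }

        function transferOwnership(address _newOwner) public {
            require(msg.sender == owner);
            owner = _newOwner;
        }

        // every unrecognised call 'falls' in here; msg.data is forwarded to
        // the LogicBank and the return value is passed back via assembly
        function() public {
            address target = owner;
            assembly {
                calldatacopy(0, 0, calldatasize)
                let ok := delegatecall(gas, target, 0, calldatasize, 0, 0)
                returndatacopy(0, 0, returndatasize)
                switch ok
                case 0 { revert(0, returndatasize) }
                default { return(0, returndatasize) }
            }
        }
    }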

A quick question: what would happen when you call the transfer function of this token contract, which doesn’t even have a transfer function? Let’s say we transfer 10 tokens from

address A- 0xf8bd89bfca1c5db120971bc0f7423be720413d35 (msg.sender)

to

address B- 0xacaeb46e5ae394e3011bad7ee50a7e6eee63e3c0

All of the call data for transfer, i.e. the function selector followed by the two ABI-encoded arguments,
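(reconstructed for illustration: the 4-byte selector of transfer(address,uint256) followed by the two 32-byte padded arguments, with the value 10 assuming no decimals)

    0xa9059cbb
    000000000000000000000000acaeb46e5ae394e3011bad7ee50a7e6eee63e3c0
    000000000000000000000000000000000000000000000000000000000000000a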

‘falls’ to the fallback function in the msg.data.

Now, the PseudoToken just takes this msg.data and delegates the call to the LogicBank. From here on, the transfer function is executed in the context of the PseudoToken, i.e. as if it were a function IN the PseudoToken contract. The msg.sender will still be address A.

The storage slots also get created in the PseudoToken as and when required. When you transfer from address A to B(assuming B had no balance before),

balances[B] = 10 is created in the PseudoToken.

Now, tracking back to the need for the initialize function: we cannot have a constructor in the PseudoToken that would initialize the balances for the contract creator, simply because the PseudoToken doesn’t even know it is going to have the ERC20 variables at that point. All of that can only happen after the ownership of the PseudoToken is transferred to the LogicBank. As always, the implementation might tie up loose ends in your understanding.

Too much written; let’s hit the code!

The deployment sequence for this simple example is as follows:

  1. Deploy the PseudoToken.
  2. Deploy the LogicBank with the PseudoToken address in the constructor params.
  3. transferOwnership of the PseudoToken to the LogicBank.

This time it is not a two-way link, but just a way to keep the upgradeability format similar to UIP 2’s.

Steps to reproduce this on Remix:

[Image: the deployed contracts in the Remix run tab]
  1. After completing the deployment sequence, you would have 2 contracts on your remix run tab, ideally the left part of the above image.
  2. Copy the address of the PseudoToken contract and create an instance using the LogicBank ABI, like the right part of the above image.
  3. Click on the ‘At Address’ button to create the instance.

This instance will allow us to call all the ERC20 functions at the PseudoToken address.

4. initialize the PseudoToken contract and assign yourself some tokens.

Try out the balanceOf, totalSupply, and transfer functions of the PseudoToken and witness the magic!

How does it upgrade?

It is all just the same now:

  1. Deploy the new LogicBank.
  2. kill the old LogicBank contract, passing the new LogicBank address in the params.
  3. transferOwnership of the PseudoToken to the new LogicBank contract.

Note-

  1. Before step 3, transferring ownership of the PseudoToken to the LogicBank, you have no superpowers! You cannot access any of the ERC20 functions.
  2. Always try to keep the storage layout of all the LogicBank contracts similar.
  3. The storage space separation between the LogicBank and the Token can be puzzling at times.

Advantages of the PseudoToken-

  1. Reduced gas cost compared to UIP 2, and a much simpler call flow.
  2. Upgradeability max: Add new storage variables, functions, events and what not!

Limitations-

Albeit near perfect, there are a few major limitations/warnings-

  1. This time around all the STORAGE variables are in the Token contract. This contract can never be modified in case of bugs.
  2. There were cases in previous Solidity versions where the PseudoToken's storage slots got overwritten with blank slots. The slots had to be aligned to take care of this, because if the owner of the PseudoToken is overwritten to address 0x00, access to all the data is lost forever.

Everything comes at a cost. Extreme caution must be exercised while using this architecture! Only after thorough testing should it be deployed to the mainnet. 

And that’s about it! If you were able to survive the complete series, you will have learnt a number of upgradeability protocols to save your day. Based on your use case, requirements and personal preference, you can choose the UIP of your choice to build smart contracts in the future. Never again should a contract owner feel helpless because of the inability to fix their mistakes!


We will be back with more blogs and series on blockchain in general. Keep a close watch for future posts that might explore blockchain scalability initiatives, other smart contract platforms like NEO, Lisk, Stellar and EOS, and more in-depth content on Ethereum.

PS: As always, check out the ERC-1214 EIP raised to the Ethereum foundation here.

Thanks for reading!

Graphene Consensus & Testnet Setup

We have been hearing a lot about EOS lately. With a four-billion-dollar ICO and numerous blockchain personalities raving about it, EOS has taken the blockchain sphere by storm. It is the brainchild of Dan Larimer, the mastermind behind various other mainstream blockchain projects such as Bitshares and Steemit. However, amidst the success of these well-marketed projects, one of Dan's creations has always been overlooked: Graphene.

The blockchain community is well aware that EOS, Bitshares, and Steemit would never have seen the light of day if Dan hadn't come up with Graphene, the most scalable blockchain of its time. But because of a lack of documentation, the potential of this underrated gem could never be fully realized.

So, here’s a handy guide for those who wish to dive into Graphene.

For ease of understanding, the article has been split into two parts: Consensus and Testnet setup.

Consensus in Graphene

As we are aware, Bitcoin and Ethereum are based upon the POW (Proof of Work) consensus mechanism, where all the nodes can participate in the competition of adding blocks to the blockchain. The POW mechanism seems completely decentralized on paper, but in reality, as the load on the system increases, so does the computational power needed to mine blocks. After a certain point, it becomes nearly impossible for a person with a desktop computer to add any value to the system. In such scenarios, only huge mining rigs and pools are able to mine and add blocks.

(Figure: Etherscan miner statistics, captured 11 July 2018)

Follow this link for latest data

From the above figure, it can be gathered that Ethereum in its current form is not completely decentralized, as the three major rigs/pools contribute more than 50% of the blocks produced.

Along with centralization, POW-based blockchains are also facing a major backlash over the amount of power used in the mining process. As reported by CBS, the Bitcoin network alone consumes more electricity than 159 individual countries, including Ireland.

With these drawbacks, there was a growing need for a less power hungry and more decentralized consensus mechanism.

 

ENTER — The Delegated Proof of Stake Consensus (DPOS)

Graphene is based on the DPOS consensus mechanism, where ‘N’ witnesses (with ‘N’ an odd number) are selected via continuous voting by stakeholders to produce blocks. Only these witnesses produce blocks, in their respective time slots, until the next maintenance interval. After the maintenance interval, the algorithm chooses the next set of witnesses based on the voting results. It is important to understand that only stakeholders can participate in the voting process, and one stakeholder can vote for only one witness.

Apart from witnesses, the stakeholders also elect delegates, who have the privilege of proposing changes to the network parameters, ranging from something as simple as transaction fees to the number of elected witnesses. After a proposed change is approved by a majority of delegates, the stakeholders are given a two-week period during which they may vote out the delegates and veto the proposed changes. In practice, however, such changes are not proposed very often.

Thus, under DPOS it is safe to assume that the administrative authority rests in the hands of the users, just like in a democracy. Note, however, that unlike witnesses, the delegates are not compensated for retaining their positions.

 

Testnet Setup

  1. To run the testnet on your local machine, first install the boost libraries on your system. You can follow https://github.com/cryptonomex/graphene/wiki/build-ubuntu
  2. Clone the repo https://github.com/cryptonomex/graphene.git. Also clone the submodules with this command:
    git submodule update --init --recursive
  3. Make the build by the following steps:
    cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_C_FLAGS="-fpermissive" .
    make
  4. Once the build is successful, create the genesis file:
    mkdir -p genesis
    programs/witness_node/witness_node --create-genesis-json genesis/my-genesis.json

Edit my-genesis.json as needed: for example, how many accounts there should be initially, the maintenance interval, ‘N’ (the number of witnesses), the initial balances of the accounts, etc.

  5. Then start the witness node with the following command, depending on the number of witnesses you have initialized in my-genesis.json:
    ./witness_node --rpc-endpoint "127.0.0.1:8090" --enable-stale-production --genesis-json genesis/my-genesis.json -w \""1.6.0"\" \""1.6.1"\" \""1.6.2"\" \""1.6.3"\" \""1.6.4"\"

graphene_node_start

Congratulations! You should now have a basic understanding of Graphene's DPOS consensus and a running Graphene testnet node in your local environment.

In our next blog, we will cover further concepts like- Multinode Setup and Smart Contracts.

Upgradeability Improvement Protocol 2.2 — Modularization Overload!

“Anything worth doing is worth overdoing” — David Letterman

Introduction

We follow on from UIP 2, which we discussed in part 2 of this series. We modularized the Token contract so that we could separate out the data, which gave us a huge amount of upgradeability.

But we want to make the UIP as perfect and as hassle free as possible. No stone must be left unturned. No limitation should live!

Sack the Token Contract

Some people might call this ‘taking it too far’, but this approach has its own merits and demerits. Let's change our viewpoint and look at the whole upgradeability improvement protocol with the idea of removing code from the token contract. We removed the data in the last approach. Now a question: why stop there? What if we also remove the logic? Does this not make sense? Here goes nothing.

UIP2.2 The Skeletal Token

We would modularize the Token Contract even further in this approach. We need some new ‘Centre’ contract here. So, we bring you the ControlCentre!

After the ControlCentre's introduction, the token contract is just a skeleton. We took everything from it. It does have all the required methods, but each contains only one line: a forwarded call to the ControlCentre. And judging by its name, the ControlCentre is to be the brains of this token architecture. So the ControlCentre processes all the calls, and thereby writes all storage into the DataCentre (which essentially remains the same as before). It might sound a little complicated in theory, so let's get to the code.

Upgradeability Improvement Protocol 2.2 
Assuming we are breaking the token from UIP-2, this is how the ControlCentre would look.
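A minimal sketch of the ControlCentre, based on the description in this post; only transfer and balanceOf are shown, the remaining ERC20 functions follow the same shape, and all names are our assumptions.

    pragma solidity ^0.4.24;

    import "./Ownable.sol";
    import "./SafeMath.sol";

    // assumed interface to the (unchanged) DataCentre from UIP 2
    interface DataCentreI {
        function getBalance(address _who) external view returns (uint256);
        function setBalance(address _who, uint256 _value) external;
    }

    contract ControlCentre is Ownable {
        using SafeMath for uint256;

        address public token;
        address public dataCentre;

        // reject calls that did not come through the Token satellite
        modifier onlyToken() {
            require(msg.sender == token);
            _;
        }

        constructor(address _token, address _dataCentre) public {
            token = _token;
            dataCentre = _dataCentre;
        }

        // the extra _sender parameter carries the original msg.sender
        function transfer(address _sender, address _to, uint256 _value)
            external onlyToken returns (bool)
        {
            DataCentreI dc = DataCentreI(dataCentre);
            dc.setBalance(_sender, dc.getBalance(_sender).sub(_value));
            dc.setBalance(_to, dc.getBalance(_to).add(_value));
            return true;
        }

        function balanceOf(address _who) external view returns (uint256) {
            return DataCentreI(dataCentre).getBalance(_who);
        }

        // upgrade hook: hand both subordinates over to the new ControlCentre
        function kill(address _newControlCentre) external onlyOwner {
            Ownable(token).transferOwnership(_newControlCentre);
            Ownable(dataCentre).transferOwnership(_newControlCentre);
            selfdestruct(owner);
        }
    }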

The ControlCentre basically has all the token functions, but each takes one additional parameter, because the msg.sender has to be transmitted.

Note the onlyToken modifier. We need to make sure that the trusted call actually came from the Token contract.

This is how the token contract would look now.
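A sketch of the skeletal Token, again under our naming assumptions. After step 4 of the deployment sequence below, the Token's owner IS the ControlCentre, so the forwarding target is simply owner.

    pragma solidity ^0.4.24;

    import "./Ownable.sol";

    // assumed interface to the ControlCentre sketched above
    interface ControlCentreI {
        function transfer(address _sender, address _to, uint256 _value) external returns (bool);
        function balanceOf(address _who) external view returns (uint256);
    }

    contract Token is Ownable {
        event Transfer(address indexed from, address indexed to, uint256 value);

        function transfer(address _to, uint256 _value) public returns (bool) {
            require(ControlCentreI(owner).transfer(msg.sender, _to, _value));
            emit Transfer(msg.sender, _to, _value); // events stay with the satellite
            return true;
        }

        // constant functions do not need to pass msg.sender
        function balanceOf(address _who) public view returns (uint256) {
            return ControlCentreI(owner).balanceOf(_who);
        }
    }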


Except for the constant functions, each function passes along the msg.sender. The token no longer knows about a DataCentre; it just ‘trusts’ the ControlCentre and sends information to and receives it from the ControlCentre alone. Notice that events still have to be emitted from the subordinate contracts (so that they ‘own’ them).

The deployment sequence is as follows. Take a shot at running this on remix using this code

  1. Deploy the Token.
  2. Deploy the DataCentre
  3. Deploy the ControlCentre with the addresses of Token and DataCentre as constructor parameters.
  4. transferOwnership of the Token and DataCentre to the ControlCentre.

Again, we use the 2-way linking from the last UIP, but this time it is between the ControlCentre and each of its subordinates.

How does it upgrade?

The upgradeability protocol is very similar to the previous UIP 2, except that we upgrade the ControlCentre instead of the token (because we do not want the address of the token contract to change)-

  1. Deploy the new ControlCentre
  2. kill the old ControlCentre contract and enter the new ControlCentre address into the params.

Note-

  1. Deploying a new DataCentre at any point would completely defeat the purpose of this series (keeping your data intact).
  2. The token contract is merely a Satellite, and transmits data between the user and the ControlCentre.

Advantages of the SkeletalToken-

  1. Technically, all the limitations of the UpgradeAgent are gone now. As all the logic is in the ControlCentre, you can fix any bug and still keep the same old Token contract.
  2. A much more controllable architecture than UIP 2. The contract owner just needs to own the ControlCentre to have total control. This power is felt when you need to run crowdsales for the token.

Limitations-

  1. You have a central point of failure. If the ControlCentre is compromised, so is the complete architecture.
  2. The gas cost shoots up a lot because of the multiple external calls. A simple token transfer that used to take about 35,000 gas now costs about 62,000.
  3. Now that the Token contract has to stay on the blockchain, you cannot add/remove/modify code in the Token contract IF you want the address to stay the same (this is the core of the 6th limitation of UIP 1).

If you are okay with the Token address changing, you can follow this protocol-

  1. Deploy the new Token satellite.
  2. Deploy a new ControlCentre.
  3. kill the old ControlCentre contract and enter the new ControlCentre address into the params.
  4. transferOwnership of the NEW Token to the new ControlCentre contract.

In essence, if we have the old data, we can play around with the other contracts as and when we want.

The implementations till now were the easy ones. We have combined every bit of knowledge we had about smart contracts to come up with UIP 3. Continue down this path to improve your smart contract prowess with the final part of this series. Feel the Force!

PS: Again, this contract architecture has also been raised as an EIP (ERC-1200). You can check it out here. And follow the EIP issue on this link.

Getting started with smart contracts on EOSIO

Overview of smart contract development


Clone eosio from https://github.com/EOSIO/eos with the --recursive flag.

Build eosio: the EOSIO community provides an automated build script for select environments:

1. Amazon 2017.09 and higher
2. Centos 7
3. Fedora 25 and higher (Fedora 27 recommended)
4. Mint 18
5. Ubuntu 16.04 (Ubuntu 16.10 recommended)
6. Ubuntu 18.04
7. MacOS Darwin 10.12 and higher (MacOS 10.13.x recommended)

Image of a successful build using eosio-build.sh:

EOSIO

Smart contracts for EOSIO are coded in C++. Usually (for most smart contracts) you specify a header file with the declarations of the functions that correspond to the smart contract's actions, and a .cpp source file with their definitions. A .abi (Application Binary Interface) file is also created that lists all the actions the smart contract exposes. This file is needed by external systems to execute the smart contract's operations.

EOSIOCPP

EOSIO provides a compilation tool called eosiocpp and recommends using it for compiling smart contracts.

EOSIO ships with its own standard C++ libraries: when you clone the source, you will notice the folder {$PROJECT_ROOT}/contracts/libc++, which holds all the standard C++ library headers. This is where eosiocpp looks for the included standard header files.
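For illustration, assuming our contract files are named escrow.cpp and escrow.hpp, the typical invocations of this era's toolchain would look something like this (file names are our example):

    eosiocpp -o escrow.wast escrow.cpp   # compile the C++ source to WebAssembly
    eosiocpp -g escrow.abi escrow.hpp    # generate the .abi from the declarations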

EOSIO-1

A detour to .wasm

WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for the compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications. It is meant to enable executing code nearly as fast as running native machine code. It was envisioned to complement JavaScript, speeding up performance-critical parts of web applications, and later to enable web development in languages other than JavaScript. WebAssembly does not attempt to replace JavaScript, but to complement it. It is developed at the World Wide Web Consortium (W3C) with engineers from Mozilla, Microsoft, Google and Apple.

More on wasm can be found here: https://webassembly.org/

Smart contract overview

Smart contracts are self-executing contracts with the terms of the agreement between buyer and seller being directly written into lines of code. The code and the agreements contained therein exist across a distributed, decentralized blockchain network. Smart contracts permit trusted transactions and agreements to be carried out among disparate, anonymous parties without the need for a central authority, legal system, or external enforcement mechanism. They render transactions traceable, transparent, and irreversible.

Before we jump into deploying our escrow smart contract, there are a couple of preliminary steps that need to be carried out. The official EOSIO documentation covers them in a very clear and precise manner: https://developers.eos.io/eosio-cpp/docs/introduction-to-smart-contracts

Assuming that you were able to execute the steps mentioned in the above link, let’s proceed further with our smart contract under discussion.

ESCROW SMART CONTRACT

@brief: An escrow is basically a buffer where the token (currency) involved in a transaction is temporarily held until we know that the transaction can be completed without dispute. The operation under focus in this article is estransfer (i.e. escrow transfer), which transfers the funds from the sender to the escrow. This operation is complemented by esrelease, which releases the funds from the escrow to the recipient.

First, define the class for your smart contract in the header file.
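A minimal sketch of such a declaration, in the 2018-era eosiolib style; the action signatures are our assumptions and may differ from the gist that originally accompanied this post.

    #include <eosiolib/eosio.hpp>
    #include <eosiolib/asset.hpp>
    #include <string>

    class escrow : public eosio::contract {
      public:
        escrow(account_name self) : contract(self) {}

        // actions: public member functions, each listed in the .abi
        void create(account_name issuer, eosio::asset maximum_supply);
        void issue(account_name to, eosio::asset quantity, std::string memo);
        void estransfer(account_name from, account_name to, eosio::asset quantity);
        void esrelease(account_name from, account_name to, eosio::asset quantity);

      private:
        void add_balance(account_name owner, eosio::asset value, account_name ram_payer);
        void sub_balance(account_name owner, eosio::asset value);
    };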


All the member functions corresponding to the actions should be made public, and they should be declared in the .abi file of the smart contract.

Since our escrow smart contract also issues the tokens needed for transactions, it needs tables to store the data for the issued token and each account's balance of that token. This is done using the multi_index container provided by the EOSIO library (which is modelled after boost::multi_index_container), as follows:

Tables:

The two tables defined are accounts and stat. The accounts table is made up of different account objects each holding the balance for a different token. The stat table is made up of currency_stats objects (defined by struct currency_stats) that holds a supply, a max_supply, and an issuer. Before continuing on, it is important to know that this contract will hold data into two different scopes. The accounts table is scoped to an EOSIO account, and the stat table is scoped to a token symbol name.
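A sketch of those two tables in the old eosiolib style (struct and field names assumed to mirror eosio.token, which this description closely follows):

    // one row per token symbol held by the scoping account
    struct account {
        eosio::asset balance;
        uint64_t primary_key() const { return balance.symbol.name(); }
        EOSLIB_SERIALIZE(account, (balance))
    };

    // one row per issued token
    struct currency_stats {
        eosio::asset supply;
        eosio::asset max_supply;
        account_name issuer;
        uint64_t primary_key() const { return supply.symbol.name(); }
        EOSLIB_SERIALIZE(currency_stats, (supply)(max_supply)(issuer))
    };

    // accounts is scoped to an EOSIO account, stat to the token symbol name
    typedef eosio::multi_index<N(accounts), account> accounts;
    typedef eosio::multi_index<N(stat), currency_stats> stats;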

According to the eosio::multi_index definition, code is the name of the account that has write permission, and scope is the account where the data gets stored.

The scope is essentially a way to compartmentalize data in a contract so that it is only accessible within a defined space. In the token contract, each EOSIO account is used as the scope of the accounts table. The accounts table is a multi-index container that holds multiple account objects. Each account object is indexed by its token symbol and contains the token balance. When you query a user's accounts table using their scope, you get back a list of all the tokens that user has an existing balance for.

This is our smart contract’s header file.

.cpp implementation

The actions are implemented in the .cpp source file

The definition of our estransfer action starts off with some assertion checks:

First, an assert checks that the sender and receiver accounts are not the same; then we check that the transaction carries the authority of the sender. The authority for a transaction can be specified with the -p <account_name> flag of cleos.

 Next, we fetch the table corresponding to the current token.

These lines notify the specified accounts after a successful transaction

This is followed by some more checks

Finally, we call the two most important utility functions, add_balance and sub_balance; the former adds tokens to a specified account and the latter deducts them.
https://gist.github.com/akhiltiwari-tr/15aa69e85246a835ff4c757f6ee97a92
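Pulling the pieces above together, estransfer might look like the sketch below (2018-era API; the exact checks and their ordering in the gist may differ):

    void escrow::estransfer(account_name from, account_name to, eosio::asset quantity) {
        eosio_assert(from != to, "cannot transfer to self");   // sender != receiver
        require_auth(from);                                    // the -p from@active authority
        eosio_assert(is_account(to), "to account does not exist");

        // fetch the stats table for this token (scoped by symbol name)
        auto sym = quantity.symbol.name();
        stats statstable(_self, sym);
        const auto& st = statstable.get(sym, "token with symbol does not exist");

        // notify both parties after a successful transaction
        require_recipient(from);
        require_recipient(to);

        // some more checks
        eosio_assert(quantity.is_valid(), "invalid quantity");
        eosio_assert(quantity.amount > 0, "must transfer positive quantity");
        eosio_assert(quantity.symbol == st.supply.symbol, "symbol precision mismatch");

        // move the funds from the sender into the escrow account's balance
        sub_balance(from, quantity);
        add_balance(_self, quantity, from);
    }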

Let’s look at sub_balance in a little more detail:

First, we fetch the table of account objects, scoped to the sender from estransfer. This is followed by a few checks; if they all pass, we modify the account's balance by subtracting the value to be transferred. If the number of tokens to be transferred equals the total tokens available in the account, we simply delete the sender's entry from the accounts table.
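A sketch of sub_balance along those lines (again assuming the eosio.token-style API of the time):

    void escrow::sub_balance(account_name owner, eosio::asset value) {
        accounts from_acnts(_self, owner);   // table scoped to the sender

        const auto& from = from_acnts.get(value.symbol.name(), "no balance object found");
        eosio_assert(from.balance.amount >= value.amount, "overdrawn balance");

        if (from.balance.amount == value.amount) {
            // transferring the entire balance: delete the sender's entry
            from_acnts.erase(from);
        } else {
            from_acnts.modify(from, owner, [&](auto& a) {
                a.balance -= value;
            });
        }
    }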

Here is our escrow.cpp file in entirety.

This is the structure of the ABI file that documents the actions of the smart contract.

Here is the escrow.abi file:

Working with our escrow smart contract using cleos (command line utility as provided by EOSIO)

First, we need to set the contract and create a token
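The actual session is in the screenshot below; as a rough sketch (the account name escrow, the token symbol and the create action's argument list are all our assumptions), the commands look like this:

    # deploy the compiled contract to the 'escrow' account
    cleos set contract escrow /path/to/escrow -p escrow@active

    # create the token
    cleos push action escrow create '["escrow", "1000000.0000 TOK"]' -p escrow@active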

EOSIO

Second, we can check whether any account currently has a balance of the token we just created; the result should not surprise you.

EOSIO 4

Once the escrow smart contract is set and the token has been created, we can push the action that issues this token to some user, “ned”.
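A sketch of that issue action and of the follow-up balance check (argument lists assumed):

    cleos push action escrow issue '["ned", "100.0000 TOK", "memo"]' -p escrow@active
    cleos get currency balance escrow ned TOK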

EOSIO-5

Now, when we check the account for “ned” we can see the token balance for this account

blog-6

Now, we are in a position to transfer funds from “ned” to “jon” (R+L=J).
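The corresponding cleos calls might look like this sketch (amounts and authorities are our assumptions):

    # move funds from 'ned' into the escrow, earmarked for 'jon'
    cleos push action escrow estransfer '["ned", "jon", "25.0000 TOK"]' -p ned@active

    # later, release the escrowed funds to 'jon'
    cleos push action escrow esrelease '["ned", "jon", "25.0000 TOK"]' -p escrow@active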

EOSIO-7

We have used the account “escrow”, with which we deployed the smart contract, as the escrow for this transaction.

Then, we can check the balance for “escrow”

EOSIO-8

This is the final table structure after pushing the release action.

Eosio-9

 

REFERENCES:

One stop developer’s doc: https://developers.eos.io/

Great resource for understanding token contract: https://medium.com/coinmonks/understanding-the-eosio-token-contract-87466b9fdca9

Resource for pushing actions using eosjs lib: https://steemit.com/devs/@eos-asia/eos-smart-contracts-part-1-getting-started-ping-equivalent-in-eos


Upgradeability Improvement Protocol 2— Modularization 101

People don’t seem to understand the power, in the separation of powers

Introduction

In part 1 of this series, we built a simple smart contract that allows us to upgrade to a new contract by transferring each user’s data individually. Now, let’s kick into advanced mode!

Isolating the innocent data from all the violence

We think this thought must have crossed the minds of a number of Solidity developers: “What if we could deploy a new contract, but keep the old data?” And if you come to think of it, if you had another place that held all of your data, you might never have to worry. The answer to that question would solve almost all of the UpgradeAgent's limitations. Consider a traditional ERC20 token contract, where everything is in one place. Now what if the token logic (functions, events) and the data (balances[address], totalSupply, etc.) lived in 2 different places that interact with each other through external calls?

UIP2. The Logical Token (ERC-1067 draft)

Before getting to the logical token, we present the facilitator for the logical token: the DataCentre contract!

It is a very simple, yet powerful contract that stores all of the data and has a couple of getter/setter methods.


And judging by its name, the logical token will not concern itself with storage. It only has functions and, when required, forwards the menial tasks of storing and retrieving information to the DataCentre. So we can deploy a new logic part (the token) and re-reference the new token address whenever we want, without having to touch the data. Don't worry if this went over your head; let's get to the implementation.

A very simplified example of how the DataCentre contract would look for tokens, just to give you an idea, can be seen below-
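A minimal sketch under our naming assumptions-

    pragma solidity ^0.4.24;

    import "./Ownable.sol";

    contract DataCentre is Ownable {
        uint256 public totalSupply;
        mapping(address => uint256) balances;

        function getBalance(address _who) public view returns (uint256) {
            return balances[_who];
        }

        // setters are onlyOwner: after deployment the owner is the Token contract
        function setBalance(address _who, uint256 _value) public onlyOwner {
            balances[_who] = _value;
        }

        function setTotalSupply(uint256 _value) public onlyOwner {
            totalSupply = _value;
        }
    }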

Note: all the setter methods are onlyOwner functions, and the owner of the DataCentre will always be the Token contract, except during deployment.

To accommodate the DataCentre, some changes have to be made to the token contract as well.
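Sketching those changes (and assuming the DataCentre from the previous snippet is in the same file), the token's functions now read and write through the DataCentre instead of local storage:

    pragma solidity ^0.4.24;

    import "./Ownable.sol";
    import "./SafeMath.sol";

    contract Token is Ownable {
        using SafeMath for uint256;

        DataCentre public dataCentre;
        event Transfer(address indexed from, address indexed to, uint256 value);

        constructor(address _dataCentre) public {
            dataCentre = DataCentre(_dataCentre);
        }

        function balanceOf(address _who) public view returns (uint256) {
            return dataCentre.getBalance(_who);
        }

        function transfer(address _to, uint256 _value) public returns (bool) {
            dataCentre.setBalance(msg.sender, dataCentre.getBalance(msg.sender).sub(_value));
            dataCentre.setBalance(_to, dataCentre.getBalance(_to).add(_value));
            emit Transfer(msg.sender, _to, _value);
            return true;
        }

        // upgrade hook: hands the DataCentre to the new token and removes itself
        function kill(address _newToken) public onlyOwner {
            dataCentre.transferOwnership(_newToken);
            selfdestruct(owner);
        }
    }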

This is how the deployment sequence for this token architecture would look (feel free to check all of this on remix using the complete code here)-

  1. Deploy the DataCentre
  2. Deploy the token with the address of the DataCentre in params
  3. transferOwnership of the DataCentre to the Token contract.

This way we create a 2-way link between the Token and the DataCentre. This link between the contracts is very important and will be used in the coming approaches as well.

How does it upgrade?

I hope it looks very easy now. We need just 2 steps to upgrade the contract-

  1. Deploy the new token contract
  2. a. kill the old token contract and enter the new token contract address into the params.
      b. the kill function automatically calls transferOwnership of the DataCentre to the new token contract.

And that is as easy as it gets! Well, at least for now.

Notes-

  1. The 2nd transaction carries the weight of the couple thousand individual transactions of the previous approach. Screw this up, and everything is lost!
  2. The ownership is transferred to the token contract. Instead, msg.sender should be used for a better approach, because this address will surely be a private Ethereum address (unless someone gets fancy), or at least an address (maybe a multisig contract) that will surely stay active. The new contract address can then be checked and confirmed, and the ownership of the DataCentre can be transferred to the token manually. On the other hand, this also opens the door to a lot of other features, out of scope for now.

Advantages of the LogicalToken-

  1. This approach completely kicks out the first 5 problems that the UpgradeAgent approach had.
  2. The contract owners have complete control and can upgrade whenever they want. Major bugs in tokens that might hurt market sentiment can remain undisclosed.

Limitations-

  1. An increase in gas costs of about 35% due to the external calls between the logical token and the DataCentre contract.
  2. The 6th limitation of UIP 1 still remains: the contract is deployed to a new address, and the contract owners will have to re-list it everywhere. But still, a breather for the investors!
  3. The DataCentre implementation brings a small limitation with it. Once deployed, you cannot add, remove or change the public variables. This can be taken care of, though: if you consider all the possible cases, you can create a cure-all DataCentre like the following, though you still cannot add more datatypes (https://github.com/grepruby/UIP2/blob/master/contracts/DataCentre.sol).

The storage layout is a lot more complicated there, but it allows you to create different types of variables using mappings of mappings.

Though this method is better than UIP 1, it is not perfect! And we want to tend towards perfection. The next approach looks like a bumped-up version of this method. If I may, Approach 2-2.0. Or maybe not. Let's get right to part 3 of this series.

PS: This contract architecture has been submitted for an EIP as ERC-1067. Follow the issue on this link.

About Techracers

Techracers is a global leader in creating end-to-end blockchain solutions and services that help businesses create an everlasting impact for the end user in this new era of digital economy. Our engineering solutions help organizations achieve unprecedented production efficiency and facilitate continuous improvement across the product realization value stream, while accelerating the overall transformation through blockchain innovation.

Upgradeability Improvement Protocol 1 — The Upgrade Agent

                                  Once deployed to the blockchain, a contract stays there forever

Introduction

After coming from coding in languages like JavaScript, coding in Solidity feels like coding on a Nokia 3320 phone: extremely difficult. You have to be stingy and cautious, and make sure that you make no ‘oopsies’. And if you know anything about the blockchain, you know that once you deploy a contract to the mainnet, there is nothing you can do to change the code (unless you selfdestruct it), be it for a bug or an upgrade.

Everyone has heard about the dozens of companies over the years that endured helplessness and lost fortunes because of the immutability of blockchain, which is supposedly its strength. We want to help secure users against such hacks and vulnerabilities, so we have crafted a complete series (this blog being part 1) on writing upgradeable smart contracts, so that you can at least make sure you are covered.

Scope of this series

We will cover 4 Upgradeability Improvement Protocols in this series, or UIPs, as they will be referred to from here on. Each one will be covered in an individual part.

We would cover the following-

  1. A brief summary of each upgradeability protocol
  2. Sample code for each of the examples
  3. Reproduction of working on remix
  4. Advantages and limitations to each approach

Note- If you don't understand the approach from the summary, give it time to sink in until the implementation in step 3.

Remix is the tool we will use to run and test the code.

What you are expected to know

  1. A basic understanding of how to write Solidity smart contracts and how to run remix. If you want to get a head start on that, check out this post ( https://www.techracers.com/smart-contract-solidity )
  2. Having read about the modular helper contracts (Ownable, SafeMath, etc.). It will be taken for granted that you know the functions and modifiers they implement. You can take a look at them, with inline comments, here ( https://github.com/OpenZeppelin/zeppelin-solidity/blob/master/contracts/math/SafeMath.sol ) and here ( https://github.com/OpenZeppelin/zeppelin-solidity/blob/master/contracts/ownership/Ownable.sol )

Note- I will be skipping the above-mentioned modular imports so that the code remains concise.

Bringing about upgradeability

So, to track back to our introduction: what you have heard about smart contracts is not completely true. Yes, once deployed to the mainnet, there is nothing you can do to change the existing logic. Nevertheless, smart contracts can be built with certain upgradeability protocols. Let's walk through UIP 1. We will use token contracts for simplicity, but each protocol can be extended to ANY type of smart contract.

UIP1. The Upgrade Agent

In basic language, this is fair and simple. Let's assume there are 100 token holders, each with some balance. We get them to transfer their balances from the old contract to the new one, with a separate transaction for each token holder. Every holder has to upgrade themselves. Now, for the advanced readers: the old token contract has an upgrade function that each holder can call; it simply calls the new token contract and transfers the balance to it. This is mostly done when the old token contract is found to have a bug, or when the owners want to move from the old ERC20 standard to one of the new ones. An example snippet of the code can be found below.

The old token contract will look like this (highly simplified).
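A highly simplified sketch of such an OldToken follows; the names track the remix walkthrough below, and the rest is our assumption rather than the linked code.

    pragma solidity ^0.4.24;

    import "./Ownable.sol";
    import "./SafeMath.sol";

    // assumed interface of the new token (see the next snippet)
    interface UpgradeAgentI {
        function upgradeFrom(address _from, uint256 _value) external;
    }

    contract OldToken is Ownable {
        using SafeMath for uint256;

        uint256 public totalSupply = 10000;
        mapping(address => uint256) public balances;
        address public upgradeAgent;

        event Upgrade(address indexed from, uint256 value);

        constructor() public {
            balances[msg.sender] = totalSupply; // 10000 tokens to the creator
        }

        function setUpgradeAgent(address _agent) public onlyOwner {
            upgradeAgent = _agent;
        }

        // every holder calls this individually to move their balance across
        function upgrade(uint256 _value) public {
            require(upgradeAgent != address(0));
            balances[msg.sender] = balances[msg.sender].sub(_value);
            totalSupply = totalSupply.sub(_value);
            UpgradeAgentI(upgradeAgent).upgradeFrom(msg.sender, _value);
            emit Upgrade(msg.sender, _value);
        }
    }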

And the new token contract must be able to catch these tokens using the upgradeFrom function, so it should be something along these lines-
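A sketch of the NewToken that ‘catches’ the upgraded balances; again, a reconstruction under our assumptions, not the article's exact code.

    pragma solidity ^0.4.24;

    import "./SafeMath.sol";

    contract NewToken {
        using SafeMath for uint256;

        address public oldToken;
        uint256 public totalSupply;
        mapping(address => uint256) public balances;

        constructor(address _oldToken) public {
            oldToken = _oldToken;
        }

        // only the old token contract may credit balances here
        function upgradeFrom(address _from, uint256 _value) public {
            require(msg.sender == oldToken);
            balances[_from] = balances[_from].add(_value);
            totalSupply = totalSupply.add(_value);
        }
    }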

Steps to reproduce working on remix

You can find a link to the entire code with the imports here. Paste the entire code into the remix editor and compile it. You should get the contracts OldToken and NewToken in the ‘run’ tab. Let's go through the steps-

  1. Create the OldToken contract. You should get 10000 tokens as initialized in the constructor. Call the constant balances function to make sure.
  2. Create the NewToken contract. The constructor takes the address of the old token contract as a parameter.
  3. Now go back to the OldToken contract and call the setUpgradeAgent function, passing the address of the new token contract as the param.
  4. Now that the setup is done, we are ready to upgrade. Just call the upgrade function with the number of tokens you want to upgrade (you obviously cannot upgrade more than you have :P). The Upgrade event that is emitted makes sure the upgrades can be tracked; look for it in the logs part of the details in the console.
  5. Go and witness it by checking your balance in the NewToken contract.

Advantages of the UpgradeAgent-

  1. It is simple. It is usually said that the simplest solutions are the best ones.
  2. Transparent process. Everyone can look into the verified contract on etherscan and look for themselves what goes on.
  3. Users have a say in the upgrade, which lies at the heart of blockchain fundamentals.

Limitations-

All of the approaches we discuss have some limitations associated with them. It is always a trade-off between a few dozen factors when it comes to upgradeability on the blockchain.

  1. Each user has to upgrade on their own. This calls for thousands of separate transactions that the users will have to make.
  2. Too slow. Upgrade process can last for days, weeks, even years.
  3. If there is an inactive token holder, or if someone has lost their private keys, the upgrade process might never reach completion.
  4. Some users might be holding the tokens in cold storage wallets, which might make this process a bit too lengthy for them.
  5. Not every token holder is well equipped to call a smart contract function. Instructions can be given to them, but it almost causes panic in someone who doesn't understand the workings of Ethereum.
  6. The new token contract will have a new address(obviously). This means that the contract creator has to go through all the trouble of listing the contract on etherscan, exchanges, etc. again.

And that’s it for the first tutorial. You just learnt to write a simple upgradeability-enabled smart contract that allows you to transfer data to a new, improved version. Make sure you check out the complete codebase here.

But this is not what we were looking for! Read on to the next part of the series to explore the better contract architecture of UIP2.