Getting Started

This section provides an overview of Züs concepts, repository configuration, and installation.



Züs

A blockchain-based decentralized storage network.

Züs Service Providers

Users (Miners, Sharders, and Blobbers) that perform several duties necessary for functional blockchain and storage systems.

Miners

They build, verify, notarize, and finalize blocks based on consensus. Miners need to store wallet and smart contract states on their ledger to ensure submitted transactions are executed correctly.

Sharders

They store blocks, keep records of View Changes through magic blocks, and respond to queries from users. Anyone joining the network queries the Sharders to determine the active members of the blockchain.

Blobbers

They store data of any size and provide a single source of truth for that data.

Consensus

Mechanisms used in blockchain systems to achieve the necessary agreement on a single state of the network.

ZCN

The Züs native token for rewarding service providers.

System requirements

To properly deploy Züs components, you must have a virtual machine set up with the following requirements:

  • Linux (Ubuntu preferred)

  • 4 vCPU, 8 GB memory at minimum

  • 200 GB of space to store the initial 0Chain deployment components, plus expandable block storage to handle the network's growing needs.

Note: These are the minimal requirements to run the deployment.
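The minimums above can be checked quickly on an existing machine. A sketch using standard Linux tools (`nproc`, `/proc/meminfo`, and GNU `df`):

```shell
# Rough pre-flight check against the minimum requirements above (Linux only)
echo "vCPUs: $(nproc)"                                                    # want >= 4
awk '/MemTotal/ {printf "Memory: %.1f GiB\n", $2/1048576}' /proc/meminfo  # want >= 8
df -BG / | awk 'NR==2 {print "Free disk on /: " $4}'                      # want >= 200G
```

If any value falls short, resize the VM before starting the deployment.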

Required Software dependencies

Installing and running the Züs components requires deployment-specific dependencies to be preinstalled.


Installing MicroK8s

0miner automates 0Chain deployment using Kubernetes distribution MicroK8s.

The easiest way to install MicroK8s as a root user is:

sudo snap install microk8s --classic --channel=1.17/stable

Here we have used the older 1.17 stable release of MicroK8s via the --channel option. To install the latest stable version instead, simply run:

snap install microk8s --classic

Starting MicroK8s

To check whether MicroK8s has installed properly, start the service using:

sudo snap start microk8s

and check its status using:

sudo microk8s status --wait-ready

The output of the status command will look like this:

microk8s is running
cilium: disabled
dashboard: disabled
dns: disabled
fluentd: disabled
gpu: disabled
helm: disabled
ingress: disabled
istio: disabled
jaeger: disabled
knative: disabled
linkerd: disabled
metrics-server: disabled
prometheus: disabled
rbac: disabled
registry: disabled
storage: disabled

As you can see, MicroK8s is running as expected, and the status output lists the add-on components MicroK8s provides.

jq (JSON processor)

The 0miner deployment needs jq to process JSON. The easiest way to install it is:

sudo apt update && sudo apt install jq -y
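As a quick check that jq is working, you can extract a field from an inline JSON snippet (the field names here are purely illustrative):

```shell
# -r prints the raw string value instead of a quoted JSON string
echo '{"cluster_name": "test", "miner_count": "1"}' | jq -r '.cluster_name'
# prints: test
```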


Kubectl

kubectl is a command-line tool for running commands against Kubernetes clusters. In the 0miner deployment, kubectl can be used to inspect and manage cluster resources.

To install kubectl on Ubuntu, execute the following commands:

curl -LO "$(curl -s <kubectl-stable-release-url>)/bin/linux/amd64/kubectl"

chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

The curl command downloads the kubectl binary for your operating system, while the other two commands make it executable and move it into a directory on your PATH.

Python3 & pip3

To build 0Chain components, you need a working installation of Python 3 and pip. Get them by executing the following commands:

sudo apt update && sudo apt install python3-pip -y
pip3 install -U PyYAML
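A quick way to confirm the Python toolchain is in place (a sketch; any Python 3.x works):

```shell
python3 --version   # should report Python 3.x
# Check that PyYAML imports cleanly; if not, rerun: pip3 install -U PyYAML
python3 -c 'import yaml' 2>/dev/null && echo "PyYAML installed" || echo "PyYAML missing"
```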


Helm

To maintain the 0miner Kubernetes components, Helm is required as a package manager. To install Helm, use the following command:

curl | bash

Installing Züs components on microK8s

Once all the software dependencies are installed, follow the steps below.

1. Configure kubectl on the host by generating the required config:

sudo su
mkdir ~/.kube
microk8s config > ~/.kube/config
kubectl get po -A

2. Clone the repository:

git clone

3. Change directory to 0miner/0chain_setup:

cd 0miner/0chain_setup

4. Run the script located in the utility directory to install the k8s components:

bash utility/

5. Also, install the required Python packages listed in the requirements text file:

pip3 install -r utility/requirements.txt
6. During installation, you will be required to input your VM IP and public IP range.

    For example, if your VM IP is <vm-ip>, enter the corresponding IP range.

Configuring 0Chain components

The configuration of the 0Chain components is done statically via the on-prem_input_microk8s_standalone.json file located in the 0chain_setup/utility/config folder.

  1. Navigate to the 0chain_setup/utility/config folder

    cd utility/config
  2. Edit the config file using the nano editor:

     nano on-prem_input_microk8s_standalone.json

Here is a sample on-prem_input_microk8s_standalone.json file:

{
  "cloud_provider": "on-premise",
  "cluster_name": "test",          // Namespace in which all your resources will be created
  "sharder_count": "1",            // number of sharders you want to deploy
  "miner_count": "1",              // number of miners you want to deploy
  "blobber_count": "1",            // number of blobbers you want to deploy
  "deploy_main": true,
  "deploy_auxiliary": true,
  "host_address": "<your-domain>", // Host URL for your public IP
  "host_ip": "",                   // Host IP
  "kubeconfig_path": "",           // path to your kubeconfig; keep it empty to use the system-configured kubeconfig
  "n2n_delay": "",                 // Delay between nodes to slow down block creation
  "fs_type": "microk8s-hostpath",  // valid file system type (on-premise) [standard / microk8s-hostpath / openebs-cstore-sc]
  "repo_type": "0chaintest",       // Repository to use: 0chainkube or 0chaintest
  "image_tag": "latest",           // image version to be used
  "record_type": "A",              // DNS record type supported by the cloud provider (AWS) [CNAME] || (OCI) [A]
  "deployment_type": "public",     // "public" or "private" deployment
  "monitoring": {
    "elk": "true",                 // always true
    "elk_address": "elastic.<your-domain>", // leave empty to access ELK on a NodePort
    "rancher": "true",
    "rancher_address": "rancher.<your-domain>",
    "grafana": "true",
    "grafana_address": "grafana.<your-domain>" // leave empty to access Grafana on a NodePort
  },
  "on_premise": {
    "environment": "microk8s",
    "host_ip": ""                  // Host IP
  },
  "standalone": {
    "public_key": "",
    "private_key": "",
    "network_url": "",             // URL of the network you want to join
    "blobber_delegate_ID": "20bd2e8feece9243c98d311f06c354f81a41b3e1df815f009817975a087e4894",
    "read_price": "",
    "write_price": "",
    "capacity": ""
  }
}

3. Necessary configuration changes:

  • The elk_address, rancher_address, and grafana_address fields have to be updated with your registered domain name.

  • The host_ip field has to be updated with the Virtual Machine's public IPv4 address.

  • The host_address field has to be updated with a registered domain name.

  • To change the network, the network_url field has to be set to the URL of the network you want to join.

4. Create DNS records for your registered domain name under the domain settings for your VM instance.

5. Add the following A-type DNS records to connect the domain to the IP associated with the VM.

DNS Record Name         | DNS Record Type | Value/Route Traffic to
<your-domain>           | A               | <vm-public-ip>
elastic.<your-domain>   | A               | <vm-public-ip>
rancher.<your-domain>   | A               | <vm-public-ip>
grafana.<your-domain>   | A               | <vm-public-ip>
Here is an example with a sample domain and instance IP, following the same record layout as above.

6. After creating the four records mentioned above, you should automatically be provided with an NS DNS record containing name servers. These should be copied into the nameserver settings of your domain registrar (GoDaddy, HostGator, NameCheap). For example, see the screenshot below of the AWS domain settings for our VM instance: AWS provides name servers under the NS DNS record type, and these should be entered in the domain registrar's nameserver settings.
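Before starting the components, it can help to sanity-check the edited config: confirm it is valid JSON and that the required fields are non-empty. A sketch using jq; the stub file below merely stands in for utility/config/on-prem_input_microk8s_standalone.json (with the //-style comments removed, since they are not valid JSON):

```shell
# Create a stub config standing in for the real on-prem_input_microk8s_standalone.json
cat > /tmp/on-prem_config.json <<'EOF'
{"cluster_name": "test", "host_address": "example.com", "host_ip": "203.0.113.10"}
EOF

# jq -e exits non-zero if the file is invalid JSON or the check fails
jq -e '.host_ip != "" and .host_address != ""' /tmp/on-prem_config.json >/dev/null \
  && echo "required fields set" \
  || echo "fill in host_ip and host_address first"
# prints: required fields set
```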

Start the Züs components

  1. Execute the deployment script with the JSON config file using bash:

bash --input-file utility/config/on-prem_input_microk8s_standalone.json

During the first run, the necessary parameter data is initialized and all the required components are installed. This can involve several gigabytes of data, downloaded only once. We recommend waiting until the whole process is completed.

2. Once the installation is complete, verify and validate the deployment through https://<network_url>/sharder01/_diagnostics

3. Here is a sample output when you visit the network URL. Note that the network URL changes with the configuration in the JSON file.

Managing 0Chain Components

Once the script is executed, the Kibana and Grafana dashboards are available at the URLs specified in the configuration file for metrics and logging. Cluster pods can be managed using Rancher, which will be available at rancher.<domain-name>.

Below are examples of how Grafana, Kibana, and Rancher look after the deployment.


Metrics using Grafana

1. Enter the Grafana domain you specified in the config file (for example, grafana.<your-domain>) into your browser. On a successful response, you will see a window to log in to the dashboard.

2. Sign in with your credentials.

3. After a successful login, you will see graphs and resource usage (CPU, memory) for the blobbers deployed using 0miner, and you can see the test namespace being created. Also, to provide a durable location for all blobber, sharder, and miner data, persistent volume claims (PVCs) are created.

4. As you navigate down, you will see resource usage (CPU and memory) for the sharders and miners as well. Whenever data is accessed or the number of deployed 0miner components increases, these resource usages will change.

5. You can also create custom dashboards by clicking the plus button in the upper left (see the dashboard screenshot in step 3) to view usage for specific metrics, for instance etcd object counts for API objects or the state of pods in the cluster; it will return visualization graphs. You only have to select the metric in the dropdown. In the dashboard screenshot below, we have selected `etcd_object_counts`.

Logs using Kibana

1. Enter the Elastic Kibana domain you specified in the config file (for example, elastic.<your-domain>) into your browser and log in to the dashboard.

2.Sign in with your credentials.

Note: for every deployment a new password is generated, so check your command shell during the script execution for the password.

3. After a successful login, you will see logging activity for the sharders and other components; the graph count describes the total number of logging events, and you can see the test namespace being created. You can filter these logs based on fields and visualize graphs by searching in the 'Search field names' box in the upper left corner.

4. For example, let's try to see logging events for the kubernetes.pod.name field. Once you search for this field in the 'Search field names' box, it will be listed among the available fields to add.

5. Click on the field. You will see a visualization graph of logging activity, in descending order, for the different types of pods in the cluster.

Manage Pods using Rancher

1. Enter the Rancher domain you specified in the config file (for example, rancher.<your-domain>) into your browser and log in to the dashboard.

2. Rancher will ask you to create a random or specific password for the admin user, choose a default view for the dashboard, and agree to the terms and conditions.

3. After a successful login, it will ask you to select your test cluster, and you will see your cluster dashboard with cluster statistics (pods, namespaces, services, etc.). The metrics count describes the number of pods used, the cores, and the memory reserved.

4. From here, you can click on the shell icon in the upper right, which opens a kubectl shell for running cluster commands.

5. Here is a screenshot of the kubectl shell running in the Rancher dashboard.

6. To manage pods in the cluster, for example to increase CPU and memory resources for 0Chain components, simply click on Pods in the dropdown menu on the left. You will see a list of the pods already created. To create a new pod, click 'Create from YAML' in the upper right corner.

7. You will see a sample YAML config for pods. Make any desired changes and click Create. You will see your new pods listed in the Pods section.

You can also edit existing deployments to increase CPU and memory resources: go to the Deployments drop-down menu on the left > click the three dots > Edit YAML > change the resource values and save.
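Raising CPU or memory happens in the container's resources block of the deployment YAML. A hypothetical sketch of the relevant fragment (the container name and values are illustrative, not taken from the actual 0miner manifests):

```yaml
# Fragment of a Deployment spec: raise requests/limits for one container
spec:
  template:
    spec:
      containers:
        - name: miner            # illustrative container name
          resources:
            requests:
              cpu: "500m"        # guaranteed CPU
              memory: "1Gi"      # guaranteed memory
            limits:
              cpu: "2"           # hard CPU cap
              memory: "4Gi"      # hard memory cap
```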

💻 Community

For support related to the 0miner deployment, check our community forum.

If you are interested in manually setting up 0Chain, please have a look at the 0chain and blobber repos.

You might also want to check out our 0Chain blog.


All of our code for 0Chain is open source. No matter your level of expertise, you can help us make 0Chain better by sending a pull request.

  • Mine Data: For those looking to provide services as a storage provider

  • Store Data: For those looking for decentralized storage.


For discussions, join our community chat.



