Getting Started
This section provides an overview of Züs concepts, repository configuration, and installation.
Concepts
Züs
A blockchain-based decentralized storage network
Züs Service Providers
Network participants (Miners, Sharders, and Blobbers) that perform the duties necessary for functional blockchain and storage systems.
Miners
They build, verify, notarize, and finalize blocks based on consensus. Miners need to store wallet and smart contract states on their ledger to ensure submitted transactions are executed correctly.
Sharders
They store blocks, keep records of View Changes through magic blocks, and respond to queries from users. Anyone joining the network queries the Sharders to determine the active members of the blockchain.
Blobbers
They store data of any size and provide a single source of truth for that data
Consensus
Mechanisms used in blockchain systems to reach the necessary agreement on a single state of the network
ZCN
Züs native token for rewarding service providers
System requirements
To properly deploy the Züs components, you must have a virtual machine set up that meets the following requirements:
Linux (Ubuntu Preferred)
4 vCPUs and 8 GB of memory at minimum
200 GB of space to store the initial 0Chain deployment components, plus expandable block storage to handle the network's growing needs.
Note: These are the minimum requirements to run the deployment.
Required Software dependencies
Installing and running the Züs components requires the following deployment-specific dependencies to be preinstalled.
MicroK8s
Installing MicroK8s
0miner automates 0Chain deployment using the MicroK8s Kubernetes distribution.
The easiest way to install MicroK8s as a root user is with snap.
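For example, assuming snapd is available on the host:

```bash
# Pin the 1.17/stable channel used in this guide
sudo snap install microk8s --classic --channel=1.17/stable
```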
Here we used the older 1.17 stable release of MicroK8s via the --channel option. To install the latest stable version instead, simply run:
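The same snap command without the channel pin:

```bash
sudo snap install microk8s --classic
```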
Starting MicroK8s
To check whether MicroK8s has installed properly, start the service using:
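Assuming the snap-installed CLI (on recent releases the un-dotted form microk8s start is equivalent):

```bash
sudo microk8s.start
```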
and check its status using:
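Again with the snap-installed CLI; the --wait-ready flag blocks until the cluster reports it is ready:

```bash
sudo microk8s.status --wait-ready
```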
The status command's output shows that MicroK8s is running as expected, together with a long list of the add-on components that MicroK8s provides.
jq (JSON processor)
jq is a lightweight command-line JSON processor that the 0miner deployment needs. The easiest way to install it is:
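For example, with apt on Ubuntu:

```bash
sudo apt-get update && sudo apt-get install -y jq
```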
Kubectl
kubectl is a command-line tool for running commands against Kubernetes clusters. In the 0miner deployment it is used to inspect and manage cluster resources.
To install kubectl on Ubuntu, execute the following commands:
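A typical install that follows the upstream Kubernetes release path (the download URL below is the standard one, not taken from this guide):

```bash
# Download the latest stable kubectl binary for Linux (amd64)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Make it executable and move it onto the PATH
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```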
The curl commands download the latest stable kubectl binary for your operating system, while the other two commands make it executable and move it into your PATH.
Python3 & pip3
To build the 0Chain components, you need a working installation of Python 3 and its libraries. Get it by executing the following commands:
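For example, with apt:

```bash
sudo apt-get install -y python3 python3-pip
```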
Helm
To maintain the 0miner Kubernetes components, Helm is required as a package manager. To install Helm, use the following command:
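For example, via snap (the official get-helm-3 install script is an equally valid alternative):

```bash
sudo snap install helm --classic
```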
Installing Züs components on MicroK8s
Once all the software dependencies are installed, follow the steps below (a combined command sketch follows the list):
1. Configure kubectl on the host by generating the required config.
2. Clone the repository.
3. Change directory to 0miner/0chain_setup.
4. Run the batch file located in the directory to install the Kubernetes components.
5. Install the required Python packages listed in the requirements text file.
During installation, you will be asked to enter your VM IP and a public IP range. For the IP range, if your VM IP is 3.134.116.182, enter 3.134.116.182-3.134.116.182.
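A combined sketch of the five steps above, assuming a fresh Ubuntu host; the repository URL, the setup-script name, and the path to the requirements file are placeholders to be checked against the 0miner repository:

```bash
# 1. Point kubectl at the MicroK8s cluster
mkdir -p ~/.kube
sudo microk8s.config > ~/.kube/config

# 2. Clone the repository (URL assumed; use the official 0miner repository)
git clone https://github.com/0chain/0miner.git

# 3. Change directory to 0miner/0chain_setup
cd 0miner/0chain_setup

# 4. Run the setup script shipped in this directory (replace with its actual file name)
bash ./<setup-script>.sh

# 5. Install the required Python packages (adjust the path to the requirements file)
pip3 install -r requirements.txt
```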
Configuring 0Chain components
The configuration of the 0Chain components is done statically via the on-prem_input_microk8s_standalone.json file located in the 0chain_setup/utility/config folder.
1. Navigate to the 0chain_setup/utility/config folder.
2. Edit the config file using the nano editor.
Here is a sample on-prem_input_microk8s_standalone.json file.
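The snippet below only prints an illustrative skeleton of the fields discussed in the next step; every value is a placeholder, and the real file contains additional fields that can be left at their defaults:

```bash
cat <<'EOF'
{
  "host_ip": "3.134.116.182",
  "host_address": "zerominer.xyz",
  "network_url": "beta.0chain.net",
  "elk_address": "kibana.zerominer.xyz",
  "grafana_address": "grafana.zerominer.xyz",
  "rancher_address": "rancher.zerominer.xyz"
}
EOF
```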
3. Make the necessary configuration changes:
* The elk_address, rancher_address, and grafana_address fields must be updated with your registered domain name.
* The host_ip field must be updated with the virtual machine's public IPv4 address.
* The host_address field must be updated with a registered domain name.
* To change the network, the network_url field must be replaced with the URL of the network you want to join. In this case it should be beta.0chain.net.
4. Create DNS records for your registered domain name under the domain settings for your VM instance.
5. Add the following A-type DNS records to connect the domain to the IP associated with the VM:
* <DOMAIN_NAME> A <VM_INSTANCE_IP>
* <grafana.DOMAIN_NAME> A <VM_INSTANCE_IP>
* <kibana.DOMAIN_NAME> A <VM_INSTANCE_IP>
* <rancher.DOMAIN_NAME> A <VM_INSTANCE_IP>
6. Here is an example with a sample domain (zerominer.xyz) and instance IP (3.22.152.211):
* zerominer.xyz A 3.22.152.211
* grafana.zerominer.xyz A 3.22.152.211
* kibana.zerominer.xyz A 3.22.152.211
* rancher.zerominer.xyz A 3.22.152.211
7. After you create the four records above, you should automatically be provided with an NS DNS record containing the name servers, which should be copied into the nameserver settings at your domain registrar (GoDaddy, HostGator, NameCheap).
For example, see the screenshot below of the AWS domain settings for our VM instance. In this case, AWS provides nameservers (ns-1466.awsdns-55.org) under the NS DNS record type, which should be updated in the domain registrar's nameserver settings.
Start the Züs components
1. Execute the setup script with the JSON configuration file using bash.
During the first run, the necessary parameter data is initialized and all the required components are installed. This can involve gigabytes of data, installed once; we recommend waiting until the whole process has completed.
2. Once the installation is complete, verify and validate the deployment through https://<network_url>/sharder01/_diagnostics (see the command-line check after this list).
3. Here is a sample output when you visit the network URL (https://beta.0chain.net/sharder01/_diagnostics). Note that the network URL changes with the configuration in the JSON file.
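A quick way to run the same check from the command line; replace <network_url> with the value configured in the JSON file (for example, beta.0chain.net):

```bash
# Query a sharder's diagnostics endpoint
curl -k "https://<network_url>/sharder01/_diagnostics"

# Confirm that the cluster pods themselves came up
kubectl get pods --all-namespaces
```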
Managing 0Chain Components
Once the script has executed, the Kibana and Grafana dashboards are available at the URLs specified in the configuration file for logging and metrics. Cluster pods can be managed using Rancher, which will be available at rancher.<domain-name>.
Below are examples of how Grafana, Kibana, and Rancher look after the deployment.
Grafana
1. Enter the Grafana domain you specified in the config file (for example, grafana.example.com) into your browser. On a successful response you will see a window to log in to the dashboard.
2. Sign in with your credentials.
3. After a successful login, you will see graphs and resource usage (CPU, memory) for the blobbers deployed using 0miner. You can see the test namespace being created. Also, to provide a durable location for the data of all blobbers, sharders, and miners, persistent volume claims (PVCs) are created.
4. As you navigate down, you will see resource usage (CPU and memory) for sharders and miners as well. Whenever data is accessed or the number of deployed 0miner components increases, these resource-usage figures will change.
5. You can also create custom dashboards by clicking the plus button on the upper left (see the dashboard screenshot in step 3) to chart particular metrics. For instance, if you look up etcd (key/value) object counts for API objects or the state of pods in the cluster, it will return visualized graphs. You only have to select that metric in the dropdown; in the dashboard screenshot below we have selected `etcd_object_counts`.
Logs using Kibana
1. Enter the Elastic Kibana domain you specified in the config file (for example, kibana.example.com) into your browser and log in to the dashboard.
2. Sign in with your credentials.
Note: a new password is generated for every deployment, so watch your command shell during script execution for the password.
3. After a successful login you will see logging activity for sharders and other components. The graph count describes the total logging events, and you can see the test namespace being created. You can filter these logs based on fields and visualize graphs by searching in the search field names box in the upper left corner.
4. For example, let's look at logging events for the kubernetes.pod.name field. Once you search for this field in the search field names box, it will be listed under the available fields to add.
5. Click on the field. You will see a graph visualizing logging activity, in descending order, for the different types of pods in the cluster.
Manage Pods using Rancher
1. Enter the Rancher domain you specified in the config file (for example, rancher.example.com) into your browser and log in to the dashboard.
2. Rancher will ask you to create a random or specific password for the admin user, choose the default view for the dashboard, and agree to the terms and conditions.
3. After a successful login, it will ask you to select your test cluster, and you will see your cluster dashboard listed with cluster statistics (pods, namespaces, services, etc.). The metrics count describes the number of pods used, cores, and memory reserved.
4. From here you can click the shell icon in the upper right, which launches a kubectl shell for running cluster commands.
5. Here is a screenshot of the kubectl shell running in the Rancher dashboard.
6. To manage pods in the cluster and increase CPU and memory resources for 0Chain components, simply click Pods in the drop-down menu on the left. You will see a list of the pods already created. To create a new pod, click Create from YAML in the upper right corner.
7. You will see a sample YAML config for pods. Make any desired changes and click Create. You will see your new pods listed in the Pods section.
You can also edit existing deployments to increase CPU and memory resources: go to the Deployments drop-down menu on the left > click the three dots > Edit YAML > change the resource values and save. A command-line equivalent is sketched below.
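For reference, a hedged command-line sketch of the same resource change; the deployment name and namespace are placeholders, and the CPU/memory values are only examples:

```bash
# List the deployments created by 0miner
kubectl get deployments --all-namespaces

# Raise the resource requests and limits on one of them
kubectl set resources deployment <deployment-name> -n <namespace> \
  --requests=cpu=1,memory=2Gi --limits=cpu=2,memory=4Gi
```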
💻 Community
For support related to the 0miner deployment, check our community forum at https://community.0chain.net/.
If you are interested in manually setting up 0Chain, please have a look at the 0chain and blobber repos.
You might also want to check out our 0Chain blog here.
Contribute
All of our code for 0Chain is open source. No matter your level of expertise, help us make 0Chain better by sending a pull request here.
Mine Data: For those looking to provide services as a storage provider
Store Data: For those looking for decentralized storage.
Chat
For discussions, join our community chat.