# Getting Started

This section provides an overview of Züs concepts, repository configuration, and installation.

### Concepts

**Züs**

A blockchain-based decentralized storage network

**Züs Service Providers**

Participants (Miners, Sharders, and Blobbers) that perform the duties necessary for a functional blockchain and storage system.

**Miners**

They build, verify, notarize, and finalize blocks based on consensus. Miners need to store wallet and smart contract states on their ledger to ensure submitted transactions are executed correctly.

**Sharders**

They store blocks, keep records of View Changes through magic blocks, and respond to queries from users. Anyone joining the network queries the Sharders to determine the active members of the blockchain.

**Blobbers**

They store data of any size and provide a single source of truth for that data

**Consensus**

Mechanisms used in blockchain systems to achieve the necessary agreement/conditions on a single state of the network

**ZCN**

Züs native token for rewarding service providers

### System requirements

To properly deploy Züs components, you must have a virtual machine set up with the following requirements:

* Linux (Ubuntu preferred)
* 4 vCPUs and 8 GB of memory at minimum
* 200 GB of space to store the initial 0Chain deployment components, with expandable block storage to handle the network's growing needs

Note: These are the minimum requirements to run the deployment.
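As a quick sanity check, the sketch below (assuming a Linux host with GNU coreutils; `nproc`, `free`, and `df` are standard tools, and the figures simply mirror the requirements above) prints the resources available on the VM:

```bash
# Quick pre-flight check against the minimums above: 4 vCPUs, 8 GB memory, 200 GB disk.
echo "vCPUs:          $(nproc)"
echo "Memory (GB):    $(free -g | awk '/^Mem:/ {print $2}')"
echo "Free disk on /: $(df -BG --output=avail / | tail -1 | tr -d ' ')"
```

If any figure falls below the minimum, resize the VM before continuing.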

### Required software dependencies

Installing and running the Züs components requires deployment-specific dependencies to be preinstalled.

#### **MicroK8s:**

**Installing MicroK8s**

0miner automates 0Chain deployment using the MicroK8s Kubernetes distribution.

The easiest way to install MicroK8s as a root user is:

```bash
sudo snap install microk8s --classic --channel=1.17/stable
```

Here the `--channel` option pins an older 1.17 stable release of MicroK8s. To install the latest stable version instead, simply run:

```bash
snap install microk8s --classic
```

**Starting MicroK8s**

To check whether MicroK8s was installed properly, start the service:

```
sudo snap start microk8s
```

and check its status using

```
sudo microk8s status --wait-ready
```

The output of the status command should look like this:

```
microk8s is running
addons:
cilium: disabled
dashboard: disabled
dns: disabled
fluentd: disabled
gpu: disabled
helm: disabled
ingress: disabled
istio: disabled
jaeger: disabled
knative: disabled
linkerd: disabled
metrics-server: disabled
prometheus: disabled
rbac: disabled
registry: disabled
storage: disabled
```

As you can see, MicroK8s is running as expected, along with a list of the add-on components it provides.

**jq (JSON processor)**

The 0miner deployment scripts use jq to process JSON. The easiest way to install it is:

```bash
sudo apt update && sudo apt install jq -y
```
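As a quick smoke test that jq is installed, the hypothetical one-liner below extracts a field from a JSON fragment shaped like the 0miner config:

```bash
# Pull the host_ip field out of a small JSON document with jq.
echo '{"cluster_name": "test", "host_ip": "18.217.219.7"}' | jq -r '.host_ip'
# prints 18.217.219.7
```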

**Kubectl**

[kubectl](https://kubernetes.io/docs/reference/kubectl/kubectl/) is a command-line tool for running commands against Kubernetes clusters. In a 0miner deployment, kubectl is used to inspect and manage cluster resources.

To install kubectl on Ubuntu, execute the following commands:

```
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```

The curl command downloads the latest stable kubectl binary for Linux, while the other two commands make the binary executable and move it into `/usr/local/bin` so that it is available on your `PATH`.

**Python3 & pip3**

To build 0Chain components, you need working installations of Python 3 and pip3. Get them by executing the following commands:

```bash
sudo apt update && sudo apt install python3-pip -y
pip3 install -U PyYAML
```

**Helm**

To maintain 0miner's Kubernetes components, Helm is required as a package manager. To install Helm, use the following command:

```
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
```

### Installing Züs components on microK8s

Once all the software dependencies are installed, follow these steps:

1. Configure kubectl on the host by generating the required config:

```
sudo su
mkdir ~/.kube
microk8s config > ~/.kube/config
kubectl get po -A
```

2. Clone the repository:

```bash
git clone https://github.com/0chain/0miner.git
```

3. Change directory to `0miner/0chain_setup`:

```
cd 0miner/0chain_setup
```

4. Run the shell script located in the directory to install the k8s components:

```
bash utility/local_k8s.sh
```

5. Install the required Python packages listed in the requirements file:

```
pip3 install -r utility/requirements.txt
```

6. During installation, you will be prompted for your VM IP and a public IP range.

   For the IP range, if your VM IP is 3.134.116.182, enter 3.134.116.182-3.134.116.182
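The single-address range asked for above can be sketched in shell as follows (`VM_IP` is a placeholder for your own public IP):

```bash
# Build the "<ip>-<ip>" range string expected by the installer prompt.
VM_IP="3.134.116.182"        # replace with your VM's public IP
IP_RANGE="${VM_IP}-${VM_IP}"
echo "$IP_RANGE"             # prints 3.134.116.182-3.134.116.182
```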

### Configuring 0Chain components

The configuration for 0Chain components is done statically via the `on-prem_input_microk8s_standalone.json` file located in the `0chain_setup/utility/config` folder.

1. Navigate to the `0chain_setup/utility/config` folder

   ```
   cd utility/config
   ```
2. Edit the config file using the nano editor

   ```
    nano on-prem_input_microk8s_standalone.json
   ```

Here is a sample `on-prem_input_microk8s_standalone.json` file. Note that the `//` comments are annotations only and must be removed from your actual file, since JSON does not allow comments.

```
{
  "cloud_provider": "on-premise",
  "cluster_name": "test",      // Namespace in which all your resources will be created
  "sharder_count": "1",        // number of sharder you want to deploy 
  "miner_count": "1",         // number of miner you want to deploy 
  "blobber_count": "1",      // number of blobber you want to deploy 
  "deploy_main": true, 
  "deploy_auxiliary": true,
  "host_address": "<your-domain>", // Host url for your public IP 
  "host_ip": "18.217.219.7",       // Host ip 
  "kubeconfig_path": "",      // path to your kubeconfig, keep it empty to use system configured kubeconfig
  "n2n_delay": "",           // Delay between node to slow down block creation
  "fs_type": "microk8s-hostpath", // valid file system type (On-premise) [standard/ microk8s-hostpath/ openebs-cstore-sc]
  "repo_type": "0chaintest", // Repository to use 0chainkube or 0chaintest
  "image_tag": "latest",     // image version to be used 
  "record_type": "A",        // Dns record type supported by cloud provider (AWS) [CNAME] || (OCI) [A]
  "deployment_type": "public",   // Use of deployment "PUBLIC" or "PRIVATE"
  "monitoring": {
    "elk": "true", // always true 
    "elk_address": "elastic.<your-domain>", // leave empty if you want to access elk on nodeport
    "rancher": "true",
    "rancher_address": "rancher.<your-domain>",
    "grafana": "true",
    "grafana_address": "grafana.<your-domain>" // leave empty if you want to access grafana on nodeport
  },
  "on_premise": {
    "environment": "microk8s", 
    "host_ip": "18.217.219.7" // Host ip
  },
  "standalone": {
    "public_key": "",
    "private_key": "",
    "network_url": "two.devnet-0chain.net", // url of the network you want to join
    "blobber_delegate_ID": "20bd2e8feece9243c98d311f06c354f81a41b3e1df815f009817975a087e4894",
    "read_price": "",
    "write_price": "",
    "capacity": ""
  }
}
```

3\. Necessary configuration changes:

* The `elk_address`, `rancher_address`, and `grafana_address` fields have to be updated with subdomains of your registered domain name.
* The `host_ip` field has to be updated with the Virtual Machine's public IPv4 address.
* The `host_address` field has to be updated with a registered domain name.
* To change the network, the `network_url` field has to be replaced with the URL of the network you want to join. In this case it should be `beta.0chain.net`.
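Because JSON itself does not allow comments, a quick way to sanity-check your edited file is to strip the `//` annotations and let jq validate the rest. This is a sketch that assumes jq is installed and that no field value itself contains `//`:

```bash
# Strip "//" annotations, then let jq confirm the remainder is valid JSON.
# Against the real file, run:
#   sed 's|//.*$||' on-prem_input_microk8s_standalone.json | jq empty
printf '{"cluster_name": "test", // annotation\n "host_ip": "18.217.219.7"}\n' \
  | sed 's|//.*$||' | jq empty && echo "valid JSON"
```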

4\. Create DNS records for your registered domain name under the domain settings for your VM instance.

5\. Add the following A-type DNS records to connect the domain to the IP associated with the VM.

| DNS Record Name         | DNS Record Type | Value/Route Traffic to |
| ----------------------- | --------------- | ---------------------- |
| \<DOMAIN\_NAME>         | A               | \<VM\_INSTANCE\_IP>    |
| \<grafana.DOMAIN\_NAME> | A               | \<VM\_INSTANCE\_IP>    |
| \<kibana.DOMAIN\_NAME>  | A               | \<VM\_INSTANCE\_IP>    |
| \<rancher.DOMAIN\_NAME> | A               | \<VM\_INSTANCE\_IP>    |

Here is an example with a sample domain (zerominer.xyz) and Instance IP(3.22.152.211)

| DNS Record Name       | DNS Record Type | Value/Route Traffic to |
| --------------------- | --------------- | ---------------------- |
| zerominer.xyz         | A               | 3.22.152.211           |
| grafana.zerominer.xyz | A               | 3.22.152.211           |
| kibana.zerominer.xyz  | A               | 3.22.152.211           |
| rancher.zerominer.xyz | A               | 3.22.152.211           |
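Expressed as a BIND-style zone file fragment (illustrative only; the TTL of 300 is an arbitrary choice and your DNS provider's interface may differ), the example records above would be:

```
zerominer.xyz.          300  IN  A  3.22.152.211
grafana.zerominer.xyz.  300  IN  A  3.22.152.211
kibana.zerominer.xyz.   300  IN  A  3.22.152.211
rancher.zerominer.xyz.  300  IN  A  3.22.152.211
```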

6\. After creating the four records above, you should automatically be provided with an `NS` DNS record containing name servers, which should be copied into the nameserver settings of your domain registrar (GoDaddy, HostGator, NameCheap).

For example, see the screenshot below of the AWS domain settings for our VM instance. In this case AWS provides nameservers (ns-1466.awsdns-55.org) under the `NS` DNS record type, which should be entered in the domain registrar's nameserver settings.

![](/files/-MZVZhO9GCrA-_YNHKx9)

### Start the Züs components

1. Run the setup script with the JSON configuration file as input:

```
bash 0chain-standalone-setup.sh --input-file utility/config/on-prem_input_microk8s_standalone.json
```

During the first run, the necessary parameter data is initialized and all the necessary components are installed; this one-time setup can involve downloading gigabytes of data. We recommend waiting until the whole process completes.

2\. Once the installation is complete, verify and validate the deployment through `https://<network_url>/sharder01/_diagnostics`.

3\. Here is a sample output when you visit the network URL ([https://beta.0chain.net/sharder01/\_diagnostics](https://two.devnet-0chain.net/sharder01/_diagnostics)). Note that the network URL changes with the configuration in the JSON file.

![](/files/-MYkZUatcatrWP1Gv4sr)

### **Managing 0Chain Components**

Once the script is executed, the Kibana and Grafana dashboards are available at the URLs specified in the configuration file for metrics and logging. Cluster pods can be managed using Rancher, which will be available at rancher.\<domain-name>.

Below are examples of how Grafana, Kibana, and Rancher look after deployment.

**Grafana**

1. Enter the Grafana domain you specified in the config file (for example, grafana.example.com) into your browser. On a successful response, you will see a login window for the dashboard.

![](/files/-MYk_x_F1k8wvbjsnBo5)

2. Sign in with your credentials.

3. After a successful login, you will see graphs and resource usage (CPU, memory) for the blobbers deployed using 0miner, and you can see the test namespace being created. Also, to provide a durable location for all blobber, sharder, and miner data, persistent volume claims (PVCs) are created.

![](/files/-MYkaatAX8-LPmhgi4x-)

4. As you scroll down, you will see resource usage (CPU and memory) for Sharders and Miners as well. Whenever data is accessed or the number of deployed 0miner components increases, these resource usage figures will change.

![](/files/-MYkbUXyhQ_bqmuQNBCV)

![](/files/-MYkbPCVue8DPndytmO7)

5. You can also create custom dashboards by clicking the plus button at the upper left (see the dashboard screenshot in step 3) to visualize specific metrics, for instance the etcd (key/value) object counts for API objects or the state of pods in the cluster. Select the metric in the dropdown; in the dashboard screenshot below we selected `etcd_object_counts`.

![](/files/-MYkbu5f-bM8J-Of3c3Q)

#### Logs using Kibana

1. Enter the Elastic Kibana domain you specified in the config file (for example, kibana.example.com) into your browser and log in to the dashboard.

![](/files/-MYkdnaascWmm114dq63)

2. Sign in with your credentials.

*Note: a new password is generated for every deployment, so watch your command shell during script execution for the password.*

3. After a successful login, you will see logging activity for sharders and other components. The graph count describes the total logging events. You can filter these logs based on fields and visualize graphs by searching in the field names box in the upper left corner.

![](/files/-MYkduz5oHLW9WMq-22T)

4. For example, let's look at the logging events for the `kubernetes.pod.name` field. Once you search for this field in the field names box, it will be listed under available fields to add.

![](/files/-MYkfRE-U-ia1XZgbCqr)

5. Click on the field. You will see a graph of logging activity in descending order for the different types of pods in the cluster.

![](/files/-MYkfbco-tRMnHG8HN7i)

#### Manage Pods using Rancher

1. Enter the Rancher domain you specified in the config file (for example, rancher.example.com) into your browser and log in to the dashboard.

![](/files/-MYkgdLzyS32E8K-Tsmx)

2. Rancher will ask you to create a random or specific password for the admin user, choose a default view for the dashboard, and agree to the terms and conditions.

3. After a successful login, it will ask you to select your test cluster, and you will see your cluster dashboard with cluster statistics (pods, namespaces, services, etc.). The metrics show the number of pods used, and the cores and memory reserved.

![](/files/-MYkhKlalFbIoPJrk-7h)

4. From here, click the shell icon at the upper right to launch the kubectl shell for running cluster commands.

![](/files/-MYkhTKLyAf50DXRpvCw)

5. Here is a screenshot of the kubectl shell running in the Rancher dashboard.

![](/files/-MYkhez0YQIOjkZLzs0l)

6. To manage pods in the cluster and increase CPU and memory resources for 0Chain components, click on Pods in the dropdown menu on the left. You will see a list of pods already created. To create a new pod, click Create from YAML in the upper right corner.

![](/files/-MYkiSv-r4o5t7ZCJ18t)

7. You will see a sample YAML config for pods. Make any desired changes and click Create. Your new pods will be listed in the Pods section.

You can also edit existing deployments to increase CPU and memory resources: go to the Deployments dropdown menu on the left > click the three dots > Edit YAML > change the resource values and save.

![](/files/-MYkhs8OmhfK-YnOH7QM)

## 💻 Community

For support related to 0miner deployment, check our community forum at <https://community.0chain.net/>.

If you are interested in setting up 0Chain manually, have a look at the [0chain](https://github.com/0chain/0chain) and [blobber](https://github.com/0chain/blobber) repos.

You might also want to check our 0Chain blog [here](https://medium.com/0chain).

#### Contribute

All of our code for 0Chain is open source. No matter your level of expertise, help us make 0Chain better by sending a pull request [here](https://github.com/0chain).

* [Mine Data](https://0chain.net/page-miners.html): For those looking to provide services as a storage provider
* [Store Data](https://one.devnet-0chain.net/0box/dashboard/listallocations): For those looking for decentralized storage.

#### Chat

For discussions, join our community chat directly on

[Telegram](https://t.me/Ochain)

[Twitter](https://twitter.com/0chain)

[Reddit](https://www.reddit.com/r/0chain/)

