Setting up your chain
Participants (hardware providers or nodes) contribute computational resources to the network and are rewarded based on the amount and quality of resources they provide.
To join the network, you need to deploy two services:
- Network node – a service consisting of two nodes: a chain node and an API node. This service handles all communication. The chain node connects to the blockchain, while the API node manages user requests.
- Inference (ML) node – a service that performs inference of large language models (LLMs) on GPU(s). You need at least one ML node to join the network.
This guide covers the scenario where both services are deployed on the same machine and each participant runs one MLNode. Services are deployed as Docker containers.
Prerequisites
This section provides guidance on configuring your hardware infrastructure to participate in the Gonka Network launch. The goal is to maximize protocol rewards by aligning your deployment with network expectations.
Supported Model Classes
The protocol currently supports the following model classes:
- Large Models — DeepSeek R1, Qwen3-235B, gpt-oss-120b
- Medium Models — Qwen3-32B, Gemma-3-27b-it
- Small Models — Qwen2.5-7B
Governance and model classification
- The exact deployment parameters for each category are defined in the genesis configuration.
- Models may be classified into a category if approved by governance.
- Decisions about adding or changing supported models are made by governance.
- For details on governance procedures and how to propose new models, see the Transactions and Governance Guide.
Configuration for Optimal Rewards
To earn the highest rewards and maintain reliability, each Network Node should serve all three model classes, with a minimum of 2 MLNodes per class. This setup:
- Improves protocol-level redundancy and fault tolerance
- Enhances model-level validation performance
- Aligns with future reward scaling logic
Proposed Hardware Configuration
To run a valid node, you need machines with supported GPU(s). We recommend grouping your hardware into 2–5 Network Nodes, each configured to support all model classes. Below is a reference layout:
| Model Class | Model Name | MLNodes (min) | Example Hardware | Total VRAM |
|---|---|---|---|---|
| Large | DeepSeek R1 / Qwen3-235B | ≥ 2 | 8× H200 per MLNode | 640 GB |
| Medium | Qwen3-32B / Gemma-3-27B-it | ≥ 2 | 4× A100 or 2× H100 per MLNode | 80 GB |
| Small | Qwen2.5-7B | ≥ 2 | 1× 3090 or 8× 3090 per MLNode | 24 GB |
This is a reference architecture. You may adjust node count or hardware allocation, but we recommend following the core principle: each node should support multiple MLNodes across all three model tiers.
More details about the optimal deployment configuration can be found here.
Each machine should also have:
- 16 CPU cores
- RAM of at least 1.5× the total GPU VRAM (for example, a machine with 80 GB of GPU VRAM should have at least 120 GB of RAM)
- Linux OS
- Docker
- Docker Compose
- NVIDIA Container Toolkit
- NVIDIA GPUs from architectures newer than Tesla, with a minimum of 16 GB VRAM per GPU
Ports open for public connections
- 5000 - Tendermint P2P communication
- 26657 - Tendermint RPC (querying the blockchain, broadcasting transactions)
- 8000 - Application service (configurable)
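Before proceeding, it can help to verify that the GPU driver, Docker, and the NVIDIA Container Toolkit are working. A minimal sanity-check sketch (the CUDA image tag below is only an example; any CUDA base image works):

```bash
# Verify Docker and Docker Compose are installed
docker --version && docker compose version

# Verify the NVIDIA driver sees your GPUs
nvidia-smi

# Verify the NVIDIA Container Toolkit: GPUs should be visible inside a container
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```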
Download Deployment Files
Clone the repository with the base deploy scripts:
```bash
git clone https://github.com/gonka-ai/gonka.git -b main && \
cd gonka/deploy/join
```

Then copy the config file template:

```bash
cp config.env.template config.env
```
Authentication required
If prompted for a password, use a GitHub personal access token (classic) with `repo` access.
After cloning the repository, you’ll find the following key configuration files:
| File | Description |
|---|---|
| `config.env` | Contains environment variables for the Network Node |
| `docker-compose.yml` | Docker Compose file to launch the Network Node |
| `docker-compose.mlnode.yml` | Docker Compose file to launch the ML node |
| `node-config.json` | Configuration file used by the Network Node; it describes the inference nodes managed by this Network Node |
| `node-config-qwq.json` | Configuration file specifically for Qwen/QwQ-32B on A100/H100 |
| `node-config-qwq-4x3090.json` | Optimized config for QwQ-32B using a 4×3090 setup |
| `node-config-qwq-8x3090.json` | Optimized config for QwQ-32B using an 8×3090 setup |
Copy and modify the config that best fits your model and GPU layout.
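For example, if your hardware matches one of the provided QwQ-32B layouts, you could use that file as the active config. This is only an illustrative sketch; which file you pick depends on your GPUs:

```bash
# Example: use the 8x3090 QwQ-32B layout as the active node config
cp node-config-qwq-8x3090.json node-config.json

# Alternatively, leave the files untouched and point NODE_CONFIG in config.env
# at the file you want, e.g. ./node-config-qwq-8x3090.json
```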
Note
The network is initially launched with two models: Qwen/Qwen2.5-7B-Instruct and Qwen/QwQ-32B. Configuration examples for these models can be found in `node-config.json` and `node-config-qwq.json`. Decisions about adding or changing supported models are made by governance. For details on how model governance works and how to propose new models, see the Transactions and Governance Guide.
Pre-download Model Weights to Hugging Face Cache (HF_HOME)
Inference nodes download model weights from Hugging Face. To ensure the model weights are ready for inference, we recommend downloading them before deployment. Choose one of the following options:
Option 1: Local download

Set `HF_HOME` to a writable local directory (e.g., `~/hf-cache`) and pre-load models if desired:

```bash
export HF_HOME=/path/to/your/hf-cache
huggingface-cli download Qwen/Qwen2.5-7B-Instruct
```
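If you also plan to serve Qwen/QwQ-32B (the second launch model mentioned in the note above), you can pre-load it the same way:

```bash
huggingface-cli download Qwen/QwQ-32B
```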
Option 2: Mount shared cache

```bash
sudo mount -t nfs 172.18.114.147:/mnt/toshare /mnt/shared
export HF_HOME=/mnt/shared
```

Note: `/mnt/shared` only works in the 6Block testnet with access to the shared NFS.
Authenticate with Docker Registry
Some Docker images used in this instruction are private. Make sure to authenticate with GitHub Container Registry:
```bash
docker login ghcr.io -u <YOUR_GITHUB_USERNAME>
```
Required token scopes

When creating a new Personal Access Token (Classic) on GitHub, make sure to select the following scopes:

- `repo` → Full control of private repositories
- `read:packages` → Download packages from GitHub Package Registry
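If you prefer a non-interactive login, you can pipe the token into `docker login`. This sketch assumes the token is exported as `GITHUB_TOKEN` (a variable name chosen here for illustration):

```bash
# Log in to ghcr.io without an interactive password prompt
# (GITHUB_TOKEN is an assumed variable holding your personal access token)
echo "$GITHUB_TOKEN" | docker login ghcr.io -u <YOUR_GITHUB_USERNAME> --password-stdin
```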
Setup Your Network Node
Key Management Overview
Before configuring your Network Node, you need to set up cryptographic keys for secure operations.
We recommend reading the Key Management Guide before launching a production node.
We use a two-key system:
- Account Key (Cold Wallet) - Created on your local secure machine for high-stakes operations
- ML Operational Key (Warm Wallet) - Created on the server for automated AI workload transactions
Install the CLI Tool
The `inferenced` CLI is required for local account management and network operations. It's a command-line utility that allows you to create and manage Gonka accounts, register participants, and perform various network operations from your local machine.

Download the latest `inferenced` binary from GitHub releases and make it executable:

```bash
chmod +x inferenced
./inferenced --help
```
MacOS Users
On macOS, you may need to allow execution in System Settings → Privacy & Security if prompted. Scroll down to the warning about `inferenced` and click Allow Anyway.
Create Account Key (Local Machine)
IMPORTANT: Perform this step on a secure, local machine (not your server)
Create your Account Key using the `file` keyring backend (you can also use `os` for enhanced security on supported systems):

```bash
./inferenced keys add gonka-account-key --keyring-backend file
```
The CLI will ask you for a passphrase and show the details of the created key pair.
```
❯ ./inferenced keys add gonka-account-key --keyring-backend file
Enter keyring passphrase (attempt 1/3):
Re-enter keyring passphrase:

- address: gonka1rk52j24xj9ej87jas4zqpvjuhrgpnd7h3feqmm
  name: gonka-account-key
  pubkey: '{"@type":"/cosmos.crypto.secp256k1.PubKey","key":"Au+a3CpMj6nqFV6d0tUlVajCTkOP3cxKnps+1/lMv5zY"}'
  type: local

**Important** write this mnemonic phrase in a safe place.
It is the only way to recover your account if you ever forget your password.

pyramid sweet dumb critic lamp various remove token talent drink announce tiny lab follow blind awful expire wasp flavor very pair tell next cable
```
CRITICAL: Write this mnemonic phrase down and store it in a secure, offline location. This phrase is the only way to recover your Account Key.
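If you want to re-check the address or public key later, `inferenced` appears to follow the standard Cosmos SDK key commands (as suggested by the `keys add` output above), so a sketch like the following should work:

```bash
# List all keys stored in the file keyring backend
./inferenced keys list --keyring-backend file

# Show details (address, pubkey) for the Account Key
./inferenced keys show gonka-account-key --keyring-backend file
```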
Hardware Wallet Support
Current Status: Hardware wallets are not yet supported at network launch.
For Now: Store your Account Key on a secure, dedicated machine with minimal internet exposure and strong encryption.
Important: Always keep your mnemonic phrase as backup regardless of future hardware wallet adoption.
Edit Your Network Node Configuration
Edit `config.env`:

```bash
export KEY_NAME=<FILLIN> # Edit as described below
export KEYRING_PASSWORD=<FILLIN> # Edit as described below
export API_PORT=8000 # Edit as described below
export PUBLIC_URL=http://<HOST>:<PORT> # Edit as described below
export P2P_EXTERNAL_ADDRESS=tcp://<HOST>:<PORT> # Edit as described below
export ACCOUNT_PUBKEY=<ACCOUNT_PUBKEY_FROM_STEP_ABOVE> # Use the pubkey from your Account Key (without quotes)
export NODE_CONFIG=./node-config.json # Keep as is
export HF_HOME=/mnt/shared # Directory you used for cache
export SEED_API_URL=http://195.242.13.239:8000 # Keep as is
export SEED_NODE_RPC_URL=http://195.242.13.239:26657 # Keep as is
export SEED_NODE_P2P_URL=tcp://195.242.13.239:26656 # Keep as is
export DAPI_API__POC_CALLBACK_URL=http://api:9100 # Keep as is
export DAPI_CHAIN_NODE__URL=http://node:26657 # Keep as is
export DAPI_CHAIN_NODE__P2P_URL=http://node:26656 # Keep as is
export RPC_SERVER_URL_1=http://89.169.103.180:26657 # Keep as is
export RPC_SERVER_URL_2=http://195.242.13.239:26657 # Keep as is
export PORT=8080 # Keep as is
export INFERENCE_PORT=5050 # Keep as is
export KEYRING_BACKEND=file # Keep as is
```
Which variables to edit:
| Variable | What to do |
|---|---|
| `KEY_NAME` | Manually define a unique identifier for your node. |
| `KEYRING_PASSWORD` | Set a password for encrypting the ML Operational Key stored in the file keyring backend on the server. |
| `API_PORT` | Set the port where your node will be available on the machine (default is 8000). |
| `PUBLIC_URL` | Specify the public URL where your node will be available externally (e.g., http://<your-static-ip>:<port>, mapped to 0.0.0.0:8000). |
| `P2P_EXTERNAL_ADDRESS` | Specify the public URL where your node will be available externally for P2P connections (e.g., http://<your-static-ip>:<port1>, mapped to 0.0.0.0:5000). |
| `HF_HOME` | Set the path where Hugging Face models will be cached. Use a writable local directory (e.g., ~/hf-cache). If you're part of the 6Block network, you can use the shared cache at /mnt/shared. |
| `ACCOUNT_PUBKEY` | Use the public key from your Account Key created above (the value after "key": without quotes). |
All other variables can be left as is.
Load the configuration:

```bash
source config.env
```
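A quick way to confirm the configuration is loaded into the current shell (a simple check, not part of the official flow):

```bash
# Print the values that must be customized; all of them should be non-empty
echo "KEY_NAME=$KEY_NAME"
echo "API_PORT=$API_PORT"
echo "PUBLIC_URL=$PUBLIC_URL"
echo "ACCOUNT_PUBKEY=$ACCOUNT_PUBKEY"
```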
Using Environment Variables
The examples in the following sections reference these environment variables (e.g., `$PUBLIC_URL`, `$ACCOUNT_PUBKEY`, `$SEED_API_URL`) in both local machine commands and server commands. Make sure to run `source config.env` in each terminal session where you'll be executing these commands.
Launch node
This quickstart is designed to run both the Network Node and the inference node on a single machine (single-server setup).
Multiple nodes deployment
If you are deploying multiple GPU nodes, please refer to the detailed Multiple nodes deployment guide for proper setup and configuration. Whether you deploy inference nodes on a single machine or across multiple servers (including across geographical regions), all inference nodes must be connected to the same Network Node.
1. Pull Docker Images (Containers)
Make sure you are in the `gonka/deploy/join` folder before running the next commands.

```bash
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml pull
```
2. Start Initial Services
Start the essential services needed for key setup (excluding the API service):
```bash
source config.env && \
docker compose up tmkms node -d --no-deps
```
We start these specific containers first because:

- `tmkms` – Generates and securely manages the Consensus Key needed for validator registration
- `node` – Connects to the blockchain and provides the RPC endpoint used to retrieve the Consensus Key
- `api` – Deliberately excluded at this stage because we need to create the ML Operational Key inside it in the next step
Recommendation
You can check logs to verify the initial services started successfully:
```bash
docker compose logs tmkms node -f
```
If you see the chain node continuously processing block events, then the setup is working correctly.
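You can also query the chain node's Tendermint RPC directly to see whether it is syncing, assuming port 26657 is reachable from where you run the command:

```bash
# The sync_info section of the response shows whether the node is still catching up
curl -s http://localhost:26657/status
```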
3. Complete Key Setup and Participant Registration
Now we need to complete the key management setup by creating the warm key, registering the participant, and granting permissions:
3.1. Create ML Operational Key (Server)
Create the warm key inside the `api` container using the `file` keyring backend (required for programmatic access). The key will be stored in a persistent volume mapped to `/root/.inference` of the container:

```bash
docker compose run --rm --no-deps -it api /bin/sh
```
Inside the container, create the ML operational key:
```bash
printf '%s\n%s\n' "$KEYRING_PASSWORD" "$KEYRING_PASSWORD" | inferenced keys add "$KEY_NAME" --keyring-backend file
```
Example output:
```
~ # printf '%s\n%s\n' "$KEYRING_PASSWORD" "$KEYRING_PASSWORD" | inferenced keys add "$KEY_NAME" --keyring-backend file

- address: gonka1gyz2agg5yx49gy2z4qpsz9826t6s9xev6tkehw
  name: node-702105
  pubkey: '{"@type":"/cosmos.crypto.secp256k1.PubKey","key":"Ao8VPh5U5XQBcJ6qxAIwBbhF/3UPZEwzZ9H/qbIA6ipj"}'
  type: local

**Important** write this mnemonic phrase in a safe place.
It is the only way to recover your account if you ever forget your password.

again plastic athlete arrow first measure danger drastic wolf coyote work memory already inmate sorry path tackle custom write result west tray rabbit jeans
```
3.2. Register Participant (Server)
From the same container, register the participant on chain with your URL, Account Key, and Consensus Key (the Consensus Key is fetched automatically):

```bash
inferenced register-new-participant \
  $DAPI_API__PUBLIC_URL \
  $ACCOUNT_PUBKEY \
  --node-address $DAPI_CHAIN_NODE__SEED_API_URL
```
Expected output:
```
...
Found participant with pubkey: Au+a3CpMj6nqFV6d0tUlVajCTkOP3cxKnps+1/lMv5zY (balance: 0)
Participant is now available at http://36.189.234.237:19250/v1/participants/gonka1rk52j24xj9ej87jas4zqpvjuhrgpnd7h3feqmm
```
Per-Node Account Key Configuration
Always generate a unique `ACCOUNT_PUBKEY` for each Network Node to ensure proper separation of participants.
Then we can exit the container:
```bash
exit
```
3.3. Grant Permissions to ML Operational Key (Local Machine)
IMPORTANT: Perform this step on your secure local machine where you created the Account Key
Grant permissions from your Account Key to the ML Operational Key:
```bash
./inferenced tx inference grant-ml-ops-permissions \
  gonka-account-key \
  <ml-operational-key-address-from-step-3.1> \
  --from gonka-account-key \
  --keyring-backend file \
  --gas 2000000 \
  --node $SEED_API_URL/chain-rpc/
```
Expected output:
```
...
Transaction sent with hash: FB9BBBB5F8C155D0732B290C443A0D06BC114CDF43E8EE8FB329D646C608062E
Waiting for transaction to be included in a block...
Transaction confirmed successfully!
Block height: 174
```
3.4. Launch Full Node (Server)
Finally, launch all containers including the API:
```bash
source config.env && \
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml up -d
```
Verify Node Status
After launching the node, wait a few minutes. You should see your node listed at the following URL:
http://195.242.13.239:8000/v1/participants
Once your node completes the Proof of Work stage (typically within a few hours), visit the following URL to see your node:
http://195.242.13.239:8000/v1/epochs/current/participants
Once your node is running, check your node status using the Tendermint RPC endpoint of your node (port 26657 of the `node` container), either locally on the server or via the seed node's RPC:

```bash
curl http://<PUBLIC_IP>:<PUBLIC_RPC_PORT>/status
curl http://0.0.0.0:26657/status
curl http://195.242.13.239:26657/status
```
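If `jq` is installed, you can extract the sync state directly; `result.sync_info.catching_up` is a standard field of the Tendermint `/status` response:

```bash
# false means the node has caught up with the chain head
curl -s http://0.0.0.0:26657/status | jq '.result.sync_info.catching_up'
```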
Stopping and Cleaning Up Your Node
Make sure you are in the `gonka/deploy/join` folder.
To stop all running containers:
```bash
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml down
```

This stops all services defined in the `docker-compose.yml` file without deleting volumes or data unless explicitly configured.
To clean up cache and start fresh, remove the local `.inference` and `.dapi` folders (inference runtime cache and identity) and the TMKMS volume:

```bash
rm -rf .inference .dapi
docker volume rm join_tmkms_data
```
(Optional) Clear model weights cache:
```bash
rm -rf $HF_HOME
```
Note
Deleting `$HF_HOME` will require re-downloading large model files from Hugging Face or re-mounting the NFS cache.
Need help? Join our Discord server for assistance with general inquiries, technical issues, or security concerns.