The Helium Network

We are excited to announce the release of the Helium Network, which is the second testnet of the Crynux Network.

The Crynux Network evolves through the sequential release of various networks, each building upon its predecessor by introducing additional features and improvements. Initial releases are designated as testnets, serving as preliminary versions for testing purposes. Subsequent releases are mainnets, which incorporate upgrades to the previous networks through forking.

The networks are named after the elements of the Periodic Table, in which Helium is the second element. The previous testnet, the Hydrogen Network, implemented a decentralized Stable Diffusion task execution engine: applications could use the network to generate images, and anyone with an Nvidia graphics card could start a node, join the network, and exchange computation power for tokens.

Building upon the foundation of the Hydrogen Network, the Helium Network introduces an array of enhancements, including new application use cases and support for more node device types, which are poised to significantly increase the number of nodes connecting to the network.

Running the GPT Text Generation Tasks

The long-awaited ability to run GPT text generation tasks on the Crynux Network is finally here. AI chatbot applications can now be built on top of the Crynux Network. Choose from most of the LLM models on Hugging Face, such as LLaMA 2 and Gemma, specify the model ID and the prompt in a GPT task, and the task will be executed on the Crynux Network to produce the output text.
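To give a concrete picture of what such a task contains, the sketch below shows the essential pieces: a Hugging Face model ID, the prompt, and a few generation options. The field names are illustrative assumptions, not the exact Crynux task schema; the authoritative definition is in the developer documentation.

```python
# Illustrative GPT task payload. The field names are assumptions made for this
# example and are NOT the exact Crynux task schema.
gpt_task = {
    "model": "meta-llama/Llama-2-7b-chat-hf",  # Hugging Face model ID
    "messages": [
        {"role": "user", "content": "Explain how proof-of-stake consensus works."}
    ],
    "generation_config": {
        "max_new_tokens": 256,  # upper bound on the length of the reply
        "temperature": 0.7,     # sampling temperature
    },
}
```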

A chatbot UI has been deployed to demonstrate this capability. To try the chatbot yourself, go to https://chat.crynux.ai:

After the user selects an open source LLM model and submits a question in the WebUI, the question is sent to the Crynux Network as a GPT task. The result is returned to the user when the execution finishes.

For application developers, it is easy to connect an application to the Crynux Network and instantly enhance it with AI capabilities. Just follow the getting started guide for developers.

Mac as the Computation Node

The Crynux Node can now be started on Macs with Apple Silicon chips (the M1, M2 and M3 series). Anyone who owns a supported Mac can join the network to earn tokens by simply downloading the app and starting it with one click.

The Unified Memory Architecture of the Mac, which allows the shared use of system memory to both the CPU and GPU with high bandwidth, offers a significant advantage when working with large AI models. Although the execution might be slower, Mac's ability to utilize its extensive system memory enables it to successfully complete certain AI tasks involving large models—tasks that might fail on Nvidia cards due to their limited VRAM. This capability offers a cost-effective alternative in scenarios where execution speed is not the primary concern.
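For the curious, the snippet below is a quick sanity check, separate from the node setup itself (the packaged app handles device selection automatically). It shows how PyTorch detects the Apple GPU through the Metal Performance Shaders (MPS) backend and allocates a tensor in unified memory.

```python
# Quick sanity check (not required for running a node): verify that PyTorch
# can see the Apple GPU through the Metal Performance Shaders (MPS) backend.
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")
    print("MPS backend available: models will run on the Apple GPU")
else:
    device = torch.device("cpu")
    print("MPS backend not available: falling back to CPU")

# Thanks to unified memory, tensors on the "mps" device share system RAM with
# the CPU, so models that exceed a discrete card's VRAM may still fit here.
x = torch.randn(1024, 1024, device=device)
print(x.device)
```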

To start a node on your Mac, follow the node starting guide.

Flexible Task Pricing

The task price is now determined completely by the market. If there are more nodes to execute the same number of tasks, the average task price will drop. If there are more tasks to be executed by the same set of nodes, the task price will rise.

The task price is not set by the Crynux Network, though, neither as a fixed value nor dynamically. Instead, the price is set by the user, and the Crynux Network will choose the tasks with higher prices to execute first.

If users need a task to be completed faster, they can set a higher price on it. If price is the primary concern, they can instead submit the task at a lower price and wait longer for the result.

The flexible task pricing mechanism allows for an effective and efficient allocation of tasks across the network. By allowing the market to dictate task prices, Crynux ensures that both nodes and users benefit from a dynamically balanced system. Users are incentivized to offer competitive prices for quicker task execution, while more nodes are attracted to join the network when the task prices go higher, providing more computation power to the Crynux Network.

Moreover, the order of task execution is not determined solely by the price set by the user, but by a task value estimated by taking the task execution duration into account. This approach effectively identifies tasks that provide optimal value: those that pay a significant amount without demanding an excessive portion of resources, maximizing the income of all the nodes in the network.
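As a rough illustration of the idea (the actual scoring formula is defined in the documentation, and the prices and durations below are made-up numbers), ordering tasks by price per estimated second of execution favors tasks that pay well relative to the resources they consume:

```python
# Minimal sketch of value-based task ordering. The real scoring formula is
# defined in the Crynux documentation; this only illustrates the idea of
# weighing the user-set price against the estimated execution time.
tasks = [
    {"id": "a", "price": 12.0, "estimated_seconds": 60},   # hypothetical numbers
    {"id": "b", "price": 20.0, "estimated_seconds": 180},
    {"id": "c", "price": 8.0,  "estimated_seconds": 30},
]

def value_per_second(task):
    # Price paid per second of node time: higher is more attractive to nodes.
    return task["price"] / task["estimated_seconds"]

for task in sorted(tasks, key=value_per_second, reverse=True):
    print(task["id"], round(value_per_second(task), 3))
# -> c (0.267), a (0.2), b (0.111): the short, cheap task ranks first because
#    it pays the most per second of execution, not because its absolute price
#    is the highest.
```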

The details of the task pricing strategy are described in the documentation.

Improvements in Network Efficiency and Availability

The Crynux Network is a loosely coupled decentralized network, where a stable connection to a node can never be expected. A node could be shut down while still appearing online on the blockchain, or stop responding at any time during task execution. If not handled properly, this causes many task failures and renders the network unusable to applications.

A lot of effort has been made to improve network efficiency and availability, such as giving a higher selection probability to nodes that provide better service, kicking out nodes that constantly fail, and adjusting the incentivization based on node availability.
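As a rough sketch of the selection idea (the actual weighting and removal rules are defined in the documentation, and the success rates below are made-up numbers), quality-weighted node selection might look like this:

```python
# Illustrative sketch of quality-weighted node selection. The exact weighting
# and ban rules are defined in the Crynux documentation; the `success_rate`
# values and threshold here are assumptions for demonstration only.
import random

nodes = [
    {"id": "node-1", "success_rate": 0.99},
    {"id": "node-2", "success_rate": 0.90},
    {"id": "node-3", "success_rate": 0.40},  # frequently failing node
]

MIN_SUCCESS_RATE = 0.5  # nodes below this threshold are not considered at all

eligible = [n for n in nodes if n["success_rate"] >= MIN_SUCCESS_RATE]
weights = [n["success_rate"] for n in eligible]

# Nodes with a better track record are proportionally more likely to be picked.
selected = random.choices(eligible, weights=weights, k=1)[0]
print(selected["id"])
```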

The details can be found in the documentation.

A complete list of all the features and improvements of the Helium Network can be found in the release note:

Helium Network | Crynux Network: [Jan 30, 2024] Decentralized GPT Task Execution Engine

The Helium Network is a big step towards a minimal yet complete network ready for mass adoption. We are already close to a mainnet release that will bring this exciting experience to real-life applications and users.

Stay tuned!