[Bittensor x Crynux] Creative Content Revolution: the Federated Fine-tuning Subnet

From videos and music to images and text, generative multi-modal models are revolutionizing the way we create content. These advanced tools are not only incredibly useful but are also pioneering new avenues in creative content production, significantly reducing the workload involved.

For instance, consider a music generation model capable of composing chiptune pieces for video game backgrounds. The soundtracks could be dynamically generated to align with the unfolding game narrative, offering players an immersive and tailored gaming experience.

This approach not only streamlines the soundtrack creation process but also democratizes game music production, significantly reducing labor costs and lowering the barriers to entry in the industry.

Specifically tailored models for the target styles, areas, industries, or applications can dramatically enhance the quality of the generated content. Unlike general-purpose large models, tailored models understand the nuances and intricacies of their designated domain, leading to outputs that are more refined, accurate, and contextually appropriate. This specialization not only elevates the content quality but also reduces the computational resources necessary for running these models, translating into significant cost savings.

Federated Fine-tuning for Maximizing Extensive Data Usage

The fidelity and efficiency of these models come from fine-tuning or distilling general-purpose models with large-scale data drawn from extensive user collections.

Model quality is not static: models can be continuously improved through an iterative fine-tuning process. This process ensures that the models evolve, becoming more sophisticated and capable of producing ever more accurate and high-quality content over time.

Federated Learning (FL) plays an important role here. FL enables models to learn from decentralized data sources without actually moving the data, enhancing privacy and allowing the model to continuously improve from real-world use across different devices and users.
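The aggregation step at the heart of most FL systems can be illustrated with federated averaging (FedAvg): clients fine-tune locally, and only weight updates (never raw data) are sent back and combined, weighted by each client's dataset size. The following is a minimal, self-contained sketch; the function and variable names are illustrative and not part of any specific SDK.

```python
def fed_avg(client_updates):
    """Average model weights from clients, weighted by local sample count.

    client_updates: list of (weights: dict[str, float], num_samples: int)
    """
    total = sum(n for _, n in client_updates)
    keys = client_updates[0][0].keys()
    return {
        k: sum(w[k] * n for w, n in client_updates) / total
        for k in keys
    }

# Two clients with different amounts of local data; the client with
# more samples pulls the global weights further toward its update.
updates = [
    ({"w": 1.0, "b": 0.0}, 100),   # client A, 100 samples
    ({"w": 3.0, "b": 1.0}, 300),   # client B, 300 samples
]
global_weights = fed_avg(updates)  # → {"w": 2.5, "b": 0.75}
```

Note that only the weight dictionaries cross the network; the 100 and 300 training samples never leave their owners' devices, which is what preserves privacy.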

FL enables broader participation in creative model fine-tuning, pooling a richer diversity of data, which in turn elevates the quality and effectiveness of the models.

Bittensor Incentivized Data Contribution

Thanks to the incentivization mechanism provided by Bittensor, the subnet offers a compelling proposition where community members are motivated to contribute to data collection and the fine-tuning of models.

By contributing data, individuals can earn rewards, creating a sustainable ecosystem where every contribution is valued and rewarded. This method not only preserves the privacy and security of user data but also ensures a continuous stream of income for those actively participating in the system. As more members join and contribute, the diversity and volume of data available for fine-tuning improve, leading to more accurate and effective models.
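One common shape for such an incentive scheme is to split each epoch's emission among miners in proportion to the contribution scores assigned by validators. The sketch below shows only this proportional-split idea; the names, scoring scale, and emission value are assumptions for illustration, not the subnet's actual mechanism.

```python
def distribute_rewards(scores, emission):
    """Split `emission` among miners in proportion to their scores.

    scores: dict mapping miner id -> non-negative contribution score
    emission: total reward available for this epoch
    """
    total = sum(scores.values())
    if total == 0:
        # No scored contributions this epoch: nothing to distribute.
        return {miner: 0.0 for miner in scores}
    return {miner: emission * s / total for miner, s in scores.items()}

# Hypothetical scores from validators (higher = better data contribution):
scores = {"miner_a": 6, "miner_b": 3, "miner_c": 1}
rewards = distribute_rewards(scores, emission=100.0)
# → {"miner_a": 60.0, "miner_b": 30.0, "miner_c": 10.0}
```

A proportional split like this makes each reward scale directly with the validator-assessed value of the data a miner supplied, which is what keeps contributions honest over time.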

In essence, Bittensor's incentivization mechanism fosters a vibrant community-driven environment, where the collective efforts of participants drive the advancement of AI technology in a secure and profitable manner.

Lower the Barrier of Participation using Crynux Network

Validators and miners no longer need costly, high-end AI computing cards to carry out model fine-tuning and validation tasks, thanks to Crynux Network's decentralized AI service cloud.

Miners can now concentrate solely on supplying higher-quality data, as miner nodes can be operated on standard laptops or even smartphones. The federated computing tasks are executed remotely via the Crynux Network.

Similarly, validators can obtain model evaluation results effortlessly through a straightforward API call on regular servers, eliminating the need for GPU cards.
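To make the validator flow concrete, the sketch below shows what such a request/response round-trip might look like: the validator posts a task describing a model checkpoint and an evaluation dataset, then reads a score back from the completed task. The endpoint shape, field names, and response format here are all assumptions for illustration; consult the Crynux documentation for the real API.

```python
import json

def build_eval_task(model_cid, dataset_cid):
    """Construct the JSON body for a (hypothetical) evaluation task."""
    return json.dumps({
        "task_type": "model_evaluation",
        "model": model_cid,       # content id of the fine-tuned checkpoint
        "dataset": dataset_cid,   # content id of the held-out eval set
        "metrics": ["loss"],
    })

def parse_eval_result(response_body):
    """Extract the score from a (hypothetical) completed-task response."""
    result = json.loads(response_body)
    return result["metrics"]["loss"]

# The validator only builds JSON and parses JSON — no GPU work happens
# locally; the heavy lifting runs on Crynux nodes.
body = build_eval_task("model-checkpoint-id", "eval-dataset-id")
score = parse_eval_result('{"status": "done", "metrics": {"loss": 1.87}}')
```

The point of the sketch is that validation reduces to plain HTTP/JSON plumbing on a regular server, which is why no GPU is required on the validator side.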

Enforced Transparent Revenue Distribution, End-to-End

Models developed on the subnet will be hosted on the Crynux Network, offering services to all applications. Validators and miners who contributed to these models on the subnet will share ongoing rewards from payments for the models' usage.

Crynux Network transforms the creative models into AI assets. Thanks to its decentralized model service cloud, all services related to the models—from inference to fine-tuning—are transparently processed on the blockchain. This ensures direct payments from users of the models to the holders of the model assets, eliminating intermediaries and manual intervention, technically securing the profit of the asset holders. Embrace a new era of trust and efficiency.

Additional AI-Fi applications may also spring up around the model and data assets, expanding their reach by linking these assets to a diverse array of AI-Fi applications and thus enlarging the current DeFi ecosystem.

The Genki-Dama Subnet

Genki-Dama, inspired by the iconic Dragon Ball technique, is a Bittensor subnet that utilizes decentralized data and harnesses decentralized computing resources. Built upon the incentive mechanism of Bittensor, and the computing network of Crynux, Genki-Dama empowers federated learning in a decentralized manner, shattering the limitation of centralized approaches.

The subnet aims to incentivize miners to contribute high-quality data and to train creative generative models with federated learning.

It includes two parts:

  • Genki: a federated learning SDK that utilizes the Bittensor incentive mechanism and Crynux's decentralized computing resources
  • Dama: open-sourced model checkpoints trained with Genki, focusing on generative models for creative content

The first Dama is called Ruby, an electronic chiptune-style music model that can be used to generate music for games. A demo video of fine-tuning such a model and using it to generate music is available on X:

Find more information about the Genki-Dama Subnet in the GitHub repo:

GitHub - crynux-ecosystem/genki-dama

And follow the tutorials inside the README file to start a validator/miner node.