How It Works

The stakeholders in this A2A subnet are as follows:

  1. Subnet owner: Virtuals Protocol plays the role of the subnet owner, owning the subnet and creating the modules that miners and validators use to train and evaluate the generated animations. The subnet owner also decides the parameters used to evaluate an animation's performance.

  2. Miners: Generate animations with A2A models. Miners can download the reference models provided by the subnet owner or use other models. They can also train and develop new A2A models, whether with new architectures, training procedures, or custom datasets, and use these models to generate animations.

  3. Validators: Provide audio prompts and evaluate the animations submitted by miners based on the parameters set by the subnet owner.

  4. Bittensor protocol: Aggregates validator weights using Yuma Consensus, determining the final weights and allocation ratios for each miner (a simplified sketch of this aggregation follows this list).
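
The exact on-chain algorithm is out of scope here, but a minimal sketch of stake-weighted aggregation conveys the idea. The real Yuma Consensus performed by the Bittensor chain is more involved (for example, it clips outlier weights toward a stake-weighted consensus), so the function below is an illustrative simplification and its names are hypothetical.

```python
import numpy as np

def aggregate_weights(weights: np.ndarray, stakes: np.ndarray) -> np.ndarray:
    """weights: (n_validators, n_miners), each row sums to 1; stakes: (n_validators,)."""
    stake_frac = stakes / stakes.sum()
    # Stake-weighted average of each validator's weight vector.
    consensus = stake_frac @ weights
    return consensus / consensus.sum()

validator_weights = np.array([
    [0.6, 0.3, 0.1],   # validator A's weights over three miners
    [0.5, 0.4, 0.1],   # validator B's weights
])
stakes = np.array([1000.0, 500.0])
print(aggregate_weights(validator_weights, stakes))  # -> [0.5667, 0.3333, 0.1]
```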

Workflow

  1. Audio Prompting: Validators generate audio prompts and distribute them to Miners on the network.

  2. Animation Generation: Upon receiving the audio prompts, Miners use A2A models (either the reference models provided or newly trained/uploaded ones) to convert the audio into animation (e.g., dance, gestures); a request/response sketch covering steps 1 and 2 follows this list.

  3. Quality Evaluation: Validators assess the quality of the animations generated by Miners against a reference animation library (an illustrative scoring sketch follows this list). In the future, the evaluation procedure will consider additional factors such as naturalness and semantic closeness to the audio prompt, and may even incorporate human feedback.

  4. Evaluation Submission: Validators submit the resulting rankings and weights of the Miners' outputs to the Bittensor protocol (see the weight-submission sketch after this list).

  5. Reward Distribution: Miners are rewarded according to the aggregated weights produced by Yuma Consensus at the root network. Validators are also rewarded for the evaluation work they perform.
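
To make steps 1 and 2 concrete, here is a minimal validator-side sketch using the Bittensor Python SDK. The A2ASynapse class and its audio_b64/animation_b64 fields are hypothetical; the subnet's actual wire format and timeouts may differ.

```python
import bittensor as bt

class A2ASynapse(bt.Synapse):
    audio_b64: str            # validator-supplied audio prompt (base64-encoded)
    animation_b64: str = ""   # miner-filled animation payload (base64-encoded)

async def query_miners(wallet: bt.wallet, metagraph: bt.metagraph, audio_b64: str):
    dendrite = bt.dendrite(wallet=wallet)
    # Broadcast the audio prompt to every miner axon registered on the subnet.
    responses = await dendrite(
        axons=metagraph.axons,
        synapse=A2ASynapse(audio_b64=audio_b64),
        timeout=60.0,
    )
    return [r.animation_b64 for r in responses]
```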
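
For step 3, one simple way to score a submission against a reference animation library is nearest-neighbor similarity in a feature space. The sketch below assumes animations have already been reduced to fixed-length feature vectors (e.g., pooled joint-rotation features); the metric is an illustrative placeholder, not the subnet owner's actual evaluation parameters.

```python
import numpy as np

def score_animation(features: np.ndarray, reference_library: np.ndarray) -> float:
    """features: (d,); reference_library: (n_refs, d). Returns a score in [0, 1]."""
    f = features / np.linalg.norm(features)
    refs = reference_library / np.linalg.norm(reference_library, axis=1, keepdims=True)
    cosine = refs @ f                       # similarity to every reference clip
    return float((cosine.max() + 1) / 2)    # best match, mapped from [-1, 1] to [0, 1]
```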
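
For steps 4 and 5, validators convert scores into normalized weights and submit them on-chain. set_weights is part of the Bittensor SDK, but the netuid and the surrounding scaffolding below are placeholders.

```python
import numpy as np
import bittensor as bt

def submit_weights(wallet: bt.wallet, subtensor: bt.subtensor,
                   netuid: int, uids: list[int], scores: np.ndarray):
    # Higher evaluation score -> larger share of the subnet's emissions.
    weights = scores / scores.sum()
    subtensor.set_weights(
        wallet=wallet,
        netuid=netuid,
        uids=uids,
        weights=weights.tolist(),
    )
```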
