Agent Contribution
Decentralized contribution is a fundamental aspect of our ecosystem, allowing external contributors to help drive its growth by enhancing the capabilities of AI agents. Contributors can improve various aspects of an agent’s functionality; each successful contribution is minted as an NFT and transferred to the contributor, serving as proof of contribution and as the basis for reward distribution (see the sketch after the list below). Contributions are categorized into two main types:
1. Model Contribution
Contributors can submit AI models that improve an agent’s current functionality. This may involve refining existing models or proposing new models that enhance the cognitive, voice, or visual aspects of the agent.
2. Dataset Contribution
Contributors can provide knowledge-based datasets that enhance the agent’s domain expertise or further develop the agent’s personality. These datasets help refine an agent’s knowledge and interaction abilities by expanding its understanding of specific subjects or improving how it embodies its character.
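To make the distinction concrete, here is a minimal TypeScript sketch of how these two contribution types might be represented, and how a successful one could be turned into ERC-721-style NFT metadata serving as proof of contribution. All type and field names are assumptions made for illustration, not the protocol's actual schema.

```typescript
// Illustrative only: hypothetical shapes for the two contribution types.
interface CommonFields {
  contributor: string; // contributor's wallet address
  agentId: string;     // the agent this contribution targets
}

interface ModelContribution extends CommonFields {
  kind: "model";
  targetCore: "cognitive" | "voice" | "visual";
  weightsUri: string;  // pointer to the submitted model artifact
  baseModel?: string;  // existing model being refined, if any
}

interface DatasetContribution extends CommonFields {
  kind: "dataset";
  purpose: "knowledge" | "personality";
  domain: string;      // subject area the dataset covers
  samplesUri: string;  // pointer to the submitted dataset
}

type Contribution = ModelContribution | DatasetContribution;

// Build ERC-721-style metadata for the minted NFT, recording who
// contributed what to which agent so rewards can be attributed later.
function toNftMetadata(c: Contribution) {
  const summary =
    c.kind === "model"
      ? `model contribution targeting the ${c.targetCore} core`
      : `${c.purpose} dataset covering ${c.domain}`;
  return {
    name: `Contribution to agent ${c.agentId}`,
    description: `Proof of contribution: ${summary}`,
    attributes: [
      { trait_type: "contributor", value: c.contributor },
      { trait_type: "type", value: c.kind },
    ],
  };
}
```

Narrowing on the `kind` field lets submission tooling validate model and dataset contributions separately while keeping a single minting path.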
Each agent’s capabilities are modularized, making it easier for contributors to enhance specific areas. Contributions can target one or more of the following core capabilities:
Cognitive Core
The Cognitive Core acts as the "brain" of the AI agent, defining its central intelligence and personality. Contributions to this core can involve fine-tuning large language models (LLMs) or providing domain-specific datasets.
Model Contributions: Contributing AI models or fine-tuned LLMs that enhance the agent's reasoning and decision-making capabilities.
Dataset Contributions: Providing datasets for knowledge expansion, personality development, or domain-specific expertise. These datasets could include dialogue samples, industry-specific knowledge, or general information to make the AI agent more effective and engaging in its interactions.
The key is to supply rich, high-quality text data or models that improve the agent’s ability to understand and respond to user inputs, making it more knowledgeable and capable of dynamic, relevant conversations.
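As an illustration of the dialogue samples mentioned above, Cognitive Core fine-tuning data could take the chat format common in LLM fine-tuning. The structure below is a hedged example under assumed field names, not a required submission schema.

```typescript
// A hedged example of a dialogue sample for the Cognitive Core,
// loosely following the chat format common in LLM fine-tuning.
// Field names and content are illustrative assumptions.
interface ChatTurn {
  role: "system" | "user" | "assistant";
  content: string;
}

// One sample: a system prompt fixing the agent's persona, followed by
// an exchange demonstrating the desired in-character behaviour.
const dialogueSample: ChatTurn[] = [
  { role: "system", content: "You are a cheerful guide to on-chain gaming." },
  { role: "user", content: "How do I get started?" },
  {
    role: "assistant",
    content:
      "First you'll need a wallet, think of it as your backpack for " +
      "items, tokens, and everything you earn in-game.",
  },
];
```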
Voice Core
This core governs the agent’s voice, enabling it to communicate with users through speech. Contributions here can include:
Voice Model Contributions: AI models that enhance the agent’s voice quality, intonation, and overall speech generation capabilities.
Voice Data Contributions: Datasets of speech or sound that improve the agent’s ability to generate realistic and emotionally expressive voices.
Contributors can also help improve the agent’s language abilities, adding regional accents or multiple languages to expand its versatility.
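For example, a voice dataset contribution might pair audio clips with transcripts plus emotion and language tags. The manifest entry below is a sketch under assumed field names, not a prescribed format.

```typescript
// Illustrative manifest entry for a voice dataset contribution.
// All field names are assumptions made for this example.
interface VoiceClip {
  audioUri: string;     // pointer to the recorded clip (e.g. on IPFS)
  transcript: string;   // exact text spoken in the clip
  language: string;     // BCP 47 tag; "en-GB" would add a regional accent
  emotion?: "neutral" | "happy" | "sad" | "angry"; // expressiveness label
  sampleRateHz: number; // recording-quality metadata
}

const clip: VoiceClip = {
  audioUri: "ipfs://example-cid/clip_0001.wav",
  transcript: "Welcome back! Ready to pick up where we left off?",
  language: "en-GB",
  emotion: "happy",
  sampleRateHz: 48_000,
};
```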
Visual Core
The Visual Core gives the agent a 3D visual appearance. Contributions to this core enhance how the agent looks and moves, providing more immersive interactions.
Facial Core: This aspect handles the agent's facial expressions. Contributors can provide models or data that translate the agent's voice into appropriate facial movements, allowing it to show emotion during interactions.
Animation Core: This module allows the agent to perform gestures based on voice inputs. Contributions can include gesture data or animation models that synchronize body movements with speech, giving the agent a more natural, lifelike presence.
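To ground the Facial and Animation Cores, the sketch below shows one way voice output could drive the 3D model: phonemes map to viseme blendshape weights for the face, and spoken keywords trigger body gestures. Every name and weight here is an illustrative assumption, not the actual pipeline.

```typescript
// Hypothetical Facial Core mapping: phonemes from the agent's speech
// resolve to blendshape weights (0..1) that pose the 3D face.
type BlendshapeWeights = Record<string, number>;

const phonemeToViseme: Record<string, BlendshapeWeights> = {
  AA: { jawOpen: 0.7, mouthFunnel: 0.1 }, // open vowel, wide jaw
  OW: { jawOpen: 0.4, mouthPucker: 0.8 }, // rounded lips
  MM: { mouthClose: 1.0 },                // bilabial, lips sealed
};

// Fall back to a near-neutral pose for unmapped phonemes so the face
// never freezes mid-utterance.
function poseFor(phoneme: string): BlendshapeWeights {
  return phonemeToViseme[phoneme] ?? { mouthClose: 0.2 };
}

// Hypothetical Animation Core mapping: keywords in the speech stream
// trigger gestures so body movement stays synchronized with the voice.
const keywordToGesture: Record<string, string> = {
  hello: "wave",
  thanks: "bow",
  maybe: "shrug",
};
```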