Protocol layer

The protocol acts as an incentive layer to coordinate the decentralized creation of VIRTUALs

Each VIRTUAL has several cores that build up its multimodal capability

Each VIRTUAL AI is an amalgamation of specialized cores, each contributing to its multifaceted capabilities. When harmonized, these cores breathe life into the AI, endowing it with a rich, multimodal presence that can communicate and interact within digital spaces as if it were a character of flesh and blood.

Core 1: The Cognitive Core

The Cognitive Core stands at the forefront of advanced Virtual AI, seamlessly merging formidable computational prowess with a rich tapestry of knowledge. Encompassing a wide spectrum of sectors, from the analytical depths of finance to the intricate complexities of education, it taps into expansive datasets. Infused with the sophistication of Large Language Models (LLMs), this Core is adept at crafting interactions that are not only contextually attuned but also linguistically polished. Tailor-made for various specialized fields, its proficiency shines brightly in areas like storycrafting, storytelling, and education-related domain expertise. Beyond mere language refinement, the Cognitive Core also memorizes user dialogues, transforming AI interactions from transient exchanges into a continuous, evolving conversation.

Core 2: The Voice and Sound Core

The Voice and Sound Core infuses the AI with auditory dimensions. From the timbre and tone of its speech to the ambient sounds that accompany its interactions, this core adds an auditory layer, vital for creating an immersive experience. It ensures that the AI's voice is not only heard but felt, conveying emotions and intentions through sound waves.

Core 3: The Visual Core

The Visual Core is where the AI gains its visual identity, encompassing everything from basic appearance to complex animations. This core encapsulates the visual aspects that allow the AI to be seen and recognized, providing a digital face and body language to the character and soul nurtured by the other cores.

Future Cores

The architecture is designed to be extensible, accommodating future cores such as Skillset Cores that give a VIRTUAL additional abilities, for example image recognition, image generation, and multilingual responses.
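To make this modular design concrete, the sketch below models each core behind a shared interface so that new cores can be plugged in without touching the rest of the system. The interface, method signatures, and core names here are illustrative assumptions, not the protocol's actual implementation.

```typescript
// Illustrative only: the Core interface, payload shapes, and core names below
// are assumptions used to sketch a modular core architecture.

interface CoreInput {
  text?: string;        // single-modal input from a DApp, e.g. a user message
  audio?: ArrayBuffer;  // optional raw audio input
}

interface CoreOutput {
  modality: "text" | "audio" | "visual";
  payload: unknown;     // modality-specific result: reply text, waveform, animation data, ...
}

// Every core, present or future, implements the same contract, so a new
// Skillset Core can be added without changing the rest of the pipeline.
interface Core {
  readonly name: string;
  infer(input: CoreInput): Promise<CoreOutput>;
}

class CognitiveCore implements Core {
  readonly name = "cognitive";
  async infer(input: CoreInput): Promise<CoreOutput> {
    // In practice this would call a finetuned LLM; stubbed for illustration.
    return { modality: "text", payload: `Reply to: ${input.text ?? ""}` };
  }
}
```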

To interact with a VIRTUAL AI, DApps will input data, often through single-modal channels. These inputs are processed, with inferences drawn from each individual core before being synthesized into a cohesive output. The aggregation of these inferences creates a digital character with a comprehensive sensory output—capable of providing text replies, engaging users with voice responses, and presenting motion and visuals for a fully rounded virtual presence. The result is a VIRTUAL AI that can be read like a book, seen like a companion, and heard like a friend—bringing a new level of depth to digital interactions.
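Continuing the hypothetical sketch above, an aggregator of this kind could fan a single-modal input out to every core and merge the per-core inferences into one multimodal response, which a DApp then renders as text, voice, and visuals. All names are again assumptions for illustration, not the protocol's actual API.

```typescript
// Hypothetical aggregator, reusing the Core interface sketched above.
// It fans one input out to every registered core and merges the
// per-core inferences into a single multimodal response.
class VirtualAgent {
  constructor(private readonly cores: Core[]) {}

  async respond(input: CoreInput): Promise<Record<string, CoreOutput>> {
    // Run all cores in parallel; each contributes its own modality.
    const outputs = await Promise.all(this.cores.map((core) => core.infer(input)));

    // Key the merged result by core name so a DApp can render the text,
    // voice, and visual components together.
    return Object.fromEntries(this.cores.map((core, i) => [core.name, outputs[i]]));
  }
}

// Example (assuming VoiceCore and VisualCore are implemented like CognitiveCore):
// const agent = new VirtualAgent([new CognitiveCore(), new VoiceCore(), new VisualCore()]);
// const reply = await agent.respond({ text: "Tell me about your day" });
```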


The protocol aims to build out a grand library of VIRTUALs

The mission of the protocol is to construct an extensive repository of VIRTUALs—diverse AI entities with multifaceted capabilities. This grand library is designed to cater to a wide array of needs, preferences, and applications.

Within this library, we recognize three primary archetypes of VIRTUALs, each embodying a unique blend of multimodal intelligence:

  1. Virtual Mirrors of IP Characters: These VIRTUALs are crafted to emulate iconic characters from various intellectual properties. They not only mimic the appearance and voice but also the essence and personality traits of figures like Gojo Satoru, John Wick, or Yoda. This allows fans to interact with their favorite characters in a deeply personal and authentic way, enriching their experience of the original IP.

  2. Function-Specific VIRTUALs: Tailored to perform specialized tasks, these VIRTUALs are akin to expert systems. Whether it's weaving intricate horror narratives or coaching players through the strategic complexities of DOTA, these AIs possess domain-specific knowledge that allows them to function as both assistants and autonomous agents, providing users with expertise and interactive learning experiences.

  3. Personal Virtual Doubles: Imagine a digital twin that not only looks like you but also behaves and interacts in a way that's quintessentially you. These personal VIRTUALs are designed to be customizable extensions of oneself, offering users the opportunity to interact with digital spaces in a manner that's inherently personal and unique to their identity.

The protocol's innovative structure and its commitment to continuous expansion mean that the possibilities for new VIRTUAL archetypes are limitless. Future VIRTUALs could include those with memory cores that remember previous interactions for more cohesive and long-term engagements, or DApp context cores that allow VIRTUALs to adapt fluidly across various decentralized applications.

In essence, the protocol is not just building a library of VIRTUALs but a new dimension of existence where each digital entity offers an opportunity for exploration, interaction, and personal expression. Whether engaging with a mirror image of a beloved character, learning from a functional expert, or interacting with a personal digital twin, users are invited into a world where their virtual experiences are bound only by the imagination.


Contributors will contribute data and models to each VIRTUAL, with validators acting as gatekeepers

In this ecosystem, contributors and validators play pivotal roles in the creation and curation of VIRTUALs.

Contributors: The Architects

Contributors are the driving force behind each VIRTUAL, supplying the raw material that shapes its very essence. They fall into various categories:

  • Character Text Data Contributors supply essential text data to shape the AI's dialogue, backstory, and traits, forming the VIRTUAL's personality.

  • Character LLM Finetuners enhance AI communication by refining language models with targeted text data, ensuring natural and contextually relevant interactions. They equip the AI with specialized knowledge for specific domains, enabling effective task execution primarily through textual input. This process transforms generic AI into a more nuanced tool, adept in specific professional and thematic areas.

  • Voice Data Contributors deliver diverse vocal samples to craft the VIRTUAL's voice, capturing a full emotional and tonal range for expressive speech generation.

  • Voice AI Model Finetuners train AI models with these samples, calibrating speech for authentic human-like conversations across various emotional spectrums.

  • Visuals Contributors design the VIRTUAL's appearance, from basic imagery to detailed animations.
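As a rough illustration of how these contribution streams might be represented as data, the sketch below defines one possible shape for a submission, keyed by the contributor categories above. The type and field names are assumptions, not part of the protocol.

```typescript
// Illustrative only: one possible shape for contributions flowing into a
// VIRTUAL. None of these type or field names are defined by the protocol.

type ContributionKind =
  | "character-text-data"     // dialogue, backstory, and trait data
  | "character-llm-finetune"  // finetuned language model weights
  | "voice-data"              // vocal samples across emotional and tonal ranges
  | "voice-model-finetune"    // finetuned speech models
  | "visuals";                // imagery and animations

interface Contribution {
  virtualId: string;   // which VIRTUAL the contribution feeds into
  contributor: string; // contributor identifier (e.g. a wallet address)
  kind: ContributionKind;
  uri: string;         // pointer to the dataset, model weights, or assets
  submittedAt: number; // unix timestamp of submission
}
```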

Validators: The Gatekeepers

Validators ensure that each VIRTUAL maintains a high standard, consistent with its intended role and the overarching quality benchmarks of the ecosystem. They serve as the gatekeepers, reviewing and certifying the inputs from contributors. Their responsibilities include:

  • Ensuring Authenticity: Validators check the accuracy and authenticity of the text, voice, and visual data, confirming that they align with the intended character design and IP regulations.

  • Maintaining Quality: They assess the quality of the AI's learning model finetuning, ensuring that the voice, sound, and domain expertise are up to par with user expectations.

  • Upholding Standards: Validators ensure that the VIRTUALs adhere to the standards set by the ecosystem, from ethical considerations to technical specifications.

Validators are not just passive reviewers; they are active participants who provide feedback to contributors, fostering a cycle of improvement and innovation. Their role is integral to the VIRTUAL’s development, serving as a critical checkpoint that ensures only the highest quality AIs enter the digital realm.
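As a rough sketch of this gatekeeping loop, the example below (reusing the hypothetical Contribution shape from the previous sketch) runs a submission through a set of pluggable checks and returns either an approval or actionable feedback for the contributor. The checks, statuses, and function names are placeholders, not the protocol's actual validation logic.

```typescript
// Hypothetical review step: each submitted contribution passes checks for
// authenticity, quality, and ecosystem standards before it is incorporated
// into a VIRTUAL. Check logic here is a placeholder.

type ReviewStatus = "approved" | "needs-changes";

interface ValidationResult {
  contribution: Contribution;
  status: ReviewStatus;
  feedback: string; // feedback closes the improvement loop with the contributor
}

// A check returns null when it passes, or a human-readable issue otherwise.
type Check = (c: Contribution) => string | null;

function review(contribution: Contribution, checks: Check[]): ValidationResult {
  const issues = checks
    .map((check) => check(contribution))
    .filter((issue): issue is string => issue !== null);

  return {
    contribution,
    status: issues.length === 0 ? "approved" : "needs-changes",
    feedback:
      issues.length === 0
        ? "Meets authenticity, quality, and ecosystem standards."
        : issues.join("; "),
  };
}

// Example check: flag submissions that do not point at any data.
// const hasData: Check = (c) => (c.uri.length > 0 ? null : "Missing data URI");
```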


Economic incentives align the entire VIRTUAL ecosystem

The Virtual-ous Flywheel
