# Vader Introduction

<figure><img src="https://4041254935-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F81pitYIUDntlv7QvRjb3%2Fuploads%2FEFWwWCFVUesMv2xvMngG%2FDEXS_Header_1500x500%20(2).png?alt=media&#x26;token=6491b6f2-90fe-4925-8c85-3ec84b11863d" alt=""><figcaption></figcaption></figure>

**VADER** is building the data layer for the robotics revolution - an ecosystem where human experience becomes the foundation for machine intelligence.

We believe that the next generation of artificial intelligence will not live inside screens but inside **robots** - machines that perceive, move, and act in the real world. But for robots to understand the world as humans do, they need one thing above all: **data**. Not text, not static images, but **egocentric video**: the visual record of what it's like to see and act from the human point of view.

VADER is a vertically integrated platform designed to **collect, process, and monetize real-world human experience data** - transforming everyday actions into structured intelligence for training embodied AI and humanoid systems.

* **EgoPlay** is a gamified platform where users record real-world tasks through smart glasses and smartphones, earning rewards while contributing egocentric video data.
* **Orn** is a proprietary AI architecture that processes, anonymizes, and converts raw video into training-ready datasets for robotics models.
* **VADER** is the reward, medium of exchange, and network token that aligns users and validators through transparent incentives and tokenized participation.

#### The Vision

We believe that in the coming decade, humanoid robots will become as ubiquitous as smartphones.\
They will cook, clean, build, drive, care, and serve - but to do these things, they must first learn **from us**.

VADER’s mission is to accelerate that learning process by **crowdsourcing human physical experience** at a global scale. Every time a user folds laundry, pours water, or washes dishes through EgoPlay, they contribute a tiny piece of intelligence to the collective dataset that trains future robots.

This data, captured through first-person (egocentric) video, fills a critical gap in robotics: real, diverse, contextual experience from the human perspective. Our vision is to create a **decentralized data network** where millions of people around the world help teach robots how to act, move, and think - and get rewarded for it.

#### Building the Foundation of Physical AI

In the same way that text and image datasets powered the rise of ChatGPT and Stable Diffusion, **VADER is building the dataset that will power Physical AI** - the intelligence that governs robots in the real world.

Through smart glasses, gamified participation, and token-aligned incentives, VADER transforms human labor into digital learning - a scalable, permissionless data engine that continuously improves itself through feedback loops between humans and machines.

Our goal is not just to train robots but to build a **shared intelligence layer between humans and machines**, where every task, every motion, and every contribution moves the world closer to general-purpose robotics.

The future of AI will not be typed. It will be **lived, recorded, and learned** - through the collective eyes of humanity.

That is the vision of VADER.
