About Nataris

We're building AI infrastructure that belongs to everyone.

Our Mission

AI is transforming every industry, but access remains concentrated in the hands of a few large providers. We believe the future of AI should be decentralized, affordable, and accessible to everyone.

Nataris is a peer-to-peer marketplace that connects developers who need AI inference with a global network of mobile devices that can provide it. No data centers. No gatekeepers. Just a network of devices, owned and operated by people around the world.

The Name

Nataris blends the Sanskrit root nāṭa — meaning movement, rhythm, and dynamic action — with a modern system suffix. It means "a system in continuous motion."

Like the philosophy behind Nataraja (the cosmic dancer who symbolizes continuous transformation), Nataris represents platforms that are not static tools, but living systems that continuously adapt, learn, and evolve.

How It Works

1. Developers integrate via API

Use our OpenAI-compatible API to send inference requests — text generation, speech-to-text, text-to-speech, or voice agents.
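As a sketch of what that integration could look like: an OpenAI-compatible API accepts the standard chat-completions request body, so any client that can build that JSON can talk to it. The base URL, model name, and helper function below are hypothetical placeholders, not documented Nataris values.

```python
import json

# Hypothetical base URL -- check the Nataris docs for the real endpoint.
NATARIS_BASE_URL = "https://api.nataris.example/v1"

def build_chat_request(prompt: str, model: str = "llama-3.2-3b") -> dict:
    """Build an OpenAI-style chat-completions payload.

    Any OpenAI-compatible client can POST this body to
    {base_url}/chat/completions with a Bearer token header.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Summarize this paragraph in one sentence.")
print(json.dumps(payload, indent=2))
```

Because the request shape is the standard one, existing OpenAI SDKs can be pointed at the network simply by overriding their base URL.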

2. Nataris orchestrates and routes

Our backend matches your request with available provider devices. For complex tasks, the orchestration layer chains multiple inference steps — research, code generation, or document analysis — across the network automatically.

3. Mobile devices run inference locally

Provider devices running the Nataris app process your request using on-device AI models. No cloud. No data centers.

4. Providers earn, developers save

Providers earn 85% of the payment for each completed job, while developers get open-source AI, privately processed, at lower cost.
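Under the 85% split, a provider's cut of a single job is simple arithmetic (the job price below is a made-up example, not real pricing):

```python
PROVIDER_SHARE = 0.85  # revenue split stated above

def provider_payout(job_price_cents: int) -> int:
    """Provider's share of one job, rounded down to whole cents."""
    return int(job_price_cents * PROVIDER_SHARE)

# A hypothetical 100-cent job pays the provider 85 cents.
print(provider_payout(100))
```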

Platform Trust & Security

Nataris is designed for reliability and transparency, from device verification to request routing. By default, your prompts and responses are not stored, and inference runs on distributed devices rather than in a central data center. If you use conversation memory, that data is stored on our servers to power the feature.

Play Integrity Verified

Every provider device is verified using Google Play Integrity API to ensure genuine, unmodified devices.

Health Monitoring

Continuous battery, thermal, and memory monitoring protects both devices and inference quality.

Smart Routing

Intelligent request matching based on model support, device capability, and network conditions.
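One way such matching could look in code, as a minimal sketch: filter devices by model support, then rank by capability against network latency. The fields and weights below are invented for illustration, not the production routing logic.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Device:
    id: str
    models: Set[str]          # model names this device can run
    capability: float         # 0..1, e.g. normalized compute/RAM score
    latency_ms: float         # current network latency to the device

def route(devices: List[Device], model: str) -> Optional[Device]:
    """Pick the best eligible device: must support the requested model,
    then rank by capability minus a latency penalty."""
    eligible = [d for d in devices if model in d.models]
    if not eligible:
        return None
    return max(eligible, key=lambda d: d.capability - d.latency_ms / 1000)

fleet = [
    Device("a", {"qwen-2.5"}, capability=0.9, latency_ms=300),
    Device("b", {"qwen-2.5", "phi-3"}, capability=0.7, latency_ms=50),
]
print(route(fleet, "qwen-2.5").id)
```

Here device "b" wins despite lower capability, because its latency penalty is far smaller; a real router would also weigh battery, thermal state, and reputation.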

On-Device Processing

AI inference runs locally on provider devices. Requests are routed through our backend, but by default we don't store prompts or responses. Optional conversation memory stores data only when you use it.

Quality Scoring

Providers build reputation through successful jobs. Higher scores earn priority routing and better earnings.

Automatic Failover

If a device goes offline mid-job, requests are automatically rerouted to ensure delivery.
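Failover of this kind can be sketched as trying candidate devices in order and rerouting on failure; the callables below stand in for real device connections, and the exception type is a placeholder for "went offline".

```python
from typing import Callable, List

def run_with_failover(request: str, devices: List[Callable[[str], str]]) -> str:
    """Try each candidate device in turn; reroute to the next on failure."""
    last_error = None
    for device in devices:
        try:
            return device(request)
        except ConnectionError as err:  # stand-in for a mid-job dropout
            last_error = err            # fall through: reroute the request
    raise RuntimeError("all devices failed") from last_error

def offline_device(_req: str) -> str:
    raise ConnectionError("device dropped mid-job")

# First device fails, second completes the job.
print(run_with_failover("hello", [offline_device, lambda r: r.upper()]))
```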

Built on Open Source

Nataris is built on the shoulders of giants. We use the RunAnywhere SDK for on-device inference, llama.cpp for efficient LLM execution, ONNX Runtime for model portability, and open-weight models like Llama, Qwen, and Phi.


Join the Network

Whether you're a developer looking for affordable AI or someone with a spare phone, there's a place for you in the Nataris network.