
The People's AI Network
Powered by people, not datacenters.
Unfiltered access to open-weight models — no content filtering, no model training, no vendor lock-in. Developers call an OpenAI-compatible API; the jobs route to Android phones worldwide. Phone owners keep 85%.
Developer
Routing
Provider
Two ways to use Nataris. Pick yours.
Text generation, multi-step workflows, and conversation memory — all through one OpenAI-compatible API, routed to the best available device on the network.
# OpenAI-compatible Chat Completions API
curl -X POST https://api.nataris.ai/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "nataris-balanced",
"messages": [
{"role": "user", "content": "Explain quantum computing in one paragraph."}
],
"max_tokens": 256
}'
Qwen 2.5 0.5B
Text
Fast & reliable
Llama 3.2 1B
Text
General purpose
Phi-3 Mini 3.8B
Text
Code & reasoning
Nataris is OpenAI-compatible. Change the base URL — the rest of your code stays identical.
from openai import OpenAI
client = OpenAI(
base_url="https://api.nataris.ai/v1", # only change
api_key="YOUR_NATARIS_KEY"
)
response = client.chat.completions.create(
model="nataris-balanced",  # or nataris-fast / nataris-quality
messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)
Works with any OpenAI-compatible client. View full API docs →
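Because the API is stateless, conversation memory travels client-side in the `messages` list: each request carries the full history. A minimal sketch of that pattern — the `Conversation` helper below is illustrative, not a Nataris SDK class:

```python
class Conversation:
    """Accumulate chat turns so each request carries the full history."""

    def __init__(self, system=None):
        self.messages = []
        if system:
            self.messages.append({"role": "system", "content": system})

    def ask(self, user_text, complete):
        """`complete` is any callable mapping a messages list to a reply
        string (e.g. a wrapper around client.chat.completions.create)."""
        self.messages.append({"role": "user", "content": user_text})
        reply = complete(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply


# Wiring it to the client from the example above:
# def complete(messages):
#     r = client.chat.completions.create(
#         model="nataris-balanced", messages=messages)
#     return r.choices[0].message.content
#
# chat = Conversation(system="You are terse.")
# chat.ask("Name a prime above 10.", complete)
# chat.ask("Name the next one.", complete)  # second call sees the first turn
```

The second call "remembers" the first only because both turns are resent in `messages` — the same pattern works against any OpenAI-compatible endpoint.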
Open-weight models, your rules. Build what's hard to do on locked-down clouds.
Open-weight models without vendor-mandated content policies. No filtering on outputs — fiction, dialogue, experimental narratives, whatever you need to build.
Games, interactive fiction, creative writing tools, brainstorming apps, unrestricted chatbots.
High-volume inference for community bots, game systems, and workflows. No strict rate limits choking your app at scale.
Discord/Telegram bots, dynamic NPC dialogue, workflow automation, customer support.
$5 free credit, no credit card, multiple models to try. Good for MVPs, demos, and testing before committing to a vendor.
Early-stage products, model comparison, hackathons and learning.
Research, code generation, and analysis — orchestrated across multiple inference steps with built-in budget controls.
Deep research, code review pipelines, document Q&A, agent-style reasoning.
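A multi-step workflow with a spending cap can be sketched as chained chat-completions calls over the wire format shown in the curl example above. The `TokenBudget` helper and `run_workflow` function are hypothetical client-side illustrations, not Nataris API features:

```python
import json
import urllib.request

API_URL = "https://api.nataris.ai/v1/chat/completions"  # from the curl example


def nataris_chat(api_key, messages, max_tokens, model="nataris-balanced"):
    """POST one chat-completions request (OpenAI-compatible wire format)."""
    body = json.dumps({"model": model, "messages": messages,
                       "max_tokens": max_tokens}).encode()
    req = urllib.request.Request(
        API_URL, data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


class TokenBudget:
    """Client-side cap shared across the steps of one workflow."""

    def __init__(self, cap):
        self.cap, self.used = cap, 0

    def spend(self, n):
        self.used += n

    @property
    def remaining(self):
        return max(self.cap - self.used, 0)


def run_workflow(api_key, topic, cap=1000):
    """Two steps — gather notes, then summarize — never exceeding `cap`."""
    budget = TokenBudget(cap)
    notes = nataris_chat(
        api_key,
        [{"role": "user", "content": f"List key facts about {topic}."}],
        max_tokens=min(512, budget.remaining))
    budget.spend(notes["usage"]["total_tokens"])
    if budget.remaining == 0:  # cap exhausted: skip the synthesis step
        return notes["choices"][0]["message"]["content"]
    summary = nataris_chat(
        api_key,
        [{"role": "user", "content": "Summarize in one sentence:\n"
          + notes["choices"][0]["message"]["content"]}],
        max_tokens=min(128, budget.remaining))
    return summary["choices"][0]["message"]["content"]
```

Capping `max_tokens` per step against the remaining budget means a verbose first step automatically shrinks (or cancels) the second.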
With most AI APIs, a corporation sits between you and the model — logging requests, analyzing patterns, potentially training on your data. With Nataris, inference runs on independent community hardware. Provider devices are nameless compute: they receive your prompt, return the response, retain nothing about you. No usage profiling. No training pipeline.
Internal tools, research, analysis — without your content in a corporate data pipeline.
Nataris isn't another GPU cloud. It's the first AI inference marketplace that runs on real devices — phones, not racks.
io.net, Akash, Render — all datacenter GPUs. We route inference to smartphones. Billions of idle devices, zero capital cost to the network.
Open-weight models without vendor-mandated moderation. No content policies on outputs. You choose what you build.
Every dollar flows to real people, not data centers. Best-in-class provider share — direct payouts via UPI/Stripe (coming soon).
Fast failover across multiple devices. If one device fails, the next picks up instantly, with up to 3 automatic re-routing attempts.
Text generation, multi-step workflows, conversation memory — all production-ready. Audio features (STT, TTS) coming soon.
Optional Google Play Integrity attestation for providers. Route to verified devices only — no other P2P AI network offers this.
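The up-to-3-attempts re-routing described above happens on the network side, but a client can layer its own safety net the same way. A sketch of that pattern — the helper, attempt count, and backoff values are assumptions, not part of the Nataris API:

```python
import time


def with_retries(call, attempts=3, base_delay=0.5):
    """Retry a flaky call up to `attempts` times with exponential backoff,
    mirroring the network's up-to-3-attempts re-routing on the client side."""
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:  # out of attempts: surface the error
                raise
            time.sleep(base_delay * (2 ** i))


# e.g. with_retries(lambda: client.chat.completions.create(
#     model="nataris-balanced",
#     messages=[{"role": "user", "content": "Hello"}]))
```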
The network is launching. Once providers start running inference and developers start sending requests, live stats will appear here — total inferences, tokens processed, and earnings paid to providers. All real numbers, nothing made up.
13+
Models Ready
Text
Audio & RAG coming soon
85%
To Providers
OpenAI-compatible API. Start in minutes.
Multi-step workflows and conversation memory — all built in.
# Install and use like OpenAI
pip install openai
from openai import OpenAI
client = OpenAI(
api_key="YOUR_NATARIS_KEY",
base_url="https://api.nataris.ai/v1"
)
response = client.chat.completions.create(
model="nataris-balanced",  # or nataris-fast / nataris-quality
messages=[{"role": "user", "content": "Hello!"}]
)
Your phone. Your compute. Your earnings.
Pre-register now. When Nataris launches on Google Play, your phone can run AI models in the background while you sleep, browse, or work. 85% of every inference fee goes directly to you — the best provider share in the industry.
Pre-register
Join the list on Google Play
Get notified
We will email you when Nataris launches
Install & go online
Install the app, sign in, and start accepting jobs
Earn
Paid for every inference your phone completes
Honest answers, no fluff.
Nataris blends the Sanskrit root nāṭa — movement, rhythm, dynamic action — with a modern system suffix. It means "a system in continuous motion."
We're building AI infrastructure that adapts, grows, and belongs to the people who power it.
Q1 2026 ← We are here
Beta Launch
Android app, core models
Q2 2026
iOS App
Enterprise tier, SLAs
Q3 2026
Ecosystem
Connectors, governance
Token Launch
Provider rewards, staking