The People's AI Network
Nataris is a peer-to-peer inference network that runs on real smartphones, not racks or cloud VMs. Every device is independently owned. Every provider earns directly for the work their device does.
Out of closed beta — now live on Google Play · 21 providers · 3 paying developers · 2,741 inference jobs · 343,371 tokens processed
What
A peer-to-peer AI inference marketplace. Developers access an OpenAI-compatible API and get AI inference without content filtering or prompt logging.
How
Inference runs on real Android smartphones owned by independent providers, who earn 85% of every job their device completes. No data centers. No cloud VMs.
Who it's for
Developers building AI apps who need uncensored, private inference — and Android owners who want to earn from idle compute.
Stage
Out of closed beta — live on Google Play, with pre-registration open.
The world's smartphones carry enough compute to run open-weight AI models entirely on-device. Most spend more time idle than active: plugged in, doing nothing.
The harder question was whether you could build a network around it that actually works. Getting inference to run reliably across thousands of different devices means dealing with OEM battery optimizers that kill background processes mid-job, model loading times that vary across hardware generations, and thermal behavior that shifts depending on what else the phone is doing. We spent a long time on these problems.
What came out of it is an OpenAI-compatible API backed by a real device network. Developers get inference without the content filtering or prompt logging that comes with centralized providers. Providers get 85 cents of every dollar their device earns. No single company owns the infrastructure.
Nataris comes from Nataraja, the Lord of Dance, one of the most recognizable images in Indian philosophy. Nataraja performs the ananda tandava, the dance of bliss, in a ring of fire. The ring represents the cycle of creation and destruction. The dance is continuous.
We took the root nāṭa, which means movement, rhythm, and truth expressed through action, and added a system suffix.
Every device in the network operates independently. It decides when to come online, which models to download, when to charge. The devices don't coordinate with each other. But the network as a whole behaves like something coherent, a system in rhythm.
Drop in your existing OpenAI SDK. Change the base URL to api.nataris.ai/v1. Text generation, multi-step orchestration, conversation memory — all there without touching the rest of your code.
The backend finds the best available device for your request — factoring in the model, device capability, thermal state, and reputation built from thousands of completed jobs. For complex tasks, the orchestration layer chains steps across multiple devices automatically.
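The production scheduler is internal, but a device-selection pass weighing those signals can be sketched roughly as a scoring function over candidates. The `Device` fields and the weights here are illustrative assumptions, not the real scoring model:

```python
from dataclasses import dataclass

@dataclass
class Device:
    id: str
    has_model: bool          # requested model already downloaded on the device
    capability: float        # 0..1, relative hardware tier
    thermal_headroom: float  # 0..1, where 1.0 = running cool
    reputation: float        # 0..1, accumulated from completed-job history

def route_score(d: Device) -> float:
    # Illustrative weighting only; the real weights are internal to Nataris.
    if not d.has_model:
        return 0.0  # can't serve this request at all
    return 0.4 * d.capability + 0.3 * d.thermal_headroom + 0.3 * d.reputation

devices = [
    Device("a", True, 0.9, 0.2, 0.8),   # fast but running hot
    Device("b", True, 0.6, 0.9, 0.9),   # mid-tier, cool, proven
    Device("c", False, 1.0, 1.0, 1.0),  # ideal hardware, wrong model
]
best = max(devices, key=route_score)
print(best.id)  # "b": thermal state and reputation outweigh raw capability here
```

The point of the sketch: a slower device that is cool and reliable can out-rank a faster one that is about to throttle.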
The provider's device receives the job over WebSocket, loads the model (or finds it already warm in memory), and streams tokens back in real time. On-device. No cloud hop. The response goes directly from their device to your API response.
85% of every inference fee goes to the provider — credited instantly, withdrawable via UPI/Stripe. No monthly minimums. No payout thresholds that vanish into fine print. The network pays the people who power it.
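The split is simple enough to state in code. `split_fee` is a hypothetical helper for illustration; actual settlement happens in the billing backend:

```python
PROVIDER_SHARE = 0.85  # 85% of every inference fee, per the terms above

def split_fee(fee_cents: int) -> tuple[int, int]:
    """Split a fee (in integer cents) into (provider, platform) shares."""
    provider = round(fee_cents * PROVIDER_SHARE)
    return provider, fee_cents - provider

print(split_fee(100))  # (85, 15)
```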
These aren't features we bolted on. They follow from the architecture.
No content filtering
We run open-weight models on independent devices. There is no central layer enforcing what you can generate. You define your own content policy.
No prompt logging by default
Prompts and responses are not stored on our servers unless you opt into conversation memory. We keep billing metadata. That is all.
85% to providers
We take 15% to run the platform. The rest goes to whoever's device did the work.
Open-weight models
We run models the research community has audited and published openly. The selection grows as the ecosystem does. No proprietary black boxes.
Play Integrity verified devices
Every provider device passes Google Play Integrity verification. Reputation scores accumulate from real job history over time.
Automatic failover
If a device goes offline mid-job, the request re-routes. Device health monitoring runs continuously, not just at onboarding.
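In outline, mid-job re-routing amounts to trying ranked candidate devices in order until one completes. The `Device` class and `run_with_failover` helper below are an illustrative sketch, not the network's actual implementation:

```python
class Device:
    """Toy stand-in for a provider device; real jobs arrive over WebSocket."""
    def __init__(self, name: str, healthy: bool):
        self.name, self.healthy = name, healthy

    def run(self, job: str) -> str:
        if not self.healthy:
            raise ConnectionError(f"{self.name} went offline mid-job")
        return f"{self.name}:{job}"

def run_with_failover(job: str, candidates: list, max_attempts: int = 3) -> str:
    # Try the best-ranked devices in order; re-route on disconnect.
    for device in candidates[:max_attempts]:
        try:
            return device.run(job)
        except ConnectionError:
            continue  # device dropped; fall through to the next candidate
    raise RuntimeError("no healthy device available")

result = run_with_failover("job-1", [Device("a", False), Device("b", True)])
print(result)  # "b:job-1" — device a dropped, the job re-routed to b
```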
The open-source ecosystem made Nataris possible. Efficient on-device inference runtimes, portable model formats, and openly published weights from the research community gave us the foundation to build on. We've layered the network infrastructure, the marketplace, and the reliability systems on top of that foundation.
As the ecosystem grows, so does what's available through Nataris.
View open source credits →

10 years building products at Publicis across APAC and India. Previously co-founded Arkreach, a media analytics company. Built Nataris end-to-end — backend, Android app, developer portal, and inference routing layer.
Operations and growth. Background in brand strategy and consumer marketing. Leads provider growth, partnerships, and go-to-market for Nataris.
If you're a developer who wants unfiltered access to open-weight models — or someone with a phone that spends its nights doing nothing — the network is open.