Security Architecture
Last updated: January 30, 2026
Overview
Nataris is a peer-to-peer AI inference marketplace. This document explains our security architecture, what protections we provide, and what you should know before using our service.
We believe in transparency: understanding our architecture helps you make informed decisions about what data to process through our platform.
How Our Network Works
Our backend orchestrates the matching between your requests and available provider devices. All network communication is encrypted, but the inference itself happens on third-party devices.
Security Measures We Implement
TLS/HTTPS Encryption
All API communication is encrypted using TLS 1.2+
Secure WebSocket (WSS)
Real-time provider connections use encrypted WebSockets
API Key Authentication
All requests require a valid API key; store keys securely and never embed them in client-side code
Provider Verification
Providers must register and authenticate with unique tokens
Rate Limiting
Per-API-key rate limits prevent abuse
Audit Logging
Job assignments are logged for compliance (metadata only, not content, unless you opt into conversation memory)
Provider Data Agreements
All providers agree to data handling obligations before processing jobs
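On the client side, these measures show up as ordinary authenticated HTTPS calls. The sketch below illustrates attaching an API key and backing off when the per-key rate limit returns HTTP 429. The base URL, `Authorization` header name, and backoff policy are assumptions, not documented API behavior; the endpoint path follows the example request later in this document.

```python
# Sketch: authenticated request with basic rate-limit backoff.
# API_BASE and the bearer-token header are assumptions; check your
# dashboard for the real values.
import json
import time
import urllib.error
import urllib.request

API_BASE = "https://api.nataris.ai"  # assumed base URL

def auth_headers(api_key: str) -> dict:
    """Build request headers carrying the API key (header name assumed)."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def post_chat(api_key: str, payload: dict, attempts: int = 3) -> dict:
    """POST a chat payload, retrying with exponential backoff on HTTP 429."""
    req = urllib.request.Request(
        f"{API_BASE}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers=auth_headers(api_key),
        method="POST",
    )
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())
        except urllib.error.HTTPError as err:
            if err.code == 429 and attempt < attempts - 1:
                time.sleep(2 ** attempt)  # rate limited: wait, then retry
            else:
                raise
```

Keeping the key in a server-side environment variable (rather than shipping it in an app or web page) is what prevents it from ever reaching provider devices.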
Important: Data Visibility on Provider Devices
During the inference process, your prompts and responses are visible to provider devices.
While we encrypt data in transit and providers are contractually prohibited from storing or sharing data, we cannot provide the same technical guarantees as centralized cloud providers where you control the infrastructure.
What This Means:
- Provider apps can see your prompts while processing them
- We cannot technically prevent a malicious actor from logging data
- Providers agree not to store data, but enforcement is after-the-fact
- We maintain audit trails to identify and remove bad actors
What We Do About It:
- Mandatory data handling agreements for all providers
- Audit logging of all job assignments
- Immediate termination for policy violations
- Legal recourse for data misuse
Recommended Usage
✓ Suitable For
- General AI tasks and content generation
- Creative writing and brainstorming
- Code assistance and debugging
- Development and testing environments
- Non-sensitive data processing
- Public or anonymized data
⚠ Not Recommended For
- Personally Identifiable Information (PII)
- Protected Health Information (PHI/HIPAA)
- Financial or payment card data
- Trade secrets or proprietary information
- Legal or confidential communications
- Authentication credentials or secrets
Verified Provider Routing
Route your requests to providers with verified device integrity via Google Play Integrity API.
Available Now
Request verified-only routing by adding privacyLevel: "verified" to your API calls.
- Providers pass device attestation checks (unmodified app, not rooted)
- Jobs are only routed to devices with valid integrity tokens
- Reduces risk from modified apps or compromised devices
- Integrity status refreshed every 24 hours
Note: Even verified routing does not make PII safe; device owners can still observe data during processing.
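In code, verified-only routing is just one extra field on the request body. A minimal sketch, using the field names from the example request in this document (the helper function and default model are illustrative):

```python
# Build a chat-completion payload that opts into verified-only routing.
# "routing_preferences" and privacyLevel follow the documented example
# request; the helper name is ours.

def verified_payload(prompt: str,
                     model: str = "llama-3.2-1b-instruct-q4_k_m") -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "routing_preferences": {"privacyLevel": "verified"},
    }
```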
Example Request
POST /v1/chat/completions
{
"model": "llama-3.2-1b-instruct-q4_k_m",
"messages": [{"role": "user", "content": "Your prompt here"}],
"routing_preferences": {
"privacyLevel": "verified"
}
}
Security Roadmap
Upcoming security enhancements:
- Trusted Provider Tiers: Verified providers with higher earnings potential
- Enterprise Options: Dedicated provider pools for enterprise workloads
- Regional Routing: Route jobs to providers in specific regions for data residency
Security FAQ
Q: Can providers see my API key?
No. Your API key is only used for authentication with our backend. Providers never receive your API key.
Q: Do you store my prompts?
Not by default. Standard API calls do not store prompts or responses; we keep only metadata (job IDs, timestamps, provider assignments). If you opt into conversation memory by passing a conversation_id, messages are stored server-side so the API can recall context. This is entirely opt-in: if you don't use it, nothing is stored.
Q: What happens if a provider violates the data agreement?
Providers who violate data handling agreements face immediate account termination, forfeiture of earnings, and potential legal action. Our audit logs help identify and investigate violations.
Q: Is Nataris HIPAA/SOC2 compliant?
Not currently. Our decentralized architecture is not designed for regulated workloads like healthcare or financial data. We are exploring enterprise options with enhanced compliance for future releases.
Security Contact
For security concerns, vulnerability reports, or questions about our architecture: security@nataris.ai