No data leaves your server unless you say so. Every layer of Dockbox is built so that security is structural -- not a checkbox you hope someone remembered to tick.
Most AI tools share infrastructure across customers. A vulnerability in one tenant can cascade to all. Dockbox is single-tenant by design -- every deployment gets its own isolated compute, storage, and network.
Before any document reaches a cloud AI model, it passes through a two-stage scrubbing system that runs entirely on your server. Originals are preserved in a secure vault with role-based access.
1. **Input:** original file with sensitive data
2. **Pass 1 -- pattern matching:** regex + dictionary rules
3. **Pass 2 -- contextual analysis:** a local Ollama model catches context-dependent PII
4. **Vault:** originals stored safely with restore access
5. **Cloud model:** only sees anonymized, scrubbed data
The first pass uses pattern matching to catch structured PII: social security numbers, credit card numbers, phone numbers, email addresses, and custom patterns you define. Dictionary-based matching catches names and known entities from your organization's data.
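A minimal sketch of the first pass, assuming a regex table for structured PII plus a dictionary of known entities. The pattern set and replacement labels here are illustrative, not Dockbox's actual rule set:

```javascript
// First-pass scrubber: regex patterns for structured PII plus a
// dictionary of known names/entities. Patterns shown are simplified.
const PATTERNS = {
  ssn: /\b\d{3}-\d{2}-\d{4}\b/g,
  email: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g,
  phone: /\b\d{3}[-.]\d{3}[-.]\d{4}\b/g,
};

function scrubPass1(text, dictionary = []) {
  let out = text;
  for (const [label, re] of Object.entries(PATTERNS)) {
    out = out.replace(re, `[${label.toUpperCase()}]`);
  }
  for (const term of dictionary) {
    // Escape regex metacharacters so dictionary entries match literally
    const safe = term.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    out = out.replace(new RegExp(`\\b${safe}\\b`, 'gi'), '[NAME]');
  }
  return out;
}
```

Custom patterns you define would simply extend the `PATTERNS` table.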
A local Ollama model (never leaves your server) performs contextual analysis to catch PII that regex misses -- indirect identifiers, contextual references, and sensitive information that only natural language understanding can detect.
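The second pass could be driven through Ollama's local REST API. A sketch, assuming a hypothetical prompt wording and model name (the `/api/generate` endpoint is Ollama's, served from your own machine on port 11434 by default):

```javascript
// Second-pass scrubber: ask a local Ollama model to flag PII that
// pattern matching misses. Prompt text and model name are illustrative.
function buildScrubPrompt(text) {
  return [
    'Identify personally identifiable information in the text below,',
    'including indirect identifiers (job titles, locations, relationships).',
    'Return the text with each such span replaced by [REDACTED].',
    '---',
    text,
  ].join('\n');
}

async function scrubPass2(text, model = 'llama3') {
  // Runs entirely against localhost -- nothing leaves the server
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt: buildScrubPrompt(text), stream: false }),
  });
  const { response } = await res.json();
  return response; // the model's scrubbed rewrite of the input
}
```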
Original files are preserved at data/pii-vault/ with full audit trail. Authorized users can restore originals through the dashboard. The vault is never exposed to AI models or external services.
Only administrators can access the vault and restore scrubbed files to their original state. Every restore action is logged. Group-level scrubbing ensures each team can only access their own vault entries.
Each AI session runs inside its own Docker container with isolated filesystem, memory, and network. When the session ends, the workspace is destroyed. Nothing persists that you did not explicitly save.
Dockbox does not use threads or sandboxed processes. Every AI session runs in a full Docker container -- the same isolation technology used by cloud providers to separate customers.
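An ephemeral session container might be launched with flags like these. The flags are standard `docker run` options; the image name, network name, and limits are hypothetical, not Dockbox's actual configuration:

```javascript
// Build docker-run arguments for one ephemeral session container.
function sessionArgs(sessionId) {
  return [
    'run',
    '--rm',                        // workspace destroyed when the session ends
    '--name', `session-${sessionId}`,
    '--network', 'dockbox-internal', // hypothetical network reaching only the proxy
    '--read-only',                 // immutable root filesystem
    '--memory', '512m',            // isolated, capped memory
    'dockbox/session:latest',      // hypothetical session image
  ];
}
// e.g. child_process.spawn('docker', sessionArgs('abc123'))
```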
The credential proxy sits between containers and external APIs. Containers make requests with placeholder keys -- the proxy intercepts and injects real credentials. Even a fully compromised container cannot exfiltrate your secrets.
Even if an AI model were convinced to inspect its own environment variables, it would find nothing useful. The real credentials live only on the host, injected at the proxy layer.
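The injection step can be sketched as a header rewrite at the proxy. The placeholder token and key names below are illustrative; the principle is that real keys live only in the proxy's process, never in any container:

```javascript
// Credential proxy: swap the container's placeholder key for the real
// one before the request leaves the host.
const PLACEHOLDER = 'DOCKBOX_PLACEHOLDER_KEY';

function injectCredentials(headers, realKeys) {
  const out = { ...headers };
  const auth = out['authorization'] || '';
  if (auth.includes(PLACEHOLDER)) {
    // Real key comes from the host environment, chosen per provider
    out['authorization'] = auth.replace(PLACEHOLDER, realKeys.anthropic);
  }
  return out;
}
```

A compromised container that dumps its own headers sees only the placeholder.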
Containers see only allowlisted paths. Sensitive files are shadowed. Project code is read-only. A strict authorization matrix controls what each group can do.
```
# Container mount configuration
/workspace/group   → groups/{folder}     [read-write]
/workspace/global  → groups/global       [read-only]
/workspace/project → PROJECT_ROOT        [read-only]
/workspace/ipc     → data/ipc/{group}/   [read-write]

# Sensitive files are never exposed
.env             → /dev/null             [shadowed]
credentials.json → blocked by allowlist  [denied]

# Additional mounts validated against allowlist
# Path traversal attempts (../) rejected at validation layer
# Sensitive directories (/etc, /root, ~/.ssh) always blocked
```
| Action | Main Group | Non-Main Group |
|---|---|---|
| Send messages to own chat | Allowed | Allowed |
| Send messages to other chats | Allowed | Denied |
| Schedule tasks for self | Allowed | Allowed |
| Schedule tasks for other groups | Allowed | Denied |
| Register new groups | Allowed | Denied |
| View all groups | Allowed | Denied |
| Scrub files | Own group only | Own group only |
| Access project source | Read-only | No access |
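The matrix above can be encoded directly as data. A sketch: the action keys are paraphrased from the table, the lookup itself is hypothetical, and the two scoped rows (file scrubbing, project source access) are omitted because they need richer values than allow/deny:

```javascript
// Authorization matrix: true = allowed. Keyed by action, split by
// whether the caller belongs to the main group.
const MATRIX = {
  'message:own-chat':      { main: true, other: true },
  'message:other-chat':    { main: true, other: false },
  'schedule:self':         { main: true, other: true },
  'schedule:other-group':  { main: true, other: false },
  'group:register':        { main: true, other: false },
  'group:view-all':        { main: true, other: false },
};

function isAllowed(action, isMainGroup) {
  const row = MATRIX[action];
  if (!row) return false; // deny by default for unknown actions
  return isMainGroup ? row.main : row.other;
}
```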
Self-hosted means you own everything. Your data, your infrastructure, your choice of AI provider. Walk away from any vendor at any time -- no data migration required.
Switch between Anthropic, OpenAI, or any compatible API by changing a single environment variable. The credential proxy handles the rest.
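Provider selection from a single variable might look like this. The variable name, provider keys, and endpoints are illustrative, not Dockbox's actual configuration:

```javascript
// Pick the API base URL from one environment variable.
const PROVIDERS = {
  anthropic: 'https://api.anthropic.com/v1',
  openai: 'https://api.openai.com/v1',
  ollama: 'http://localhost:11434/v1', // local inference, zero API cost
};

function providerBaseUrl(env = process.env) {
  const name = env.AI_PROVIDER || 'anthropic'; // hypothetical variable name
  const url = PROVIDERS[name];
  if (!url) throw new Error(`unknown provider: ${name}`);
  return url;
}
```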
Use Ollama or any local inference server for zero API cost. Your PII scrubber already runs locally -- extend that to all AI processing when needed.
Every line of code is available for audit, modification, and extension. No black boxes. No proprietary runtimes. Standard Node.js and Docker.
SQLite database, JSON configs, standard Docker containers. Export your data anytime in formats that any tool can read.
Run on AWS, GCP, Azure, bare metal, or a Raspberry Pi. Dockbox runs anywhere Docker runs -- the choice is always yours.
Add channels, integrations, and AI tools through the MCP server interface. The plugin architecture means you never hit a wall.
Dockbox's architecture aligns with major compliance frameworks out of the box. Single-tenant deployment and PII scrubbing give you the controls auditors need to see.
Student data never leaves your institution's server. PII scrubbing ensures that even AI-processed content contains no personally identifiable student records.
Protected health information stays within your controlled environment. Container isolation ensures PHI cannot leak between departments or sessions.
Dockbox's architecture implements SOC 2 trust service criteria patterns: logical access controls, system boundaries, and change management through code.
Deploy in any jurisdiction. Since you control the server, you control where data physically resides. Meet GDPR, PIPEDA, or any regional data residency requirement.
Deploy Dockbox on your infrastructure in under an hour. Zero-trust security comes standard -- no enterprise add-ons, no premium tiers.