OpenNVR

AI Adapters Engine Overview

OpenNVR is architected for the AI era. Instead of locking enterprises into proprietary analytics or costly cloud pricing models, OpenNVR uses a stateless microservice architecture that fully decouples continuous video recording from AI inference. We call this orchestration layer AI Adapters.

This decoupled architecture guarantees:

  • Zero-Trust Sovereignty: Raw video streams never leave the NVR unless an administrator explicitly configures external tunneling.
  • Elastic Hardware Scaling: Because the AI runs in isolated containers, you can run a single adapter on an edge CPU or scale across a multi-GPU data center without dropping a recorded frame on the primary NVR.
  • Instant Modularity: Swap model pipelines without recompiling the NVR. To replace YOLOv8 with YOLOv11, you simply restart the adapter container with the new model.

How the Pipeline Works

Within the AI-adapters/ directory, OpenNVR supplies distinct, containerized AI listeners. They sit on the internal NVR Docker bridge and await inference requests from the core API.

When the OpenNVR engine detects a matching policy, it streams a raw video frame across the bridge. The adapter runs inference using ONNX/PyTorch/TensorFlow and returns structured metadata (JSON bounding boxes, face arrays, timestamps) to the OpenNVR PostgreSQL database for retention.
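As an illustration of the metadata an adapter returns, the sketch below assembles a detection payload. The field names (`camera_id`, `detections`, `bbox`, and so on) are hypothetical, not a fixed OpenNVR schema; each adapter defines its own.

```python
import json
from datetime import datetime, timezone

def build_detection_metadata(camera_id, detections):
    """Assemble the JSON metadata an adapter might return to the NVR.

    `detections` is a list of (label, confidence, (x1, y1, x2, y2)) tuples.
    Field names are illustrative only.
    """
    return json.dumps({
        "camera_id": camera_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "detections": [
            {"label": label, "confidence": conf,
             "bbox": {"x1": x1, "y1": y1, "x2": x2, "y2": y2}}
            for label, conf, (x1, y1, x2, y2) in detections
        ],
    })

payload = build_detection_metadata("cam-01", [("person", 0.92, (10, 20, 110, 220))])
print(payload)
```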

Executing the Orchestrator (Docker)

Option A: Integrated Compose Stack

The most secure deployment pattern runs the adapter inside the core OpenNVR Docker Compose stack, so no external network interfaces are exposed.

  1. Locate the docker-compose.yml within your OpenNVR host root.
  2. Uncomment the ai-adapters service block.
  3. Rebuild the ecosystem:
    docker compose build ai-adapters
    docker compose up -d

Because both containers share the opennvr-bridge network, the NVR engine automatically discovers your local adapter instances.
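For reference, an uncommented ai-adapters service block might look like the following. The image build path and exact keys are illustrative; check the comments in your own docker-compose.yml. Only the opennvr-bridge network name comes from this document.

```yaml
services:
  ai-adapters:
    build: ./AI-adapters        # build path is illustrative
    networks:
      - opennvr-bridge          # shared internal bridge; no external exposure
    restart: unless-stopped
    # Note: no `ports:` mapping — the adapter is reachable only on the bridge.
```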

Option B: Scaled Network Isolation

If your organization runs dedicated GPU cluster nodes independent of your storage NVR, you can deploy the AI engine on a separate host, fully isolated:

cd AI-adapters
docker compose up -d --build

The adapter API will be exposed at http://[YOUR_GPU_HOST_IP]:9100.


☁️ Integrating Cloud Models (Hugging Face)

If your security requirements permit, OpenNVR can proxy inference requests to large cloud-hosted models (VLMs, zero-shot pipelines) via Hugging Face.

Method 1: Dashboard (BYOM)

With this method, your token is encrypted in the NVR database and attached to proxy requests only at runtime.

  1. Generate a read-only access token from your Hugging Face security settings.
  2. Within the OpenNVR Dashboard, navigate to Cloud Models / BYOM.
  3. Select Provider: Hugging Face.
  4. Paste your API token and Save.
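Hugging Face access tokens begin with the `hf_` prefix. A quick client-side sanity check can catch paste errors before saving; the helper below is an illustrative sketch, not part of OpenNVR.

```python
def looks_like_hf_token(token: str) -> bool:
    """Heuristic check only — real validation happens when Hugging Face
    accepts or rejects the token. `hf_` is the standard token prefix."""
    token = token.strip()
    return token.startswith("hf_") and len(token) > 8 and " " not in token

print(looks_like_hf_token("hf_abcdefghijklmnop"))  # True
print(looks_like_hf_token("sk-not-a-hf-token"))    # False
```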

Method 2: Infrastructure as Code (IaC)

If you manage your NVR fleet via Terraform or Ansible, you can inject the token directly into the container’s environment variables.

services:
  ai-adapters:
    environment:
      - HF_TOKEN=hf_your_generated_token_here
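Inside the adapter, the token is then read from the container environment. A minimal sketch follows; the HF_TOKEN variable name matches the compose fragment above, while the None fallback (so the adapter can run local-only models) is an assumption about adapter behavior.

```python
import os

def get_hf_token():
    """Return the Hugging Face token from the environment, or None so the
    adapter can fall back to local-only models (fallback is illustrative)."""
    token = os.environ.get("HF_TOKEN")
    return token or None

# Simulate the variable injected by Docker Compose / IaC:
os.environ["HF_TOKEN"] = "hf_example_token"
print(get_hf_token())
```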

👩‍💻 Local Developer Blueprint (No Docker)

If you are developing proprietary models and need to debug inference without containerization:

Prerequisites

  • Python 3.11+

Local Setup

  1. Activate your Virtual Environment

    uv venv venv
    source venv/bin/activate
  2. Install Dependencies

    cd AI-adapters/AIAdapters
    uv pip install -r requirements.txt
  3. Fetch Model Weights: pre-fetch essential weights (e.g., YOLO variants) into the local model_weights/ directory:

    python download_models.py
  4. Start the Inference Server

    uvicorn adapter.main:app --reload --port 9100

    The logs will report that the API is up and list your discovered local inference plugins.
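The plugin interface is adapter-specific; as a hypothetical sketch of what a discoverable local inference plugin might look like (class and method names are assumptions, not the actual OpenNVR API), a toy plugin lets you exercise the request/response path without real model weights:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    bbox: tuple  # (x1, y1, x2, y2)

class EchoPlugin:
    """Toy plugin returning a fixed detection — useful for debugging the
    pipeline before wiring up real ONNX/PyTorch inference."""
    name = "echo"

    def process(self, frame: bytes):
        # A real plugin would decode `frame` and run model inference here.
        return [Detection("person", 0.99, (0, 0, 64, 64))]

plugin = EchoPlugin()
results = plugin.process(b"\x00" * 16)
print(results[0].label, results[0].confidence)
```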