OpenNVR

Building Custom Adapters

OpenNVR is designed as a “Bring Your Own Model” (BYOM) platform. While the system ships with natively supported analytical adapters (like person_detection and face_verify), its real power comes from extending the orchestrator with your own proprietary AI models or open-source Hugging Face pipelines.


⚙️ The Inference Lifecycle

Under the hood, OpenNVR executes inference workloads through a modular, stateless plugin architecture.

  1. Request Intake: A third-party application or the Core NVR asks the AI engine to run a specific task (e.g., "fire_detection").
  2. Dynamic Routing: The AI Adapter’s FastAPI router maps that task name to one of the plugins loaded in memory.
  3. Payload Streaming: The adapter resolves the opennvr:// camera URI, pulls the requested video fragment via RTSP/WebRTC, and decodes it into a NumPy array (np.ndarray) of pixel data.
  4. Execution: The NumPy array is passed to the matching task’s run() method. The model runs its inference (using ONNX, PyTorch, or TensorFlow) and returns a JSON response matching the OpenNVR UI data contract.
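The dispatch step above can be sketched as a simple registry keyed by task name. This is an illustrative simplification, not the actual router code; `PluginRegistry` and `EchoTask` are hypothetical stand-ins:

```python
import numpy as np
from typing import Any, Dict


class PluginRegistry:
    """Toy stand-in for the adapter's task router (illustrative only)."""

    def __init__(self):
        self._tasks: Dict[str, Any] = {}

    def register(self, task) -> None:
        # Tasks are keyed by their declared `name` attribute.
        self._tasks[task.name] = task

    def infer(self, task_name: str, image: np.ndarray) -> Dict[str, Any]:
        if task_name not in self._tasks:
            raise KeyError(f"No plugin loaded for task '{task_name}'")
        return self._tasks[task_name].run(image, params={})


class EchoTask:
    """Dummy plugin that just reports the frame dimensions it received."""
    name = "echo"

    def run(self, image, params):
        return {"task": self.name, "shape": list(image.shape)}


registry = PluginRegistry()
registry.register(EchoTask())
result = registry.infer("echo", np.zeros((480, 640, 3), dtype=np.uint8))
print(result)  # {'task': 'echo', 'shape': [480, 640, 3]}
```

Unknown task names fail fast with a `KeyError`, which is the behavior you would want the router to surface as an HTTP 404.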

🛠️ Bootstrapping Your Own Adapter

To build a proprietary adapter, you do not need to touch the REST routing, WebSocket networking, or core UI plotting logic. You only need to drop your processing logic into a strictly formatted directory structure.

1. Directory Structure Blueprint

For the Python server to discover your custom adapter automatically, it must live in its own folder inside the adapter/tasks/ directory and contain two required files, task.py and schema.json (plus an __init__.py to make the folder a package).

AI-adapters/AIAdapters/adapter/tasks/
└── 📁 your_custom_model/
    ├── __init__.py
    ├── task.py          <-- Must contain `class Task(BaseTask):`
    └── schema.json      <-- Defines response schema validation
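A small helper can scaffold this layout. This is a convenience sketch, not part of OpenNVR; the `scaffold_adapter` function is hypothetical, and the tasks path is taken from the tree above (adjust it to your checkout):

```python
from pathlib import Path


def scaffold_adapter(tasks_root: str, name: str) -> Path:
    """Create the skeleton directory and empty files a new adapter needs."""
    adapter_dir = Path(tasks_root) / name
    adapter_dir.mkdir(parents=True, exist_ok=True)
    for filename in ("__init__.py", "task.py", "schema.json"):
        (adapter_dir / filename).touch()
    return adapter_dir


created = scaffold_adapter("AI-adapters/AIAdapters/adapter/tasks", "your_custom_model")
print(sorted(p.name for p in created.iterdir()))
# ['__init__.py', 'schema.json', 'task.py']
```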

2. Formulating the Data Contract (schema.json)

Every task must declare the JSON metadata it returns. The OpenNVR router parses this schema to tell the frontend React application exactly which visual components (bounding boxes, polygons, plain text) to overlay on the live-view dashboard.

{
  "task": "fire_detection",
  "returns": {
    "label": "string",
    "confidence": "number",
    "bbox": "array"
  }
}
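You can sanity-check a task's response against its declared `returns` types before deploying. The validator below is an illustrative sketch, not OpenNVR's actual validation logic, and the type mapping is an assumption based on the schema above:

```python
import json

# Assumed mapping from schema type names to Python types.
TYPE_MAP = {
    "string": str,
    "number": (int, float),
    "array": list,
}


def validate_response(schema: dict, response: dict) -> None:
    """Raise ValueError if `response` violates the declared `returns` types."""
    for field, type_name in schema["returns"].items():
        if field not in response:
            raise ValueError(f"Missing field '{field}'")
        if not isinstance(response[field], TYPE_MAP[type_name]):
            raise ValueError(f"Field '{field}' should be a {type_name}")


schema = json.loads("""
{
  "task": "fire_detection",
  "returns": {
    "label": "string",
    "confidence": "number",
    "bbox": "array"
  }
}
""")

validate_response(schema, {"label": "fire", "confidence": 0.94, "bbox": [50, 50, 400, 400]})
print("response matches schema")
```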

3. Writing the Python Controller (task.py)

Your custom logic script must inherit from the BaseTask interface (adapter.interfaces.BaseTask).

⚠️ Caution: Never load heavy model weights directly inside __init__. Always use the setup() method to load your ONNX/PyTorch .pt models into VRAM, so the server can boot all plugins without blocking.

Here is a complete, deployable example of building a custom “Fire Detection” Adapter hook:

import os
import numpy as np
import cv2
from typing import Dict, Any

# Ensure you import OpenNVR's explicit BaseTask Interface
from adapter.interfaces import BaseTask


class Task(BaseTask):
    """
    The entry-point class MUST be named exactly `Task`.
    """
    
    # 1. Define required global routing attributes
    name = "fire_detection"
    description = "Detects hazardous fire/smoke conditions in a stream"

    def setup(self):
        """
        Runs exactly once when the adapter server boots.
        Initialize your PyTorch/ONNX/TensorFlow weights here.
        """
        model_path = "/model_weights/yolov8_fire.onnx"
        
        if not os.path.exists(model_path):
            raise FileNotFoundError(f"CRITICAL: Missing tensor weights at {model_path}")
            
        print("Loading fire detection model onto the GPU...")
        # self._model = MyMachineLearningLoader(model_path)  # plug in your loader here

    def run(self, image: np.ndarray, params: Dict[str, Any]) -> Dict[str, Any]:
        """
        Main inference entry point.
        The `image` argument arrives as a decoded NumPy frame.
        You MUST return a dict matching your `schema.json` data contract.
        """
        # Execute your model logic with the numpy image
        # preds = self._model.predict(image)
        
        # Example JSON serialization return matching the schema:
        return {
            "task": "fire_detection",
            "label": "fire",
            "confidence": 0.94,
            "bbox": [50, 50, 400, 400]
        }

    def get_model_info(self) -> Dict[str, Any]:
        """Exposes raw metadata for the System Health Diagnostics page"""
        return {
            "model": "yolo_fire_v1",
            "framework": "onnx",
            "device": "gpu",
            "tasks": [self.name],
        }
        
    def cleanup(self):
        """Runs on shutdown (e.g., Docker stop) to release GPU/CPU memory allocations"""
        pass
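Because `run()` in the example returns a fixed payload, you can smoke-test the class outside the OpenNVR tree with a synthetic frame. The `BaseTask` stub below exists only to make this snippet self-contained; in a real adapter you import it from adapter.interfaces:

```python
import numpy as np
from typing import Any, Dict


class BaseTask:
    """Stub standing in for adapter.interfaces.BaseTask outside the tree."""


class Task(BaseTask):
    name = "fire_detection"

    def run(self, image: np.ndarray, params: Dict[str, Any]) -> Dict[str, Any]:
        # Placeholder logic mirroring the example adapter above.
        return {
            "task": self.name,
            "label": "fire",
            "confidence": 0.94,
            "bbox": [50, 50, 400, 400],
        }


task = Task()
frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # synthetic black frame
out = task.run(frame, params={})
assert set(out) == {"task", "label", "confidence", "bbox"}
print(out["task"])  # fire_detection
```

A quick local check like this catches schema mismatches before the plugin ever reaches the router.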

4. Igniting the Adapter

Once your task.py and schema.json are staged in the correct directory, restart your Docker container or Python environment:

uvicorn adapter.main:app --reload --port 9100

On startup, the log should show: [INFO] Discovering plugins... SUCCESS - Loaded 'fire_detection' task.

You can now start querying your new AI logic via the standard /infer REST endpoint!
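A client call might look like the snippet below. The request payload fields (`task`, `camera_uri`) are assumptions about the /infer contract, so check the endpoint's actual documentation; the stub HTTP server exists only so the example runs offline, and against a real deployment you would point the request at port 9100:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class StubInfer(BaseHTTPRequestHandler):
    """Tiny stand-in for the adapter's /infer endpoint (offline demo only)."""

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps({"task": body["task"], "label": "fire", "confidence": 0.94}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        pass  # keep demo output quiet


server = HTTPServer(("127.0.0.1", 0), StubInfer)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Against a real deployment, use http://localhost:9100/infer instead.
payload = json.dumps({"task": "fire_detection", "camera_uri": "opennvr://cam-01"}).encode()
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/infer",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    result = json.loads(resp.read())

server.shutdown()
print(result["task"])  # fire_detection
```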