OpenNVR
Developer SDK

The AI Adapter Ecosystem.

Don't be locked into proprietary, black-box vendor analytics that charge $50 per camera per month. With OpenNVR's zero-friction plugin architecture, you can deploy your own Python models or route requests to 100,000+ Hugging Face models instantly.

Built-in Model Handlers

High-Speed Edge Models

Optimized for CPU and edge GPU inferencing in completely offline environments. No internet connection required.

  • ONNX
YOLOv8 Nano (6MB) Low-latency person detection with adaptive confidence thresholds to reduce false positives.
  • PyTorch
YOLOv11 Medium + ByteTrack Person tracking and counting across frames by assigning persistent tracking IDs.
  • ONNX
    InsightFace Buffalo-L (183MB) A massive 5-in-1 biometric engine generating 512-dimensional vector embeddings for facial recognition and VIP/Watchlist matching via in-memory FaceDB.
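The VIP/Watchlist matching the Buffalo-L handler performs boils down to comparing 512-dimensional embeddings by cosine similarity. The sketch below illustrates that matching step only; the function name, threshold value, and in-memory dict stand in for the real FaceDB API, which is not documented here.

```python
import numpy as np

def match_watchlist(embedding, watchlist, threshold=0.35):
    """Return (name, score) for the best watchlist match of a 512-d
    face embedding, or (None, threshold) if nothing clears the bar.
    Illustrative only -- the real FaceDB interface may differ."""
    embedding = embedding / np.linalg.norm(embedding)
    best_name, best_score = None, threshold
    for name, ref in watchlist.items():
        # Cosine similarity between unit-normalized embeddings
        score = float(np.dot(embedding, ref / np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Toy 512-d vectors: a stored VIP embedding and a probe close to it
rng = np.random.default_rng(0)
vip = rng.normal(size=512)
probe = vip + rng.normal(scale=0.1, size=512)
name, score = match_watchlist(probe, {"vip_1": vip})
```

A slightly noisy copy of the stored embedding still scores near 1.0, while unrelated faces land near 0, which is why a single threshold works in practice.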

Hugging Face Cloud Models

Proxy requests to the Hugging Face Inference API when you need bleeding-edge, large-parameter model reasoning.

  • Cloud
Vision Language Models (VLM) Send a frame directly to Salesforce/blip-image-captioning-base to receive plain English scene descriptions ("A person dropping a suspicious package").
  • Cloud
    Zero-Shot Object Detection Detect objects your local model was never trained on by sending a text prompt along with the image to Hugging Face OWL-ViT architectures.
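For the zero-shot path, the adapter ultimately assembles a request payload and filters the detections the hosted model returns. The sketch below shows both halves with stdlib code only; the payload field names and the pipeline-style response shape (`score`/`label`/`box`) are assumptions based on public Hugging Face conventions, so verify them against the current Inference API docs.

```python
import base64

def build_owlvit_request(jpeg_bytes, labels):
    """Assemble a zero-shot detection payload for a hosted OWL-ViT model.
    Field names are illustrative, not an official contract."""
    return {
        "inputs": {
            "image": base64.b64encode(jpeg_bytes).decode("ascii"),
            "candidate_labels": labels,
        }
    }

def boxes_above(detections, min_score=0.5):
    """Filter pipeline-style results: [{'score', 'label', 'box': {...}}, ...]."""
    return [(d["label"], d["box"]) for d in detections if d["score"] >= min_score]

# A response shaped like the zero-shot-object-detection pipeline output
sample = [
    {"score": 0.91, "label": "forklift",
     "box": {"xmin": 10, "ymin": 20, "xmax": 110, "ymax": 220}},
    {"score": 0.32, "label": "ladder",
     "box": {"xmin": 5, "ymin": 5, "xmax": 50, "ymax": 90}},
]
hits = boxes_above(sample)
```

Because the labels travel with the request, you can ask for "forklift" or "unattended bag" today without retraining anything on the edge box.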

Build Your Own Custom Plugin (BYOM)

Every AI task in OpenNVR is a self-contained Python folder in adapter/tasks/. If you trained a custom YOLO model to detect hard hats, here is precisely how you mount it into the live server without rebooting.

Step 1: Define what your API returns

You must provide a schema.json so the frontend React developers know exactly what fields your custom model returns and can render its tracking data in the UI.

{
  "task": "hardhat_detection",
  "response_fields": {
    "label": {"type": "string"},
    "confidence": {"type": "float"},
    "bbox": {"type": "array[int]"}
  }
}
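It can help to sanity-check your task's output against this schema before wiring up the frontend. The validator below is a sketch, not part of the SDK: the `CHECKS` table and `validate()` helper are hypothetical, mapping the schema's type strings onto plain `isinstance` checks.

```python
SCHEMA = {
    "task": "hardhat_detection",
    "response_fields": {
        "label": {"type": "string"},
        "confidence": {"type": "float"},
        "bbox": {"type": "array[int]"},
    },
}

# Map schema type strings to Python-side checks (illustrative only)
CHECKS = {
    "string": lambda v: isinstance(v, str),
    "float": lambda v: isinstance(v, (int, float)),
    "array[int]": lambda v: isinstance(v, list)
                  and all(isinstance(i, int) for i in v),
}

def validate(response, schema=SCHEMA):
    """Return a list of field errors; an empty list means the response matches."""
    errors = []
    for field, spec in schema["response_fields"].items():
        if field not in response:
            errors.append(f"missing field: {field}")
        elif not CHECKS[spec["type"]](response[field]):
            errors.append(f"bad type for {field}: expected {spec['type']}")
    return errors

ok = validate({"label": "compliant", "confidence": 0.99, "bbox": [0, 0, 10, 10]})
bad = validate({"label": 7, "bbox": [1.5]})
```

A well-formed response produces no errors, while a response with a wrong type, a missing field, and a float inside `bbox` produces one error per problem.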

Step 2: Implement the BaseTask contract

Extend `BaseTask` in a task.py file, override setup() to load your `.onnx` weights into VRAM, and the server's auto-discovery engine handles the rest.

import onnxruntime

from adapter.interfaces import BaseTask

class Task(BaseTask):
    name = "hardhat_detection"

    def setup(self):
        # Load the heavy ONNX weights into VRAM once, at registration time
        self.session = onnxruntime.InferenceSession("weights/hardhat.onnx")

    def run(self, image, params):
        # `image` is a raw BGR numpy array straight from OpenCV
        # preprocess() is your model-specific resize/normalize helper
        output = self.session.run(None, {"input": preprocess(image)})
        # Map `output` into the schema.json fields (hardcoded here for brevity)
        return {"label": "compliant", "confidence": 0.99, "bbox": [0, 0, 10, 10]}
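The auto-discovery engine itself isn't shown in this guide, but its job is roughly: scan adapter/tasks/, import each folder's task.py, and index the `Task` class by its `name`. The sketch below is one plausible implementation using stdlib importlib, demonstrated against a throwaway plugin written to a temp directory; the dummy plugin skips the `BaseTask` base class for brevity.

```python
import importlib.util
import pathlib
import tempfile

def discover_tasks(tasks_dir):
    """Import every <tasks_dir>/<name>/task.py and register its Task class.
    A sketch of what an auto-discovery engine might do, not the server's code."""
    registry = {}
    for task_py in pathlib.Path(tasks_dir).glob("*/task.py"):
        spec = importlib.util.spec_from_file_location(task_py.parent.name, task_py)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        registry[module.Task.name] = module.Task
    return registry

# Demonstrate with a minimal throwaway plugin on disk
root = pathlib.Path(tempfile.mkdtemp())
plugin = root / "hardhat_detection"
plugin.mkdir()
(plugin / "task.py").write_text(
    "class Task:\n"
    "    name = 'hardhat_detection'\n"
)
registry = discover_tasks(root)
```

Because registration is keyed purely by folder contents, dropping a new folder into adapter/tasks/ is all it takes to expose a new task, which is what makes hot-mounting without a reboot possible.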