Architecture

Omniglass is an opinionated Zabbix distribution for AV system monitoring. It compiles Zabbix from source with targeted patches and adds an AV-native control plane on top. This page explains what actually runs, how the components fit together, and the design decisions behind the structure.

Omniglass is composed of several logical components, each with a distinct role. This diagram shows how operators and data flow through them:

flowchart TD
    operator(["Operator"])

    subgraph server["Server deployment"]
        direction TB
        ingress["Ingress<br/>(reverse proxy)"]
        web["Web UI<br/>(Zabbix + Omniglass FEMs)"]
        control["Control Plane<br/>(reconciliation + /og/api/)"]
        workflows["Workflow Engine<br/>(Node-RED)"]
        monitoring["Zabbix Server<br/>(monitoring engine)"]
        db[("Shared Database<br/>(Zabbix + Omniglass schemas)")]

        ingress -->|/| web
        ingress -->|/og/api/| control
        ingress -->|/nodered/| workflows
        control -.->|JSON-RPC| web
        web --- db
        control --- db
        workflows --- db
        monitoring --- db
    end

    subgraph edge["Edge deployment"]
        proxy["Collection Proxy"]
    end

    devices(["AV devices"])

    operator -->|HTTPS| ingress
    proxy -->|forwards data| monitoring
    proxy -->|SNMP / external checks| devices

Ingress. A single inbound entry point for all operator traffic. Routes / to the Web UI, /og/api/ to the Control Plane, and /nodered/ to the Workflow Engine.
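A minimal sketch of this routing, assuming the gateway component is plain nginx (the upstream names and ports below are illustrative, not the real service addresses):

```nginx
# Hypothetical gateway routing: one inbound entry point, three backends.
server {
    listen 443 ssl;

    location / {
        proxy_pass http://zabbix-web:8080;   # Zabbix UI + Omniglass FEMs
    }
    location /og/api/ {
        proxy_pass http://core:8080;         # Omniglass control plane
    }
    location /nodered/ {
        proxy_pass http://nodered:1880;      # Workflow engine
    }
}
```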

Zabbix Server. The monitoring engine. Schedules data collection, processes problems, calculates SLAs, fires alerts. Omniglass ships a patched build with schema additions baked in.

Web UI. Serves the Zabbix web interface alongside Omniglass front-end modules (Systems Explorer, System Detail, Locations, custom widgets). The FEMs are Omniglass-built PHP modules that extend Zabbix’s UI without replacing it.

Control Plane. The Omniglass core binary. Runs database migrations, bootstraps Zabbix configuration, continuously reconciles AV-native state (systems, locations, templates, tags), and serves the /og/api/ HTTP API. This is what makes Omniglass more than pre-configured Zabbix — see The control plane below.

Workflow Engine. Node-RED, pre-wired with Omniglass plugins for authentication, theming, and Zabbix integration. Handles synthetic tests, device control sequences, vendor API integrations, and data transforms.

Shared Database. PostgreSQL holding both Zabbix state and the omniglass schema in the same instance. See Data ownership below for the boundaries between what each component owns.

Collection Proxy. A Zabbix proxy running AV device drivers as external checks. Collects from devices via SNMP and proprietary AV protocols, buffers data, and forwards to the Zabbix Server. Deployments can run one proxy (central, alongside the server) or many (edge deployments at remote sites).

Omniglass supports two deployment topologies:

  • Server deployment — every component above runs together on one host. This is the standard deployment.
  • Edge deployment — only a Collection Proxy + Host Agent run at a remote site, forwarding data to a central Omniglass host. No UI, no control plane, no database.
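An edge site's deployment can be as small as one service. A hedged Compose sketch using the `omniglass-zabbix-proxy-sqlite3` image named below (the hostname and server address are placeholders, and the host agent service is omitted for brevity):

```yaml
# Illustrative edge-site compose file: the proxy buffers locally (SQLite)
# and pushes collected data to the central Omniglass host.
services:
  zabbix-proxy:
    image: omniglass-zabbix-proxy-sqlite3
    environment:
      ZBX_PROXYMODE: "0"                      # active proxy: pushes to the server
      ZBX_HOSTNAME: edge-site-01              # must match the proxy registered centrally
      ZBX_SERVER_HOST: omniglass.example.com  # central Omniglass host
```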

The reference implementation uses Docker Compose to orchestrate these components as services, but the component boundaries don’t depend on Docker — Omniglass can be deployed on any orchestrator that runs containers.

Omniglass uses Zabbix's supported extension points wherever they can meet our requirements, plus a small set of source patches for the things the extension points can't reach.

| Extension point | What Omniglass does with it |
| --- | --- |
| JSON-RPC API | The core binary reads and writes Zabbix state: hosts, groups, tags, items, templates, users, tokens. |
| Front-end modules (FEM) | Omniglass pages (Systems Explorer, System Detail, Locations, widgets) live in zabbix-web as PHP modules. |
| External checks | The proxy runs AV device drivers (e.g. Crestron CTP) as external-check binaries to monitor agentless AV protocols. |
| Container layers | Omniglass images (zabbix-web, zabbix-proxy) layer branding, config, and tooling on top of patched Zabbix base images. |
| Source patches | A small set of targeted patches compiled into the Zabbix source at build time: schema changes, API changes, and PHP UI changes for behaviors the extension points don't expose. The patches produce our own omniglass-zabbix-server-pgsql, omniglass-zabbix-web-nginx-pgsql, and omniglass-zabbix-proxy-sqlite3 images. |
| Omniglass DB schema | Additive tables in a separate omniglass schema alongside Zabbix tables in the same Postgres instance. Managed by dbmate. |

Each source patch is tracked against an upstream feature request and removed once Zabbix ships the capability natively. See Patching for the workflow.

If Omniglass is removed, Zabbix continues monitoring. Host groups, tags, templates, and other projected artifacts remain in Zabbix and keep working. The omniglass schema can be dropped independently.

The control plane

The core binary is a single Go process. It runs migrations, bootstraps Zabbix configuration, reconciles AV-specific state, and serves the Omniglass HTTP API.

On startup, core runs this sequence:

  1. Load config from environment variables
  2. Run omniglass schema migrations (dbmate, advisory-locked)
  3. Bootstrap: wait for Zabbix API, rotate service token, set default theme, register the server-local proxy
  4. Start the reconciliation engine (runs an initial full sweep, then starts per-reconciler workers and the periodic full-sweep timer)
  5. Start the audit poller and wire it into the event bus
  6. Serve the HTTP API on port 8080

The reconciliation engine is what makes Omniglass more than configured Zabbix. It continuously keeps Omniglass-managed state (host groups, system hosts, tags, location hierarchy) aligned with what Omniglass knows it should be — even if an operator changes things directly in the Zabbix UI.

flowchart TB
    subgraph core["core binary"]
        direction TB
        poller["audit poller<br/>(polls zabbix.auditlog every 1s)"]
        bus["in-process event bus<br/>(Go channels)"]
        subgraph engine["reconciliation engine"]
            direction TB
            ticker["full-sweep ticker (2m)"]
            systems["systems reconciler"]
            locations["locations reconciler"]
        end
        poller --> bus --> engine
        ticker -.-> systems
        ticker -.-> locations
    end

    zbx["Zabbix<br/>(hosts, groups, templates, auditlog)"]
    engine -->|JSON-RPC: read + write| zbx

Two loops keep state aligned:

  • Audit log polling (every 1s): the audit poller reads new rows from Zabbix’s auditlog table and publishes events to the bus. The engine routes each event to the reconcilers that care about that resource type, triggering targeted reconciliation.
  • Full sweep (every 2m): every reconciler re-checks all its entities from scratch. This is the safety net — it catches anything the audit poll missed or dropped.
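The two loops above can be condensed into a Go sketch, with a buffered channel standing in for the event bus (the function signatures and `Event` shape are illustrative, not the real internals):

```go
package main

import "time"

// Event is a simplified audit-log event: which resource type changed, and
// which entity it concerns.
type Event struct {
	Resource string
	ID       string
}

// runLoops wires a fetch function (reads new auditlog rows) to a reconcile
// function (targeted) and a fullSweep function (safety net).
func runLoops(fetch func() []Event, reconcile func(Event), fullSweep func(), stop <-chan struct{}) {
	bus := make(chan Event, 128) // in-process event bus

	// Loop 1: poll zabbix.auditlog every second and publish events.
	go func() {
		tick := time.NewTicker(1 * time.Second)
		defer tick.Stop()
		for {
			select {
			case <-stop:
				return
			case <-tick.C:
				for _, ev := range fetch() {
					bus <- ev
				}
			}
		}
	}()

	// Loop 2: drain the bus for targeted reconciliation; every 2 minutes,
	// run a full sweep that catches anything the poller missed.
	sweep := time.NewTicker(2 * time.Minute)
	defer sweep.Stop()
	for {
		select {
		case <-stop:
			return
		case ev := <-bus:
			reconcile(ev) // converge just the affected entity
		case <-sweep.C:
			fullSweep() // re-check everything from scratch
		}
	}
}
```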

When drift is detected, reconcilers correct it via the Zabbix JSON-RPC API. All corrections are logged to the omniglass.reconciliation_events table for audit.

Current reconcilers:

  • systems — manages system host groups, system host projections (virtual hosts that represent the system as a whole), template linkage, and tag propagation
  • locations — manages location host groups, membership from assigned systems, and coordinate sync to host inventory

See Systems & Locations for what these reconcilers actually do.
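The per-reconciler contract implied above might look like the following. This is a hypothetical interface; the real one inside components/core/internal may differ:

```go
package main

// Reconciler is a sketch of the contract each reconciler fulfils.
type Reconciler interface {
	// Name identifies the reconciler, e.g. "systems" or "locations".
	Name() string
	// Handles reports whether an audit event's resource type is relevant,
	// so the engine can route targeted reconciliation.
	Handles(resource string) bool
	// ReconcileOne converges a single entity after a targeted audit event.
	ReconcileOne(id string) error
	// ReconcileAll is the periodic full sweep over every entity.
	ReconcileAll() error
}

// dispatch routes one audit event to every reconciler that cares about it.
func dispatch(rs []Reconciler, resource, id string) error {
	for _, r := range rs {
		if r.Handles(resource) {
			if err := r.ReconcileOne(id); err != nil {
				return err
			}
		}
	}
	return nil
}

// stubReconciler is a minimal implementation used purely for illustration.
type stubReconciler struct {
	name     string
	resource string
	seen     []string // entity IDs reconciled
}

func (s *stubReconciler) Name() string         { return s.name }
func (s *stubReconciler) Handles(res string) bool { return res == s.resource }
func (s *stubReconciler) ReconcileOne(id string) error {
	s.seen = append(s.seen, id)
	return nil
}
func (s *stubReconciler) ReconcileAll() error { return nil }
```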

Data ownership

| Data | Location | Owner | Notes |
| --- | --- | --- | --- |
| Zabbix config and history | Zabbix schema in Postgres | Zabbix | Omniglass reads (audit log) but does not modify the schema. All writes go through the JSON-RPC API. |
| Omniglass systems, locations, types | omniglass schema in Postgres | Omniglass | Managed by dbmate migrations. Dropping this schema does not affect monitoring. |
| Projected artifacts (host groups, system hosts, tags) | Zabbix schema | Omniglass reconciler | Exist in Zabbix, owned by Omniglass. Survive Omniglass removal. |
| Node-RED flows and credentials | omniglass schema (via Flow History plugin) | Node-RED + Omniglass | Versioned on every deploy with diff and restore. |
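For illustration, an additive dbmate migration in the omniglass schema might look like this. The `reconciliation_events` table name comes from this page; its columns are assumptions:

```sql
-- Hypothetical dbmate migration: omniglass tables live in their own schema,
-- so no DDL ever touches Zabbix's tables.
-- migrate:up
CREATE SCHEMA IF NOT EXISTS omniglass;
CREATE TABLE omniglass.reconciliation_events (
    id         bigserial PRIMARY KEY,
    reconciler text        NOT NULL,              -- e.g. "systems", "locations"
    action     text        NOT NULL,              -- what drift was corrected
    occurred   timestamptz NOT NULL DEFAULT now()
);

-- migrate:down
DROP TABLE omniglass.reconciliation_events;
```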

The boundaries are deliberate: Omniglass never issues DDL against the Zabbix schema and never modifies Zabbix config files. Patches aside, everything Omniglass does in Zabbix goes through supported APIs.

omniglass/
  components/
    core/                       # Control plane binary (Go)
      cmd/core/                 # Entry point
      internal/                 # reconcile, systems, locations, bus, bootstrap, zabbix
      db/migrations/            # Omniglass schema migrations (dbmate)
    gateway/                    # nginx reverse proxy (single ingress)
    zabbix-web-nginx-pgsql/     # Patched Zabbix web image
      frontend-modules/         # Omniglass FEM pages and widgets
    zabbix-proxy-sqlite3/       # Patched Zabbix proxy image
      cmd/                      # Device driver binaries (e.g. crestron-ctp)
      scripts/                  # External check shell scripts
    nodered/                    # Node-RED with Omniglass plugins
  patches/                      # Zabbix source patches (applied at build time)
  deploy/release/               # Compose template used to generate release assets
  docs-site/                    # This documentation site (Astro + Starlight)
  tests/e2e/                    # Full-stack Playwright tests

A few constraints shape every architectural decision in Omniglass:

  • Zabbix remains authoritative for monitoring. Omniglass adds structure and automation on top, but never replaces Zabbix’s core responsibilities (data collection, alerting, escalation, SLA calculation, history).
  • Reversibility is a hard constraint. Removing Omniglass must leave a working Zabbix deployment. Projected artifacts remain; the omniglass schema can be dropped; patches are recoverable by rebuilding without them.
  • Extension points first. Source patches are a last resort, tracked against upstream, and removed when Zabbix ships the capability natively.
  • Deterministic reconciliation. Omniglass-managed state converges to what Omniglass intends, continuously. Operators can trust that changes made through Omniglass will stick, and drift will be corrected.
  • Single ingress. The gateway is the only port exposed to operators. All services behind it share the same network boundary.