<script type="text/markdown" data-help-name="knxUltimateAI">
This node listens to **all KNX telegrams** from the selected KNX Ultimate gateway, builds traffic statistics, detects anomalies, and can optionally query an LLM.
## Outputs
1. **Summary/Stats**: traffic summary (`msg.payload`, JSON)
2. **Anomalies**: detected anomalies (`msg.payload`, JSON)
3. **AI Assistant**: LLM answer (`msg.payload`, text; the message also carries `msg.summary`)
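As a minimal sketch, a Function node wired to the first output could pretty-print the summary in the debug sidebar (the summary's exact fields depend on the node version):

```javascript
// Function node wired to output 1 (Summary/Stats).
// msg.payload is the JSON summary; its exact fields are version-dependent.
node.status({ fill: "green", shape: "dot", text: "summary received" });
node.warn(JSON.stringify(msg.payload, null, 2)); // pretty-print to the debug sidebar
return msg;
```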
## Commands (input)
Send a message with `msg.topic` set to one of:
- `summary` (or empty): emit summary immediately
- `reset`: clear internal history/counters
- `ask`: send a question to the configured LLM
For `ask`, provide the question text in `msg.prompt` (preferred) or as a string in `msg.payload`, as in the sketch below.
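A minimal Function node (or an Inject node with the same properties) that triggers the assistant; the question text is only an example:

```javascript
// Function node feeding the knxUltimateAI input: ask the configured LLM.
msg.topic = "ask";
msg.prompt = "Which group addresses were most active in the last analysis window?";
return msg;
```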
## Configuration fields
All fields exposed in the KNX AI editor are listed below.
### General
- **Gateway**: KNX Ultimate gateway/config node used as telegram source.
- **Name**: Node label and dashboard header name.
- **Topic**: Base topic used in node outputs.
- **Open KNX AI Web** button: Opens the full KNX AI web dashboard (`/knxUltimateAI/sidebar/page`).
### Capture
- **Capture GroupValue_Write**: Capture write telegrams.
- **Capture GroupValue_Response**: Capture response telegrams.
- **Capture GroupValue_Read**: Capture read telegrams.
### Analysis
- **Analysis window (seconds)**: Main analysis window used for summaries/rates.
- **History window (seconds)**: Retention window for internal telegram history.
- **Max stored events**: Maximum number of telegrams kept in memory.
- **Auto emit summary (seconds, 0=off)**: Periodic summary output interval.
- **Top list size**: Number of top group addresses/sources in summary.
- **Detect simple patterns (A -> B)**: Enable transition/pattern detection (sketched after this list).
- **Pattern max lag (ms)**: Max time gap for pattern transition matching.
- **Pattern min occurrences**: Minimum occurrences before a pattern is reported.
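Conceptually, pattern detection counts ordered GA-to-GA transitions that occur within the configured lag. The sketch below is illustrative only; the event shape `{ ga, ts }` is an assumption, not the node's internal model:

```javascript
// Simplified A -> B transition counting over time-ordered events.
// `events` items are assumed to look like { ga: "1/2/3", ts: <ms timestamp> }.
function countTransitions(events, maxLagMs, minOccurrences) {
  const counts = new Map();
  for (let i = 1; i < events.length; i++) {
    const prev = events[i - 1];
    const cur = events[i];
    if (prev.ga !== cur.ga && cur.ts - prev.ts <= maxLagMs) {
      const key = `${prev.ga} -> ${cur.ga}`;
      counts.set(key, (counts.get(key) || 0) + 1);
    }
  }
  // Report only patterns seen at least `minOccurrences` times.
  return [...counts].filter(([, n]) => n >= minOccurrences);
}
```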
### Anomalies
- **Rate window (seconds)**: Sliding time window for anomaly rate checks.
- **Max overall telegrams/sec (0=off)**: Overall bus rate threshold.
- **Max telegrams/sec per GA (0=off)**: Per-group-address rate threshold (see the sketch after this list).
- **Flap window (seconds)**: Time window for flapping/change-rate detection.
- **Max changes per GA in window (0=off)**: Max allowed changes in flap window.
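The rate thresholds behave like sliding-window counters. A simplified stand-in for the per-GA check (not the node's actual code; `0` disables the check, as in the editor):

```javascript
// Sliding-window rate check for one group address.
// `timestamps` are ms timestamps of that GA's telegrams.
function gaRateExceeded(timestamps, now, windowSeconds, maxPerSec) {
  if (maxPerSec === 0) return false; // 0 = check disabled
  const cutoff = now - windowSeconds * 1000;
  const recent = timestamps.filter((t) => t >= cutoff);
  return recent.length / windowSeconds > maxPerSec;
}

// Example: 12 telegrams in the last ~4.4 s exceed a 2/s threshold over a 5 s window.
const now = Date.now();
const ts = Array.from({ length: 12 }, (_, i) => now - i * 400);
console.log(gaRateExceeded(ts, now, 5, 2)); // true (rate = 2.4/s)
```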
### LLM Assistant
- The **LLM Assistant** tab is shown first in the editor for faster setup.
- **Enable LLM assistant**: Enable Ask/chat assistant features.
- **Provider**: Select LLM backend (OpenAI-compatible or Ollama).
- **Endpoint URL**: Chat/completions endpoint URL (see the connectivity sketch after this list).
- **API key**: API key (not required for local Ollama).
- **Model**: Model ID/name.
- **System prompt**: Global instruction for KNX analysis behavior.
- **Temperature**: Sampling temperature.
- **Max tokens**: Max completion tokens.
- **Timeout (ms)**: HTTP timeout for LLM requests.
- **Recent events included**: Max recent telegram events in prompt.
- **Include raw payload hex**: Include raw telegram hex in prompt.
- **Include Node-RED KNX node inventory**: Include flow inventory in prompt.
- **Max flow nodes included**: Limit nodes included from flow inventory.
- **Include documentation snippets (help/README/examples)**: Include docs context.
- **Docs language**: Preferred language for docs snippets.
- **Max docs snippets**: Max number of docs snippets.
- **Max docs chars**: Max total docs characters.
- **Refresh** button: Query provider and load available model IDs.
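To verify the **Endpoint URL**, **API key**, and **Model** outside Node-RED, a standalone check against an OpenAI-compatible `chat/completions` API could look like this (run with Node.js 18+ as an ES module; the URL, model ID, and environment variable are placeholders for your own values):

```javascript
// check-llm.mjs - connectivity test for an OpenAI-compatible endpoint.
const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // your API key
  },
  body: JSON.stringify({
    model: "gpt-4o-mini", // your Model ID
    messages: [{ role: "user", content: "ping" }],
    max_tokens: 8,
  }),
});
console.log(res.status, await res.json());
```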
### Ollama quick setup (local)
- Choose **Provider = Ollama**.
- Default endpoint: `http://localhost:11434/api/chat`.
- If no local models are found, use:
- **1) Download model**: opens the **Model library** page.
- **2) Install it**: downloads and installs the model locally (for example `llama3.1`).
- During model refresh/install, KNX AI also tries to auto-start the Ollama server when possible.
- If install fails with connection errors, ensure Ollama is running (desktop app or `ollama serve`).
- If Node-RED runs in Docker, use `host.docker.internal` instead of `localhost` in the endpoint URL.
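A similar check works against the local Ollama endpoint; unlike the OpenAI-compatible test above, no API key is needed, and `stream: false` requests a single JSON reply (the model name is an example and must already be installed):

```javascript
// check-ollama.mjs - minimal request against the local Ollama chat endpoint.
const res = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.1", // must be installed locally (e.g. via the Install button)
    messages: [{ role: "user", content: "ping" }],
    stream: false, // return one JSON response instead of a stream
  }),
});
console.log(res.status, await res.json());
```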
## Security note
If the LLM assistant is enabled, KNX traffic context (and, depending on the options above, raw payload hex and flow inventory) can be sent to the configured endpoint. Use a local provider such as Ollama if you need strict on-premises data handling.
</script>