llm-inject-scan
Version:
A tiny, fast library that scans user prompts for risky patterns before they reach your LLM model. It flags likely prompt-injection attempts so you can block, review, or route them differently—without making a model call.
Filename | Content Type | Size | |
---|---|---|---|
../ | |||
651 B | |||
651 B | |||
9.46 kB | |||
8.37 kB |