Scans untrusted external text (web pages, tweets, search results, API responses) for prompt-injection attacks. Returns severity levels and flags dangerous content. Use BEFORE processing any text from an untrusted source.
OpenClaw skills run inside an OpenClaw container. EasyClawd deploys and manages yours, so no server setup is needed.
### Added

- LLM-powered scanning as an optional second analysis layer (`--llm`, `--llm-only`, `--llm-auto`)
- Provider auto-detection: `OPENAI_API_KEY` → gpt-4o-mini, `ANTHROPIC_API_KEY` → claude-sonnet-4-5
- LLM scanner module (`llm_scanner.py`) with a standalone CLI
- Taxonomy module (`get_taxonomy.py`) for MoltThreats threat classification
- Shipped `taxonomy.json` for offline LLM scanning (no API key required for the taxonomy)
- Merge logic: the LLM can upgrade severity, downgrade it with high confidence, or confirm pattern findings
- New argparse flags: `--llm-provider`, `--llm-model`, `--llm-timeout`
- JSON output includes a `mode` field (`pattern`, `pattern+llm`, `llm-only`) and an `llm` analysis block

### Dependencies

- `requests` library required only for `--llm` modes (pattern-only scanning remains zero-dependency)
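The merge logic above can be sketched as follows. This is an illustrative example only, not the actual `llm_scanner.py` API: the function name `merge_severity`, the `LEVELS` ordering, and the 0.8 downgrade threshold are assumptions chosen to demonstrate the rule (LLM may upgrade, may downgrade only with high confidence, otherwise the pattern finding stands).

```python
# Hypothetical sketch of the severity-merge rule; names and the
# confidence threshold are illustrative, not the real module's API.

LEVELS = ["none", "low", "medium", "high", "critical"]

def merge_severity(pattern: str, llm: str, llm_confidence: float,
                   downgrade_threshold: float = 0.8) -> str:
    """Combine the pattern-scan and LLM severities."""
    p, l = LEVELS.index(pattern), LEVELS.index(llm)
    if l > p:
        return llm            # the LLM may always upgrade severity
    if l < p and llm_confidence >= downgrade_threshold:
        return llm            # downgrading requires high LLM confidence
    return pattern            # otherwise the pattern finding is confirmed

print(merge_severity("low", "high", 0.5))   # → high (upgrade)
print(merge_severity("high", "low", 0.9))   # → low (confident downgrade)
print(merge_severity("high", "low", 0.4))   # → high (pattern stands)
```

A conservative rule like this keeps pattern findings authoritative unless the LLM layer has strong evidence to the contrary, which matches the asymmetry described in the changelog entry.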