[github] Secret/token in environment: GITHUB_PERSONAL_ACCESS_TOKEN has hardcoded value
[brave-search] API key in environment: BRAVE_API_KEY has hardcoded value
[google-maps] API key in environment: GOOGLE_MAPS_API_KEY has hardcoded value
[MCP-wolfram-alpha] API key in environment: WOLFRAM_API_KEY has hardcoded value
[slack] Secret/token in environment: SLACK_BOT_TOKEN has hardcoded value
[discord-raw] Secret/token in environment: DISCORD_TOKEN has hardcoded value
network_trust (100%)
No network exposure or suspicious URL issues
LLM Security Review
CRITICAL
Filesystem server grants read/write access to entire F:\ and D:\ drives, exposing all files on those volumes to the AI agent.
Fix: Restrict filesystem server paths to only the specific directories needed, removing the broad F:\ and D:\ mounts.
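A narrower mount might look like the following, assuming the reference @modelcontextprotocol/server-filesystem package, which takes its allowed directories as positional arguments (the two paths here are placeholders; substitute the directories the agent actually needs, and note that backslashes must be escaped in JSON):

```json
{
  "filesystem": {
    "command": "npx",
    "args": [
      "-y",
      "@modelcontextprotocol/server-filesystem",
      "D:\\projects\\my-app",
      "F:\\data\\exports"
    ]
  }
}
```

The server will refuse operations outside the listed roots, so each added path is a deliberate grant rather than an accident of mounting a whole volume.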
HIGH
Neo4j database password is passed as a command-line argument, making it visible in process listings and shell history.
Fix: Pass the Neo4j password via an environment variable instead of a command-line argument.
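A sketch of the safer shape, assuming the mcp-neo4j-cypher server, which can read its connection settings from environment variables: keep only the non-secret values in the config block and export NEO4J_PASSWORD in the environment that launches the MCP client, rather than writing it into the file or the args array. Whether a stdio server inherits the launching process's environment varies by client, so verify the server actually sees the variable.

```json
{
  "neo4j": {
    "command": "uvx",
    "args": ["mcp-neo4j-cypher"],
    "env": {
      "NEO4J_URI": "bolt://localhost:7687",
      "NEO4J_USERNAME": "neo4j"
    }
  }
}
```

Unlike a command-line argument, a variable set this way does not appear in process listings or shell history.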
The Slack bot token placeholder uses the real xoxb- prefix format; once a real token is filled in, it will sit in plaintext in the config file alongside the team ID.
Fix: Reference Slack credentials from a secrets manager or external environment variables rather than embedding them in the config file.
Docker MCP server grants the AI agent access to Docker operations, enabling potential container escape, host filesystem mounting, or privilege escalation.
Fix: Restrict Docker MCP to read-only operations or limit it to specific containers/images via configuration.
Windows CLI server grants shell command execution access to the AI agent, allowing arbitrary command execution on the host.
Fix: Review the win-cli config.json to ensure commands are allowlisted; restrict to only the specific commands needed.
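The win-cli server reads its restrictions from a separate config.json. The key names below are illustrative only, not the server's confirmed schema (check its README for the real field names), but the shape of a command allowlist looks roughly like:

```json
{
  "security": {
    "allowedCommands": ["git", "npm", "python"],
    "blockedCommands": ["format", "del", "rd"],
    "commandTimeout": 30
  }
}
```

The principle is deny-by-default: anything not explicitly listed as allowed should be rejected before it reaches the shell.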
MEDIUM
Multiple API keys (GitHub, Brave, Google Maps, Wolfram, Discord, Anthropic, OpenAI) are stored as plaintext placeholders in the config, encouraging direct secret embedding when filled in.
Fix: Use environment variable references or a secrets manager instead of storing API keys directly in the configuration file.
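Whether the config file can reference external variables depends on the MCP client: VS Code's MCP configuration, for example, expands ${env:...} references, while clients that don't support substitution will pass the literal string through, so verify the behavior before relying on it. Where substitution is supported, a reference like this keeps the secret out of the file (GITHUB_PAT is an assumed variable name):

```json
{
  "github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": {
      "GITHUB_PERSONAL_ACCESS_TOKEN": "${env:GITHUB_PAT}"
    }
  }
}
```

For clients without substitution, set the variable in the launching environment (or fetch it from a secrets manager in a wrapper script) and omit the value from the config entirely.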
Multiple servers use 'npx -y' which auto-installs packages without confirmation, risking supply-chain attacks from typosquatted or compromised packages.
Fix: Pin package versions explicitly and install packages locally instead of using npx -y for auto-installation.
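With npx, pinning is a matter of appending an exact version to the package specifier; 1.2.3 below is a placeholder, not a real release, so substitute a version you have actually audited. Dropping -y also restores the confirmation prompt before any install.

```json
{
  "brave-search": {
    "command": "npx",
    "args": ["@modelcontextprotocol/server-brave-search@1.2.3"]
  }
}
```

A pinned version prevents a silently published malicious update from being pulled in on the next launch; installing the package locally and pointing command at the local binary removes the on-the-fly install step entirely.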
SQLite and Neo4j servers expose database files/connections directly to the AI agent without apparent access controls or read-only restrictions.
Fix: Configure database servers in read-only mode where possible, and restrict to specific tables or operations.
Local chat servers communicate over unencrypted HTTP to localhost, which could be intercepted if the host is compromised or ports are forwarded.
Fix: Use HTTPS even for localhost connections, or ensure the local LLM port is bound only to 127.0.0.1 and not exposed externally.
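A minimal sketch of the loopback-only shape, assuming an OpenAI-compatible local chat server (the server name, port, and field names are illustrative): start the server itself bound to 127.0.0.1 rather than 0.0.0.0, and point the config only at the loopback address so nothing reachable from the network is referenced.

```json
{
  "local-chat": {
    "url": "http://127.0.0.1:8080/v1"
  }
}
```

Binding to 127.0.0.1 keeps the listener off external interfaces; port-forwarding rules and the host firewall should be checked separately, since they can re-expose a loopback port.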
Puppeteer server gives the AI agent full browser automation capabilities including navigating to arbitrary URLs, which could be used for credential phishing or data exfiltration.
Fix: Restrict Puppeteer to specific allowed domains or use it only in a sandboxed environment.
LOW
Several community/third-party MCP servers (mcp-text-editor, @kazuph/mcp-taskmanager, mcp-server-everything-search, community-server-llm-txt) are used without version pinning, increasing supply-chain risk.
Fix: Pin all third-party packages to specific verified versions and audit their source code before use.
The configuration runs 30+ MCP servers simultaneously, creating a large attack surface and making it difficult to audit and maintain security across all services.
Fix: Disable MCP servers that are not actively needed and enable them only when required.