- Project Description
- Architecture Blueprint
- Primary Components
- Environment Variables Configuration
- Setup Instructions
- Usage
- Dependencies
The Hyperautomation MCP Server provides a bridge between LLM clients and a middle layer of HA workflows, enabling dynamic security orchestration through natural language interactions.
This MCP server is the core component of an architecture called Interactive Security Orchestration that reimagines how standard SOAR solutions can operate.
Unlike traditional SOAR platforms, where workflows are pre-built and static, this architecture enables analysts to "create workflows" in real-time as the LLM interprets high-level security goals and dynamically selects the optimal tools and actions from available Agents in the HyperAutomation layer.
This unlocks a new paradigm where analysts can seamlessly blend dynamic workflow orchestration with traditional static SOAR approaches, creating a hybrid system that combines the reliability and predictability of pre-built workflows with the flexibility and intelligence of user-directed automation.
📖 Companion blog post: The Interactive Security Orchestration Paradigm (Part I)
- Location: `server/`
- Purpose:
  - Core MCP protocol implementation that handles client requests and routes them to the appropriate agents, implemented as HyperAutomation workflows (a minimal sketch follows this list)
  - Retrieval of results from a cloud DB (a slight deviation from standard MCP)
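To make the server's routing role concrete, here is a minimal sketch of how such a server could be wired with FastMCP. It is not the code in `server/server.py`; the server name, the tool, and its return value are purely illustrative assumptions.

```python
from typing import List

from fastmcp import FastMCP  # assumption: the FastMCP package listed under Dependencies

# Illustrative sketch only; the real implementation lives in server/server.py.
mcp = FastMCP("hyperautomation-mcp")

@mcp.tool()
def list_endpoints() -> List[str]:
    """Return the HyperAutomation agents this server can route requests to (hypothetical tool)."""
    return ["vt_agent", "sdl_agent", "asset_handler", "case_manager", "remote_ops_manager", "db_manager"]

if __name__ == "__main__":
    # LLM clients typically spawn this process and exchange MCP messages over stdio.
    mcp.run(transport="stdio")
```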
The system integrates with multiple domain-specific HyperAutomation Agents through webhook endpoints (a dispatch sketch follows the list below):
- VT Agent - Virus Total threat intelligence lookups and sample downloads
- SDL Agent - Singularity Data Lake query execution and analysis
- Asset Handler Agent - Endpoint and asset management operations
- Case Manager Agent - Alert management and case handling
- Remote Operations Manager Agent - Remote script execution and task management
- DB Manager - Stores Agents' results in an internet-facing Database
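Because the agents are reached over webhooks, a dispatch from the MCP server might look roughly like the sketch below. The URLs, payload shape, and the `req_id` correlation field are assumptions for illustration, not the repository's actual contract.

```python
import uuid

import requests

# Hypothetical webhook URLs exposed by the HyperAutomation layer (adjust to your deployment).
AGENT_WEBHOOKS = {
    "vt_agent": "https://ha.example.com/webhooks/vt-agent",
    "sdl_agent": "https://ha.example.com/webhooks/sdl-agent",
}

def dispatch_to_agent(agent: str, payload: dict) -> str:
    """POST a task to an agent webhook and return the correlation id used to fetch its result later."""
    req_id = str(uuid.uuid4())  # the agent tags the DB row it writes with this id
    resp = requests.post(AGENT_WEBHOOKS[agent], json={"req_id": req_id, **payload}, timeout=30)
    resp.raise_for_status()
    return req_id

# Example: ask the (hypothetical) VT Agent for a hash lookup, then poll the DB for req_id
# (see the DB Manager sketch below).
# req_id = dispatch_to_agent("vt_agent", {"action": "lookup", "indicator": "<sha256>"})
```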
The DB Manager component handles the retrieval of results generated by the HA Agents via polling (a polling sketch follows the list below).
- While the code included in this repo leverages Google BigQuery as the DB to temporarily store results, analysts can choose any other cloud DB, provided that they:
  - Update the `DB_Manager` class to use the different database and implement the necessary polling mechanism
  - Update the DB Agent in the HyperAutomation layer to leverage a different integration
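A polling loop against BigQuery could look roughly like the following sketch. The table schema (a req_id column plus whatever result columns the agents write) and the helper name are assumptions; the retry knobs map to the `DB_MAX_RETRIES`, `DB_RETRY_DELAY`, and `BIGQUERY_REQ_ID_COLUMN` variables described in the configuration section below.

```python
import os
import time

from google.cloud import bigquery

def wait_for_result(req_id: str) -> dict:
    """Poll the results table until a row tagged with req_id appears, or give up (hypothetical helper)."""
    # Auth here relies on Application Default Credentials; the repository's CREDENTIALS_FILE
    # variable suggests it loads a service-account key file explicitly instead.
    project = os.environ["GOOGLE_CLOUD_PROJECT"]
    dataset = os.environ["BIGQUERY_DATASET_ID"]
    table_id = os.environ["BIGQUERY_TABLE_ID"]
    id_column = os.getenv("BIGQUERY_REQ_ID_COLUMN", "req_id")
    max_retries = int(os.getenv("DB_MAX_RETRIES", "200"))
    delay = float(os.getenv("DB_RETRY_DELAY", "1"))

    client = bigquery.Client(project=project)
    query = f"SELECT * FROM `{project}.{dataset}.{table_id}` WHERE {id_column} = @req_id LIMIT 1"
    job_config = bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("req_id", "STRING", req_id)]
    )
    for _ in range(max_retries):
        rows = list(client.query(query, job_config=job_config).result())
        if rows:
            return dict(rows[0].items())  # the agent's result row
        time.sleep(delay)
    raise TimeoutError(f"No result for {req_id} after {max_retries} attempts")
```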
Make sure to create a `.env` file at the root of this project and customise its content to your needs. A short loading sketch follows the variable listing below.
Database Configuration:

```bash
# Google BigQuery Configuration (required when using BigQuery)
GOOGLE_CLOUD_PROJECT="your-gcp-project-id"
BIGQUERY_DATASET_ID="your-dataset-id"
BIGQUERY_TABLE_ID="your-table-id"
# Credentials file path (MUST be updated)
CREDENTIALS_FILE="/path/to/your/credentials/file.json"

# Logging Configuration
MCP_SERVER_LOG_FILE="mcp_server.log" # Default: mcp_server.log
# Database Polling Configuration
DB_MAX_RETRIES=200 # Default: 200
DB_RETRY_DELAY=1 # Default: 1 second
DB_SERVICE_TYPE="bigquery" # Options: bigquery, gsheet. Default: bigquery
# BigQuery Configuration (optional)
BIGQUERY_REQ_ID_COLUMN="req_id" # Default: req_id
```
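As a quick sanity check that the variables above are picked up with the documented defaults, a loader along these lines can help. It assumes `python-dotenv`, which may or may not be what the repository itself uses.

```python
import os

from dotenv import load_dotenv  # assumption: python-dotenv; the repo may load .env differently

load_dotenv()  # reads the .env file from the project root (current working directory)

settings = {
    "project": os.environ["GOOGLE_CLOUD_PROJECT"],            # required
    "dataset": os.environ["BIGQUERY_DATASET_ID"],             # required
    "table": os.environ["BIGQUERY_TABLE_ID"],                 # required
    "credentials_file": os.environ["CREDENTIALS_FILE"],       # required, must be updated
    "log_file": os.getenv("MCP_SERVER_LOG_FILE", "mcp_server.log"),
    "max_retries": int(os.getenv("DB_MAX_RETRIES", "200")),
    "retry_delay": float(os.getenv("DB_RETRY_DELAY", "1")),   # seconds
    "db_service": os.getenv("DB_SERVICE_TYPE", "bigquery"),   # bigquery or gsheet
    "req_id_column": os.getenv("BIGQUERY_REQ_ID_COLUMN", "req_id"),
}
print(settings)
```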
- Clone the repository
- Configure BigQuery
- Configure the Agents
- Install dependencies:
  ```bash
  uv sync   # installs the dependencies declared in pyproject.toml (uses uv.lock when present)
  ```
- Configure environment variables (see instructions above)
- Test the server:
  ```bash
  uv run python server/server.py
  ```
  Make sure you see no errors.
- Kill the process
Note: Testing was performed with chatgpt-4.1; this is the model I recommend using for now.
Connect the MCP server to your preferred LLM client. With stdio transport, the client launches the server with a command like:
```bash
/PATH_TO_BINARY/uv --directory /mcp-hyperautomation/server/ run server.py --transport stdio
```
Run a quick test by typing `list endpoints` in your client.
- Python 3.8+
- uv (for dependency management)
- FastMCP
- Google Cloud BigQuery Python client
- uvicorn (for SSE transport)
- Additional dependencies listed in server/pyproject.toml
Author: antonio.monaca@sentinelone.com