GenAI API Pentest Platform

🚀 AI-Powered API Security Testing for SMB/SME

Democratizing AI-powered API security assessment for small and medium businesses

Features • Installation • Usage • Documentation • Contributing

🚀 Overview

The GenAI API Pentest Platform is an AI-powered API security testing tool designed for small to medium businesses and individual developers. It leverages multiple Large Language Models (LLMs) to perform intelligent, context-aware vulnerability assessments with a focus on accuracy and ease of use.

Current Status: 15% Complete - Proof of Concept

  • ✅ Core AI-powered scanning engine
  • ✅ Multi-LLM consensus system
  • ✅ OWASP API Security Top 10 coverage
  • ✅ Advanced validation to reduce false positives
  • 🚧 Web interface and advanced features in development

✨ Features

🤖 AI-Powered Scanning

  • Multi-LLM Integration: OpenAI, Anthropic, Google, OpenRouter, and local LLMs (Ollama)
  • Consensus Validation: Multiple AI models validate findings to reduce false positives (a minimal configuration sketch follows this list)
  • Context-Aware Payloads: AI generates attack payloads specific to your API context
  • Smart Pattern Recognition: Advanced response analysis with AI-powered insights
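
As a concrete illustration of consensus validation, the snippet below shows what enabling more than one provider might look like, reusing the config structure from the Python API example further down; the anthropic entry and its key names are assumptions rather than documented settings.

import os

# Hypothetical multi-provider configuration for consensus validation
config = {
    'genai': {
        'providers': {
            'openai': {'api_key': os.environ.get('OPENAI_API_KEY'), 'enabled': True},
            # Assumed to mirror the openai entry; not confirmed by the project docs
            'anthropic': {'api_key': os.environ.get('ANTHROPIC_API_KEY'), 'enabled': True},
        }
    }
}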

πŸ›‘οΈ Security Testing Coverage

  • OWASP API Security Top 10 (2023): Comprehensive coverage of modern API threats
  • BOLA/IDOR Detection: Broken Object Level Authorization with AI payload generation
  • SQL Injection: Database-specific payloads with timing and error-based detection (a generic timing check is sketched after this list)
  • Authentication/Authorization: Business logic flaw detection
  • Response Analysis: Behavioral anomaly detection and pattern matching
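
For illustration only, here is a generic timing-based check of the kind the SQL injection scanner relies on. This is not the platform's scanner code, and the endpoint, payload, and 4-second threshold are arbitrary.

import requests  # illustrative client; the platform uses its own async HTTP client

base = "https://api.example.com/users"  # hypothetical endpoint
normal = requests.get(base, params={"id": "1"}, timeout=30)
injected = requests.get(base, params={"id": "1' AND pg_sleep(5)--"}, timeout=30)

# If the injected request is several seconds slower than the baseline,
# the database likely executed the sleep, i.e. possible time-based SQL injection.
delta = injected.elapsed.total_seconds() - normal.elapsed.total_seconds()
if delta > 4:
    print(f"Possible time-based SQL injection (+{delta:.1f}s)")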

📋 Supported Formats

  • OpenAPI/Swagger: 2.0 & 3.x with automatic discovery
  • Manual Configuration: Direct endpoint testing
  • Future Support: Postman Collections, GraphQL (planned)

🎯 Built for SMB/SME

  • Easy Setup: Simple configuration with environment variables
  • Cost-Effective: Use local LLMs or affordable cloud APIs
  • Developer-Friendly: Clear documentation and simple integration
  • Focused Results: Prioritized findings with actionable remediation advice

πŸ› οΈ Installation

Prerequisites

  • Python 3.8+ (Recommended: Python 3.11)
  • API Keys: At least one AI provider (OpenAI, Anthropic, Google, or local Ollama)
  • System Requirements: 4GB RAM, 1GB disk space

Quick Start

# Clone the repository
git clone https://github.com/gensecaihq/genai-api-pentest-platform.git
cd genai-api-pentest-platform

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Configure your API keys
cp .env.example .env
# Edit .env with your API keys (at least one required)

Environment Configuration

# Required: At least one AI provider
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"

# Optional: Local LLM (free alternative)
export OLLAMA_BASE_URL="http://localhost:11434"
export LOCAL_MODEL="llama2"

# Configuration
export LOG_LEVEL="INFO"
export HTTP_TIMEOUT="30"
export MAX_PAYLOADS_PER_ENDPOINT="25"

Local LLM Setup (Free Option)

# Install Ollama for free local AI
curl -fsSL https://ollama.ai/install.sh | sh

# Download a model
ollama pull llama2

# The platform will automatically detect and use local models
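
If you prefer to point the Python API at the local model explicitly rather than rely on auto-detection, a provider entry along the following lines is one plausible shape; the 'ollama' provider name and its keys are assumptions, not confirmed configuration.

import os

# Hypothetical local-LLM provider entry (key names are illustrative)
config = {
    'genai': {
        'providers': {
            'ollama': {
                'base_url': os.environ.get('OLLAMA_BASE_URL', 'http://localhost:11434'),
                'model': os.environ.get('LOCAL_MODEL', 'llama2'),
                'enabled': True,
            }
        }
    }
}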

🚦 Usage

Quick Test

# Test configuration
python -c "from src.core.config_validator import validate_config_dict; print('✅ Configuration valid')"

# Basic OpenAPI scan
python scripts/example_scan.py https://api.example.com/openapi.json

# Local file scan
python scripts/example_scan.py ./examples/vulnerable-api.yaml

Python API (Current Implementation)

import asyncio
from src.api.parser import OpenAPIParser
from src.attack.bola_scanner import BOLAScanner
from src.validation.vulnerability_validator import VulnerabilityValidator

async def scan_api():
    # Parse OpenAPI specification
    async with OpenAPIParser() as parser:
        api_spec = await parser.parse_from_url('https://api.example.com/openapi.json')
        
    # Configure scanner
    config = {
        'genai': {
            'providers': {
                'openai': {'api_key': 'your-key', 'enabled': True}
            }
        }
    }
    
    # Run BOLA scan on endpoints
    scanner = BOLAScanner(config)
    validator = VulnerabilityValidator(config)
    
    vulnerabilities = []
    for endpoint in api_spec.endpoints:
        async for vuln in scanner.scan(endpoint):
            # Validate to reduce false positives
            validation = await validator.validate_vulnerability(vuln)
            if validation.is_valid:
                vulnerabilities.append(vuln)
    
    return vulnerabilities

# Run the scan
results = asyncio.run(scan_api())

Configuration

# configs/development.yaml
log_level: "DEBUG"
http_timeout: 10
verify_ssl: false
max_payloads_per_endpoint: 10

# AI provider (choose one or multiple)
openai_model: "gpt-3.5-turbo"  # Cost-effective for development
temperature: 0.7

📊 Example Results

{
  "vulnerability": {
    "id": "bola_users_123",
    "title": "BOLA: Successful access to unauthorized object",
    "severity": "HIGH",
    "confidence": 0.85,
    "attack_type": "authorization",
    "endpoint": {
      "path": "/users/{id}",
      "method": "GET"
    },
    "payload": "admin",
    "evidence": {
      "response_status": 200,
      "response_time": 150,
      "technique": "privilege_escalation"
    },
    "ai_analysis": "AI detected unauthorized access to user data using 'admin' payload. Response contains sensitive user information that should require proper authorization.",
    "business_impact": "High business impact: Unauthorized access to sensitive user data, potential data breaches, compliance violations",
    "remediation": [
      "Implement proper authorization checks for object access",
      "Use indirect object references (e.g., session-based identifiers)",
      "Validate user permissions for each object request"
    ],
    "validation_result": {
      "is_valid": true,
      "confidence_score": 0.82,
      "false_positive_probability": 0.15
    }
  }
}
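
Because built-in reporting is still in development (see Development Status below), one quick way to triage findings shaped like the example above is to sort them yourself. A minimal sketch, assuming you have exported a list of such records to a hypothetical findings.json file:

import json

# Load a list of finding objects shaped like the example above
with open("findings.json") as fh:
    findings = [item["vulnerability"] for item in json.load(fh)]

# Order by severity, then by descending confidence
severity_rank = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}
findings.sort(key=lambda v: (severity_rank.get(v["severity"], 4), -v["confidence"]))

for v in findings:
    print(f'{v["severity"]:>8}  {v["confidence"]:.2f}  {v["title"]}')
    print(f'          fix: {v["remediation"][0]}')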

🔧 Configuration

See configs/development.yaml and configs/production.yaml for complete examples.

Core Settings

# Logging
log_level: "INFO"                    # DEBUG, INFO, WARNING, ERROR
structured_logging: false           # JSON logging for production

# HTTP Client  
http_timeout: 30                     # Request timeout in seconds
max_retries: 3                       # Maximum retry attempts
verify_ssl: true                     # SSL certificate verification
rate_limit_delay: 0.5               # Delay between requests

# AI Configuration (use environment variables for API keys)
openai_model: "gpt-4-turbo-preview"  # or "gpt-3.5-turbo" for cost savings
temperature: 0.7                     # AI creativity level (0.0-2.0)

# Security Testing
max_payloads_per_endpoint: 25        # Payloads per endpoint (balance speed vs coverage)
confidence_threshold: 0.7            # Minimum confidence for valid findings
false_positive_threshold: 0.3        # Maximum false positive probability
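
How confidence_threshold and false_positive_threshold are applied internally is not documented here, but the validation_result fields in the example output above suggest a filter along the following lines; this is an illustrative reading, not the platform's actual logic.

CONFIDENCE_THRESHOLD = 0.7       # confidence_threshold above
FALSE_POSITIVE_THRESHOLD = 0.3   # false_positive_threshold above

def keep_finding(validation_result: dict) -> bool:
    # Keep a finding only if the validator is confident enough and the
    # estimated false-positive probability is low enough.
    return (validation_result["confidence_score"] >= CONFIDENCE_THRESHOLD
            and validation_result["false_positive_probability"] <= FALSE_POSITIVE_THRESHOLD)

# e.g. the example finding (0.82 confidence, 0.15 false-positive probability) is kept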

Environment Variables

All sensitive configuration should use environment variables:

# Required: At least one AI provider
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AI...

# Optional: Local LLM
OLLAMA_BASE_URL=http://localhost:11434

# Testing Configuration
HTTP_TIMEOUT=30
MAX_PAYLOADS_PER_ENDPOINT=25
LOG_LEVEL=INFO

📚 Documentation

πŸ—οΈ Development Status

This is a proof-of-concept implementation with the following components:

✅ Completed (15% of full platform)

  • Core AI Engine: Multi-LLM consensus system
  • BOLA Scanner: OWASP API1 detection with AI payloads
  • OpenAPI Parser: Automatic endpoint discovery
  • Validation System: Advanced false positive reduction
  • HTTP Client: Production-ready with error handling
  • Configuration: Secure validation and environment variables

🚧 In Development

  • Web interface for easy scanning
  • Additional OWASP API Security Top 10 scanners
  • Reporting and export functionality
  • CLI interface improvements
  • Additional authentication methods

📋 Planned Features

  • GraphQL and Postman Collections support
  • Advanced exploit chain discovery
  • Integration with CI/CD pipelines
  • Team collaboration features

🤝 Contributing

Contributions welcome! This project is in active development. Priority areas:

  • OWASP API Security Top 10 scanner implementations
  • Web interface development
  • Documentation improvements
  • Test coverage expansion

πŸ›‘οΈ Security

For security issues, please email the maintainers instead of creating public issues.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

⚠️ Disclaimer

This tool is for authorized security testing only. Users must:

  • Obtain explicit written permission before testing any API
  • Comply with all applicable laws and regulations
  • Use the tool responsibly and ethically
  • Report vulnerabilities through appropriate channels

Important: This is proof-of-concept software. Use in controlled environments only.

🚀 Quick Start Checklist

  1. ✅ Install Python 3.8+ and create a virtual environment
  2. ✅ Get API Keys - At least one: OpenAI, Anthropic, Google, or set up Ollama
  3. ✅ Clone & Install - Follow the installation instructions above
  4. ✅ Configure - Copy .env.example to .env and add your API keys
  5. ✅ Test - Run configuration validation
  6. ✅ Scan - Try with a sample OpenAPI specification

💰 Cost Considerations

  • OpenAI GPT-3.5-turbo: ~$0.002 per 1K tokens (cost-effective)
  • OpenAI GPT-4: ~$0.03 per 1K tokens (higher accuracy)
  • Anthropic Claude: ~$0.008 per 1K tokens (good balance)
  • Local LLMs (Ollama): Free (requires local compute)

Estimate: Testing a 10-endpoint API costs $0.10-$2.00 depending on model choice.
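
That range is roughly what the per-1K-token prices above imply if each endpoint consumes a few thousand tokens of prompting and analysis; a back-of-the-envelope sketch, where the tokens-per-endpoint figure is an assumption rather than a measured value:

# Rough cost: price per 1K tokens x assumed tokens per endpoint x number of endpoints
PRICE_PER_1K = {"gpt-3.5-turbo": 0.002, "claude": 0.008, "gpt-4": 0.03}
TOKENS_PER_ENDPOINT = 5_000   # assumption: payload generation + response analysis
ENDPOINTS = 10

for model, price in PRICE_PER_1K.items():
    cost = price * (TOKENS_PER_ENDPOINT / 1000) * ENDPOINTS
    print(f"{model}: ~${cost:.2f} for {ENDPOINTS} endpoints")
# gpt-3.5-turbo ~$0.10, claude ~$0.40, gpt-4 ~$1.50 - consistent with the estimate above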

πŸ™ Acknowledgments

  • OpenAI, Anthropic, and Google for LLM APIs
  • OWASP for API Security Top 10 guidance
  • The cybersecurity community for vulnerability research
  • Ollama for enabling local LLM deployment

GenAI API Pentest Platform

Democratizing AI-powered API security for SMB/SME

GitHub • Roadmap • Documentation
