Winx

✨ οΌ·ο½‰ο½Žο½˜ οΌ‘ο½‡ο½…ο½Žο½” ✨

πŸ¦€ A high-performance Rust implementation of WCGW for code agents πŸ¦€


πŸ“– Overview

Winx is a Rust reimplementation of WCGW, providing shell execution and file management capabilities for LLM code agents. Designed for high performance and reliability, Winx integrates with Claude and other LLMs via the Model Context Protocol (MCP).

🌟 Features

  • ⚑ High Performance: Implemented in Rust for maximum efficiency
  • πŸ€– Multi-Provider AI Integration (v0.1.5):
    • 🎯 DashScope/Qwen3: Primary AI provider, using Alibaba Cloud's Qwen3-Coder-Plus model
    • πŸ”„ NVIDIA NIM: First fallback, using the Qwen3-235B-A22B model with thinking mode
    • πŸ’Ž Google Gemini: Second fallback, using the Gemini-1.5-Pro and Gemini-1.5-Flash models
    • πŸ”§ AI-Powered Code Analysis: Detect bugs, security issues, and performance problems
    • πŸš€ AI Code Generation: Generate code from natural language descriptions
    • πŸ“š AI Code Explanation: Get detailed explanations of complex code
    • 🎭 AI-to-AI Chat: The Winx fairy assistant, with personality and multiple conversation modes
    • πŸ›‘οΈ Smart Fallback System: Automatic provider switching on failures (see the sketch after this list)
  • πŸ“ Advanced File Operations:
    • πŸ“– Read files with line range support
    • ✏️ Write new files with syntax validation
    • πŸ” Edit existing files with intelligent search/replace
    • πŸ”„ Smart file caching with change detection
    • πŸ“ Line-level granular read tracking
  • πŸ–₯️ Command Execution:
    • πŸš€ Run shell commands with status tracking
    • πŸ“Ÿ Interactive shell with persistent session
    • ⌨️ Full input/output control via PTY
    • πŸƒβ€β™‚οΈ Background process execution
  • πŸ”€ Operational Modes:
    • πŸ”“ wcgw: Complete access to all features
    • πŸ”Ž architect: Read-only mode for planning and analysis
    • πŸ”’ code_writer: Restricted access for controlled modifications
  • πŸ“Š Project Management:
    • πŸ“ Repository structure analysis
    • πŸ’Ύ Context saving and task resumption
  • πŸ–ΌοΈ Media Support: Read images and encode as base64
  • 🧩 MCP Protocol: Seamless integration with Claude and other LLMs
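
The smart fallback system can be pictured as an ordered provider list that is walked until one call succeeds. The sketch below is illustrative only: the AiProvider trait, ProviderError type, and function names are hypothetical stand-ins, not Winx's actual internals.

use async_trait::async_trait;

// Hypothetical error type, for illustration only.
#[derive(Debug)]
enum ProviderError {
    NoProviders,
    Request(String),
}

// Hypothetical provider abstraction: DashScope, NVIDIA NIM, and Gemini
// would each implement this trait.
#[async_trait]
trait AiProvider: Send + Sync {
    fn name(&self) -> &str;
    async fn complete(&self, prompt: &str) -> Result<String, ProviderError>;
}

// Walk the providers in priority order (DashScope -> NVIDIA NIM -> Gemini)
// and return the first successful completion.
async fn complete_with_fallback(
    providers: &[Box<dyn AiProvider>],
    prompt: &str,
) -> Result<String, ProviderError> {
    let mut last_err = ProviderError::NoProviders;
    for provider in providers {
        match provider.complete(prompt).await {
            Ok(reply) => return Ok(reply), // first success wins
            Err(err) => {
                eprintln!("provider {} failed, trying next", provider.name());
                last_err = err;
            }
        }
    }
    Err(last_err) // every provider failed
}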

πŸ–‡οΈ Installation & Setup

Prerequisites

  • Rust 1.70 or higher
  • Tokio runtime

1. Clone the Repository

git clone https://github.com/gabrielmaialva33/winx-code-agent.git && cd winx-code-agent

2. Build the Project

# For development
cargo build

# For production
cargo build --release

3. Run the Agent

# Using cargo
cargo run

# Or directly
./target/release/winx-code-agent

πŸ”§ Integration with Claude

Winx is designed to work seamlessly with Claude via the MCP interface:

  1. Edit Claude's Configuration

    // In claude_desktop_config.json (Mac: ~/Library/Application Support/Claude/claude_desktop_config.json)
    {
      "mcpServers": {
        "winx": {
          "command": "/path/to/winx-code-agent",
          "args": [],
          "env": {
            "RUST_LOG": "info",
            "DASHSCOPE_API_KEY": "your-dashscope-api-key",
            "DASHSCOPE_MODEL": "qwen3-coder-plus",
            "NVIDIA_API_KEY": "your-nvidia-api-key",
            "NVIDIA_DEFAULT_MODEL": "qwen/qwen3-235b-a22b",
            "GEMINI_API_KEY": "your-gemini-api-key",
            "GEMINI_MODEL": "gemini-1.5-pro"
          }
        }
      }
    }
  2. Restart Claude after configuration to see the Winx MCP integration icon.

  3. Start using the tools through Claude's interface.


πŸ› οΈ Available Tools

πŸš€ initialize

Always call this first to set up your workspace environment.

initialize(
  type="first_call",
  any_workspace_path="/path/to/project",
  mode_name="wcgw"
)

πŸ–₯️ bash_command

Execute shell commands with persistent shell state and full interactive capabilities.

# Execute commands
bash_command(
  action_json={"command": "ls -la"},
  chat_id="i1234"
)

# Check command status
bash_command(
  action_json={"status_check": true},
  chat_id="i1234"
)

# Send input to running commands
bash_command(
  action_json={"send_text": "y"},
  chat_id="i1234"
)

# Send special keys (Ctrl+C, arrow keys, etc.)
bash_command(
  action_json={"send_specials": ["Enter", "CtrlC"]},
  chat_id="i1234"
)

πŸ“ File Operations

  • read_files: Read file content with line range support

    read_files(
      file_paths=["/path/to/file.rs"],
      show_line_numbers_reason=null
    )
    
  • file_write_or_edit: Write new files or edit existing ones, using full content or search/replace blocks (see the note after this list)

    file_write_or_edit(
      file_path="/path/to/file.rs",
      percentage_to_change=100,
      file_content_or_search_replace_blocks="content...",
      chat_id="i1234"
    )
    
  • read_image: Process image files as base64

    read_image(
      file_path="/path/to/image.png"
    )
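
A note on file_write_or_edit: for smaller edits, search/replace blocks can be passed in place of full file content, pairing a low percentage_to_change with blocks in the format shown below (the same format appears in the workflow section later). The Rust snippet inside the block is just an example:

file_write_or_edit(
  file_path="/path/to/file.rs",
  percentage_to_change=20,
  file_content_or_search_replace_blocks="<<<<<<< SEARCH\nfn greet() {}\n=======\nfn greet(name: &str) {}\n>>>>>>> REPLACE",
  chat_id="i1234"
)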
    

πŸ’Ύ context_save

Save task context for later resumption.

context_save(
  id="task_name",
  project_root_path="/path/to/project",
  description="Task description",
  relevant_file_globs=["**/*.rs"]
)

πŸ€– AI-Powered Tools (v0.1.5)

  • code_analyzer: AI-powered code analysis for bugs, security, and performance

    code_analyzer(
      file_path="/path/to/code.rs",
      language="Rust"
    )
    
  • ai_generate_code: Generate code from natural language description

    ai_generate_code(
      prompt="Create a REST API for user management",
      language="Rust",
      context="Using Axum framework",
      max_tokens=1000,
      temperature=0.7
    )
    
  • ai_explain_code: Get AI explanation and documentation for code

    ai_explain_code(
      file_path="/path/to/code.rs",
      language="Rust",
      detail_level="expert"
    )
    
  • winx_chat: Chat with Winx, your AI assistant fairy ✨

    winx_chat(
      message="Oi Winx, como funciona o sistema de fallback?",
      conversation_mode="technical",
      include_system_info=true,
      personality_level=8
    )
    

    Conversation Modes:

    • casual: Informal, friendly chat with personality 😊
    • technical: Focused technical responses πŸ”§
    • help: Help mode with detailed explanations πŸ†˜
    • debug: Debugging assistance πŸ›
    • creative: Creative brainstorming πŸ’‘
    • mentor: Teaching and best practices πŸ§™β€β™€οΈ

πŸ‘¨β€πŸ’» Usage Workflow

  1. Initialize the workspace

    initialize(type="first_call", any_workspace_path="/path/to/your/project")
    
  2. Explore the codebase

    bash_command(action_json={"command": "find . -type f -name '*.rs' | sort"}, chat_id="i1234")
    
  3. Read key files

    read_files(file_paths=["/path/to/important_file.rs"])
    
  4. Make changes

    file_write_or_edit(file_path="/path/to/file.rs", percentage_to_change=30, 
    file_content_or_search_replace_blocks="<<<<<<< SEARCH\nold code\n=======\nnew code\n>>>>>>> REPLACE", 
    chat_id="i1234")
    
  5. Run tests

    bash_command(action_json={"command": "cargo test"}, chat_id="i1234")
    
  6. Chat with Winx for help

    winx_chat(message="Winx, can I get help optimizing this code?",
    conversation_mode="mentor", include_system_info=true)
    
  7. Save context for later

    context_save(id="my_task", project_root_path="/path/to/project", 
    description="Implementation of feature X", relevant_file_globs=["src/**/*.rs"])
    

🏷 Need Support or Assistance?

If you need help or have any questions about Winx, feel free to open an issue on the GitHub repository.


πŸ“ Changelog

v0.1.5 (Latest) - Multi-Provider AI Integration

πŸš€ Major Features:

  • Multi-Provider AI System: Primary DashScope, fallback to NVIDIA, then Gemini
  • DashScope/Qwen3 Integration: Alibaba Cloud's Qwen3-Coder-Plus as primary AI provider
  • Smart Fallback System: Automatic provider switching with comprehensive error handling
  • 3 New AI Tools: code_analyzer, ai_generate_code, ai_explain_code

🎯 AI Providers:

  • DashScope: Primary provider with OpenAI-compatible API format
  • NVIDIA NIM: Qwen3-235B-A22B with thinking mode and MoE architecture
  • Google Gemini: Gemini-1.5-Pro and Gemini-1.5-Flash models

πŸ› οΈ Technical Improvements:

  • Rate limiting and retry logic for all AI providers (sketched below)
  • Comprehensive logging and error reporting
  • Environment-based configuration management
  • Full CI/CD quality checks (formatting, linting, testing)
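
As a rough illustration of the retry behavior, here is a minimal sketch of exponential backoff; the helper name, attempt count, and delays are hypothetical, not Winx's actual configuration:

use std::time::Duration;

// Hypothetical retry helper: run a fallible async operation up to
// `max_attempts` times, doubling the delay after each failure
// (100 ms, 200 ms, 400 ms, ...).
async fn retry_with_backoff<F, Fut, T, E>(mut op: F, max_attempts: u32) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
{
    let mut delay = Duration::from_millis(100);
    let mut attempt = 1;
    loop {
        match op().await {
            Ok(value) => return Ok(value),
            Err(err) if attempt >= max_attempts => return Err(err),
            Err(_) => {
                tokio::time::sleep(delay).await; // back off before retrying
                delay *= 2;
                attempt += 1;
            }
        }
    }
}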

πŸ™ Special Thanks

A huge thank you to rusiaaman for WCGW, the primary inspiration for this project. Winx reimplements WCGW's features in Rust for improved performance and reliability.


πŸ“œ License

MIT
