
Conversation

@tykbot (bot, Contributor) commented May 27, 2025

User description

This PR was automatically created by Probe AI in response to issue #6459.

AI Response Summary:

I've created a comprehensive documentation page that explains the performance comparison tools available at the requested URLs. The new page, titled "Performance Comparisons," has been placed in the api-management directory alongside the existing performance-monitoring.md file.

The documentation page covers:

  1. Introduction to the Performance Comparison Tools - Explains what these interactive tools are and why they're valuable for comparing Tyk with Kong and Apollo GraphQL Gateway.

  2. How to Use and Interpret the Interactive Graphs - Provides guidance on navigating the tools, selecting different test scenarios, cloud providers, and machine types, and understanding the key metrics (RPS and latency).

  3. Description of Test Scenarios - Details the different API gateway usage patterns being tested, such as basic proxy, authentication, rate limiting, etc.

  4. Cloud Providers and Machine Types - Explains the different cloud environments (AWS, GCP, Azure) and machine sizes used in the benchmarks.

  5. Using This Information for Decision-Making - Offers guidance on how to apply these performance insights when selecting an API gateway solution.

The page also includes links to additional performance-related resources, including the existing Performance Monitoring documentation and relevant blog posts about Tyk's performance characteristics.

This new documentation provides users with a clear understanding of the performance comparison tools and how to use them to make informed decisions about API gateway solutions.


PR Type

Documentation


Description

  • Adds a new documentation page explaining performance comparison tools.

  • Details how to use and interpret interactive benchmark graphs.

  • Describes test scenarios, cloud providers, and machine types.

  • Provides guidance for decision-making based on performance data.


Changes walkthrough 📝

Relevant files

Documentation: tyk-docs/content/api-management/performance-comparisons.md (+118/-0)
New documentation page for performance comparison tools

  • Introduces a new documentation page on performance comparison tools.
  • Explains usage of interactive graphs for Tyk, Kong, and Apollo.
  • Details test scenarios, metrics, cloud providers, and machine types.
  • Offers guidance for interpreting results and making informed choices.

    Need help?
  • Type /help how to ... in the comments thread for any questions about PR-Agent usage.
  • Check out the documentation for more information.
  • … performance comparison tools avail
    
    Generated by Probe AI for issue #6459
    @github-actions (Contributor) commented:

    ⚠️ Deploy preview for PR #6465 did not become live after 3 attempts.
    Please check Netlify or try manually: Preview URL

    @github-actions (Contributor) commented:

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    🎫 Ticket compliance analysis ✅

    #6459 - Fully compliant

    Compliant requirements:

    • Create a documentation page that presents and explains the content of the performance comparison graphs at the specified URLs.
    • Clearly explain what the graphs/tools are and why they are valuable.
    • Provide guidance on how to use and interpret the interactive graphs.
    • Describe the different test scenarios covered by the benchmarks.
    • Explain the cloud providers and machine types used in the tests.
    • Offer guidance on how to use this information for decision-making.
    ⏱️ Estimated effort to review: 1 🔵⚪⚪⚪⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Documentation Accuracy

    Ensure that the explanations and descriptions of the performance comparison tools, metrics, and scenarios accurately reflect the actual content and functionality of the referenced interactive tools and graphs.

    ## Introduction to the Performance Comparison Tools
    
    Tyk provides interactive performance comparison tools that allow you to evaluate and compare the performance characteristics of different API gateway solutions across various scenarios and cloud environments. These tools offer valuable insights for organizations making decisions about which API gateway best suits their specific requirements.
    
    The comparison tools currently include benchmarks for:
    
    - [Tyk API Gateway](https://tyk.io/docs/apps/analyzer/tyk.html)
    - [Kong API Gateway](https://tyk.io/docs/apps/analyzer/kong.html)
    - [Apollo GraphQL Gateway](https://tyk.io/docs/apps/analyzer/apollo.html)
    
    These interactive tools present real-world performance data collected from standardized benchmark tests, allowing for fair and transparent comparisons between different API gateway solutions.
    
    ## How to Use and Interpret the Interactive Graphs
    
    The performance comparison tools feature interactive graphs that allow you to:
    
    1. **Select Test Scenarios**: Choose from different API gateway usage patterns and configurations
    2. **Filter by Cloud Provider**: Compare performance across AWS, GCP, and Azure
    3. **View Different Machine Types**: See how performance scales with different instance sizes
    4. **Toggle Between Metrics**: Switch between requests per second (RPS), latency, and other performance indicators
    
    ### Key Metrics Explained
    
    When analyzing the graphs, pay attention to these key metrics:
    
    - **Requests Per Second (RPS)**: The number of API requests the gateway can handle per second - higher is better
    - **Latency (ms)**: The time taken to process requests - lower is better
    - **Error Rate**: The percentage of failed requests - lower is better
    - **CPU Utilization**: How much processing power is consumed - lower is more efficient
    
    The graphs allow you to hover over data points to see specific values and compare performance across different configurations.
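
    For example, if a hypothetical 30-second benchmark run completes 600,000 requests, that works out to 600,000 / 30 = 20,000 RPS; a reported p99 latency of 8 ms would mean 99% of those requests completed within 8 ms. (These numbers are invented for illustration and are not taken from the benchmark results.)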
    
    ## Description of Test Scenarios
    
    The performance tools include several standardized test scenarios designed to simulate common API gateway usage patterns:
    
    ### Basic Proxy
    
    Tests the gateway's performance when simply passing requests through to a backend service without additional processing. This represents the baseline performance of the gateway.
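
    As an illustrative sketch (not the exact benchmark configuration), a pass-through API like this can be described in a classic Tyk API definition; the listen path and target URL below are placeholders:

    ```json
    {
      "name": "baseline-proxy",
      "api_id": "baseline-proxy",
      "active": true,
      "use_keyless": true,
      "version_data": {
        "not_versioned": true,
        "versions": { "Default": { "name": "Default" } }
      },
      "proxy": {
        "listen_path": "/baseline/",
        "target_url": "http://upstream.example.com",
        "strip_listen_path": true
      }
    }
    ```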
    
    ### Authentication
    
    Measures performance when the gateway is validating API keys or other authentication credentials with each request. This is one of the most common gateway functions.
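
    In Tyk terms, for example, switching the baseline sketch above from keyless to auth-token mode comes down to a couple of fields (a fragment; the header name is the conventional default, not anything benchmark-specific):

    ```json
    {
      "use_keyless": false,
      "auth": {
        "auth_header_name": "Authorization"
      }
    }
    ```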
    
    ### Rate Limiting
    
    Tests how efficiently the gateway can enforce rate limits on incoming requests, an essential capability for protecting backend services.
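
    As an illustration, Tyk can apply an API-wide limit through the `global_rate_limit` field of the API definition; the values below are placeholders, not the benchmark settings:

    ```json
    {
      "global_rate_limit": {
        "rate": 1000,
        "per": 60
      }
    }
    ```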
    
    ### Transformation
    
    Evaluates performance when the gateway is modifying request/response data, such as header manipulation or payload transformation.
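
    For instance, a simple header transformation can be expressed in a classic Tyk API definition under `extended_paths.transform_headers`; this fragment uses placeholder paths and header names:

    ```json
    {
      "transform_headers": [
        {
          "path": "/anything",
          "method": "GET",
          "add_headers": { "X-Benchmark": "true" },
          "delete_headers": ["User-Agent"]
        }
      ]
    }
    ```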
    
    ### Complex Routing
    
    Tests the gateway's ability to route requests based on complex rules and conditions, simulating real-world microservices architectures.
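
    One way to express such rules in Tyk is with `extended_paths.url_rewrites`; a sketch with invented paths:

    ```json
    {
      "url_rewrites": [
        {
          "path": "/books",
          "method": "GET",
          "match_pattern": "/books/(\\d+)",
          "rewrite_to": "/library/volumes/$1"
        }
      ]
    }
    ```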
    
    ## Cloud Providers and Machine Types
    
    The performance tools allow you to compare results across different cloud environments:
    
    ### Cloud Providers
    
    - **AWS (Amazon Web Services)**: Tests run on Amazon EC2 instances
    - **GCP (Google Cloud Platform)**: Tests run on Google Compute Engine instances
    - **Azure (Microsoft Azure)**: Tests run on Azure Virtual Machines
    
    ### Machine Types
    
    For each cloud provider, tests are conducted on a range of machine types:
    
    - **Small**: 2 vCPUs, 4-8GB RAM
    - **Medium**: 4 vCPUs, 8-16GB RAM
    - **Large**: 8+ vCPUs, 16-32GB RAM
    
    This variety allows you to understand how each gateway solution scales with additional resources and helps identify the most cost-effective configuration for your expected workload.
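
    As a hypothetical worked example: if a medium instance costing $0.20/hour sustains 20,000 RPS and a large instance costing $0.40/hour sustains 30,000 RPS, the medium machine delivers 100,000 RPS per dollar-hour against 75,000 for the large one, making it the more cost-effective choice unless the extra absolute throughput is needed. (Prices and throughput figures here are invented for illustration.)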
    
    ## Using This Information for Decision-Making
    
    When using these performance comparison tools to inform your API gateway selection:
    
    ### Consider Your Specific Requirements
    
    1. **Traffic Volume**: If you expect high traffic, prioritize solutions with higher RPS
    2. **Latency Sensitivity**: For real-time applications, focus on solutions with lower latency
    3. **Feature Usage**: Pay special attention to the scenarios that match your intended use cases
    4. **Cost Efficiency**: Compare performance relative to the instance size to determine the most cost-effective solution
    
    ### Best Practices for Evaluation
    
    1. **Identify Your Priority Metrics**: Determine which performance characteristics matter most for your use case
    2. **Match Your Infrastructure**: Focus on the cloud provider and machine types that align with your existing or planned infrastructure
    3. **Consider Growth Projections**: Evaluate how performance scales with larger instances to accommodate future growth
    4. **Balance Performance and Features**: Remember that the fastest solution may not always be the best if it lacks features you need
    
    ### Beyond Performance
    
    While performance is crucial, also consider:
    
    - Feature set and extensibility
    - Ease of deployment and management
    - Community support and ecosystem
    - Documentation quality
    - Security capabilities
    - Total cost of ownership
    
    ## Conclusion
    
    The interactive performance comparison tools provide valuable data to help you make informed decisions when selecting an API gateway solution. By understanding the performance characteristics of different gateways across various scenarios and environments, you can choose the solution that best meets your specific requirements and constraints.
    
    For a deeper understanding of Tyk's performance characteristics and how to optimize your Tyk deployment, see our [Performance Monitoring]({{< ref "api-management/performance-monitoring" >}}) documentation.

    @github-actions (Contributor) commented:

    PR Code Suggestions ✨

    No code suggestions found for the PR.

    @netlify netlify bot commented May 27, 2025

    PS. Add /docs/nightly to the end of the URL.

    🔨 Latest commit: a991f9f
    🔍 Latest deploy log: https://app.netlify.com/projects/tyk-docs/deploys/6836e23e9ca83f00083083db
    😎 Deploy Preview: https://deploy-preview-6465--tyk-docs.netlify.app

    @netlify netlify bot commented May 27, 2025

    PS. Add /docs/nightly to the end of the URL.

    🔨 Latest commit: 7341052
    🔍 Latest deploy log: https://app.netlify.com/projects/tyk-docs/deploys/6835c23dab793e0008962ec6
    😎 Deploy Preview: https://deploy-preview-6465--tyk-docs.netlify.app

    @letzya (Contributor) commented May 27, 2025

    /writer please integrate the content in the page itself and not refer to another URL like https://tyk.io/docs/apps/analyzer/tyk

    …integrate the performance data dire
    
    Generated by Probe AI for pr #6465
    @tykbot (bot, Contributor, Author) commented May 27, 2025

    I've successfully updated the performance-comparisons.md file to integrate the performance data directly into the page rather than linking to external URLs.

    The key changes include:

    1. Removed external links: Replaced the links to external analyzer tools (https://tyk.io/docs/apps/analyzer/tyk.html, etc.) with embedded performance data directly in the documentation.

    2. Added comprehensive performance tables: Created detailed HTML tables showing:

      • Performance metrics for each gateway (Tyk, Kong, and Apollo)
      • RPS (Requests Per Second), latency, and CPU utilization for different scenarios
      • Comparative data across cloud providers and machine types
    3. Updated section titles and descriptions: Changed "How to Use and Interpret the Interactive Graphs" to "How to Interpret the Performance Data" to reflect the embedded content approach.

    4. Added cloud provider comparison: Included additional tables showing performance across different cloud providers (AWS, GCP, Azure) and machine types to help users understand scaling characteristics.

    The documentation now provides all the performance comparison data directly within the page, eliminating the need for users to navigate to external URLs. This makes the information more accessible and provides a better user experience by keeping all relevant performance data in one place.

    Changes have been applied and pushed to the current PR branch.


    Tip: Mention me again using /writer <request>.
    Powered by Probe AI

    @letzya marked this pull request as draft May 28, 2025 10:42
