diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-add-suite.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-add-suite.md
new file mode 100644
index 000000000..4149ecafb
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-add-suite.md
@@ -0,0 +1,66 @@
+---
+id: api-testing-add-suite
+title: Adding New Test Suites
+description: Guide to adding to new Suites using "Add New"
+sidebar_label: Adding New Suite
+tags:
+ - api-testing
+ - test-organization
+ - test-suite
+ - test-management
+---
+# Adding a Test Suite
+
+In this guide, we will walk through the process of adding a test suite in Keploy. You can either provide the test suite details manually or import a cURL command directly to create one.
+
+## Steps to Add a Test Suite
+
+1. **Click on the Plus Button**
+ - Navigate to the test suite section in the Keploy interface.
+ - Click on the `+` button to add a new test suite.
+
+2. **Provide Test Suite Details**
+ - Fill in the following fields:
+ - **Name**: Enter a unique name for the test suite.
+ - **Details**: Provide a brief description of the test suite.
+ - **Request**: Specify the request details, such as the HTTP method, URL, headers, and body.
+ - **Associations**: Define any associations or dependencies related to the test suite.
+
+3. **Import a cURL Command (Optional)**
+   - If you have a cURL command, you can import it directly to create the test suite.
+   - Paste the cURL command into the provided input field.
+   - Keploy will automatically parse the command and populate the test suite details.
+
+4. **Save the Test Suite**
+ - Once all the details are filled in, click on the `Save` button to create the test suite.
+
+## Example
+
+### Manual Entry
+```json
+{
+ "name": "User Authentication",
+ "details": "Tests the login functionality.",
+ "request": {
+ "method": "POST",
+ "url": "https://api.example.com/login",
+ "headers": {
+ "Content-Type": "application/json"
+ },
+ "body": {
+ "username": "test_user",
+ "password": "secure_password"
+ }
+ },
+ "associations": ["auth-service", "user-database"]
+}
+```
+
+### Importing a cURL Command
+```bash
+curl -X POST https://api.example.com/login \
+ -H "Content-Type: application/json" \
+ -d '{"username": "test_user", "password": "secure_password"}'
+```
+
+By following these steps, you can easily create and manage test suites in Keploy.
\ No newline at end of file
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-adding-labels.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-adding-labels.md
new file mode 100644
index 000000000..274ed1fb4
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-adding-labels.md
@@ -0,0 +1,412 @@
+---
+id: api-testing-adding-labels
+title: Adding Labels to Test Suites
+description: Guide to creating and assigning labels to test suites individually or in bulk
+sidebar_label: Adding Labels
+tags:
+ - api-testing
+ - test-organization
+ - labels
+ - test-management
+---
+
+# Adding Labels to Test Suites
+
+Labels help you organize, categorize, and filter your test suites effectively. Keploy provides flexible labeling options that allow you to add labels to individual test suites or multiple suites at once.
+
+## Overview
+
+The labeling system in Keploy offers:
+
+- **Individual Labeling**: Add labels to specific test suites
+- **Bulk Labeling**: Apply labels to multiple suites simultaneously
+- **Label Management**: Create, edit, and delete custom labels
+- **Filtering**: Use labels to filter and organize your test collection
+
+## Adding Labels to Individual Test Suites
+
+### Method 1: Using the Three Dots Menu
+
+1. **Navigate to Test Suites**
+ - Go to your Keploy dashboard
+ - Click on the **Test Suites** section
+
+2. **Access Suite Options**
+ - Locate the test suite you want to label
+ - Click the **three dots (⋮)** menu next to the test suite name
+ - The menu appears in the top-right area of each suite row
+
+3. **Select Add Label Option**
+ - From the dropdown menu, click **"Add Label"** or **"Manage Labels"**
+ - A label management dialog will open
+
+### Method 2: From Suite Details Page
+
+1. **Open Test Suite**
+ - Click on the test suite name to open its details page
+
+2. **Find Label Section**
+ - Look for the **"Labels"** or **"Tags"** section in the suite header
+ - Click the **"+ Add Label"** button
+
+## Label Assignment Interface
+
+When you open the label assignment dialog, you'll see:
+
+### Existing Labels Section
+```
+🏷️ Available Labels
+├── 📊 Priority
+│ ├── high-priority
+│ ├── medium-priority
+│ └── low-priority
+├── 🌍 Environment
+│ ├── production
+│ ├── staging
+│ └── development
+├── 👥 Team
+│ ├── team-frontend
+│ ├── team-backend
+│ └── team-qa
+└── 🔍 Type
+ ├── smoke-test
+ ├── regression
+ └── integration
+```
+
+### Assigning Existing Labels
+
+1. **Browse Categories**
+ - Expand label categories to see available options
+ - Use the search box to find specific labels quickly
+
+2. **Select Labels**
+ - Click on labels to select them
+ - Selected labels will be highlighted or marked with a checkmark ✓
+ - You can select multiple labels from different categories
+
+3. **Apply Labels**
+ - Review your selections in the "Selected Labels" preview
+ - Click **"Apply Labels"** to assign them to the test suite
+
+## Creating New Labels
+
+### Creating During Assignment
+
+1. **Open Label Dialog**
+ - Follow the steps above to open the label assignment interface
+
+2. **Create New Label**
+ - Click **"Create New Label"** or the **"+"** button
+ - Enter label details in the creation form
+
+3. **Label Creation Form**
+ ```
+ Label Name: [smoke-critical]
+ Category: [Type] (dropdown)
+ Color: [🔴] (color picker)
+ Description: [Critical smoke tests that must pass]
+ ```
+
+4. **Save and Apply**
+ - Click **"Create Label"** to save the new label
+ - The new label will automatically be selected for the current suite
+ - Click **"Apply Labels"** to complete the assignment
+
+### Pre-creating Labels
+
+You can also create labels in advance:
+
+1. **Access Label Management**
+ - Go to **Settings** → **Label Management**
+ - Or click **"Manage All Labels"** from any label dialog
+
+2. **Create Label Categories**
+ ```
+ Category: Priority
+ Labels: critical, high, medium, low
+
+ Category: Environment
+ Labels: prod, staging, dev, local
+
+ Category: Team
+ Labels: frontend, backend, qa, devops
+ ```
+
+## Bulk Label Assignment
+
+### Using Checkbox Selection
+
+1. **Select Multiple Suites**
+ - Navigate to the Test Suites list
+ - Use checkboxes to select multiple test suites
+ - Or click **"Select All"** to choose all visible suites
+
+2. **Access Bulk Actions**
+ - After selecting suites, a bulk actions toolbar appears
+ - Click **"Add Labels"** or **"Manage Labels"** button
+
+3. **Bulk Label Dialog**
+ ```
+ Selected Suites: 5 suites
+ ├── User Authentication Suite
+ ├── Payment Processing Suite
+ ├── Order Management Suite
+ ├── Notification Suite
+ └── Report Generation Suite
+
+ Actions:
+ ☐ Add labels (append to existing)
+ ☐ Replace labels (remove existing, add new)
+ ☐ Remove specific labels
+ ```
+
+4. **Choose Action Type**
+ - **Add Labels**: Append new labels to existing ones
+ - **Replace Labels**: Remove all existing labels and add new ones
+ - **Remove Labels**: Remove specific labels from all selected suites
+
+5. **Select Labels**
+ - Choose from existing labels or create new ones
+ - Preview shows which suites will be affected
+ - Click **"Apply to Selected Suites"**
+
+## Label Management Best Practices
+
+### Naming Conventions
+
+1. **Use Consistent Formatting**
+ ```
+ ✅ Good Examples:
+ - team-frontend
+ - priority-high
+ - env-production
+ - type-smoke-test
+
+ ❌ Avoid:
+ - TeamFrontend
+ - HIGH_PRIORITY
+ - prod env
+ - smoke test type
+ ```
+
+2. **Category-Based Organization**
+ ```
+ Priority: critical, high, medium, low
+ Environment: production, staging, development
+ Type: smoke, regression, integration, e2e
+ Team: frontend, backend, qa, devops
+ Status: active, deprecated, experimental
+ ```
+
+### Label Hierarchy
+
+Organize labels in a logical hierarchy:
+
+```
+🏢 Organization Level
+├── 🌍 Environment
+│ ├── production
+│ ├── staging
+│ └── development
+├── 👥 Team Ownership
+│ ├── team-auth
+│ ├── team-payments
+│ └── team-notifications
+├── 📊 Test Classification
+│ ├── type-smoke
+│ ├── type-regression
+│ └── type-integration
+└── ⚡ Priority Level
+ ├── priority-p0
+ ├── priority-p1
+ └── priority-p2
+```
+
+## Using Labels for Organization
+
+### Filtering by Labels
+
+1. **Filter Interface**
+ ```
+ Filters: [Environment: staging] [Team: backend] [Priority: high]
+
+ Results: 12 test suites found
+ ├── ✅ User Service API Tests (staging, backend, high)
+ ├── ✅ Payment Gateway Tests (staging, backend, high)
+ └── ✅ Order Processing Tests (staging, backend, high)
+ ```
+
+2. **Combine Multiple Filters**
+ - Use AND logic: Show suites with ALL selected labels
+ - Use OR logic: Show suites with ANY selected labels
+ - Exclude labels: Show suites WITHOUT specific labels
+
+### Search by Labels
+
+```
+Search Examples:
+- label:high-priority
+- label:team-frontend OR label:team-backend
+- label:smoke-test AND label:production
+- -label:deprecated (exclude deprecated suites)
+```
+
+## Advanced Label Operations
+
+### Conditional Labeling
+
+Apply labels based on conditions:
+
+```
+IF suite.name CONTAINS "auth"
+ THEN add labels: [team-auth, security]
+
+IF suite.environment == "production"
+ THEN add labels: [critical, monitored]
+
+IF suite.last_run < 30_days_ago
+ THEN add labels: [stale, review-needed]
+```
+
+### Label Automation
+
+Set up automatic labeling rules:
+
+1. **Auto-labeling on Creation**
+ ```yaml
+ rules:
+ - if: suite_name matches "smoke*"
+ labels: [type-smoke, priority-high]
+ - if: created_by == "ci-pipeline"
+ labels: [automated, ci-generated]
+ ```
+
+2. **Schedule-based Labeling**
+ ```yaml
+ scheduled_rules:
+ - schedule: "daily"
+ condition: last_run > 7_days
+ action: add_label "needs-attention"
+ ```
+
+## Label Analytics and Reporting
+
+### Label Distribution
+
+View how labels are distributed across your test suites:
+
+```
+Label Usage Report
+==================
+📊 Priority Labels:
+├── high-priority: 45 suites (23%)
+├── medium-priority: 89 suites (45%)
+└── low-priority: 63 suites (32%)
+
+🌍 Environment Labels:
+├── production: 67 suites (34%)
+├── staging: 78 suites (39%)
+└── development: 52 suites (27%)
+
+👥 Team Labels:
+├── team-frontend: 34 suites (17%)
+├── team-backend: 56 suites (28%)
+└── team-qa: 23 suites (12%)
+```
+
+### Label-based Success Rates
+
+Track test success rates by label:
+
+```
+Success Rate by Label
+====================
+🏷️ high-priority: 94% success rate
+🏷️ team-backend: 89% success rate
+🏷️ production: 97% success rate
+🏷️ smoke-test: 92% success rate
+```
+
+## Troubleshooting
+
+### Common Issues
+
+1. **Cannot Add Labels**
+ - **Check Permissions**: Ensure you have edit access to the test suite
+ - **Verify Suite Status**: Make sure the suite isn't currently running
+ - **Browser Issues**: Clear cache and refresh the page
+
+2. **Labels Not Appearing**
+ - **Refresh View**: Reload the test suites page
+ - **Check Filters**: Verify that filters aren't hiding labeled suites
+ - **Sync Issues**: Wait a moment for changes to propagate
+
+3. **Bulk Operations Failing**
+ - **Selection Limit**: Reduce the number of selected suites
+ - **Permission Issues**: Ensure you have bulk edit permissions
+ - **Server Load**: Try again during lower usage periods
+
+### Best Practices for Troubleshooting
+
+1. **Start Small**
+ - Test labeling with one suite first
+ - Gradually increase to bulk operations
+
+2. **Verify Changes**
+ - Check that labels appear correctly after assignment
+ - Test filtering with newly added labels
+
+3. **Document Issues**
+ - Note any error messages for support
+ - Record steps that led to the problem
+
+## Integration Examples
+
+### API Usage
+
+```bash
+# Add labels via API
+curl -X POST "https://api.keploy.io/test-suites/{suite-id}/labels" \
+ -H "Authorization: Bearer your-token" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "labels": ["high-priority", "team-backend", "production"]
+ }'
+
+# Bulk label assignment
+curl -X POST "https://api.keploy.io/test-suites/bulk-labels" \
+ -H "Authorization: Bearer your-token" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "suite_ids": ["suite-1", "suite-2", "suite-3"],
+ "action": "add",
+ "labels": ["regression", "nightly-run"]
+ }'
+```
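+
+The bulk dialog above also offers a remove action. Assuming the same endpoint accepts it, a removal request would presumably follow the same shape (an illustrative sketch, not a confirmed API contract):
+
+```bash
+# Remove labels from multiple suites (hypothetical "remove" action)
+curl -X POST "https://api.keploy.io/test-suites/bulk-labels" \
+  -H "Authorization: Bearer your-token" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "suite_ids": ["suite-1", "suite-2"],
+    "action": "remove",
+    "labels": ["deprecated"]
+  }'
+```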
+
+### CI/CD Integration
+
+```yaml
+# GitHub Actions example
+- name: Label Test Suites
+ run: |
+ # Label suites based on changed files
+ if [[ "${{ github.event_name }}" == "pull_request" ]]; then
+ keploy label add --suites $AFFECTED_SUITES --labels "pr-validation"
+ fi
+
+ # Label based on branch
+ if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
+ keploy label add --suites $ALL_SUITES --labels "main-branch"
+ fi
+```
+
+## Related Features
+
+- **[Test Suite Management](./api-testing-edit-suites.md)**: Edit and organize test suites
+- **[Selective Test Execution](./api-testing-running-selective.md)**: Run tests using label filters
+- **[Test Reports](./api-testing-sharing-reports.md)**: Generate reports filtered by labels
+- **[Custom Assertions](./api-testing-custom-assertions.md)**: Create assertions for labeled suites
+
+Labels are a powerful organizational tool that help you maintain order in large test collections and enable efficient test management workflows.
\ No newline at end of file
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-assertion-tree.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-assertion-tree.md
new file mode 100644
index 000000000..4925e25da
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-assertion-tree.md
@@ -0,0 +1,142 @@
+---
+id: api-testing-assertion-tree
+title: Assertion Tree
+sidebar_label: Assertion Tree
+description: Visualize and manage your entire test flow in a structured tree format
+tags:
+ - API testing
+ - test visualization
+ - assertion tree
+ - test flow
+ - automation
+keywords:
+ - test suite visualization
+ - assertion tree
+ - API flow
+ - test step editor
+ - visual test builder
+---
+
+import ProductTier from '@site/src/components/ProductTier';
+
+
+
+## Assertion Tree
+
+The **Assertion Tree** allows you to visualize and manage your entire test suite in a structured, hierarchical format.
+
+Instead of viewing tests as isolated steps, the Assertion Tree gives you a complete flow-level perspective — including requests, responses, and assertions — in one interactive interface.
+
+---
+
+## How to Access the Assertion Tree
+
+1. Navigate to an individual **Test Suite**
+2. Click on the **"Visualize"** button
+3. The system renders the full test suite in a **tree format**
+
+---
+
+## What You Can See
+
+The Assertion Tree provides a visual representation of:
+
+- All test steps in execution order
+- Request details for each step
+- Attached assertions
+- Response validations
+- Parent-child relationships between steps (if applicable)
+
+Each node in the tree represents a test step and contains:
+
+- Request configuration
+- Associated assertions
+- Execution dependencies
+
+This makes it easier to understand how your test suite behaves as a complete workflow.
+
+---
+
+## What You Can Do
+
+The Assertion Tree is fully interactive. You can:
+
+### 1. View Complete Flow
+Understand the entire API workflow from start to finish without switching between screens.
+
+---
+
+### 2. Inspect Assertions Inline
+Quickly see which assertions are attached to each step, including:
+
+- Status code validations
+- JSON validations
+- Header validations
+- Schema validations
+- Custom function validations
+
+---
+
+### 3. Add a New Step in the Flow
+
+You can insert a new test step directly within the tree.
+
+This allows you to:
+
+- Expand an existing workflow
+- Add conditional validation steps
+- Introduce additional API calls
+- Build multi-step integration flows
+
+The new step becomes part of the structured execution sequence.
+
+---
+
+### 4. Modify Existing Steps
+
+From the tree view, you can:
+
+- Edit request configurations
+- Update assertions
+- Adjust execution order
+- Refine validation logic
+
+All changes reflect directly in the test suite.
+
+---
+
+## Why Use the Assertion Tree?
+
+The Assertion Tree is particularly useful when:
+
+- Your test suite contains multiple API calls
+- You are testing end-to-end workflows
+- Business logic spans multiple requests
+- You need clarity on how validations are structured
+- You want a visual representation instead of linear editing
+
+It transforms test management from a flat list into a structured execution graph.
+
+---
+
+## Typical Use Cases
+
+- Authentication → Resource Creation → Validation → Cleanup flows
+- Multi-step payment processing validations
+- E-commerce checkout journeys
+- Webhook-triggered event testing
+- Integration testing across services
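+
+For instance, the first flow above might render as a tree like the following sketch (the step names and layout are illustrative, not actual Keploy output):
+
+```
+Test Suite: User Lifecycle
+├── Step 1: POST /auth/login
+│   └── Assertions: status == 200, token exists
+├── Step 2: POST /users          (depends on Step 1)
+│   └── Assertions: status == 201, schema matches
+├── Step 3: GET /users/{id}      (depends on Step 2)
+│   └── Assertions: JSON contains created fields
+└── Step 4: DELETE /users/{id}   (cleanup)
+    └── Assertions: status == 204
+```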
+
+---
+
+## Best Practices
+
+- Use the tree view to design full workflows before adding assertions
+- Keep each step focused on a single responsibility
+- Attach assertions at the correct step level
+- Review flow dependencies to avoid unintended execution order
+- Use visualization to debug failing multi-step tests faster
+
+---
+
+The Assertion Tree enables you to design, inspect, and extend complex API workflows with clarity and precision — all from a single visual interface.
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-buggy-suites.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-buggy-suites.md
new file mode 100644
index 000000000..2c012d35b
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-buggy-suites.md
@@ -0,0 +1,209 @@
+---
+id: api-testing-buggy-suites
+title: Buggy Test Suites
+description: Guide to viewing and debugging failed test suites generated by Keploy
+sidebar_label: Buggy Suites
+tags:
+ - api-testing
+ - debugging
+ - test-failures
+ - troubleshooting
+---
+
+# Buggy Test Suites
+
+When Keploy generates tests, some test cases might fail for various reasons, such as endpoint issues, data mismatches, or API changes. The buggy test suites page helps you identify, understand, and fix these failing tests.
+
+## Viewing Buggy Test Suites
+
+Navigate to the test suites marked with a red exclamation icon to view all test suites that contain failing test cases. Each buggy suite displays:
+
+- **Suite Name**: The name of the test suite containing failed tests
+- **Test Steps**: Steps in the suite
+- **Failure Reason**: Reason why the test suite is buggy
+
+## Understanding Failure Reasons
+
+For each buggy test suite, you can find detailed explanations of why its tests are marked as buggy. Common failure reasons include:
+
+### 1. Endpoint Not Found (404 Errors)
+
+**Example Failure Reason:**
+```
+The response returned a 404 status code for the 'Create Owner' step, indicating the endpoint '/owners' was not found. This contradicts the documented cURL examples and schema, which show that this endpoint should exist and return a 201 status code upon successful creation.
+```
+
+**What this means:**
+- The API endpoint that was working during recording is no longer available
+- The endpoint URL might have changed
+- The API server might be down or misconfigured
+
+**How to fix:**
+1. Verify the endpoint URL is correct
+2. Check if the API server is running
+3. Review API documentation for any endpoint changes
+4. Update the test suite if the endpoint has moved
+
+### 2. Schema Validation Failures
+
+**Example Failure Reason:**
+```
+Response schema validation failed. Expected property 'id' of type 'number' but received 'string'. The API response structure has changed from the recorded version.
+```
+
+**What this means:**
+- The API response format has changed since recording
+- Data types don't match the expected schema
+- New required fields might have been added
+
+### 3. Authentication Issues
+
+**Example Failure Reason:**
+```
+Authentication failed with 401 Unauthorized. The API key or token used during recording may have expired or been revoked.
+```
+
+**What this means:**
+- API credentials have expired or changed
+- Authentication method has been updated
+- Permission levels may have changed
+
+## Assertion Failures
+
+The buggy suites page provides detailed assertion failure information to help you understand exactly what went wrong:
+
+### Response Status Assertions
+```yaml
+Expected: 201 Created
+Actual: 404 Not Found
+Assertion: status_code_equals
+Message: The endpoint returned an unexpected status code
+```
+
+### Response Body Assertions
+```yaml
+Expected: {"id": 123, "name": "John Doe", "email": "john@example.com"}
+Actual: {"error": "User not found", "code": 404}
+Assertion: json_body_equals
+Message: Response body structure completely different from expected
+```
+
+### Response Time Assertions
+```yaml
+Expected: < 2000ms
+Actual: 5432ms
+Assertion: response_time_less_than
+Message: Response time exceeded acceptable threshold
+```
+
+### Header Assertions
+```yaml
+Expected: "application/json"
+Actual: "text/html"
+Assertion: content_type_equals
+Message: Content-Type header mismatch indicates server error
+```
+
+## Debugging Actions
+
+For each buggy test case, you can take several debugging actions:
+
+### 1. View Full Test Details
+Click on any failed test to see:
+- Complete request details (URL, headers, body)
+- Full response details (status, headers, body)
+- All assertion results with expected vs actual values
+- Test execution timeline
+
+### 2. Compare with Recorded Version
+View the side-by-side comparison between:
+- **Original Recording**: The request/response captured during recording
+- **Current Execution**: The actual request/response during test execution
+- **Differences Highlighted**: Visual indicators showing what changed
+
+### 3. Manual Test Execution
+Test the endpoint manually to verify:
+```bash
+# Example manual cURL test
+curl -X POST \
+ 'https://api.example.com/owners' \
+ -H 'Content-Type: application/json' \
+ -H 'Authorization: Bearer your-token' \
+ -d '{
+ "name": "John Doe",
+ "email": "john@example.com"
+ }'
+```
+
+### 4. Update Test Expectations
+If the API behavior has legitimately changed:
+1. **Re-record the test**: Capture new expected behavior
+2. **Update assertions**: Modify expected values to match new API
+3. **Add new test cases**: Cover additional scenarios if needed
+
+## Common Debugging Scenarios
+
+### Scenario 1: Environment Differences
+**Problem**: Tests pass in development but fail in staging/production
+
+**Solution**:
+- Check environment-specific configurations
+- Verify database state and test data
+- Review environment variables and secrets
+- Ensure consistent API versions across environments
+
+### Scenario 2: Timing Issues
+**Problem**: Tests fail intermittently due to timing
+
+**Solution**:
+- Increase response timeout thresholds
+- Add delays between dependent API calls
+- Review database transaction handling
+- Consider eventual consistency in distributed systems
+
+### Scenario 3: Data Dependencies
+**Problem**: Tests fail because required data doesn't exist
+
+**Solution**:
+- Set up proper test data fixtures
+- Use data factory patterns for test preparation
+- Implement database seeding for test environments
+- Review test isolation and cleanup procedures
+
+## Best Practices for Fixing Buggy Suites
+
+1. **Start with Environment Verification**
+ - Ensure all required services are running
+ - Verify database connectivity and state
+ - Check configuration and environment variables
+
+2. **Analyze Failure Patterns**
+ - Group similar failures together
+ - Identify if failures are systematic or random
+ - Look for common root causes across multiple tests
+
+3. **Fix Root Causes, Not Symptoms**
+ - Address underlying API issues rather than just updating tests
+ - Collaborate with development teams on API stability
+ - Document breaking changes and migration paths
+
+4. **Maintain Test Quality**
+ - Regularly review and update test suites
+ - Remove obsolete or flaky tests
+ - Add new tests for changed functionality
+
+5. **Monitor Test Health**
+ - Set up alerts for test failure rates
+ - Track test suite reliability over time
+   - Review test results as part of the deployment process
+
+## Getting Help
+
+If you're unable to resolve buggy test suites:
+
+1. **Check Documentation**: Review API documentation for recent changes
+2. **Contact Support**: Reach out to the development team for API-related issues
+3. **Community Forums**: Ask questions in Keploy community channels
+4. **Share Test Details**: Provide complete test execution logs when seeking help
+
+Remember, buggy test suites often indicate real issues with your API or environment. Use them as an early warning system to maintain API quality and reliability.
\ No newline at end of file
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-bulk-assertions.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-bulk-assertions.md
new file mode 100644
index 000000000..8ea649bc5
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-bulk-assertions.md
@@ -0,0 +1,175 @@
+---
+id: api-testing-bulk-assertions
+title: Bulk Assertions and Schema Validation
+description: Guide to performing bulk assertions across multiple endpoints, methods, and status codes
+sidebar_label: Bulk Assertions
+tags:
+ - api-testing
+ - bulk-assertions
+ - schema-assertions
+ - test-validation
+ - test-management
+---
+
+# Bulk Assertions and Schema Validation
+
+This guide explains how to perform bulk assertions in Keploy, allowing you to validate multiple test cases across different endpoints, HTTP methods, and status codes simultaneously.
+
+## What are Bulk Assertions?
+
+Bulk assertions enable you to apply validation rules across multiple test suites at once, saving time and ensuring consistency in your API testing. Instead of creating assertions one by one, you can select multiple tests and apply the same assertion criteria to all of them.
+
+## What are Schema Assertions?
+
+Schema assertions allow you to validate the structure and format of API responses. You can choose specific fields from the entire response body to assert, ensuring that your API returns data in the expected format with the correct data types and required fields.
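+
+For example, a schema assertion for a user response could be expressed along the lines of the JSON Schema sketch below (the field names, and the use of JSON Schema syntax itself, are illustrative rather than a required Keploy format):
+
+```json
+{
+  "type": "object",
+  "required": ["id", "name", "email"],
+  "properties": {
+    "id": { "type": "number" },
+    "name": { "type": "string" },
+    "email": { "type": "string", "format": "email" }
+  }
+}
+```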
+
+## How to Perform Bulk Assertions
+
+### 1. Filter Your Test Suites
+
+First, use the filtering options to narrow down the tests you want to assert:
+- **Filter by Endpoint**: Select specific API endpoints
+- **Filter by HTTP Method**: Choose methods like GET, POST, PUT, DELETE, etc.
+- **Filter by Status Code**: Filter by response status codes (2xx, 4xx, 5xx, etc.)
+- **Filter by Test Suite**: Select specific test suite collections
+
+### 2. Select Tests for Bulk Assertion
+
+- Once filtered, you can select multiple test cases
+- Use checkboxes to select individual tests or select all filtered tests
+- The selection can span across different endpoints and methods
+
+### 3. Choose Assertion Fields
+
+From the entire response body, you can choose which fields to assert:
+- **Response Headers**: Validate specific headers
+- **Response Body Fields**: Select individual fields from the JSON response
+- **Status Codes**: Assert expected status codes
+- **Response Time**: Validate performance metrics
+- **Data Types**: Ensure fields have correct types (string, number, boolean, etc.)
+- **Required Fields**: Verify that mandatory fields are present
+
+### 4. Apply Schema Assertions
+
+Schema assertions validate the structure of your API responses:
+- **Field Presence**: Ensure required fields exist in the response
+- **Data Type Validation**: Verify that fields have the correct data type
+- **Format Validation**: Check formats like email, URL, date, etc.
+- **Nested Object Validation**: Validate complex nested structures
+- **Array Validation**: Assert on array properties and elements
+
+### 5. Save and Execute
+
+- Review the selected assertions
+- Apply the assertions to all selected test cases
+- Execute the tests to validate against the defined schema
+
+## Example Use Cases
+
+### Example 1: Asserting User Endpoints
+```
+Filter by:
+- Endpoint: /api/v1/users/*
+- HTTP Method: GET
+- Status Code: 200
+
+Bulk Assert:
+- Response contains: id, name, email
+- Data types: id (number), name (string), email (string)
+- Email format validation
+```
+
+### Example 2: Error Response Validation
+```
+Filter by:
+- Status Code: 4xx, 5xx
+- HTTP Method: POST, PUT, DELETE
+
+Bulk Assert:
+- Response contains: error, message, statusCode
+- Data types: error (boolean), message (string), statusCode (number)
+- Required fields: error, message
+```
+
+### Example 3: Performance Testing
+```
+Filter by:
+- Endpoint: /api/v1/products
+- HTTP Method: GET
+
+Bulk Assert:
+- Response time: < 200ms
+- Status Code: 200
+- Response contains: products (array), total (number)
+```
+
+### Example 4: Schema Validation for Multiple Endpoints
+```
+Select Multiple Endpoints:
+- /api/v1/users
+- /api/v1/products
+- /api/v1/orders
+
+Schema Assertions:
+- All responses have: timestamp, success, data
+- timestamp format: ISO 8601
+- success type: boolean
+- data type: object or array
+```
+
+## Benefits of Bulk Assertions
+
+- **Time Efficiency**: Apply assertions to multiple tests simultaneously
+- **Consistency**: Ensure uniform validation across similar endpoints
+- **Maintainability**: Update assertions for multiple tests at once
+- **Comprehensive Testing**: Validate complex scenarios across different endpoints
+- **Schema Compliance**: Ensure API responses adhere to defined schemas
+- **Reduced Errors**: Less manual work means fewer mistakes
+
+## Schema Assertion Features
+
+### Supported Validations
+
+1. **Type Checking**
+ - String, Number, Boolean, Object, Array, Null
+ - Custom type definitions
+
+2. **Format Validation**
+ - Email, URL, UUID, Date, Time, DateTime
+ - Custom regex patterns
+
+3. **Range Validation**
+ - Minimum and maximum values for numbers
+ - String length constraints
+ - Array size limits
+
+4. **Required Fields**
+ - Mark fields as mandatory
+ - Conditional requirements based on other fields
+
+5. **Nested Object Validation**
+ - Deep validation of complex structures
+ - Array of objects validation
+
+6. **Custom Assertions**
+ - Define custom validation logic
+ - Combine multiple assertion rules
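+
+A single schema can combine several of the checks listed above. As an illustrative sketch (again using JSON Schema syntax, not a fixed Keploy format), mixing type, format, and range validation:
+
+```json
+{
+  "type": "object",
+  "required": ["username", "age", "tags"],
+  "properties": {
+    "username": { "type": "string", "minLength": 3, "pattern": "^[a-z0-9_]+$" },
+    "age": { "type": "number", "minimum": 0, "maximum": 150 },
+    "tags": { "type": "array", "maxItems": 10 }
+  }
+}
+```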
+
+## Best Practices
+
+1. **Start with Filters**: Use filters to group similar tests before applying bulk assertions
+2. **Incremental Assertions**: Start with basic assertions and add more complex ones gradually
+3. **Review Before Applying**: Always review the selected tests before applying bulk assertions
+4. **Use Schema Templates**: Create reusable schema templates for common response structures
+5. **Version Control**: Keep track of schema changes across API versions
+6. **Document Assertions**: Add descriptions to complex assertions for team clarity
+
+## Tips for Effective Schema Assertions
+
+- **Keep schemas DRY**: Reuse common schema patterns across different endpoints
+- **Test edge cases**: Include assertions for empty arrays, null values, and optional fields
+- **Validate error responses**: Ensure error messages follow a consistent schema
+- **Use realistic data**: Test with production-like data for accurate validation
+- **Regular updates**: Update schemas when API contracts change
+
+By leveraging bulk assertions and schema validation, you can ensure comprehensive API testing while minimizing manual effort and maintaining high test coverage across your application.
\ No newline at end of file
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-custom-assertions.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-custom-assertions.md
new file mode 100644
index 000000000..7a934d0ec
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-custom-assertions.md
@@ -0,0 +1,129 @@
+---
+id: api-testing-custom-assertions
+title: Custom Assertions
+sidebar_label: Custom Assertions
+description: Define powerful validation rules for your API tests in Keploy
+tags:
+ - API testing
+ - assertions
+ - validation
+ - schema validation
+ - automation
+keywords:
+ - status code validation
+ - JSON assertions
+ - header validation
+ - schema validation
+ - custom functions
+
+---
+
+import ProductTier from '@site/src/components/ProductTier';
+
+
+
+Custom assertions allow you to precisely validate API responses beyond basic status checks.
+
+Keploy supports the following assertion categories:
+
+| Scenario | Recommended Assertion |
+|----------|----------------------|
+| Exact status code validation | Status Code |
+| Accept any success response | Status Code Class |
+| Partial JSON validation | JSON Contains |
+| Strict field validation | JSON Equal |
+| Response structure consistency | Schema |
+| Dynamic value comparison | Custom Function |
+| Validate only important fields | Selected Fields |
+| Security header enforcement | Header Exists / Header Equal |
+
+
+## Selected Fields
+
+Selected Fields assertions allow you to validate only specific parts of a response instead of the entire body.
+
+Useful when:
+- Response includes dynamic metadata
+- You want to ignore volatile fields (timestamps, request IDs, etc.)
+- Only certain business-critical fields matter
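+
+For example, given the response below, you might assert only the stable, business-critical fields and skip the volatile ones (field names are illustrative):
+
+```json
+{
+  "id": 123,
+  "email": "user@example.com",
+  "status": "active",
+  "requestId": "req-9f8e7d",
+  "timestamp": "2026-02-11T10:30:00Z"
+}
+```
+
+Here you would select `id`, `email`, and `status` for assertion, while leaving `requestId` and `timestamp` unvalidated because they change on every call.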
+
+## Custom Functions (Advanced Validation)
+
+For complex validation logic, Keploy supports custom functions inside assertions.
+
+Custom functions allow you to:
+- Write JavaScript expressions
+- Perform conditional validation
+- Compare multiple fields
+- Validate dynamic calculations
+- Enforce business rules
+
+### Example Use Cases
+- Validate `totalAmount = sum(lineItems)`
+- Ensure timestamp is within last 5 minutes
+- Compare response field with environment variable
+- Validate custom encryption or hashing logic
+
+### Example: E-commerce Order Validation
+
+Consider an e-commerce API that returns order details. You want to validate that the total amount equals the sum of all line items plus tax.
+
+**API Response:**
+```json
+{
+ "orderId": "ORD-12345",
+ "items": [
+ { "name": "Laptop", "price": 1200.00, "quantity": 1 },
+ { "name": "Mouse", "price": 25.50, "quantity": 2 }
+ ],
+ "subtotal": 1251.00,
+ "tax": 125.10,
+ "total": 1376.10,
+ "timestamp": "2026-02-11T10:30:00Z"
+}
+```
+
+**Custom Function for Total Validation:**
+```javascript
+// Validate that total = subtotal + tax
+function validateOrderTotal(response) {
+ const data = JSON.parse(response.body);
+ const expectedTotal = data.subtotal + data.tax;
+ const actualTotal = data.total;
+
+ return {
+ passed: Math.abs(expectedTotal - actualTotal) < 0.01, // Handle floating point precision
+ message: `Expected total ${expectedTotal}, but got ${actualTotal}`
+ };
+}
+
+// Validate that subtotal matches sum of line items
+function validateSubtotal(response) {
+ const data = JSON.parse(response.body);
+ const calculatedSubtotal = data.items.reduce((sum, item) => {
+ return sum + (item.price * item.quantity);
+ }, 0);
+
+ return {
+ passed: Math.abs(calculatedSubtotal - data.subtotal) < 0.01,
+ message: `Calculated subtotal ${calculatedSubtotal}, but API returned ${data.subtotal}`
+ };
+}
+```
+
+**Usage in Keploy:**
+1. Navigate to your test step editor
+2. Add a new assertion
+3. Select "Custom Function" as assertion type
+4. Paste your custom function code
+5. The function will execute during test runs and validate your business logic
+
+## Best Practices
+
+- **Prefer Schema validation** for dynamic APIs
+- **Use JSON Equal** only when strict comparison is necessary
+- **Avoid over-validating** volatile fields
+- **Use Custom Functions** for business logic validation
+- **Combine multiple assertions** for stronger test reliability
+- **Keep assertions focused and readable**
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-edit-assertions.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-edit-assertions.md
new file mode 100644
index 000000000..a2eb502d4
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-edit-assertions.md
@@ -0,0 +1,82 @@
+---
+id: api-testing-edit-assertions
+title: Editing Test Suites and Custom Assertions
+description: Guide to editing test suites with custom variables and assertion functions
+sidebar_label: Edit Assertions
+tags:
+ - api-testing
+ - edit-assertions
+ - custom-variables
+ - custom-functions
+ - test-management
+---
+
+# Editing Test Suites and Custom Assertions
+
+This guide explains how to edit test suites in Keploy, including adding custom variables to URLs and request bodies, and creating custom assertion functions for advanced test validation.
+
+## Overview
+
+Editing a test suite allows you to:
+
+- Modify API request details (URL, headers, body, method)
+- Create and manage **global** and **local** variables
+- Update or replace existing assertions
+- Add **custom assertion functions**
+- Write reusable validation logic for requests and responses
+
+This gives you fine-grained control over how your APIs are validated.
+
+
+## Accessing Test Suite Edit Mode
+
+### Step 1: Navigate to Test Suites
+
+1. Go to the **Test Suites** section
+2. Locate the test suite you want to modify
+
+### Step 2: Open Test Step Editor
+
+#### Using Three Dots Menu
+
+1. Click the **three dots (⋮)** next to the test suite
+2. Select **"Edit Test Step"**
+3. The Test Step Editor will open
+
+## Editing the Request
+
+Inside the Test Step Editor, you can modify:
+
+```yaml
+Method: GET
+URL: https://api.example.com/users/{{user_id}}
+
+Headers:
+ Authorization: Bearer {{auth_token}}
+
+Body:
+ email: "{{email}}"
+```
+
+You can:
+
+- Change HTTP method
+- Update endpoint path
+- Modify headers
+- Edit JSON payload
+- Inject variables into any field
+
+### Using Variables for Dynamic Tests
+
+Variables let a single test step adapt across environments and data sets instead of hard-coding values.
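+
+As a quick sketch (variable names are hypothetical, and the layout illustrates the idea rather than Keploy's exact configuration format), a step can reference suite-level variables anywhere in the request:
+
+```yaml
+# Suite-level variables (illustrative)
+variables:
+  base_url: https://api.example.com
+  user_id: 42
+
+# Referenced in a test step
+Method: GET
+URL: "{{base_url}}/users/{{user_id}}"
+```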
+
+## Editing Existing Assertions
+
+You can choose from multiple assertion categories:
+
+- Status Code Assertion
+- Header Assertion
+- Body / JSON Path Assertion
+- Schema Assertion
+- Custom Assertions: for assertions that use custom functions, refer to [Custom Assertions](./api-testing-custom-assertions.md)
+
+
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-edit-suites.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-edit-suites.md
new file mode 100644
index 000000000..bafc361f8
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-edit-suites.md
@@ -0,0 +1,85 @@
+---
+id: api-testing-edit-suites
+title: Edit Test Suites
+sidebar_label: Edit Test Suites
+description: Editing test suites for API tests
+tags:
+  - API testing
+  - test suites
+  - assertions
+  - variables
+  - test management
+keywords:
+  - edit test suite
+  - assertions
+  - custom functions
+  - variables
+  - API testing
+---
+
+import ProductTier from '@site/src/components/ProductTier';
+
+
+
+This guide will help you edit test suites in Keploy to customize your API testing workflow.
+
+## Editing Test Suite Details
+
+To modify test suite settings like name and description:
+
+1. Navigate to your test suite in the Keploy dashboard
+2. Click on the **three dots (⋮)** menu on the test suite you want to modify
+3. Select **"Edit Suite"** from the dropdown menu
+4. Update the suite name, description, and other details as needed
+5. Save your changes
+
+## Editing Individual Test Steps
+
+To modify specific test requests and responses:
+
+1. Go to the individual test step within your test suite
+2. Click on **"Edit Step"** to open the test editor
+3. You can now modify:
+ - Request details (URL, headers, body, parameters)
+ - HTTP method
+ - Request payload
+
+### Adding and Editing Assertions
+
+Assertions help validate your API responses. To add or edit assertions:
+
+1. In the test step editor, navigate to the assertions section
+2. Add new assertions or modify existing ones
+3. You can validate:
+ - Response status codes
+ - Response body content
+ - Response headers
+
+### Custom Functions in Assertions
+
+Keploy supports custom functions for advanced assertion logic. You can:
+
+- Create custom validation functions
+- Use JavaScript expressions for complex validations
+- Reference external validation logic
+
+For detailed information on custom functions, see [Custom Assertions](./api-testing-custom-assertions.md).
+
+### Creating and Using Variables
+
+Variables allow you to create reusable values across your test suite:
+
+1. **URL Base Path Variables**: Define base URLs that can be reused across multiple tests
+2. **Environment Variables**: Set different values for different testing environments
+3. **Dynamic Variables**: Create variables that change during test execution
+
+## How to Create Variables
+
+For detailed usage of variables, see [Editing Test Suites and Custom Assertions](./api-testing-edit-assertions.md).
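+
+As a minimal sketch (the variable name and syntax are shown for illustration), a base-path variable keeps every test step pointing at the right environment:
+
+```
+Variable: base_url = https://staging.api.example.com
+
+Usage in test steps:
+  GET  {{base_url}}/v1/users
+  POST {{base_url}}/v1/orders
+```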
+
+## Best Practices
+
+- **Use descriptive names**: Give your test suites and individual tests clear, descriptive names that explain their purpose
+- **Group related tests**: Organize tests logically within suites (e.g., user authentication, payment processing, etc.)
+- **Keep suites focused**: Each test suite should test a specific feature or workflow
+- **Test multiple aspects**: Include assertions for status codes, response structure, and business logic
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-filter-suites.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-filter-suites.md
new file mode 100644
index 000000000..616a9f156
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-filter-suites.md
@@ -0,0 +1,108 @@
+---
+id: api-testing-filter-suites
+title: Using Filtering in Test Suites
+description: Guide to add filters for test suites
+sidebar_label: Filter Test Suites
+tags:
+ - api-testing
+ - filter-suites
+ - test-suite
+ - test-management
+---
+# Filtering Test Suites
+
+This guide explains how to filter test suites in Keploy to quickly find and manage your API tests. You can apply various filters to narrow down your test suites based on different criteria.
+
+## Available Filter Options
+
+Keploy provides multiple filtering options to help you efficiently locate and organize your test suites:
+
+### 1. Filter by Test Suite
+- Filter test suites by their name or identifier
+- Quickly locate specific test suites from a large collection
+- Use search functionality to find test suites by partial name matching
+
+### 2. Filter by Status Code
+- Filter tests based on HTTP response status codes
+- Common status code filters:
+ - **2xx Success**: 200 OK, 201 Created, 204 No Content, etc.
+ - **3xx Redirection**: 301 Moved Permanently, 302 Found, 304 Not Modified, etc.
+ - **4xx Client Errors**: 400 Bad Request, 401 Unauthorized, 404 Not Found, etc.
+ - **5xx Server Errors**: 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, etc.
+- Useful for identifying failing tests or specific response patterns
+
+### 3. Filter by HTTP Method
+- Filter tests based on the HTTP request method:
+ - **GET**: Retrieve data from the server
+ - **POST**: Submit data to create new resources
+ - **PUT**: Update existing resources
+ - **PATCH**: Partially update resources
+ - **DELETE**: Remove resources
+ - **OPTIONS**: Get communication options
+ - **HEAD**: Get headers without body
+- Helps organize tests by the type of operation being tested
+
+### 4. Filter by Endpoint
+- Filter tests based on the API endpoint or URL path
+- Search by:
+ - Full endpoint URL
+ - Partial path matching
+ - Endpoint patterns
+- Useful for testing specific API routes or services
+
+## How to Apply Filters
+
+1. **Access the Filter Panel**
+ - Navigate to the test suites section in Keploy
+ - Look for the filter icon or filter panel
+
+2. **Select Filter Criteria**
+ - Choose one or more filter options from the available categories
+ - Filters can be combined for more precise results
+
+3. **Apply Filters**
+ - Click "Apply" or the filters will be applied automatically
+ - The test suite list will update to show only matching results
+
+4. **Clear Filters**
+ - Use the "Clear Filters" or "Reset" button to remove all active filters
+ - Return to viewing all test suites
+
+## Example Use Cases
+
+### Finding Failed Tests
+```
+Filter by Status Code: 4xx, 5xx
+```
+This will show all tests that resulted in client or server errors.
+
+### Reviewing POST Requests
+```
+Filter by HTTP Method: POST
+```
+This displays all tests using the POST method.
+
+### Testing a Specific API
+```
+Filter by Endpoint: /api/v1/users
+```
+This shows all tests for the users endpoint.
+
+### Combining Filters
+```
+Filter by:
+- HTTP Method: GET
+- Status Code: 200
+- Endpoint: /api/v1/products
+```
+This shows all successful GET requests to the products endpoint.
+
+## Benefits of Filtering
+
+- **Faster Navigation**: Quickly find specific tests without scrolling through long lists
+- **Better Organization**: Group and view related tests together
+- **Debugging Efficiency**: Isolate failing tests or problematic endpoints
+- **Test Analysis**: Understand patterns in your API behavior
+- **Maintenance**: Easier to update or remove tests for specific endpoints or methods
+
+By using these filtering options, you can efficiently manage and analyze your test suites in Keploy.
\ No newline at end of file
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-fixing-ai.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-fixing-ai.md
new file mode 100644
index 000000000..de618d8b7
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-fixing-ai.md
@@ -0,0 +1,132 @@
+---
+id: api-testing-fix-with-ai
+title: Fix with AI
+sidebar_label: Fix with AI
+description: Automatically normalize and repair failing test suites using AI
+tags:
+ - API testing
+ - AI automation
+ - test normalization
+ - test maintenance
+ - debugging
+keywords:
+ - AI test fixing
+ - normalize test suite
+ - failing tests
+ - automated test repair
+ - intelligent assertions
+---
+
+import ProductTier from '@site/src/components/ProductTier';
+
+
+
+## Fix with AI
+
+**Fix with AI** helps you automatically repair and normalize failing test suites using intelligent analysis.
+
+Instead of manually editing requests, assertions, or schema mismatches, you can provide instructions to the AI, and it will adjust the suite accordingly.
+
+This significantly reduces test maintenance effort when APIs evolve.
+
+---
+
+## When to Use Fix with AI
+
+Use this feature when:
+
+- A test suite fails after backend changes
+- Response fields were renamed or restructured
+- Dynamic fields are causing frequent assertion failures
+- Schema mismatches occur
+- You want to normalize outdated validations
+- You want to clean up over-strict assertions
+
+---
+
+## How It Works
+
+1. Navigate to a **Failing Test Suite**
+2. Click **Fix with AI**
+3. Provide instructions describing what needs to be corrected
+
+Example instructions:
+- "Normalize dynamic fields like timestamps and request IDs"
+- "Update schema based on latest API response"
+- "Ignore volatile metadata fields"
+- "Fix assertion mismatches based on new response structure"
+- "Relax strict JSON equality checks"
+
+4. Submit your instructions
+5. AI analyzes the failure and updates the test suite accordingly
+
+---
+
+## What the AI Can Modify
+
+The AI can intelligently update:
+
+- JSON assertions
+- Schema validations
+- Header validations
+- Status code expectations
+- Dynamic field handling
+- Selected field configurations
+- Request payload mismatches
+
+It ensures the suite reflects the current API behavior while preserving intended validation logic.
+
+---
+
+## Example Scenario
+
+### Problem
+Your API now returns:
+
+```json
+{
+ "id": 123,
+ "email": "user@example.com",
+ "createdAt": "2026-02-11T10:30:00Z"
+}
+```
+
+Previously, your test expected strict equality including `createdAt`.
+
+The test fails due to timestamp variance.
+
+### Instruction to AI
+"Normalize dynamic fields like `createdAt` and ignore timestamp differences."
+
+### Result
+AI updates the assertion to:
+
+- Use Schema validation instead of strict equality
+- Exclude or normalize the `createdAt` field
+- Keep critical business validations intact
+
+The test suite now passes without weakening important checks.
+
+---
+
+## Normalization Behavior
+
+When you ask the AI to "normalize" a suite, it may:
+
+- Replace strict JSON Equal with Schema validation
+- Convert full-body comparison into Selected Fields
+- Remove volatile fields from assertions
+- Adjust regex for dynamic headers
+- Update expected status codes if API behavior changed intentionally
+
+Normalization focuses on making tests stable without reducing meaningful validation.
+
+---
+
+## Best Practices
+
+- **Be specific in your instructions**
+- **Clearly mention which fields should be ignored or updated**
+- **Review AI-generated changes before finalizing**
+- **Use normalization for dynamic fields, not business logic errors**
+- **Keep critical validations strict**
\ No newline at end of file
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-generation-history.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-generation-history.md
new file mode 100644
index 000000000..cd7c1a46e
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-generation-history.md
@@ -0,0 +1,212 @@
+---
+id: api-testing-generation-history
+title: Test Generation History
+description: Guide to viewing and managing test generation history with job tracking and status monitoring
+sidebar_label: Generation History
+tags:
+ - api-testing
+ - generation-history
+ - test-generation
+ - job-tracking
+ - test-management
+---
+
+# Test Generation History
+
+This guide explains how to use the generation history page in Keploy to track and manage your test generation jobs. The history provides detailed insights into each generation run, including acceptance rates, errors, and inputs used.
+
+## Overview
+
+The generation history page displays a comprehensive list of all test generation jobs, allowing you to monitor the success rate of your test generations and take action on rejected or buggy tests.
+
+## Generation History Features
+
+### Job Information Display
+
+For each generation job, you can view:
+
+1. **Job ID**: Unique identifier for each test generation run
+2. **Generation Statistics**:
+ - **Accepted**: Number of test suites that passed validation
+ - **Recovered**: Number of test suites that were recovered from errors
+ - **Rejected**: Number of test suites that failed validation
+ - **Buggy**: Number of test suites with identified issues
+3. **Input Details**: The inputs and configurations used for that particular generation
+4. **Timestamp**: When the generation job was executed
+5. **Status**: Overall status of the generation job (Completed, In Progress, Failed)
+
+### Viewing Generation Details
+
+To view details of a specific generation:
+
+1. **Navigate to Generation History**
+ - Go to the generation history section in Keploy
+ - View the list of all generation jobs
+
+2. **Review Job Statistics**
+ - See the breakdown of accepted, recovered, rejected, and buggy test suites
+ - Understand the success rate of each generation
+
+3. **Check Input Parameters**
+ - View the inputs used for that generation
+ - Review configuration settings and parameters
+ - Understand what led to specific results
+
+## Working with Rejected Test Suites
+
+### Adding Rejected Tests to Current Suite
+
+If you find rejected test suites that you want to include:
+
+1. **Locate Rejected Tests**
+ - Browse through the generation history
+ - Identify jobs with rejected test suites
+
+2. **Click the Plus Icon**
+ - Click on the **+** (plus) icon next to any rejected test suite
+ - This action will add the rejected test to your current list of test suites
+
+3. **Review and Modify**
+ - The test suite will appear in your current test suite list
+ - You can now review, edit, and fix any issues
+ - Make necessary adjustments before running the test
+
+4. **Re-validate**
+ - After modifications, re-run the test to validate
+ - Monitor if it moves from rejected to accepted status
+
+## Understanding Test Statuses
+
+### Accepted Tests ✅
+- Tests that passed all validation checks
+- Successfully generated and ready to use
+- No issues detected in the test suite
+
+### Recovered Tests 🔄
+- Tests that encountered errors but were successfully recovered
+- May have required automatic fixes or adjustments
+- Review recommended to ensure correctness
+
+### Rejected Tests ❌
+- Tests that failed validation checks
+- May have incorrect assertions or invalid configurations
+- Require manual review and fixes
+- Can be added back to the test suite list for modification
+
+### Buggy Tests 🐛
+- Tests with identified bugs or issues
+- May have inconsistent behavior or errors
+- Need investigation and debugging
+- Review test logic and inputs
+
+## Example Generation History View
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Generation History │
+├─────────────────────────────────────────────────────────────────┤
+│ │
+│ Job ID: gen-2026-02-13-001 │
+│ Timestamp: 2026-02-13 10:30:45 │
+│ Status: Completed │
+│ │
+│ Statistics: │
+│ ✅ Accepted: 45 │
+│ 🔄 Recovered: 12 │
+│ ❌ Rejected: 8 │
+│ 🐛 Buggy: 3 │
+│ │
+│ Inputs Used: │
+│ - Endpoints: /api/v1/users, /api/v1/products │
+│ - Methods: GET, POST, PUT │
+│ - Recording Duration: 5 minutes │
+│ - Agent: Local Agent v2.1.0 │
+│ │
+│ Rejected Tests: [+] Add to Suite │
+│ │
+├─────────────────────────────────────────────────────────────────┤
+│ │
+│ Job ID: gen-2026-02-12-005 │
+│ Timestamp: 2026-02-12 16:22:10 │
+│ Status: Completed │
+│ │
+│ Statistics: │
+│ ✅ Accepted: 32 │
+│ 🔄 Recovered: 5 │
+│ ❌ Rejected: 15 │
+│ 🐛 Buggy: 7 │
+│ │
+│ Inputs Used: │
+│ - Endpoints: /api/v2/orders │
+│ - Methods: GET, DELETE │
+│ - Recording Duration: 3 minutes │
+│ - Agent: Browser Extension │
+│ │
+│ Rejected Tests: [+] Add to Suite │
+│ │
+└─────────────────────────────────────────────────────────────────┘
+```
+
+## Analyzing Generation Trends
+
+### Success Rate Analysis
+- Track the percentage of accepted vs rejected tests over time
+- Identify patterns in test generation quality
+- Optimize inputs based on historical data
+
+### Input Optimization
+- Review which inputs led to higher acceptance rates
+- Compare different configurations and their outcomes
+- Refine your test generation strategy
+
+### Error Patterns
+- Identify common reasons for test rejection
+- Address recurring bugs or issues
+- Improve test generation quality
+
+## Best Practices
+
+1. **Regular Review**
+ - Check generation history regularly
+ - Monitor acceptance rates and trends
+ - Address rejected tests promptly
+
+2. **Learn from Rejected Tests**
+ - Analyze why tests were rejected
+ - Improve input parameters for future generations
+ - Document common issues and solutions
+
+3. **Recover and Reuse**
+ - Use the plus icon to recover rejected tests
+ - Fix and validate rejected test suites
+   - Build comprehensive test coverage
+
+4. **Track Performance**
+ - Monitor the number of buggy tests
+ - Identify problematic endpoints or methods
+ - Improve API stability based on insights
+
+5. **Maintain Clean History**
+ - Archive old generation jobs periodically
+ - Focus on recent and relevant generations
+ - Keep track of successful generation patterns
+
+## Filtering and Sorting
+
+You can filter and sort generation history by:
+- **Date Range**: View generations within a specific time period
+- **Status**: Filter by acceptance rate or overall status
+- **Job ID**: Search for specific generation jobs
+- **Endpoint**: Filter by endpoints used in generation
+- **Success Rate**: Sort by acceptance percentage
+
+## Benefits of Generation History
+
+- **Transparency**: Complete visibility into test generation process
+- **Traceability**: Track which inputs produced which tests
+- **Quality Control**: Monitor and improve test generation quality
+- **Recovery**: Easily recover and fix rejected tests
+- **Analytics**: Understand patterns and optimize generation strategy
+- **Audit Trail**: Maintain records of all test generation activities
+
+By leveraging the generation history feature, you can maintain high-quality test suites, recover valuable rejected tests, and continuously improve your API testing strategy.
\ No newline at end of file
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-local-agent.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-local-agent.md
new file mode 100644
index 000000000..12584f72d
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-local-agent.md
@@ -0,0 +1,46 @@
+---
+id: api-testing-local-agent
+title: Using Keploy Local Agent
+description: Guide to recording and generating test suites using the local agent
+sidebar_label: Local Agent
+tags:
+ - api-testing
+ - local-agent
+ - test-suite
+ - test-management
+---
+# Using the Local Agent
+
+This guide explains how to use the local agent in Keploy to test private or local endpoints. Follow the steps below to set up and use the local agent effectively.
+
+## Steps to Use the Local Agent
+
+1. **Enter the Endpoint URL**
+ - Navigate to the local agent section in the Keploy interface.
+ - Enter the endpoint URL you want to test. This can be a private URL or any other endpoint.
+ - A default localhost link is also available for convenience.
+
+2. **Download the Keploy Agent**
+   - Based on your device configuration, download the Keploy agent:
+ - **Windows**
+ - **Mac**
+ - **Linux**
+ - Follow the installation instructions for your operating system.
+
+3. **Start the Keploy Agent**
+ - Once the agent is downloaded, start it on your device.
+ - Open the agent interface to ensure it is running and ready to connect.
+
+4. **Connect the Agent**
+ - After starting the agent, connect it to Keploy.
+ - Once connected, you can begin making API calls.
+
+5. **Record API Calls**
+   - The Keploy agent will automatically record the API calls you make.
+   - It will capture the responses and start generating test suites based on the recorded calls.
+
+6. **Troubleshooting Connection Issues**
+   - If the local agent fails to connect, you can use the Keploy browser extension as an alternative.
+ - Ensure that the agent is running and the endpoint URL is correct.
+
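+Before connecting, it can help to sanity-check from a terminal that the agent is actually up. This is only a sketch — the port and path are placeholders, so use the address printed in the agent's startup output:
+
+```bash
+# Placeholder address — replace with the host/port shown in your agent's startup logs.
+curl -sf http://localhost:8096/health && echo "agent reachable" || echo "agent not reachable"
+```
+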
+By following these steps, you can efficiently use the local agent to test your APIs and generate test suites automatically.
\ No newline at end of file
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-mark-unbuggy.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-mark-unbuggy.md
new file mode 100644
index 000000000..135623cf7
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-mark-unbuggy.md
@@ -0,0 +1,235 @@
+---
+id: api-testing-mark-unbuggy
+title: Mark Test Suite as Unbuggy
+description: Guide to marking test suites as unbuggy after fixing issues
+sidebar_label: Mark as Unbuggy
+tags:
+ - api-testing
+ - test-management
+ - suite-status
+ - debugging
+---
+
+# Mark Test Suite as Unbuggy
+
+After resolving issues in a buggy test suite, you can mark it as unbuggy to indicate that the problems have been addressed and the suite is functioning correctly again.
+
+## When to Mark a Suite as Unbuggy
+
+Mark a test suite as unbuggy when:
+
+- All test failures have been resolved
+- API endpoints are working as expected
+- Schema validations are passing
+- Authentication issues have been fixed
+- Test assertions are now accurate
+- The underlying API issues have been corrected
+
+## How to Mark a Suite as Unbuggy
+
+### Step 1: Navigate to the Test Suite
+
+1. Go to your Keploy dashboard
+2. Navigate to the **Test Suites** section
+3. Click on the specific test suite you want to mark as unbuggy
+
+### Step 2: Access Suite Options
+
+Once you're on the test suite page:
+
+1. Look for the **three dots (⋮)** menu next to the test suite name
+2. The menu is typically located in the top-right area of the suite header
+3. Click on the three dots to open the context menu
+
+### Step 3: Mark as Unbuggy
+
+From the context menu:
+
+1. Select **"Unmark Buggy"** from the dropdown options
+
+## What Happens When You Mark a Suite as Unbuggy
+
+### Immediate Changes
+
+- **Status Update**: The suite status changes from "Buggy" to "Active" or "Passing"
+- **Visual Indicator**: The suite will no longer appear with error indicators
+- **Dashboard Update**: The suite is moved out of the buggy suites list
+- **Notification**: A success notification confirms the status change
+
+### Ongoing Behavior
+
+- **Future Runs**: The suite will run normally in subsequent test executions
+- **Reporting**: The suite will be included in standard test reports
+- **Monitoring**: Keploy will continue monitoring the suite for new issues
+- **History**: The previous buggy status and resolution are logged in the suite history
+
+## Best Practices
+
+### Before Marking as Unbuggy
+
+1. **Verify All Fixes**
+ ```bash
+ # Run the test suite manually to confirm fixes
+ keploy test --test-sets "your-suite-name"
+ ```
+
+2. **Check All Test Cases**
+ - Ensure every test in the suite is passing
+ - Verify no intermittent failures remain
+ - Confirm all assertions are working correctly
+
+3. **Test in Multiple Environments**
+ - Run tests in staging environment
+ - Verify production-like conditions
+ - Check with realistic data volumes
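+
+A staging re-run might look like the sketch below; the `API_BASE_URL` variable is an assumption about how your application selects its environment, not a Keploy flag:
+
+```bash
+# Point the application under test at staging, then re-run the suite.
+export API_BASE_URL="https://api-staging.example.com"
+keploy test --test-sets "your-suite-name"
+```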
+
+### Documentation
+
+1. **Record Resolution Steps**
+ - Document what was fixed
+ - Note any API changes made
+ - Record configuration updates
+
+2. **Update Test Documentation**
+ - Modify test descriptions if needed
+ - Update expected behaviors
+ - Add notes about resolution
+
+## Common Scenarios for Marking as Unbuggy
+
+### 1. API Endpoint Restored
+
+**Scenario**: A 404 error was resolved by fixing the API endpoint
+
+**Before marking unbuggy**:
+```bash
+# Verify the endpoint is working
+curl -X POST https://api.example.com/owners \
+ -H "Content-Type: application/json" \
+ -d '{"name": "Test Owner"}'
+```
+
+### 2. Schema Issues Fixed
+
+**Scenario**: Response schema validation was fixed by updating the API
+
+**Verification steps**:
+1. Check that response format matches expectations
+2. Verify all required fields are present
+3. Confirm data types are correct
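+
+A quick spot check along these lines can confirm the response shape before you unmark the suite (the endpoint and field names are placeholders):
+
+```bash
+# `jq -e` exits non-zero when the expression is false, so this works in scripts.
+curl -s https://api.example.com/owners/1 \
+  | jq -e '(.id != null) and (.name | type == "string") and (.email | type == "string")'
+```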
+
+### 3. Authentication Resolved
+
+**Scenario**: Authentication issues were fixed by updating credentials
+
+**Before marking unbuggy**:
+1. Test with new authentication tokens
+2. Verify permissions are sufficient
+3. Check token expiration dates
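+
+For example, a status-code check like this sketch should return a 2xx rather than 401/403 once credentials are fixed (the endpoint and token variable are placeholders):
+
+```bash
+# Print only the HTTP status code for a token-authenticated request.
+curl -s -o /dev/null -w "%{http_code}\n" \
+  -H "Authorization: Bearer $NEW_AUTH_TOKEN" \
+  https://api.example.com/owners
+```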
+
+### 4. Environment Configuration Fixed
+
+**Scenario**: Environment-specific issues were resolved
+
+**Verification checklist**:
+- [ ] Database connections working
+- [ ] Environment variables set correctly
+- [ ] Required services are running
+- [ ] Network connectivity is stable
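+
+A shell pass over that checklist might look like this sketch (the tools, hostnames, and variable names are assumptions about your stack):
+
+```bash
+# Placeholder hosts and variable names — adjust to your environment.
+pg_isready -h "$DB_HOST" -p 5432              # database reachable?
+printenv | grep -E 'API_BASE_URL|DB_HOST'     # required variables present?
+curl -sf "$API_BASE_URL/health" > /dev/null && echo "service healthy"
+```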
+
+## Bulk Operations
+
+### Mark Multiple Suites as Unbuggy
+
+If you have multiple suites to mark as unbuggy:
+
+1. **From the Buggy Suites List**:
+ - Use checkboxes to select multiple suites
+ - Click the bulk actions menu
+ - Select "Mark Selected as Unbuggy"
+
+2. **From Individual Suite Pages**:
+ - Process each suite individually
+ - Verify fixes for each suite separately
+ - Document resolutions for tracking
+
+## Monitoring After Marking as Unbuggy
+
+### Automated Monitoring
+
+Keploy automatically monitors unbuggy suites for:
+- New test failures
+- Performance regressions
+- Schema changes
+- API availability issues
+
+### Manual Verification
+
+Regularly check that previously buggy suites remain stable:
+
+1. **Weekly Reviews**
+ - Check suite success rates
+ - Monitor execution times
+ - Review error logs
+
+2. **After Deployments**
+ - Run critical test suites
+ - Verify no regressions introduced
+ - Check environment stability
+
+## Troubleshooting
+
+### Unable to Mark as Unbuggy
+
+If you can't find the option to mark as unbuggy:
+
+1. **Check Permissions**
+ - Ensure you have edit permissions for the test suite
+ - Verify your account has the necessary role
+
+2. **Suite Status**
+ - Confirm the suite is currently marked as buggy
+ - Check if recent test runs are still failing
+
+3. **Browser Issues**
+ - Refresh the page and try again
+ - Clear browser cache if needed
+ - Try using a different browser
+
+### Accidental Marking
+
+If you accidentally marked a suite as unbuggy:
+
+1. **Re-run the Suite**
+ - Execute the test suite again
+ - If issues persist, it will automatically be marked as buggy
+
+2. **Manual Review**
+ - Check the suite execution results
+ - Review individual test case outcomes
+ - Mark as buggy again if needed
+
+## Related Actions
+
+After marking a suite as unbuggy, you might want to:
+
+- **Schedule Regular Runs**: Set up automated execution schedules
+- **Update Documentation**: Revise test suite documentation
+- **Share Results**: Notify team members of the resolution
+- **Review Similar Suites**: Check other suites for similar issues
+
+## Integration with CI/CD
+
+When marking suites as unbuggy in CI/CD pipelines:
+
+```yaml
+# Example GitHub Action
+- name: Mark Suite as Unbuggy
+ if: ${{ steps.test.outputs.all_passed == 'true' }}
+ run: |
+ keploy suite mark-unbuggy --suite-id ${{ env.SUITE_ID }}
+```
+
+This ensures that suites are automatically marked as unbuggy when automated fixes resolve issues.
+
+Remember: Marking a suite as unbuggy should only be done after thoroughly verifying that all issues have been resolved and the suite is functioning correctly.
\ No newline at end of file
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-run-report.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-run-report.md
new file mode 100644
index 000000000..e5765c14b
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-run-report.md
@@ -0,0 +1,374 @@
+---
+id: api-testing-run-report
+title: Test Run Reports
+description: Guide to viewing and analyzing test run reports with detailed execution results and filtering
+sidebar_label: Run Reports
+tags:
+ - api-testing
+ - run-reports
+ - test-execution
+ - test-results
+ - test-management
+---
+
+# Test Run Reports
+
+This guide explains how to use the run report page in Keploy to track and analyze your test execution results. The run reports provide comprehensive insights into test performance, failures, and bugs with detailed diagnostic information.
+
+## Overview
+
+The run report page displays a list of all test execution runs, allowing you to monitor test results, identify failures, and debug issues efficiently. Each report provides detailed information about individual test cases and their outcomes.
+
+## Run Report List View
+
+### Report Summary Information
+
+For each test run, you can view:
+
+1. **Report ID**: Unique identifier for the test run
+2. **Created**: Timestamp when the test run was executed
+3. **Creator**: User or system that initiated the test run
+4. **Total Suites**: Total number of test suites executed
+5. **Status Distribution**:
+ - **Pass**: Number of test suites that passed ✅
+ - **Fail**: Number of test suites that failed ❌
+ - **Buggy**: Number of test suites with bugs 🐛
+
+### Viewing Report List
+
+1. **Navigate to Run Reports**
+ - Go to the run reports section in Keploy
+ - View the list of all test execution runs
+
+2. **Review Report Summary**
+ - See the overall pass/fail/buggy distribution
+ - Identify problematic test runs at a glance
+ - Track test execution history
+
+## Detailed Report View
+
+### Accessing Detailed Results
+
+Click on any report from the list to view detailed execution results:
+
+1. **Click on Report ID**
+ - Select a report to view full details
+ - Access comprehensive test execution information
+
+2. **View Test Results**
+ - See detailed breakdown of all test suites
+ - Identify which tests passed, failed, or are buggy
+ - Review execution metrics and timings
+
+### Understanding Test Results
+
+#### Passed Tests ✅
+- Tests that successfully completed all assertions
+- All validations matched expected results
+- No errors or warnings during execution
+
+#### Failed Tests ❌
+- Tests that did not meet assertion criteria
+- **Failure Reasons Displayed**:
+ - Assertion mismatches
+ - Unexpected response values
+ - Status code mismatches
+ - Timeout errors
+- **Association Failures**:
+ - Failures from dependent services or associations
+ - External API failures affecting the test
+ - Database connection issues
+
+#### Buggy Tests 🐛
+- Tests with identified bugs or inconsistent behavior
+- **Buggy Reasons Displayed**:
+  - Displayed directly above the affected test step
+ - Detailed error messages and stack traces
+ - Intermittent failures or race conditions
+ - Data inconsistencies
+
+## Filtering Test Results
+
+The run report page provides powerful filtering options to help you analyze specific test results:
+
+### Available Filters
+
+#### 1. Filter by Suite Status
+Filter tests based on their execution outcome:
+- **Passed**: Show only successful tests
+- **Failed**: Show only failed tests
+- **Buggy**: Show only buggy tests
+- **All**: View all test results
+
+#### 2. Filter by Status Code
+Filter by HTTP response status codes:
+- **2xx Success**: 200 OK, 201 Created, 204 No Content
+- **3xx Redirection**: 301, 302, 304
+- **4xx Client Errors**: 400, 401, 403, 404
+- **5xx Server Errors**: 500, 502, 503, 504
+- **Custom Code**: Filter by specific status codes
+
+#### 3. Filter by HTTP Method
+Filter tests by request method:
+- **GET**: Retrieve operations
+- **POST**: Create operations
+- **PUT**: Update operations
+- **PATCH**: Partial update operations
+- **DELETE**: Delete operations
+- **OPTIONS, HEAD**: Other HTTP methods
+
+#### 4. Filter by Endpoint
+Filter by API endpoint or URL path:
+- Full endpoint URL
+- Partial path matching
+- Wildcard patterns
+- Multiple endpoints selection
+
+### Applying Filters
+
+1. **Open Filter Panel**
+ - Click on the filter icon in the report view
+ - Select desired filter criteria
+
+2. **Combine Multiple Filters**
+ - Apply multiple filters simultaneously
+ - Narrow down results to specific scenarios
+ - Example: Failed POST requests to /api/v1/users
+
+3. **Clear Filters**
+ - Reset filters to view all results
+ - Remove individual filter criteria
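+
+If you need the same filters outside the UI, a query-style request like the sketch below could express them (the endpoint and parameter names are assumptions, not a documented Keploy API):
+
+```bash
+# Hypothetical query: failed POST requests to /api/v1/users within one report.
+curl -s "https://api.keploy.io/reports/run-2026-02-13-001/results?status=failed&method=POST&endpoint=/api/v1/users" \
+  -H "Authorization: Bearer $KEPLOY_API_KEY"
+```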
+
+## Detailed Test Step Information
+
+### Viewing Step-by-Step Results
+
+For each test case, you can see:
+
+1. **Test Steps Breakdown**
+ - Individual steps within each test
+ - Request and response details
+ - Execution time for each step
+
+2. **Buggy Reasons on Test Steps**
+ - Detailed error messages displayed on top of the affected step
+ - Root cause analysis
+ - Stack traces when available
+ - Suggested fixes or actions
+
+3. **Failure Reasons from Assertions**
+ - Expected vs actual values comparison
+ - Schema validation errors
+ - Assertion failure details
+
+4. **Association Failures**
+ - Failures from dependent services
+ - External API errors
+ - Database or integration issues
+ - Cascading failure analysis
+
+## Example Report View
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Run Reports │
+├─────────────────────────────────────────────────────────────────┤
+│ │
+│ Report ID: run-2026-02-13-001 │
+│ Created: 2026-02-13 14:25:30 │
+│ Creator: john.doe@example.com │
+│ Total Suites: 150 │
+│ │
+│ Distribution: │
+│ ✅ Pass: 125 (83%) │
+│ ❌ Fail: 18 (12%) │
+│ 🐛 Buggy: 7 (5%) │
+│ │
+│ [View Details] │
+│ │
+├─────────────────────────────────────────────────────────────────┤
+│ │
+│ Report ID: run-2026-02-13-002 │
+│ Created: 2026-02-13 10:15:22 │
+│ Creator: Automated CI/CD │
+│ Total Suites: 200 │
+│ │
+│ Distribution: │
+│ ✅ Pass: 180 (90%) │
+│ ❌ Fail: 15 (7.5%) │
+│ 🐛 Buggy: 5 (2.5%) │
+│ │
+│ [View Details] │
+│ │
+└─────────────────────────────────────────────────────────────────┘
+```
+
+### Detailed Report View Example
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Run Report: run-2026-02-13-001 │
+│ │
+│ Filters: [Suite Status: All] [Status Code: All] [Method: All] │
+│ │
+├─────────────────────────────────────────────────────────────────┤
+│ │
+│ ❌ FAILED: Create User - POST /api/v1/users │
+│ │
+│ Status Code: 400 Bad Request │
+│ Execution Time: 145ms │
+│ │
+│ Failure Reason: │
+│ - Assertion Failed: Expected status code 201, got 400 │
+│ - Response body validation error │
+│ │
+│ Association Failures: │
+│ - Email validation service returned error │
+│ - Database constraint violation: duplicate email │
+│ │
+│ Test Steps: │
+│ 1. ✅ Prepare request payload │
+│ 2. ✅ Send POST request │
+│ 3. ❌ Validate response status (Expected 201, got 400) │
+│ 4. ❌ Validate response schema (Missing field: userId) │
+│ │
+├─────────────────────────────────────────────────────────────────┤
+│ │
+│ 🐛 BUGGY: Get Product Details - GET /api/v1/products/123 │
+│ │
+│ Status Code: 200 OK │
+│ Execution Time: 2350ms (Timeout Warning) │
+│ │
+│ Buggy Reason (on Step 2): │
+│ - Intermittent timeout on external pricing service │
+│ - Response time exceeded threshold (>2000ms) │
+│ - Inconsistent data: price field sometimes null │
+│ │
+│ Test Steps: │
+│ 1. ✅ Send GET request │
+│ 2. 🐛 Wait for response (2350ms - Slow) │
+│ └─ Error: External pricing API timeout │
+│ 3. ⚠️ Validate response (Warning: price field is null) │
+│ │
+├─────────────────────────────────────────────────────────────────┤
+│ │
+│ ✅ PASSED: Login User - POST /api/v1/auth/login │
+│ │
+│ Status Code: 200 OK │
+│ Execution Time: 95ms │
+│ │
+│ All assertions passed successfully │
+│ │
+└─────────────────────────────────────────────────────────────────┘
+```
+
+## Analyzing Test Failures
+
+### Common Failure Patterns
+
+1. **Assertion Failures**
+ - Response doesn't match expected schema
+ - Incorrect status codes
+ - Missing or unexpected fields
+ - Data type mismatches
+
+2. **Association Failures**
+ - Dependent service unavailable
+ - Database connection errors
+ - Third-party API failures
+ - Authentication/authorization issues
+
+3. **Performance Issues**
+ - Timeout errors
+ - Slow response times
+ - Resource exhaustion
+
+### Debugging Failed Tests
+
+1. **Review Failure Reasons**
+ - Read detailed error messages
+ - Check expected vs actual values
+ - Identify the failing step
+
+2. **Check Association Failures**
+ - Verify dependent services are running
+ - Check network connectivity
+ - Review external API status
+
+3. **Analyze Buggy Tests**
+ - Review the buggy reason displayed on the test step
+ - Check for intermittent issues
+ - Look for patterns in bug occurrences
+
+4. **Use Filters for Analysis**
+ - Filter by specific endpoints showing failures
+ - Group failures by HTTP method
+ - Analyze status code patterns
+
+## Report Metrics and Insights
+
+### Key Metrics
+
+- **Pass Rate**: Percentage of successful tests
+- **Failure Rate**: Percentage of failed tests
+- **Bug Rate**: Percentage of buggy tests
+- **Average Execution Time**: Mean time across all tests
+- **Success Trend**: Historical pass rate over time
+
+### Performance Insights
+
+- **Slowest Endpoints**: Identify performance bottlenecks
+- **Most Failed Tests**: Tests requiring attention
+- **Flaky Tests**: Tests with inconsistent results (buggy)
+- **Association Dependencies**: Most common external failures
+
+## Best Practices
+
+1. **Regular Report Review**
+ - Check reports after each test run
+ - Monitor pass rate trends
+ - Address failures promptly
+
+2. **Use Filters Effectively**
+ - Filter failed tests to prioritize fixes
+ - Group by endpoint to identify problematic APIs
+ - Filter by status code to categorize issues
+
+3. **Document Failures**
+ - Note recurring failure patterns
+ - Document association dependencies
+ - Track bug fixes and resolutions
+
+4. **Investigate Buggy Tests**
+ - Review buggy reasons carefully
+ - Check for timing issues or race conditions
+ - Stabilize flaky tests
+
+5. **Monitor Associations**
+ - Track external service reliability
+ - Set up alerts for association failures
+ - Maintain fallback strategies
+
+6. **Share Reports**
+ - Share reports with team members
+ - Include reports in CI/CD pipelines
+ - Use reports for sprint retrospectives
+
+## Exporting and Sharing
+
+- **Export Reports**: Download reports in various formats (PDF, CSV, JSON)
+- **Share Links**: Generate shareable links to specific reports
+- **Schedule Reports**: Set up automated report distribution
+- **Integration**: Connect with project management tools
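+
+Once a report is exported as JSON, headline metrics can be recomputed locally, as in this sketch (the field names are assumptions about the export format — check an actual export first):
+
+```bash
+# Assumed shape: {"results": [{"status": "passed" | "failed" | "buggy", ...}, ...]}
+jq '{total: (.results | length),
+     passed: ([.results[] | select(.status == "passed")] | length)}' report.json
+```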
+
+## Benefits of Run Reports
+
+- **Comprehensive Testing Visibility**: Complete view of test execution results
+- **Quick Issue Identification**: Easily spot failures and bugs
+- **Detailed Diagnostics**: Step-by-step failure analysis
+- **Association Tracking**: Monitor external dependencies
+- **Historical Tracking**: Maintain test execution history
+- **Team Collaboration**: Share results and insights with team
+- **Data-Driven Decisions**: Use metrics to improve test quality
+
+By leveraging the run report features, you can maintain high-quality APIs, quickly identify and fix issues, and ensure comprehensive test coverage across your application.
\ No newline at end of file
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-running-selective.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-running-selective.md
new file mode 100644
index 000000000..437e36748
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-running-selective.md
@@ -0,0 +1,379 @@
+---
+id: api-testing-running-selective
+title: Running Selective Test Suites
+description: Guide to selecting and running specific test suites using checkboxes and bulk actions
+sidebar_label: Selective Test Execution
+tags:
+ - api-testing
+ - test-execution
+ - bulk-actions
+ - test-management
+---
+
+# Running Selective Test Suites
+
+Keploy allows you to select specific test suites from your test collection and perform bulk actions like running tests, deleting suites, or adding labels. This selective approach helps you manage large test collections efficiently and run only the tests you need.
+
+## Overview
+
+The selective test suite feature provides:
+
+- **Checkbox Selection**: Choose individual test suites or select all
+- **Bulk Actions**: Perform actions on multiple suites simultaneously
+- **Filtered Execution**: Run only the tests you've selected
+- **Efficient Management**: Handle large test collections with ease
+
+## Selecting Test Suites
+
+### Individual Selection
+
+1. **Navigate to Test Suites**
+ - Go to your Keploy dashboard
+ - Click on the **Test Suites** section
+
+2. **Use Checkboxes**
+ - Each test suite has a checkbox on the left side
+ - Click the checkbox next to any suite you want to select
+ - Selected suites will be highlighted with a checkmark ✓
+
+3. **Visual Indicators**
+ - Selected suites show a blue checkmark
+ - The suite row may be highlighted or have a colored border
+ - A selection counter appears showing "X suites selected"
+
+### Bulk Selection Options
+
+#### Select All Suites
+```
+☑️ Select All (at the top of the list)
+```
+- Checkbox at the top of the suite list
+- Selects all visible test suites on the current page
+- Useful for applying actions to your entire test collection
+
+#### Select by Filter
+1. Apply filters (status, tags, creation date, etc.)
+2. Use "Select All" to choose all filtered results
+3. Only suites matching your criteria will be selected
+
+#### Select by Pattern
+- Use search functionality to find specific suites
+- Select all results matching your search criteria
+- Combine with filters for more precise selection
+
+## Available Actions
+
+Once you've selected test suites, several bulk actions become available:
+
+### 1. Run Selected Tests
+
+**Button**: **Run Selected** or **Execute Selected**
+
+**What it does**:
+- Executes all test cases within the selected suites
+- Runs tests in parallel or sequentially (configurable)
+- Provides consolidated results for all selected suites
+
+**Usage**:
+```
+1. Select desired test suites using checkboxes
+2. Click "Run Selected" button
+3. Choose execution options (if prompted):
+ - Parallel execution (faster)
+ - Sequential execution (more stable)
+ - Environment selection
+4. Click "Start Execution"
+```
+
+**Execution Options**:
+- **Environment**: Choose target environment (dev, staging, prod)
+- **Parallel Runs**: Set number of concurrent executions
+- **Timeout Settings**: Configure test timeout values
+- **Retry Policy**: Set retry attempts for failed tests
+
+### 2. Delete Selected Suites
+
+**Button**: **Delete Selected**
+
+**What it does**:
+- Permanently removes selected test suites
+- Deletes all test cases within those suites
+- Cannot be undone (use with caution)
+
+**Safety Features**:
+- Confirmation dialog before deletion
+- Shows list of suites to be deleted
+- Option to export suites before deletion
+
+**Usage**:
+```
+1. Select test suites to delete
+2. Click "Delete Selected"
+3. Review the confirmation dialog
+4. Type "DELETE" to confirm (if required)
+5. Click "Confirm Deletion"
+```
+
+### 3. Add Labels
+
+**Button**: **Add Labels** or **Manage Tags**
+
+**What it does**:
+- Adds labels/tags to selected test suites
+- Helps organize and categorize tests
+- Enables better filtering and search
+
+**Label Types**:
+- **Environment**: `dev`, `staging`, `production`
+- **Priority**: `high`, `medium`, `low`
+- **Category**: `smoke`, `regression`, `integration`
+- **Owner**: `team-frontend`, `team-backend`
+- **Custom**: Any custom label you define
+
+**Usage**:
+```
+1. Select test suites to label
+2. Click "Add Labels"
+3. Choose from existing labels or create new ones
+4. Select multiple labels if needed
+5. Click "Apply Labels"
+```
+
+### 4. Additional Bulk Actions
+
+#### Export Selected
+- Download selected suites as files
+- Export in various formats (JSON, CSV, etc.)
+- Backup before making changes
+
+#### Duplicate Selected
+- Create copies of selected test suites
+- Useful for creating variations or backups
+- Maintains original test structure
+
+#### Move to Folder
+- Organize suites into folders or categories
+- Bulk organization for better management
+- Maintain hierarchical structure
+
+## Selection Workflow Examples
+
+### Example 1: Running Smoke Tests
+
+**Scenario**: Run all smoke tests before deployment
+
+```
+1. Filter by label: "smoke"
+2. Click "Select All" (selects all smoke test suites)
+3. Click "Run Selected"
+4. Choose "Production" environment
+5. Set parallel execution: 5 concurrent runs
+6. Click "Start Execution"
+```
+
+### Example 2: Cleaning Up Old Tests
+
+**Scenario**: Delete outdated test suites
+
+```
+1. Filter by creation date: "Older than 6 months"
+2. Filter by status: "Not run in 30 days"
+3. Review the filtered results
+4. Select relevant suites (uncheck any you want to keep)
+5. Click "Delete Selected"
+6. Confirm deletion after review
+```
+
+### Example 3: Organizing by Team
+
+**Scenario**: Add team labels to categorize ownership
+
+```
+1. Search for suites containing "user-management"
+2. Select all relevant suites
+3. Click "Add Labels"
+4. Add label: "team-backend"
+5. Add label: "high-priority"
+6. Click "Apply Labels"
+```
+
+## Selection Management
+
+### Selection Persistence
+
+- **Page Navigation**: Selections persist when moving between pages
+- **Filter Changes**: Selections maintained when applying new filters
+- **Session Duration**: Selections cleared when closing browser/tab
+
+### Selection Counter
+
+The interface shows:
+```
+✓ 5 suites selected out of 23 total
+```
+
+### Clear Selection
+
+**Options to clear selection**:
+- Click "Clear Selection" button
+- Uncheck "Select All" checkbox
+- Refresh the page
+
+## Best Practices
+
+### Before Running Selected Tests
+
+1. **Review Selection**
+ ```
+ - Verify all intended suites are selected
+ - Check that no critical tests are missing
+ - Confirm environment settings
+ ```
+
+2. **Check Dependencies**
+ ```
+ - Ensure selected tests don't have interdependencies
+ - Verify test data requirements
+ - Confirm service availability
+ ```
+
+3. **Set Appropriate Timeouts**
+ ```
+ - Consider total execution time
+ - Set realistic timeout values
+ - Plan for potential failures
+ ```
+
+### Efficient Selection Strategies
+
+1. **Use Filters First**
+ - Apply relevant filters before selecting
+ - Reduce noise and focus on relevant suites
+ - Combine multiple filters for precision
+
+2. **Leverage Labels**
+ - Maintain good labeling practices
+ - Use consistent naming conventions
+   - Clean up and organize labels regularly
+
+3. **Batch Operations**
+ - Group similar actions together
+ - Avoid frequent small operations
+ - Plan bulk changes in advance
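+
+For scripted bulk changes, a single request in the same hypothetical API style as the execution example later in this guide could label many suites at once (the endpoint and payload are assumptions):
+
+```bash
+# Hypothetical endpoint/payload, mirroring the /test-suites/run sketch below.
+curl -X POST "https://api.keploy.io/test-suites/labels" \
+  -H "Authorization: Bearer your-token" \
+  -H "Content-Type: application/json" \
+  -d '{"suite_ids": ["suite-1", "suite-2"], "labels": ["team-backend", "high-priority"]}'
+```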
+
+## Monitoring Execution
+
+### Real-time Progress
+
+When running selected tests:
+
+```
+Execution Progress: 3 of 5 suites completed
+├── ✅ User Authentication Suite (Passed)
+├── ✅ Payment Processing Suite (Passed)
+├── ⚠️ Order Management Suite (Failed - 2 tests)
+├── 🔄 Notification Suite (Running...)
+└── ⏳ Report Generation Suite (Queued)
+```
+
+### Execution Summary
+
+After completion:
+```
+Execution Results Summary
+========================
+Total Suites: 5
+✅ Passed: 3 suites
+❌ Failed: 2 suites
+⏱️ Total Time: 4m 32s
+📊 Success Rate: 60%
+```
+
+## Troubleshooting
+
+### Selection Issues
+
+**Problem**: Can't select certain suites
+- **Solution**: Check permissions for those suites
+- **Check**: Verify suites aren't currently running
+
+**Problem**: Selection doesn't persist
+- **Solution**: Ensure browser cookies are enabled
+- **Check**: Verify stable internet connection
+
+### Execution Issues
+
+**Problem**: Selected tests won't run
+- **Solutions**:
+ - Verify all selected suites are valid
+ - Check environment connectivity
+ - Confirm sufficient system resources
+ - Review test dependencies
+
+**Problem**: Bulk actions fail
+- **Solutions**:
+ - Reduce selection size and try again
+ - Check server capacity and load
+ - Verify permissions for bulk operations
+
+## Keyboard Shortcuts
+
+Enhance your workflow with keyboard shortcuts:
+
+```
+Ctrl/Cmd + A : Select all visible suites
+Ctrl/Cmd + D : Deselect all suites
+Spacebar : Toggle selection for highlighted suite
+Enter : Run selected suites
+Delete : Delete selected suites (with confirmation)
+```
+
+## Integration with CI/CD
+
+### API Endpoints for Selective Execution
+
+```bash
+# Run specific test suites via API
+curl -X POST "https://api.keploy.io/test-suites/run" \
+ -H "Authorization: Bearer your-token" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "suite_ids": ["suite-1", "suite-2", "suite-3"],
+ "environment": "staging",
+ "parallel": true,
+ "max_concurrent": 3
+ }'
+```
+
+### GitHub Actions Example
+
+```yaml
+name: Run Selected API Tests
+on:
+ workflow_dispatch:
+ inputs:
+ suite_labels:
+ description: 'Comma-separated list of suite labels to run'
+ required: true
+ default: 'smoke,critical'
+
+jobs:
+ run-selective-tests:
+ runs-on: ubuntu-latest
+ steps:
+ - name: Run Selected Test Suites
+ run: |
+ keploy test run \
+ --labels ${{ github.event.inputs.suite_labels }} \
+ --environment staging \
+ --parallel 5
+```
+
+## Related Features
+
+- **[Test Suite Management](./api-testing-edit-suites.md)**: Edit and organize test suites
+- **[Buggy Test Suites](./api-testing-buggy-suites.md)**: Handle failing test suites
+- **[Test Reports](./api-testing-sharing-reports.md)**: View and share execution results
+- **[Custom Assertions](./api-testing-custom-assertions.md)**: Define custom validation rules
+
+The selective test execution feature gives you fine-grained control over your test suite management, enabling efficient testing workflows and better resource utilization.
\ No newline at end of file
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-schema-coverage.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-schema-coverage.md
new file mode 100644
index 000000000..cecf4a6d1
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-schema-coverage.md
@@ -0,0 +1,363 @@
+---
+id: api-testing-schema-coverage
+title: Schema Coverage and Generation
+description: Guide to viewing schema coverage and generating tests for missing coverage
+sidebar_label: Schema Coverage
+tags:
+ - api-testing
+ - schema-coverage
+ - test-generation
+ - schema-validation
+ - test-management
+---
+
+# Schema Coverage and Generation
+
+This guide explains how to use the schema coverage page in Keploy to analyze your API schema coverage and automatically generate additional test suites to cover missing scenarios.
+
+## Overview
+
+The schema coverage page provides a comprehensive view of how well your test suites cover your API schema. You can compare your original schema with Keploy's generated schema, identify gaps in coverage, and automatically generate tests to fill those gaps.
+
+## Accessing Schema Coverage
+
+### From Test Suite
+
+1. **Navigate to Test Suite**
+ - Go to your test suite view
+ - Locate the test suite you want to analyze
+
+2. **Click on Schema Coverage**
+ - Click on the "Schema Coverage" button or link
+ - This will take you to the schema coverage page for that test suite
+
+## Schema Coverage Page Features
+
+### 1. Original Schema View
+
+The original schema section displays:
+- **Your API Schema**: The original OpenAPI/Swagger schema or manually defined schema
+- **Schema Structure**: Complete API specification including:
+ - Endpoints and paths
+ - Request/response models
+ - Data types and formats
+ - Required and optional fields
+ - Validation rules and constraints
+
+### 2. Total Coverage Metrics
+
+View comprehensive coverage statistics:
+- **Overall Coverage Percentage**: Total schema coverage across all endpoints
+- **Endpoint Coverage**: Coverage breakdown by API endpoint
+- **Field Coverage**: Percentage of schema fields covered by tests
+- **Method Coverage**: Coverage by HTTP methods (GET, POST, PUT, DELETE, etc.)
+- **Status Code Coverage**: Which response codes are tested
+- **Covered Lines**: Number of schema lines with test coverage
+- **Missing Lines**: Number of schema lines without test coverage
+
+### 3. Keploy Generated Schema
+
+The generated schema section shows:
+- **Auto-Generated Schema**: Schema derived from recorded API calls
+- **Coverage Highlights**: Visual indication of covered vs uncovered parts
+- **Field-Level Coverage**: Which fields have been tested
+- **Edge Case Identification**: Scenarios that need additional testing
+
+### Side-by-Side Comparison
+
+View original and generated schemas side by side:
+```
+┌──────────────────────────────────────────────────────────────────┐
+│ Schema Coverage Analysis │
+├──────────────────────────────────────────────────────────────────┤
+│ │
+│ Total Coverage: 78% ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░ │
+│ │
+│ ┌─────────────────────────────┬──────────────────────────────┐ │
+│ │ Original Schema │ Keploy Generated Schema │ │
+│ ├─────────────────────────────┼──────────────────────────────┤ │
+│ │ │ │ │
+│ │ /api/v1/users: │ /api/v1/users: │ │
+│ │ GET: ✅ Covered │ GET: ✅ Tested │ │
+│ │ POST: ✅ Covered │ POST: ✅ Tested │ │
+│ │ PUT: ⚠️ Partial │ PUT: ⚠️ Partial │ │
+│ │ DELETE: ❌ Not Covered │ DELETE: ❌ Not Tested │ │
+│ │ │ │ │
+│ │ User Object: │ User Object: │ │
+│ │ id: ✅ Covered │ id: ✅ Found │ │
+│ │ name: ✅ Covered │ name: ✅ Found │ │
+│ │ email: ✅ Covered │ email: ✅ Found │ │
+│ │ phone: ❌ Not Covered │ phone: ❌ Missing │ │
+│ │ address: ❌ Not Covered │ address: ❌ Missing │ │
+│ │ role: ⚠️ Partial │ role: ⚠️ Limited values │ │
+│ │ │ │ │
+│ └─────────────────────────────┴──────────────────────────────┘ │
+│ │
+│ Missing Coverage: │
+│ • DELETE /api/v1/users/{id} - Not tested │
+│ • User.phone field - No test cases │
+│ • User.address field - No test cases │
+│ • User.role - Only 'user' value tested, missing 'admin', 'guest' │
+│ │
+│ [Cover Missing Lines] │
+│ │
+└──────────────────────────────────────────────────────────────────┘
+```
+
+## Covering Missing Lines
+
+### Generate Tests for Uncovered Schema
+
+1. **Click "Cover Missing Lines"**
+ - Locate the "Cover Missing Lines" button on the schema coverage page
+ - Click to open the test generation dialog
+
+2. **Specify Coverage Requirements**
+ - Define what you want to cover:
+ - **Endpoints**: Select specific endpoints to generate tests for
+ - **HTTP Methods**: Choose methods (GET, POST, PUT, DELETE, etc.)
+ - **Fields**: Specify schema fields that need coverage
+ - **Conditions**: Define specific scenarios or edge cases
+ - **Status Codes**: Target specific response codes to test
+ - **Data Variations**: Specify value ranges or combinations
+
+3. **Configure Generation Options**
+
+ Example configuration:
+ ```
+ ┌──────────────────────────────────────────────────────────────┐
+ │ Generate Tests for Missing Coverage │
+ ├──────────────────────────────────────────────────────────────┤
+ │ │
+ │ Select Endpoints to Cover: │
+ │ ☑ DELETE /api/v1/users/{id} │
+ │ ☑ PUT /api/v1/users/{id} │
+ │ │
+ │ Select Fields to Cover: │
+ │ ☑ User.phone │
+ │ ☑ User.address │
+ │ ☑ User.role (all values) │
+ │ │
+ │ HTTP Methods: │
+ │ ☑ GET ☑ POST ☑ PUT ☑ DELETE │
+ │ │
+ │ Conditions to Test: │
+ │ • Valid user deletion │
+ │ • Delete non-existent user (404) │
+ │ • Unauthorized deletion (401) │
+ │ • Update with phone number │
+ │ • Update with address │
+ │ • Test all role values: admin, user, guest │
+ │ │
+ │ Expected Status Codes: │
+ │ ☑ 200 OK ☑ 201 Created ☑ 204 No Content │
+ │ ☑ 400 Bad Request ☑ 401 Unauthorized ☑ 404 Not Found │
+ │ │
+ │ Additional Options: │
+ │ ☑ Generate edge cases │
+ │ ☑ Include validation errors │
+ │ ☑ Test field combinations │
+ │ │
+ │ [Cancel] [Generate Test Suites] │
+ │ │
+ └──────────────────────────────────────────────────────────────┘
+ ```
+
+4. **Generate Additional Test Suites**
+ - Click "Generate Test Suites"
+ - Keploy will automatically create tests based on your specifications
+ - New test suites will be added to your test suite list
+ - Coverage metrics will be updated
+
+## Coverage Analysis Features
+
+### Coverage Visualization
+
+- **Heat Map View**: Visual representation of coverage density
+- **Color Coding**:
+ - 🟢 **Green**: Fully covered (100%)
+ - 🟡 **Yellow**: Partially covered (50-99%)
+  - 🔴 **Red**: Little or no coverage (0-49%)
+- **Interactive Schema Tree**: Expandable schema structure with coverage indicators
+
+### Detailed Coverage Metrics
+
+#### Endpoint-Level Coverage
+```
+/api/v1/users
+├─ GET ✅ 100% (All fields covered)
+├─ POST ✅ 95% (Missing: address validation)
+├─ PUT ⚠️ 60% (Missing: phone, address updates)
+└─ DELETE ❌ 0% (No tests)
+
+/api/v1/products
+├─ GET ✅ 100%
+├─ POST ✅ 100%
+├─ PUT ✅ 85% (Missing: price edge cases)
+└─ DELETE ✅ 100%
+```
+
+#### Field-Level Coverage
+```
+User Schema:
+├─ id ✅ 100% (Tested in all operations)
+├─ name ✅ 100% (Valid, empty, special chars)
+├─ email ✅ 100% (Valid, invalid formats)
+├─ phone ❌ 0% (Not tested)
+├─ address ❌ 0% (Not tested)
+├─ role ⚠️ 33% (Only 'user' tested)
+│ ├─ user ✅ Covered
+│ ├─ admin ❌ Not covered
+│ └─ guest ❌ Not covered
+└─ createdAt ✅ 100%
+```
+
+### Coverage Gaps Identification
+
+Keploy automatically identifies:
+1. **Untested Endpoints**: API paths with no test coverage
+2. **Missing HTTP Methods**: CRUD operations not tested
+3. **Uncovered Fields**: Schema fields never validated
+4. **Missing Edge Cases**: Boundary conditions not tested
+5. **Incomplete Enum Values**: Not all possible values tested
+6. **Error Scenarios**: Missing negative test cases
+7. **Optional Fields**: Optional parameters not tested
+
+## Example Use Cases
+
+### Use Case 1: Complete CRUD Coverage
+
+**Current State:**
+- GET and POST endpoints covered
+- PUT and DELETE not tested
+
+**Action:**
+1. Click "Cover Missing Lines"
+2. Select PUT and DELETE endpoints
+3. Specify conditions:
+ - Valid updates
+ - Non-existent resource updates
+ - Unauthorized access
+4. Generate test suites
+
+**Result:**
+- Coverage increases from 50% to 100%
+- All CRUD operations tested
+
+### Use Case 2: Field Coverage
+
+**Current State:**
+- Basic user fields covered (id, name, email)
+- Advanced fields not tested (phone, address)
+
+**Action:**
+1. Click "Cover Missing Lines"
+2. Select missing fields: phone, address
+3. Specify conditions:
+ - Valid phone formats
+ - Invalid phone formats
+ - International addresses
+ - Empty addresses
+4. Generate test suites
+
+**Result:**
+- Field coverage increases from 60% to 100%
+- All user fields validated
+
+### Use Case 3: Enum Value Coverage
+
+**Current State:**
+- User role field only tested with 'user' value
+- Missing tests for 'admin' and 'guest'
+
+**Action:**
+1. Click "Cover Missing Lines"
+2. Select role field
+3. Specify all enum values: admin, user, guest
+4. Define conditions for each role's permissions
+5. Generate test suites
+
+**Result:**
+- Role coverage increases from 33% to 100%
+- All role-based scenarios tested
+
+## Benefits of Schema Coverage
+
+### Quality Assurance
+- **Comprehensive Testing**: Ensure all API contracts are validated
+- **Catch Breaking Changes**: Detect schema violations early
+- **Contract Compliance**: Verify API matches specification
+
+### Development Efficiency
+- **Automated Test Generation**: Generate tests automatically for missing coverage
+- **Gap Identification**: Quickly identify untested scenarios
+- **Prioritized Testing**: Focus on areas with low coverage
+
+### Documentation
+- **Living Documentation**: Schema coverage serves as API documentation
+- **Coverage Reports**: Share coverage metrics with stakeholders
+- **Trend Analysis**: Track coverage improvements over time
+
+## Best Practices
+
+1. **Regular Coverage Review**
+ - Check schema coverage after adding new endpoints
+ - Review coverage before releases
+ - Set coverage targets (e.g., 80% minimum)
+
+2. **Incremental Coverage Improvement**
+ - Start with critical endpoints
+ - Gradually increase coverage over time
+ - Focus on high-impact areas first
+
+3. **Meaningful Test Generation**
+ - Specify realistic conditions when generating tests
+ - Include both positive and negative scenarios
+ - Test edge cases and boundary conditions
+
+4. **Keep Schema Updated**
+ - Update original schema when API changes
+ - Re-run coverage analysis after updates
+ - Archive old coverage reports for comparison
+
+5. **Combine with Manual Testing**
+ - Use auto-generation for basic coverage
+ - Add manual tests for complex scenarios
+ - Review and refine generated tests
+
+## Coverage Metrics and Goals
+
+### Recommended Coverage Targets
+
+- **Critical Endpoints**: 95-100% coverage
+- **User-Facing APIs**: 90-95% coverage
+- **Internal APIs**: 80-90% coverage
+- **Experimental Features**: 70-80% coverage
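+
+To enforce a target in CI, a gate along these lines could fail the build when coverage drops (the export filename and JSON field are assumptions about the exported report):
+
+```bash
+# Assumes a JSON coverage export with a top-level "coverage" percentage field.
+COVERAGE=$(jq -r '.coverage' schema-coverage.json)
+if [ "${COVERAGE%.*}" -lt 80 ]; then
+  echo "Schema coverage ${COVERAGE}% is below the 80% target" >&2
+  exit 1
+fi
+```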
+
+### Monitoring Coverage Trends
+
+Track coverage over time:
+```
+Coverage History:
+├─ Jan 2026: 45% ────────────────────────▒▒▒▒▒▒▒▒▒▒▒
+├─ Feb 2026: 62% ──────────────────────────────▒▒▒▒▒
+└─ Mar 2026: 78% ────────────────────────────────────▒▒
+ Goal: 90% ──────────────────────────────────────────
+```
+
+## Integration with Test Workflow
+
+1. **Record API Calls** → Test suites created
+2. **View Schema Coverage** → Identify gaps
+3. **Generate Missing Tests** → Fill coverage gaps
+4. **Run Tests** → Validate coverage
+5. **Review Results** → Iterate and improve
+
+## Exporting Coverage Reports
+
+- **Export Options**: PDF, HTML, JSON, CSV
+- **Include in CI/CD**: Generate coverage reports in pipelines
+- **Share with Team**: Distribute coverage metrics
+- **Compliance Reports**: Document API testing completeness
+
+By leveraging schema coverage analysis and automated test generation, you can ensure comprehensive API testing, maintain high code quality, and quickly identify and address testing gaps in your application.
\ No newline at end of file
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-sharing-reports.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-sharing-reports.md
new file mode 100644
index 000000000..e3387e775
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-sharing-reports.md
@@ -0,0 +1,73 @@
+---
+id: api-testing-sharing-reports
+title: Sharing Reports
+sidebar_label: Sharing Reports
+description: Share API test execution reports securely within your workspace
+tags:
+ - API testing
+ - reports
+ - collaboration
+ - workspace
+ - access control
+keywords:
+ - internal report sharing
+ - workspace collaboration
+ - API test results
+ - team access
+ - execution reports
+---
+
+import ProductTier from '@site/src/components/ProductTier';
+
+
+
+# Sharing Reports
+
+Keploy allows you to securely share test execution reports with members inside your workspace.
+
+## How Report Sharing Works
+
+Reports can be shared in two ways:
+
+### 1. Share with Existing Workspace Members
+
+1. Navigate to a completed **Test Run**
+2. Open the execution report
+3. Click the **Share** option
+4. Select a team member from your workspace
+5. Confirm sharing
+
+The selected user will gain access to view the report inside their dashboard.
+
+
+### 2. Add a New User and Share
+
+If the person is not yet part of your workspace:
+
+1. Click **Add User**
+2. Enter their details
+3. Add them to your company workspace
+4. Share the report with them
+
+Once added, they become a workspace member and can access shared reports based on permissions.
+
+
+## What Shared Reports Include
+
+When you share a report, the recipient can view:
+
+### Execution Summary
+- Total test cases executed
+- Passed / Failed count
+- Execution duration
+- Environment details
+- Step-level results
+- Request and response details
+
+## Best Practices
+
+- Share reports instead of exporting logs
+- Add relevant team members directly from the dashboard
+- Maintain proper role-based access control
+- Review assertion-level failures before escalating issues
+- Remove access when no longer required
diff --git a/versioned_docs/version-4.0.0/running-keploy/api-testing-suite-settings.md b/versioned_docs/version-4.0.0/running-keploy/api-testing-suite-settings.md
new file mode 100644
index 000000000..8c68124a1
--- /dev/null
+++ b/versioned_docs/version-4.0.0/running-keploy/api-testing-suite-settings.md
@@ -0,0 +1,604 @@
+---
+id: api-testing-suite-settings
+title: Test Suite Settings & Actions
+description: Complete guide to test suite panel actions including sharing, running, bulk editing, and global configurations
+sidebar_label: Suite Settings
+tags:
+ - api-testing
+ - test-suite-management
+ - suite-settings
+ - bulk-operations
+---
+
+# Test Suite Settings & Actions
+
+The test suite panel in Keploy provides a comprehensive set of actions and settings to manage your test suites effectively. From basic operations like running and sharing tests to advanced features like bulk schema editing and global configurations, the suite panel offers everything you need for efficient test management.
+
+## Overview of Available Actions
+
+The test suite panel provides access to the following key actions:
+
+- **Share Test Suite**: Collaborate with team members by sharing test suites
+- **Run Test Suite**: Execute all tests within the suite
+- **Bulk Edit Schema Assertions**: Modify assertions across multiple tests
+- **Add Global Variables**: Define variables accessible across all tests
+- **Run in CI**: Configure continuous integration execution
+- **Global Functions**: Create reusable functions for test suites
+
+## Accessing the Test Suite Panel
+
+1. **Navigate to Test Suites**
+ - Go to your Keploy dashboard
+ - Click on **Test Suites** from the main navigation
+
+2. **Open Suite Panel**
+ - Click on any test suite to open its details
+ - The suite panel opens with various action buttons and settings tabs
+
+3. **Panel Layout**
+ ```
+ Test Suite: User Authentication API
+ =====================================
+ [Share] [Run] [Settings] [⋮ More Actions]
+
+ Tabs: [Tests] [Variables] [Functions] [Assertions] [CI/CD]
+ ```
+
+## Action 1: Sharing Test Suites
+
+### Share Options
+
+**Access**: Click the **Share** button in the suite panel
+
+**Sharing Methods**:
+- **Team Members**: Share with specific users in your organization
+- **Public Link**: Generate a public link for external sharing
+- **Export**: Download suite as a file for offline sharing
+
+### Sharing Process
+
+1. **Click Share Button**
+ ```
+ Share "User Authentication API" Suite
+ ====================================
+
+ Share with:
+ ☐ Team Members
+ ☐ External Users
+ ☐ Generate Public Link
+ ☐ Export as File
+ ```
+
+2. **Configure Sharing Settings**
+ ```
+ Permissions:
+ ☐ View Only (read-only access)
+ ☐ Edit (can modify tests)
+ ☐ Execute (can run tests)
+ ☐ Admin (full control)
+
+ Expiration: [30 days ▼]
+ Password Protection: [Optional]
+ ```
+
+3. **Generate Share Link**
+ ```
+ Generated Link:
+ https://app.keploy.io/shared/suite/abc123xyz
+
+ Actions:
+ [Copy Link] [Send Email] [Download QR Code]
+ ```
+
+## Action 2: Running Test Suites
+
+### Run Configuration
+
+**Access**: Click the **Run** button in the suite panel
+
+**Execution Options**:
+```
+Run Configuration
+=================
+
+Environment: [Staging ▼]
+Execution Mode:
+ ☐ Sequential (one test at a time)
+ ☑ Parallel (multiple tests simultaneously)
+
+Parallel Settings:
+ Max Concurrent Tests: [5]
+ Timeout per Test: [30 seconds]
+
+Retry Policy:
+ Failed Tests: [Retry 2 times]
+ Retry Delay: [5 seconds]
+
+Data Options:
+ ☐ Use Test Data
+ ☐ Generate Random Data
+ ☑ Use Global Variables
+```
+
+### Execution Monitoring
+
+Real-time execution progress:
+```
+Execution Progress: 15 of 20 tests completed
+=========================================
+
+✅ Login API Test (0.8s)
+✅ Register User Test (1.2s)
+✅ Password Reset Test (0.9s)
+🔄 Profile Update Test (running...)
+⏳ Logout Test (queued)
+⏳ Delete Account Test (queued)
+
+Success Rate: 85% | Avg Response Time: 1.1s
+```
+
+## Action 3: Bulk Edit Schema Assertions
+
+### Schema Assertion Editor
+
+**Access**: Go to **Assertions** tab in the suite panel
+
+**Bulk Operations Available**:
+- **Add Assertions**: Apply new assertions to multiple tests
+- **Modify Assertions**: Update existing assertions across tests
+- **Remove Assertions**: Delete specific assertions from multiple tests
+- **Template Application**: Apply assertion templates to selected tests
+
+### Bulk Editing Process
+
+1. **Select Tests for Bulk Edit**
+ ```
+ Tests in Suite (20 total)
+ ========================
+ ☑ Select All
+ ☑ Login API Test
+ ☑ Register User Test
+ ☑ Password Reset Test
+ ☐ Profile Update Test
+ ☐ Logout Test
+
+ Selected: 3 tests
+ ```
+
+2. **Choose Assertion Type**
+ ```
+ Assertion Categories
+ ===================
+
+ 📊 Response Validation:
+ ├── Status Code Assertions
+ ├── Response Time Assertions
+ ├── Header Validations
+ └── Content-Type Checks
+
+ 🔍 Content Validation:
+ ├── JSON Schema Validation
+ ├── Required Fields Check
+ ├── Data Type Validation
+ └── Value Range Validation
+
+ 🔐 Security Assertions:
+ ├── Authentication Headers
+ ├── HTTPS Enforcement
+ └── CORS Validation
+ ```
+
+3. **Configure Assertions**
+ ```
+ JSON Schema Assertion
+ ====================
+
+ Field: response.user.id
+ Type: [number ▼]
+ Required: ☑ Yes
+ Validation Rules:
+ Min Value: [1]
+ Max Value: [999999]
+
+ Field: response.user.email
+ Type: [string ▼]
+ Required: ☑ Yes
+ Pattern: [^[^\s@]+@[^\s@]+\.[^\s@]+$]
+
+ Apply to: 3 selected tests
+ ```
+
+## Action 4: Add Global Variables
+
+### Global Variable Management
+
+**Access**: Go to **Variables** tab in the suite panel
+
+**Variable Types**:
+- **Environment Variables**: Different values per environment
+- **Static Variables**: Fixed values across all tests
+- **Dynamic Variables**: Generated at runtime
+- **Secret Variables**: Encrypted sensitive data
+
+### Adding Global Variables
+
+1. **Create New Variable**
+ ```
+ Add Global Variable
+ ==================
+
+ Variable Name: [api_base_url]
+ Variable Type: [Environment ▼]
+
+ Environment Values:
+ Development: https://api-dev.example.com
+ Staging: https://api-staging.example.com
+ Production: https://api.example.com
+
+ Description: Base URL for API endpoints
+ ```
+
+2. **Variable Categories**
+ ```
+ 🌍 Environment Variables:
+ ├── api_base_url
+ ├── database_host
+ └── auth_service_url
+
+ 🔐 Authentication:
+ ├── api_key (secret)
+ ├── auth_token (dynamic)
+ └── client_secret (secret)
+
+ 📊 Test Data:
+ ├── test_user_id
+ ├── sample_email
+ └── default_timeout
+
+ ⚙️ Configuration:
+ ├── max_retry_attempts
+ ├── request_timeout
+ └── parallel_execution_count
+ ```
+
+3. **Variable Usage in Tests**
+ ```
+ Example Usage in Test Request:
+ =============================
+
+ URL: {{api_base_url}}/users/{{test_user_id}}
+ Headers:
+ Authorization: Bearer {{auth_token}}
+ Content-Type: application/json
+
+ Body:
+ {
+ "email": "{{sample_email}}",
+ "timeout": {{default_timeout}}
+ }
+ ```
+
+## Action 5: Run in CI
+
+### CI/CD Integration Setup
+
+**Access**: Go to **CI/CD** tab in the suite panel
+
+**Available CI Platforms**:
+- GitHub Actions
+- GitLab CI/CD
+- Jenkins
+- Azure DevOps
+- CircleCI
+- Custom Webhooks
+
+### CI Configuration Process
+
+1. **Select CI Platform**
+ ```
+ Choose CI/CD Platform
+ ====================
+
+ ☑ GitHub Actions
+ ☐ GitLab CI/CD
+ ☐ Jenkins
+ ☐ Azure DevOps
+ ☐ CircleCI
+ ☐ Custom Webhook
+ ```
+
+2. **Generate CI Configuration**
+ ```yaml
+ # Generated GitHub Actions Workflow
+ name: API Tests - User Authentication Suite
+
+ on:
+ push:
+ branches: [ main, develop ]
+ pull_request:
+ branches: [ main ]
+ schedule:
+ - cron: '0 6 * * *' # Daily at 6 AM
+
+ jobs:
+ api-tests:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v3
+
+ - name: Run Keploy Test Suite
+ uses: keploy/test-action@v1
+ with:
+ suite-id: 'user-auth-suite-123'
+ environment: 'staging'
+ parallel: true
+ max-concurrent: 5
+ env:
+ KEPLOY_API_KEY: ${{ secrets.KEPLOY_API_KEY }}
+ ```
+
+3. **CI Execution Settings**
+ ```
+ CI Execution Configuration
+ =========================
+
+ Trigger Conditions:
+ ☑ On Push to Main Branch
+ ☑ On Pull Request
+ ☑ Scheduled (Daily at 6 AM)
+ ☐ Manual Trigger Only
+
+ Execution Environment:
+ Environment: [Staging ▼]
+ Parallel Execution: ☑ Enabled
+ Max Workers: [5]
+ Timeout: [10 minutes]
+
+ Failure Handling:
+ ☑ Fail build on test failure
+ ☑ Send notifications on failure
+ ☐ Auto-retry failed tests
+ ```
+
+## Action 6: Global Functions
+
+### Function Management
+
+**Access**: Go to **Functions** tab in the suite panel
+
+**Function Types**:
+- **Pre-request Functions**: Execute before each test
+- **Post-response Functions**: Execute after each test response
+- **Utility Functions**: Reusable helper functions
+- **Validation Functions**: Custom assertion logic
+
+### Creating Global Functions
+
+1. **Add New Function**
+
+   ```
+   Create Global Function
+   =====================
+
+   Function Name: [generateAuthToken]
+   Function Type: [Pre-request ▼]
+   ```
+
+   Function Code:
+
+   ```javascript
+   // Assumes the function runtime provides Node's built-in crypto module.
+   const crypto = require('crypto');
+
+   function generateAuthToken(request, context) {
+     const timestamp = Date.now();
+     const signature = crypto.createHmac('sha256', context.secret_key)
+       .update(`${timestamp}${request.method}${request.url}`)
+       .digest('hex');
+
+     return {
+       'X-Auth-Token': `${timestamp}.${signature}`,
+       'X-Timestamp': timestamp
+     };
+   }
+   ```
+
+2. **Function Categories**
+ ```
+ 🔧 Utility Functions:
+ ├── generateRandomId()
+ ├── formatTimestamp()
+ ├── encodeBase64()
+ └── validateEmail()
+
+ 🔐 Authentication:
+ ├── generateAuthToken()
+ ├── refreshToken()
+ └── validateSession()
+
+ 📊 Data Processing:
+ ├── normalizeResponse()
+ ├── extractErrorCode()
+ └── calculateChecksum()
+
+ ✅ Validation:
+ ├── validateSchema()
+ ├── checkResponseTime()
+ └── verifyHeaders()
+ ```
+
+3. **Function Usage Examples**
+ ```javascript
+ // Pre-request function usage
+ function beforeRequest(request, context) {
+ // Add authentication
+ const authHeaders = generateAuthToken(request, context);
+ request.headers = { ...request.headers, ...authHeaders };
+
+ // Add request ID for tracking
+ request.headers['X-Request-ID'] = generateRandomId();
+
+ return request;
+ }
+
+ // Post-response validation
+ function afterResponse(response, context) {
+ // Validate response schema
+ const isValid = validateSchema(response.body, context.expectedSchema);
+ if (!isValid) {
+ throw new Error('Response schema validation failed');
+ }
+
+ // Check performance
+ if (response.time > context.maxResponseTime) {
+ console.warn(`Slow response: ${response.time}ms`);
+ }
+
+ return response;
+ }
+ ```
+
+## Advanced Suite Configuration
+
+### Suite-Level Settings
+
+**Access**: Click **Settings** button in suite panel
+
+**Configuration Options**:
+```
+Suite Configuration
+==================
+
+General Settings:
+ Suite Name: [User Authentication API]
+ Description: [Complete authentication flow tests]
+ Owner: [team-backend]
+
+Execution Settings:
+ Default Environment: [Staging]
+ Default Timeout: [30 seconds]
+ Max Retry Attempts: [3]
+ Parallel Execution: ☑ Enabled
+
+Data Management:
+ ☑ Preserve test data between runs
+ ☑ Auto-cleanup temporary data
+ ☐ Use production data (warning)
+
+Notifications:
+ ☑ Email on failure
+ ☑ Slack integration
+ ☐ SMS alerts
+
+Security:
+ ☑ Encrypt sensitive variables
+ ☑ Audit log access
+ ☑ Require approval for modifications
+```
+
+### Suite Templates
+
+Save and reuse suite configurations:
+```
+Save as Template
+===============
+
+Template Name: [Standard API Test Suite]
+Description: [Default configuration for API testing]
+
+Include in Template:
+☑ Variable definitions
+☑ Global functions
+☑ Assertion templates
+☑ CI/CD configuration
+☐ Test data (large datasets)
+
+Apply Template to:
+☐ New test suites only
+☐ Existing suites (with confirmation)
+```
+
+## Monitoring and Analytics
+
+### Suite Performance Dashboard
+
+```
+Suite Analytics Dashboard
+========================
+
+📊 Execution Statistics (Last 30 Days):
+├── Total Runs: 1,247
+├── Success Rate: 94.2%
+├── Average Duration: 4m 32s
+└── Most Common Failures: Authentication timeout
+
+📈 Trends:
+├── Success Rate: ↗️ +2.1% (improving)
+├── Response Time: ↘️ -120ms (faster)
+└── Test Coverage: ↗️ +5 new assertions
+
+🔍 Top Issues:
+├── Flaky Test: "Password Reset" (12% failure rate)
+├── Slow Endpoint: "/users/profile" (avg 2.1s)
+└── Missing Assertion: Response headers validation
+```
+
+## Best Practices
+
+### Suite Organization
+
+1. **Logical Grouping**
+ - Group related API endpoints together
+ - Separate by functional areas (auth, payments, etc.)
+ - Use consistent naming conventions
+
+2. **Variable Management**
+ - Use environment-specific variables
+ - Avoid hardcoding sensitive data
+ - Document variable purposes
+
+3. **Function Reusability**
+   - Create modular, single-purpose functions
+   - Use descriptive function names
+   - Include error handling (see the sketch after this list)
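+
+   For example, a reusable helper that follows these guidelines might look like this (a sketch, not a Keploy built-in):
+
+   ```javascript
+   // Single-purpose helper: pull a named field out of a JSON response body,
+   // failing loudly with context instead of silently returning undefined.
+   function extractField(response, fieldName) {
+     let body;
+     try {
+       body = typeof response.body === 'string'
+         ? JSON.parse(response.body)
+         : response.body;
+     } catch (err) {
+       throw new Error(`extractField: response body is not valid JSON (${err.message})`);
+     }
+     if (body == null || !(fieldName in body)) {
+       throw new Error(`extractField: field "${fieldName}" missing from response`);
+     }
+     return body[fieldName];
+   }
+   ```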
+
+### Performance Optimization
+
+1. **Execution Efficiency**
+ - Enable parallel execution for independent tests
+ - Set appropriate timeouts
+ - Use test data factories
+
+2. **Resource Management**
+ - Clean up test data after execution
+ - Monitor suite execution times
+ - Optimize slow-running tests
+
+## Troubleshooting
+
+### Common Issues
+
+1. **Suite Won't Run**
+ - Check environment connectivity
+ - Verify variable values
+ - Review function syntax
+
+2. **Bulk Operations Failing**
+ - Reduce operation scope
+ - Check individual test permissions
+ - Verify schema compatibility
+
+3. **CI Integration Issues**
+ - Validate API keys and secrets
+ - Check network connectivity
+ - Review execution logs
+
+### Getting Help
+
+- **Documentation**: Review individual feature guides
+- **Support**: Contact support with suite execution logs
+- **Community**: Ask questions in Keploy forums
+- **API Reference**: Check the API documentation for programmatic access
+
+## Related Features
+
+- **[Individual Test Management](./api-testing-edit-suites.md)**: Edit specific tests
+- **[Label Management](./api-testing-adding-labels.md)**: Organize with labels
+- **[Selective Execution](./api-testing-running-selective.md)**: Run specific tests
+- **[Sharing & Reports](./api-testing-sharing-reports.md)**: Share results
+
+The test suite panel provides a comprehensive control center for managing all aspects of your API test suites, from basic execution to advanced automation and collaboration features.
\ No newline at end of file
diff --git a/versioned_sidebars/version-4.0.0-sidebars.json b/versioned_sidebars/version-4.0.0-sidebars.json
index 74b2f5a79..b690de232 100644
--- a/versioned_sidebars/version-4.0.0-sidebars.json
+++ b/versioned_sidebars/version-4.0.0-sidebars.json
@@ -170,7 +170,24 @@
"running-keploy/run-ai-generated-api-tests",
"running-keploy/api-testing-cicd",
"running-keploy/api-testing-webhook",
+ "running-keploy/api-testing-edit-suites",
+ "running-keploy/api-testing-custom-assertions",
+ "running-keploy/api-testing-assertion-tree",
"running-keploy/api-testing-auth-setup",
+ "running-keploy/api-testing-sharing-reports",
+ "running-keploy/api-testing-buggy-suites",
+ "running-keploy/api-testing-mark-unbuggy",
+ "running-keploy/api-testing-running-selective",
+ "running-keploy/api-testing-adding-labels",
+ "running-keploy/api-testing-suite-settings",
+ "running-keploy/api-testing-add-suite",
+ "running-keploy/api-testing-local-agent",
+ "running-keploy/api-testing-filter-suites",
+ "running-keploy/api-testing-generation-history",
+ "running-keploy/api-testing-run-report",
+ "running-keploy/api-testing-bulk-assertions",
+ "running-keploy/api-testing-schema-coverage",
+ "running-keploy/api-testing-edit-assertions",
{
"type": "doc",
"label": "FAQs",