Skip to content

Conversation

@codeflash-ai
Copy link
Contributor

@codeflash-ai codeflash-ai bot commented Dec 26, 2025

⚡️ This pull request contains optimizations for PR #990

If you approve this dependent PR, these changes will be merged into the original PR branch diversity.

This PR will be automatically closed if the original PR is merged.


📄 733% (7.33x) speedup for AiServiceClient.optimize_python_code_refinement in codeflash/api/aiservice.py

⏱️ Runtime : 63.1 milliseconds 7.57 milliseconds (best of 33 runs)

📝 Explanation and details

The optimized code achieves a 733% speedup by eliminating expensive external library calls and complex string manipulations in the humanize_runtime function, which was the primary bottleneck.

Key Optimizations

1. Removed humanize.precisedelta Dependency

The original code called humanize.precisedelta() for every value ≥1000 nanoseconds, accounting for 87.2% of the function's runtime. The optimized version replaces this with:

  • Direct threshold-based unit selection using simple numeric comparisons (if time_micro < 1000, elif time_micro < 1_000_000, etc.)
  • Manual arithmetic for unit conversion (e.g., time_micro / 1000 for milliseconds)
  • No external library overhead in the hot path

2. Eliminated Regex Parsing

The original code used re.split(r",|\s", runtime_human)[1] to extract units from the humanize output (4.5% of runtime). The optimized version directly assigns unit strings based on the threshold logic, avoiding regex entirely.

3. Simplified Formatting Logic

The original code performed complex string splitting and reconstruction to format decimal places (checking runtime_human_parts[0] length, conditionally adding "0" padding, etc.). The optimized version uses:

  • Smart formatting based on value magnitude: f"{value:.2f}" for values <10, f"{value:.1f}" for <100, f"{int(round(value))}" otherwise
  • Direct singular/plural unit selection using math.isclose(value, 1.0) instead of nested conditionals on string parts

4. Fast Path for Sub-Microsecond Values

Added early return for time_in_ns < 1000, avoiding all conversion logic for nanosecond-scale values.

Performance Impact

Test results show consistent speedups across all scenarios:

  • Small batches (1-3 requests): 122-231% faster
  • Large batches (1000 requests): 903% faster
  • Error cases with logging overhead: 7-8% faster (less improvement due to I/O dominance)

The optimization is particularly effective for workloads that process many refinement requests, as humanize_runtime is called twice per request (for original and optimized runtimes). In the optimize_python_code_refinement method, the payload construction time dropped from 91.1% to 57% of total runtime, directly correlating with the humanize_runtime improvements.

Behavioral Preservation

The optimized code maintains the same output format and singular/plural unit handling. The math.isclose check ensures precise singular unit detection (e.g., "1 microsecond" vs "1.01 microseconds"), replacing the original's string-based logic.

Correctness verification report:

Test Status
⚙️ Existing Unit Tests 🔘 None Found
🌀 Generated Regression Tests 28 Passed
⏪ Replay Tests 🔘 None Found
🔎 Concolic Coverage Tests 🔘 None Found
📊 Tests Coverage 83.3%
🌀 Click to see Generated Regression Tests
from unittest.mock import MagicMock

# imports
from codeflash.api.aiservice import AiServiceClient
from codeflash.models.models import AIServiceRefinerRequest


# Helper function to create a minimal valid AIServiceRefinerRequest
def make_refiner_request(
    optimization_id="opt-1",
    original_source_code="```test.py\nprint('Hello')\n```",
    read_only_dependency_code="",
    original_line_profiler_results="",
    original_code_runtime=1000000,
    optimized_source_code="```test.py\nprint('Hi')\n```",
    optimized_explanation="Changed print statement.",
    optimized_line_profiler_results="",
    optimized_code_runtime=500000,
    speedup=2.0,
    trace_id="trace-123",
    function_references=["main"],
    call_sequence=["main"],
):
    return AIServiceRefinerRequest(
        optimization_id=optimization_id,
        original_source_code=original_source_code,
        read_only_dependency_code=read_only_dependency_code,
        original_line_profiler_results=original_line_profiler_results,
        original_code_runtime=original_code_runtime,
        optimized_source_code=optimized_source_code,
        optimized_explanation=optimized_explanation,
        optimized_line_profiler_results=optimized_line_profiler_results,
        optimized_code_runtime=optimized_code_runtime,
        speedup=speedup,
        trace_id=trace_id,
        function_references=function_references,
        call_sequence=call_sequence,
    )


# Helper function to create a valid refinement response
def make_refinement_response(
    code="```test.py\nprint('Hi')\n```", explanation="Refined.", optimization_id="opt-1", parent_id=None, model="gpt-4"
):
    return {
        "refinements": [
            {
                "source_code": code,
                "explanation": explanation,
                "optimization_id": optimization_id,
                "parent_id": parent_id,
                "model": model,
            }
        ]
    }


# Helper function to create a large number of requests
def make_large_refiner_requests(n):
    return [
        make_refiner_request(
            optimization_id=f"opt-{i}",
            original_source_code=f"```file{i}.py\nprint('Hello {i}')\n```",
            optimized_source_code=f"```file{i}.py\nprint('Hi {i}')\n```",
            trace_id=f"trace-{i}",
            function_references=[f"func{i}"],
            call_sequence=[f"func{i}"],
        )
        for i in range(n)
    ]


# Basic Test Cases


def test_empty_refinement_list(monkeypatch):
    """Test that an empty request list returns an empty result."""
    client = AiServiceClient()
    mock_response = MagicMock()
    mock_response.status_code = 200
    mock_response.json.return_value = {"refinements": []}
    monkeypatch.setattr(client, "make_ai_service_request", lambda *args, **kwargs: mock_response)

    codeflash_output = client.optimize_python_code_refinement([])
    result = codeflash_output  # 22.4μs -> 22.4μs (0.228% slower)


def test_large_refinements_with_some_invalid(monkeypatch):
    """Test that the function skips invalid code blocks among large refinements."""
    client = AiServiceClient()
    N = 100
    refinements = [
        {
            "source_code": f"```file{i}.py\nprint('Hi {i}')\n```" if i % 2 == 0 else "invalid code",
            "explanation": f"Refined {i}.",
            "optimization_id": f"opt-{i}",
            "parent_id": None,
            "model": "gpt-4",
        }
        for i in range(N)
    ]
    mock_response = MagicMock()
    mock_response.status_code = 200
    mock_response.json.return_value = {"refinements": refinements}
    client = AiServiceClient()
    monkeypatch.setattr(client, "make_ai_service_request", lambda *args, **kwargs: mock_response)
    requests = make_large_refiner_requests(N)
    codeflash_output = client.optimize_python_code_refinement(requests)
    result = codeflash_output
    for candidate in result:
        idx = int(candidate.optimization_id.split("-")[1])
from unittest.mock import MagicMock, patch

# imports
import pytest

from codeflash.api.aiservice import AiServiceClient


class OptimizedCandidateSource:
    REFINE = "refine"


class OptimizedCandidate:
    def __init__(self, source_code, explanation, optimization_id, source, parent_id=None, model=None):
        self.source_code = source_code
        self.explanation = explanation
        self.optimization_id = optimization_id
        self.source = source
        self.parent_id = parent_id
        self.model = model


class AIServiceRefinerRequest:
    def __init__(
        self,
        optimization_id,
        original_source_code,
        read_only_dependency_code,
        original_line_profiler_results,
        original_code_runtime,
        optimized_source_code,
        optimized_explanation,
        optimized_line_profiler_results,
        optimized_code_runtime,
        speedup,
        trace_id,
        function_references,
        call_sequence,
    ):
        self.optimization_id = optimization_id
        self.original_source_code = original_source_code
        self.read_only_dependency_code = read_only_dependency_code
        self.original_line_profiler_results = original_line_profiler_results
        self.original_code_runtime = original_code_runtime
        self.optimized_source_code = optimized_source_code
        self.optimized_explanation = optimized_explanation
        self.optimized_line_profiler_results = optimized_line_profiler_results
        self.optimized_code_runtime = optimized_code_runtime
        self.speedup = speedup
        self.trace_id = trace_id
        self.function_references = function_references
        self.call_sequence = call_sequence


# --- Unit tests for optimize_python_code_refinement ---


@pytest.fixture
def client():
    return AiServiceClient()


def make_refiner_request(idx=1, valid_code=True):
    code = "print('hello')" if valid_code else "INVALID"
    return AIServiceRefinerRequest(
        optimization_id=f"id{idx}",
        original_source_code=code,
        read_only_dependency_code="",
        original_line_profiler_results="",
        original_code_runtime=1000000,
        optimized_source_code=code,
        optimized_explanation="Optimized for speed.",
        optimized_line_profiler_results="",
        optimized_code_runtime=500000,
        speedup=2.0,
        trace_id=f"trace{idx}",
        function_references=["func1"],
        call_sequence=["func1", "func2"],
    )


# --- 1. Basic Test Cases ---


def test_basic_single_candidate_success(client):
    """Test single valid candidate returns one OptimizedCandidate."""
    req = [make_refiner_request()]
    # Patch response
    mock_resp = MagicMock()
    mock_resp.status_code = 200
    mock_resp.json.return_value = {
        "refinements": [
            {
                "source_code": "print('hello')",
                "explanation": "Refined code.",
                "optimization_id": "id1",
                "parent_id": None,
                "model": "gpt-4",
            }
        ]
    }
    with patch.object(client, "make_ai_service_request", return_value=mock_resp):
        codeflash_output = client.optimize_python_code_refinement(req)
        result = codeflash_output  # 112μs -> 50.6μs (122% faster)


def test_basic_multiple_candidates_success(client):
    """Test multiple valid candidates are returned."""
    req = [make_refiner_request(idx=i) for i in range(3)]
    mock_resp = MagicMock()
    mock_resp.status_code = 200
    mock_resp.json.return_value = {
        "refinements": [
            {
                "source_code": "print('hello')",
                "explanation": f"Refined code {i}.",
                "optimization_id": f"id{i}",
                "parent_id": None,
                "model": "gpt-4",
            }
            for i in range(3)
        ]
    }
    with patch.object(client, "make_ai_service_request", return_value=mock_resp):
        codeflash_output = client.optimize_python_code_refinement(req)
        result = codeflash_output  # 183μs -> 55.4μs (231% faster)
        for i, cand in enumerate(result):
            pass


def test_basic_empty_request_returns_empty(client):
    """Test that an empty input list returns an empty list."""
    with patch.object(client, "make_ai_service_request") as mock_method:
        # Should not be called, but if it is, return status_code 200 with empty refinements
        mock_resp = MagicMock()
        mock_resp.status_code = 200
        mock_resp.json.return_value = {"refinements": []}
        mock_method.return_value = mock_resp
        codeflash_output = client.optimize_python_code_refinement([])
        result = codeflash_output  # 30.5μs -> 29.7μs (2.60% faster)


# --- 2. Edge Test Cases ---


def test_edge_invalid_code_block_filtered_out(client):
    """Test that candidates with invalid code are filtered out."""
    req = [make_refiner_request(valid_code=False)]
    mock_resp = MagicMock()
    mock_resp.status_code = 200
    mock_resp.json.return_value = {
        "refinements": [{"source_code": "INVALID", "explanation": "Should be filtered.", "optimization_id": "id1"}]
    }
    with patch.object(client, "make_ai_service_request", return_value=mock_resp):
        codeflash_output = client.optimize_python_code_refinement(req)
        result = codeflash_output  # 103μs -> 45.9μs (125% faster)


def test_edge_api_error_returns_empty(client):
    """Test that API error (non-200) returns empty list."""
    req = [make_refiner_request()]
    mock_resp = MagicMock()
    mock_resp.status_code = 500
    mock_resp.json.return_value = {"error": "Internal Server Error"}
    mock_resp.text = "Internal Server Error"
    with patch.object(client, "make_ai_service_request", return_value=mock_resp):
        codeflash_output = client.optimize_python_code_refinement(req)
        result = codeflash_output  # 889μs -> 824μs (7.84% faster)


def test_edge_missing_fields_in_response(client):
    """Test that missing optional fields in response do not break candidate creation."""
    req = [make_refiner_request()]
    mock_resp = MagicMock()
    mock_resp.status_code = 200
    # No parent_id or model
    mock_resp.json.return_value = {
        "refinements": [{"source_code": "print('hello')", "explanation": "Refined code.", "optimization_id": "id1"}]
    }
    with patch.object(client, "make_ai_service_request", return_value=mock_resp):
        codeflash_output = client.optimize_python_code_refinement(req)
        result = codeflash_output  # 113μs -> 50.7μs (123% faster)


def test_edge_response_json_raises_exception(client):
    """Test that if response.json() raises, function returns empty list."""
    req = [make_refiner_request()]
    mock_resp = MagicMock()
    mock_resp.status_code = 400
    mock_resp.json.side_effect = Exception("Bad JSON")
    mock_resp.text = "Bad request"
    with patch.object(client, "make_ai_service_request", return_value=mock_resp):
        codeflash_output = client.optimize_python_code_refinement(req)
        result = codeflash_output  # 899μs -> 831μs (8.12% faster)


def test_edge_large_speedup_and_runtime(client):
    """Test that large speedup and runtime values are handled."""
    req = [
        AIServiceRefinerRequest(
            optimization_id="id-large",
            original_source_code="print('start')",
            read_only_dependency_code="",
            original_line_profiler_results="",
            original_code_runtime=999999999999,
            optimized_source_code="print('end')",
            optimized_explanation="Massive speedup.",
            optimized_line_profiler_results="",
            optimized_code_runtime=1,
            speedup=1e12,
            trace_id="trace-large",
            function_references=["func"],
            call_sequence=["func"],
        )
    ]
    mock_resp = MagicMock()
    mock_resp.status_code = 200
    mock_resp.json.return_value = {
        "refinements": [
            {"source_code": "print('end')", "explanation": "Massive speedup.", "optimization_id": "id-large"}
        ]
    }
    with patch.object(client, "make_ai_service_request", return_value=mock_resp):
        codeflash_output = client.optimize_python_code_refinement(req)
        result = codeflash_output  # 90.1μs -> 47.5μs (89.7% faster)


# --- 3. Large Scale Test Cases ---


def test_large_scale_many_candidates(client):
    """Test function with many candidates (scalability, up to 1000)."""
    N = 1000
    req = [make_refiner_request(idx=i) for i in range(N)]
    mock_resp = MagicMock()
    mock_resp.status_code = 200
    mock_resp.json.return_value = {
        "refinements": [
            {"source_code": "print('hello')", "explanation": f"Refined code {i}.", "optimization_id": f"id{i}"}
            for i in range(N)
        ]
    }
    with patch.object(client, "make_ai_service_request", return_value=mock_resp):
        codeflash_output = client.optimize_python_code_refinement(req)
        result = codeflash_output  # 32.8ms -> 3.27ms (903% faster)
        for i in range(0, N, 250):  # spot-check every 250th
            pass


def test_large_scale_some_invalid_candidates(client):
    """Test large scale with some invalid code blocks in response."""
    N = 100
    req = [make_refiner_request(idx=i, valid_code=(i % 2 == 0)) for i in range(N)]
    mock_resp = MagicMock()
    mock_resp.status_code = 200
    mock_resp.json.return_value = {
        "refinements": [
            {
                "source_code": "INVALID" if i % 2 else "print('hello')",
                "explanation": f"Refined code {i}.",
                "optimization_id": f"id{i}",
            }
            for i in range(N)
        ]
    }
    with patch.object(client, "make_ai_service_request", return_value=mock_resp):
        codeflash_output = client.optimize_python_code_refinement(req)
        result = codeflash_output  # 3.49ms -> 378μs (824% faster)
        for cand in result:
            idx = int(cand.optimization_id[2:])


def test_large_scale_empty_refinements(client):
    """Test large scale request with empty refinements returned."""
    N = 500
    req = [make_refiner_request(idx=i) for i in range(N)]
    mock_resp = MagicMock()
    mock_resp.status_code = 200
    mock_resp.json.return_value = {"refinements": []}
    with patch.object(client, "make_ai_service_request", return_value=mock_resp):
        codeflash_output = client.optimize_python_code_refinement(req)
        result = codeflash_output  # 15.6ms -> 785μs (1888% faster)


def test_large_scale_api_error(client):
    """Test large scale request with API error response."""
    N = 250
    req = [make_refiner_request(idx=i) for i in range(N)]
    mock_resp = MagicMock()
    mock_resp.status_code = 503
    mock_resp.json.return_value = {"error": "Service Unavailable"}
    mock_resp.text = "Service Unavailable"
    with patch.object(client, "make_ai_service_request", return_value=mock_resp):
        codeflash_output = client.optimize_python_code_refinement(req)
        result = codeflash_output  # 8.74ms -> 1.18ms (641% faster)


# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

To edit these changes git checkout codeflash/optimize-pr990-2025-12-26T17.13.46 and push.

Codeflash Static Badge

The optimized code achieves a **733% speedup** by eliminating expensive external library calls and complex string manipulations in the `humanize_runtime` function, which was the primary bottleneck.

## Key Optimizations

### 1. **Removed `humanize.precisedelta` Dependency**
The original code called `humanize.precisedelta()` for every value ≥1000 nanoseconds, accounting for **87.2%** of the function's runtime. The optimized version replaces this with:
- Direct threshold-based unit selection using simple numeric comparisons (`if time_micro < 1000`, `elif time_micro < 1_000_000`, etc.)
- Manual arithmetic for unit conversion (e.g., `time_micro / 1000` for milliseconds)
- **No external library overhead** in the hot path

### 2. **Eliminated Regex Parsing**
The original code used `re.split(r",|\s", runtime_human)[1]` to extract units from the humanize output (**4.5%** of runtime). The optimized version directly assigns unit strings based on the threshold logic, avoiding regex entirely.

### 3. **Simplified Formatting Logic**
The original code performed complex string splitting and reconstruction to format decimal places (checking `runtime_human_parts[0]` length, conditionally adding "0" padding, etc.). The optimized version uses:
- Smart formatting based on value magnitude: `f"{value:.2f}"` for values <10, `f"{value:.1f}"` for <100, `f"{int(round(value))}"` otherwise
- Direct singular/plural unit selection using `math.isclose(value, 1.0)` instead of nested conditionals on string parts

### 4. **Fast Path for Sub-Microsecond Values**
Added early return for `time_in_ns < 1000`, avoiding all conversion logic for nanosecond-scale values.

## Performance Impact

**Test results show consistent speedups across all scenarios:**
- Small batches (1-3 requests): **122-231%** faster
- Large batches (1000 requests): **903%** faster  
- Error cases with logging overhead: **7-8%** faster (less improvement due to I/O dominance)

The optimization is particularly effective for workloads that process many refinement requests, as `humanize_runtime` is called twice per request (for original and optimized runtimes). In the `optimize_python_code_refinement` method, the payload construction time dropped from **91.1%** to **57%** of total runtime, directly correlating with the `humanize_runtime` improvements.

## Behavioral Preservation

The optimized code maintains the same output format and singular/plural unit handling. The `math.isclose` check ensures precise singular unit detection (e.g., "1 microsecond" vs "1.01 microseconds"), replacing the original's string-based logic.
@codeflash-ai codeflash-ai bot added ⚡️ codeflash Optimization PR opened by Codeflash AI 🎯 Quality: High Optimization Quality according to Codeflash labels Dec 26, 2025
@claude
Copy link

claude bot commented Dec 26, 2025

Code Review for PR #994

This PR achieves excellent performance improvements (733% speedup) by replacing the external humanize library with optimized direct calculations. Overall, the implementation is solid and well-tested.

Strengths

Performance & Architecture

  • Excellent optimization approach: Removing the humanize.precisedelta() dependency eliminates 87.2% of the original runtime
  • Smart fast path: The early return for values < 1000ns avoids unnecessary computation
  • Clean threshold-based logic: The cascading if-elif chain is clear and efficient
  • Comprehensive testing: 28 generated regression tests covering basic, edge, and large-scale scenarios

Code Quality

  • Good formatting logic: The smart formatting based on value magnitude maintains readability while being performant
  • Proper singular/plural handling: Using math.isclose() is more robust than string manipulation

Issues & Recommendations

1. Critical: Floating Point Precision Issue (line 49)

The math.isclose() tolerance may produce incorrect results for edge cases. For values like 1.0000000009, this returns singular but the formatted string shows 1.00.

Recommendation: Use a slightly larger tolerance or tie it to the actual formatted string value, such as: units = unit_singular if 0.995 <= value < 1.005 else unit_plural

2. Boundary Value Behavior (lines 15-38)

The threshold comparisons look correct, but consider adding explicit test cases for exact boundaries (999ns, 1000ns, 1000000ns, etc.)

3. Code Style: Magic Numbers (lines 15-38)

Consider extracting conversion constants (1000, 1_000_000, 60_000_000, etc.) as named module-level constants for clarity and maintainability.

4. Missing Input Validation (line 6)

Unlike format_time(), humanize_runtime() doesn't validate inputs. Consider adding type and range checks, or document why validation is unnecessary.

5. Minor: Comment Clarity (line 13)

The comment could be clearer. Suggest: Determine appropriate unit based on time magnitude

Security

No security concerns identified. Pure mathematical operations with no external I/O, no user-controlled format strings, and reduced supply chain risk by removing external dependency.

Test Coverage

Strengths: 28 regression tests with 83.3% coverage is solid. Tests cover empty inputs, invalid code blocks, API errors, and scale.

Gaps: Missing tests for exact boundary values, singular/plural edge cases, and invalid inputs.

Performance Validation

The performance claims are well-documented and impressive. The 733% overall speedup aligns with removing 87.2% overhead, and 903% speedup for 1000 requests shows excellent scalability.

Verdict: Approve with minor suggestions

This is a high-quality optimization with correct algorithmic approach, excellent performance gains, and comprehensive testing. The code is production-ready but would benefit from addressing the math.isclose() edge case.

Priority Fixes:

  1. Fix singular/plural detection logic - Critical
  2. Add boundary value tests - High
  3. Consider input validation or document assumptions - Medium

Great work on this optimization!

@KRRT7 KRRT7 closed this Dec 26, 2025
@codeflash-ai codeflash-ai bot deleted the codeflash/optimize-pr990-2025-12-26T17.13.46 branch December 26, 2025 17:28
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

⚡️ codeflash Optimization PR opened by Codeflash AI 🎯 Quality: High Optimization Quality according to Codeflash

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants