
Improve Lighthouse Score #6296

Draft
FarhanAliRaza wants to merge 5 commits into reflex-dev:main from FarhanAliRaza:lighthouse-improve

Conversation

@FarhanAliRaza
Collaborator

All Submissions:

  • Have you followed the guidelines stated in CONTRIBUTING.md file?
  • Have you checked to ensure there aren't any other open Pull Requests for the desired change?

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

New Feature Submission:

  • Does your submission pass the tests?
  • Have you linted your code locally prior to submission?

Changes To Core Features:

  • Have you added an explanation of what your changes do and why you'd like us to include them?
  • Have you written new tests for your core changes, as applicable?
  • Have you successfully run tests with your changes locally?

closes #6295

@FarhanAliRaza FarhanAliRaza marked this pull request as draft April 7, 2026 13:57
@codspeed-hq

codspeed-hq bot commented Apr 7, 2026

Merging this PR will not alter performance

✅ 9 untouched benchmarks


Comparing FarhanAliRaza:lighthouse-improve (40d5aa7) with main (46ef879)

Open in CodSpeed

@greptile-apps
Contributor

greptile-apps bot commented Apr 7, 2026

Greptile Summary

This PR introduces a well-designed Lighthouse performance benchmarking infrastructure — a new lighthouse CI job, shared utilities (lighthouse_utils.py), a pytest wrapper (test_lighthouse.py), and a local developer convenience script (run_lighthouse.py). The Lighthouse-specific code is cleanly implemented, properly opt-in via REFLEX_RUN_LIGHTHOUSE=1, and follows existing project conventions.

Key concerns:

  • pyi_hashes.json (P1): The file was accidentally truncated from ~120 entries to just 3, removing all packages/reflex-components-* stub hashes. All affected packages still exist on disk — this silently breaks .pyi stub freshness checking and is completely unrelated to Lighthouse. The change must be reverted or explained.
  • lighthouse_utils.py (P2): subprocess.TimeoutExpired is not caught alongside CalledProcessError in run_lighthouse(), meaning a timeout will produce a raw Python traceback instead of a structured pytest.fail() message.
  • performance.yml (P2): The lighthouse job relies on Node.js/npx being pre-installed on ubuntu-22.04 without pinning a version via actions/setup-node, which is fragile against runner image updates.

Confidence Score: 4/5

Not safe to merge as-is — the pyi_hashes.json change appears accidental and silently disables stub-freshness tracking for all packages/reflex-components-*.

The Lighthouse-specific files are well-crafted and pose no significant risk. The score is reduced to 4 because of the P1 pyi_hashes.json issue: removing 100+ hash entries for packages that still exist on disk is a concrete, present defect that masks future .pyi stub regressions across many packages. The two P2 findings (missing TimeoutExpired handling, unpinned Node.js) are non-blocking improvements.

pyi_hashes.json requires immediate attention — all removed packages/reflex-components-* entries must be restored before merging.

Important Files Changed

Filename Overview
.github/workflows/performance.yml Adds a well-structured lighthouse CI job; relies on pre-installed Node.js without pinning a version via actions/setup-node
pyi_hashes.json Accidentally strips all packages/reflex-components-* stub hashes while those packages still exist on disk — appears unrelated to the Lighthouse work and breaks .pyi freshness tracking
scripts/run_lighthouse.py Clean local developer convenience script that reuses shared utilities, captures noisy output, and returns a proper exit code — no issues found
tests/integration/lighthouse_utils.py Solid utility module for Lighthouse benchmarking; run_lighthouse() sets a 180 s timeout but only catches CalledProcessError, leaving TimeoutExpired unhandled
tests/integration/test_lighthouse.py Clean pytest wrapper with proper opt-in guard, module-scoped fixture for the app root, and full delegation of scoring logic to lighthouse_utils — no issues found

Sequence Diagram

sequenceDiagram
    participant GHA as GitHub Actions
    participant pytest as pytest runner
    participant test as test_lighthouse.py
    participant utils as lighthouse_utils.py
    participant harness as AppHarnessProd
    participant lh as Lighthouse CLI
    participant artifact as Artifact Upload

    GHA->>pytest: trigger lighthouse job (REFLEX_RUN_LIGHTHOUSE=1)
    pytest->>test: collect test_blank_template_lighthouse_scores
    test->>utils: run_blank_prod_lighthouse_benchmark(app_root, report_path)
    utils->>utils: ensure_blank_lighthouse_app(app_root)
    utils->>harness: AppHarnessProd.create(root, app_name)
    harness-->>utils: frontend_url
    utils->>lh: subprocess.run([lighthouse, url, --output=json, ...])
    lh-->>utils: JSON report written to report_path
    utils->>utils: parse scores, compare against thresholds
    utils-->>test: LighthouseBenchmarkResult(report, summary, failures)
    test->>pytest: print(result.summary)
    alt failures is empty
        pytest-->>GHA: PASS
    else failures non-empty
        test->>pytest: pytest.fail(...)
        pytest-->>GHA: FAIL
    end
    GHA->>artifact: upload .pytest-tmp/lighthouse (always)

Comments Outside Diff (1)

  1. pyi_hashes.json, line 1-5 (link)

    P1 Unrelated removal of package stub hashes

    This file was truncated from ~120 entries to just 3, removing every packages/reflex-components-* entry. All of those packages still exist on disk (e.g., packages/reflex-components-code, packages/reflex-components-core, packages/reflex-components-radix, etc.). The single pyi_hashes.json at the repo root is used by packages/reflex-base/src/reflex_base/utils/pyi_generator.py to verify .pyi stub freshness. Without these hashes, stale stubs for all affected packages will silently go undetected.

    This change appears to be an accidental inclusion unrelated to the Lighthouse work. Please restore the removed entries (or, if the removal is intentional, explain why in a separate PR).

Reviews (1): Last reviewed commit: "lighthouse test"

Comment on lines +234 to +248
    try:
        subprocess.run(
            command,
            check=True,
            capture_output=True,
            text=True,
            timeout=180,
        )
    except subprocess.CalledProcessError as err:
        pytest.fail(
            "Lighthouse execution failed.\n"
            f"Command: {' '.join(command)}\n"
            f"stdout:\n{err.stdout}\n"
            f"stderr:\n{err.stderr}"
        )
Contributor


P2 subprocess.TimeoutExpired not handled

subprocess.run() is called with timeout=180, but only CalledProcessError is caught. If Lighthouse takes longer than 180 seconds, subprocess.TimeoutExpired will propagate uncaught, producing a raw Python traceback in CI output rather than a friendly pytest.fail() message.

Suggested change

Before:

    try:
        subprocess.run(
            command,
            check=True,
            capture_output=True,
            text=True,
            timeout=180,
        )
    except subprocess.CalledProcessError as err:
        pytest.fail(
            "Lighthouse execution failed.\n"
            f"Command: {' '.join(command)}\n"
            f"stdout:\n{err.stdout}\n"
            f"stderr:\n{err.stderr}"
        )

After:

    try:
        subprocess.run(
            command,
            check=True,
            capture_output=True,
            text=True,
            timeout=180,
        )
    except subprocess.TimeoutExpired as err:
        pytest.fail(
            f"Lighthouse timed out after {err.timeout}s.\n"
            f"Command: {' '.join(command)}"
        )
    except subprocess.CalledProcessError as err:
        pytest.fail(
            "Lighthouse execution failed.\n"
            f"Command: {' '.join(command)}\n"
            f"stdout:\n{err.stdout}\n"
            f"stderr:\n{err.stderr}"
        )

Comment on lines +63 to +64
- name: Install playwright
run: uv run playwright install chromium --only-shell
Contributor


P2 Node.js version not pinned

The lighthouse job falls back to npx --yes lighthouse@13.1.0 (via get_lighthouse_command()) when no lighthouse binary is found, relying on whatever Node.js version is pre-installed on ubuntu-22.04. The exact Node.js version shipped with GitHub's runner images can change without notice and is not formally guaranteed.

Adding an explicit actions/setup-node step would make the workflow reproducible across runner image updates:

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"

      - name: Install playwright
        run: uv run playwright install chromium --only-shell

- Wrap blank template in rx.el.main landmark with page title/description
- Add aria_label/title to ColorModeIconButton and StickyBadge
- Add landing page prod benchmark alongside blank template test
- Document page structure and metadata best practices
Add a Vite build plugin that generates .gz copies of assets for sirv to
serve pre-compressed. Split socket.io and radix-ui into separate chunks
for better caching. Refactor lighthouse benchmark to use `reflex run
--env prod` directly instead of AppHarnessProd.
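
The build-time pre-compression described in the commit message can be sketched in a few lines of Python. This is illustrative only — the PR implements it as a Vite build plugin, and the real file filters, thresholds, and naming may differ:

```python
# Illustrative sketch: write `<asset>.gz` sidecars next to each compressible
# build artifact so the static server can send pre-compressed responses.
import gzip
from pathlib import Path

# Hypothetical extension filter; text-like assets compress well, images don't.
COMPRESSIBLE = {".js", ".css", ".html", ".json", ".svg"}

def write_gzip_sidecars(dist: Path, min_size: int = 1024) -> list[Path]:
    """Create a .gz sidecar for each compressible asset and return the new paths."""
    written = []
    for asset in dist.rglob("*"):
        if asset.suffix not in COMPRESSIBLE or asset.stat().st_size < min_size:
            continue  # skip non-compressible types and tiny files
        sidecar = asset.with_name(asset.name + ".gz")
        # mtime=0 keeps the gzip header deterministic across builds.
        sidecar.write_bytes(gzip.compress(asset.read_bytes(), mtime=0))
        written.append(sidecar)
    return written
```

A server such as sirv (or the Starlette middleware discussed below in this thread) can then serve `app.js.gz` directly instead of compressing `app.js` on every request.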
@benedikt-bartscher
Contributor

Shouldn't the compression be left to the reverse-proxy/ingress/deployment? If not, consider adding brotli alongside gzip?

@masenf
Collaborator

masenf commented Apr 7, 2026

Shouldn't the compression be left to the reverse-proxy/ingress/deployment?

Yes, generally. However, better performance can be achieved by configuring the reverse proxy to serve the pre-compressed files so they don't need to be compressed on the fly when requested.

@FarhanAliRaza a good point is raised here though, I think the vite compress plugin should be able to handle output to gzip, brotli, or zstd for completeness. We should add some documentation to the self-hosting page about how to enable and use pre-compressed responses in caddy and nginx.
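
For the self-hosting docs, a minimal nginx sketch of serving pre-compressed sidecars might look like this (the location path is illustrative; `gzip_static` ships with stock nginx, while `brotli_static` requires the third-party ngx_brotli module):

```nginx
# Serve app.js.gz / app.js.br directly when the client's
# Accept-Encoding allows it, falling back to the plain file.
location /assets/ {
    gzip_static on;     # ngx_http_gzip_static_module (built in)
    brotli_static on;   # requires ngx_brotli
}
```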

The other thing is that sirv is going away as the production frontend server, and expected to be replaced by Starlette StaticFiles... however neither this nor the AppHarnessProd support pre-compressed files out of the box.

I found this old, possibly unmaintained starlette plugin: https://github.com/magnuswatn/starlette-precompressed-static

It pretty much does what we want with the exception that it doesn't support html=True mode, which we do use. I think we could probably get away with creating our own StaticFiles wrapper that provides this functionality and maintaining it in reflex for --env prod behavior.

Because compression does touch on the intersection between application and deployment, we need to make sure these options are configurable by the end user. I don't think anyone is directly reverse proxying to a reflex run --env prod --single-port instance, but they definitely could be. And if this ends up serving a compressed response, but the reverse proxy is expecting to compress the response itself, then we could end up with encoding mismatches.

…ic assets

Support gzip, brotli, and zstd build-time pre-compression via the new
`frontend_compression_formats` config option. The Vite compress plugin
now handles multiple formats, and a new PrecompressedStaticFiles middleware
negotiates Accept-Encoding to serve matching sidecar files directly.

Replace the custom http.server-based prod test harness with Starlette/Uvicorn
to match the production serving stack. Remove redundant post-build compression
pass that was re-compressing assets already handled by the Vite plugin. Move
sidecar stat calls off the async event loop into a worker thread.


Development

Successfully merging this pull request may close these issues.

Improve lighthouse performance scores (desktop and mobile)

3 participants