Conversation
Greptile Summary
This PR introduces a well-designed Lighthouse performance benchmarking infrastructure.
Key concerns:
Confidence Score: 4/5
Not safe to merge as-is. The Lighthouse-specific files are well-crafted and pose no significant risk. The score is reduced to 4 because of the P1 issue below.
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant GHA as GitHub Actions
    participant pytest as pytest runner
    participant test as test_lighthouse.py
    participant utils as lighthouse_utils.py
    participant harness as AppHarnessProd
    participant lh as Lighthouse CLI
    participant artifact as Artifact Upload
    GHA->>pytest: trigger lighthouse job (REFLEX_RUN_LIGHTHOUSE=1)
    pytest->>test: collect test_blank_template_lighthouse_scores
    test->>utils: run_blank_prod_lighthouse_benchmark(app_root, report_path)
    utils->>utils: ensure_blank_lighthouse_app(app_root)
    utils->>harness: AppHarnessProd.create(root, app_name)
    harness-->>utils: frontend_url
    utils->>lh: subprocess.run([lighthouse, url, --output=json, ...])
    lh-->>utils: JSON report written to report_path
    utils->>utils: parse scores, compare against thresholds
    utils-->>test: LighthouseBenchmarkResult(report, summary, failures)
    test->>pytest: print(result.summary)
    alt failures is empty
        pytest-->>GHA: PASS
    else failures non-empty
        test->>pytest: pytest.fail(...)
        pytest-->>GHA: FAIL
    end
    GHA->>artifact: upload .pytest-tmp/lighthouse (always)
```
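The "parse scores, compare against thresholds" step in the diagram could look roughly like this. This is a sketch with assumed names and threshold values; the actual implementation in lighthouse_utils.py may differ.

```python
import json
from pathlib import Path

# Assumed threshold values for illustration only.
THRESHOLDS = {"performance": 0.90, "accessibility": 0.90}

def check_scores(report_path: Path, thresholds: dict[str, float]) -> list[str]:
    """Read category scores from a Lighthouse JSON report and collect any
    that fall below their configured threshold."""
    report = json.loads(report_path.read_text())
    failures = []
    for category, minimum in thresholds.items():
        score = report["categories"][category]["score"]
        if score is None or score < minimum:
            failures.append(f"{category}: {score} < {minimum}")
    return failures
```

An empty return value maps to the `alt failures is empty` branch in the diagram; anything else would be surfaced through `pytest.fail(...)`.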
```python
try:
    subprocess.run(
        command,
        check=True,
        capture_output=True,
        text=True,
        timeout=180,
    )
except subprocess.CalledProcessError as err:
    pytest.fail(
        "Lighthouse execution failed.\n"
        f"Command: {' '.join(command)}\n"
        f"stdout:\n{err.stdout}\n"
        f"stderr:\n{err.stderr}"
    )
```
subprocess.TimeoutExpired not handled
subprocess.run() is called with timeout=180, but only CalledProcessError is caught. If Lighthouse takes longer than 180 seconds, subprocess.TimeoutExpired will propagate uncaught, producing a raw Python traceback in CI output rather than a friendly pytest.fail() message.
Suggested change:

```python
try:
    subprocess.run(
        command,
        check=True,
        capture_output=True,
        text=True,
        timeout=180,
    )
except subprocess.TimeoutExpired as err:
    pytest.fail(
        f"Lighthouse timed out after {err.timeout}s.\n"
        f"Command: {' '.join(command)}"
    )
except subprocess.CalledProcessError as err:
    pytest.fail(
        "Lighthouse execution failed.\n"
        f"Command: {' '.join(command)}\n"
        f"stdout:\n{err.stdout}\n"
        f"stderr:\n{err.stderr}"
    )
```
```yaml
- name: Install playwright
  run: uv run playwright install chromium --only-shell
```
The lighthouse job falls back to npx --yes lighthouse@13.1.0 (via get_lighthouse_command()) when no lighthouse binary is found, relying on whatever Node.js version is pre-installed on ubuntu-22.04. The exact Node.js version shipped with GitHub's runner images can change without notice and is not formally guaranteed.
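The fallback described above could be resolved roughly like this. This is a hypothetical sketch; the actual `get_lighthouse_command()` implementation may differ.

```python
import shutil

def get_lighthouse_command() -> list[str]:
    """Prefer a locally installed lighthouse binary; otherwise fall back
    to npx with a pinned version so runs are reproducible."""
    binary = shutil.which("lighthouse")
    if binary is not None:
        return [binary]
    # --yes skips the npx install prompt; the version pin avoids
    # silently picking up a new major release.
    return ["npx", "--yes", "lighthouse@13.1.0"]
```

Even with the version pin on the npx side, the Node.js runtime itself comes from the runner image, which is why pinning Node in the workflow still matters.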
Adding an explicit actions/setup-node step would make the workflow reproducible across runner image updates:
```yaml
- name: Set up Node.js
  uses: actions/setup-node@v4
  with:
    node-version: "20"
- name: Install playwright
  run: uv run playwright install chromium --only-shell
```

- Wrap blank template in rx.el.main landmark with page title/description
- Add aria_label/title to ColorModeIconButton and StickyBadge
- Add landing page prod benchmark alongside blank template test
- Document page structure and metadata best practices
Add a Vite build plugin that generates .gz copies of assets for sirv to serve pre-compressed. Split socket.io and radix-ui into separate chunks for better caching. Refactor the lighthouse benchmark to use `reflex run --env prod` directly instead of AppHarnessProd.
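Build-time pre-compression of this sort amounts to writing a .gz sidecar next to each compressible asset. A minimal sketch of the idea (assumed extension list; the actual Vite plugin works differently and in TypeScript):

```python
import gzip
from pathlib import Path

# Assumed set of text-based asset types worth compressing; already-compressed
# formats (images, fonts, .gz itself) are skipped.
COMPRESSIBLE = {".js", ".css", ".html", ".json", ".svg"}

def precompress(dist_dir: Path) -> int:
    """Write a .gz sidecar next to each compressible asset so a static
    server (e.g. sirv) can serve it without compressing on the fly."""
    count = 0
    for asset in dist_dir.rglob("*"):
        if asset.is_file() and asset.suffix in COMPRESSIBLE:
            sidecar = asset.with_name(asset.name + ".gz")
            sidecar.write_bytes(gzip.compress(asset.read_bytes(), compresslevel=9))
            count += 1
    return count
```

Paying the compression cost once at build time, at the highest level, is what makes this cheaper than on-the-fly compression per request.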
Shouldn't the compression be left to the reverse-proxy/ingress/deployment? If not, consider adding brotli alongside gzip?
Yes, generally. However, better performance can be achieved by configuring the reverse proxy to serve the pre-compressed files so they don't need to be compressed on the fly when requested.

@FarhanAliRaza a good point is raised here though. I think the vite compress plugin should be able to handle output to gzip, brotli, or zstd for completeness. We should add some documentation to the self-hosting page about how to enable and use pre-compressed responses in caddy and nginx.

The other thing is that I found this old, possibly unmaintained starlette plugin: https://github.com/magnuswatn/starlette-precompressed-static. It pretty much does what we want with the exception that it doesn't support

Because compression does touch on the intersection between application and deployment, we need to make sure these options are configurable by the end user. I don't think anyone is directly reverse proxying to a
…ic assets

Support gzip, brotli, and zstd build-time pre-compression via the new `frontend_compression_formats` config option. The Vite compress plugin now handles multiple formats, and a new PrecompressedStaticFiles middleware negotiates Accept-Encoding to serve matching sidecar files directly.

Replace the custom http.server-based prod test harness with Starlette/Uvicorn to match the production serving stack. Remove the redundant post-build compression pass that was re-compressing assets already handled by the Vite plugin. Move sidecar stat calls off the async event loop into a worker thread.
closes #6295