
feat: add time_to_first_review metric for pull requests #683

Open
meoyushi wants to merge 9 commits into github-community-projects:main from meoyushi:feat-time-to-first-review

Conversation


@meoyushi meoyushi commented Mar 4, 2026

Pull Request

fixes #682

feat: add time_to_first_review metric for pull requests

Proposed Changes

The metric measures the duration between:

  • Pull request creation time (or ready_for_review time)
  • First submitted GitHub review (submitted_at)

Behavior:

  • Applies only to pull requests
  • Excludes draft time (consistent with existing PR metrics)
  • Follows the existing pattern of dynamic metric assignment without modifying the constructor signature

Readiness Checklist

Author/Contributor

  • If documentation is needed for this change, has it been included in this pull request?
  • Run make lint and fix any issues that you have introduced
  • Run make test and ensure you have test coverage for the lines you are introducing
  • If publishing new data to the public (scorecards, security scan results, code quality results, live dashboards, etc.), please request review from @jeffrey-luszcz

Reviewer

  • Label as either fix, documentation, enhancement, infrastructure, maintenance, or breaking

@meoyushi meoyushi requested a review from a team as a code owner March 4, 2026 13:31
Collaborator

@jmeridth jmeridth left a comment


Thank you for your contribution.

  • There is a comment to remove
  • Please add tests.
  • Python linters are unhappy.

@meoyushi meoyushi requested a review from zkoppert as a code owner March 6, 2026 10:07
Author

meoyushi commented Mar 6, 2026

Hi, thank you for the feedback.

I made the changes requested:

  • Removed the comment from classes.py.
  • Added tests in test_time_to_first_review.py.
  • Resolved lint issues (pylint, black, flake8, isort) for the files I modified.

However, the Lint Code Base check failed. The remaining failure appears to be related to YAML_PRETTIER, as the linters are now passing.

Please do let me know if any further changes are needed. Thank you!

Collaborator

jmeridth commented Mar 6, 2026


Thank you for making the changes. It looks like Python isort linting is still failing. I've learned that looking at the summary of the linting workflow gives the best indicator of what is actually failing. We will be putting that table in PR comments soon. If you could fix that, I'd be good with approving.

Author

meoyushi commented Mar 6, 2026


Thank you very much for the guidance. I made a few formatting changes and all checks now pass.

jmeridth previously approved these changes Mar 6, 2026
Collaborator

@zkoppert zkoppert left a comment


Thanks for contributing to issue-metrics! This is a really useful metric to add, and you've clearly studied the existing codebase patterns closely. Here's a detailed review to help you take this over the finish line. 🎉


The measurement function in time_to_first_review.py is well-structured. You reuse ignore_comment() from time_to_first_response.py to get consistent filtering behavior — bots, self-reviews, and comments made during draft time are all properly excluded. Nice work!

Your use of ready_for_review_at as the start time (when present) matches how measure_time_to_first_response works in time_to_first_response.py, ensuring draft time isn't counted against review response time. ☑️


The measurement logic works, but right now the metric is computed per-issue and then never shown to the user in any output. Each existing metric in this codebase follows a full pipeline: config → compute → stats → markdown output → JSON output. Here's exactly what's needed for each step:


1. Add a HIDE_TIME_TO_FIRST_REVIEW environment variable in config.py

Every metric has a toggle so users can hide it. In config.py, inside the get_env_vars() function, there's a # Hidden columns section that reads these env vars. For example:

    # Hidden columns
    hide_time_to_close = get_bool_env_var("HIDE_TIME_TO_CLOSE", False)
    hide_time_to_first_response = get_bool_env_var("HIDE_TIME_TO_FIRST_RESPONSE", False)

You'd add a hide_time_to_first_review = get_bool_env_var("HIDE_TIME_TO_FIRST_REVIEW", False) line there, and then wire it through five other places in the same file:

  • The EnvVars class docstring (the Attributes: block near the top)
  • The EnvVars.__init__ parameter list (where hide_time_to_close: bool etc. are listed)
  • The __init__ body (where self.hide_time_to_close = hide_time_to_close etc. are assigned)
  • The __repr__ method
  • The return EnvVars(...) call at the bottom of get_env_vars()
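In isolation that wiring is hard to visualize, so here's a minimal standalone sketch of the shape. This is not the project's actual EnvVars class (which carries many more fields), and get_bool_env_var here is a simplified stand-in whose exact semantics may differ from the repo's helper:

```python
import os


def get_bool_env_var(name: str, default: bool = False) -> bool:
    # Simplified stand-in: treat a case-insensitive "true" as True.
    return os.environ.get(name, str(default)).strip().lower() == "true"


class EnvVarsSketch:
    """Miniature stand-in for config.EnvVars, showing only the new field.

    Attributes:
        hide_time_to_first_review (bool): hide the Time to first review column
    """

    def __init__(self, hide_time_to_first_review: bool):
        self.hide_time_to_first_review = hide_time_to_first_review

    def __repr__(self) -> str:
        return f"EnvVarsSketch(hide_time_to_first_review={self.hide_time_to_first_review})"


def get_env_vars_sketch() -> EnvVarsSketch:
    # Hidden columns
    hide_time_to_first_review = get_bool_env_var("HIDE_TIME_TO_FIRST_REVIEW", False)
    return EnvVarsSketch(hide_time_to_first_review)
```

The real change touches those same five spots: read the env var, document it, accept it in __init__, assign it, and pass it through the constructor call at the bottom of get_env_vars().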

Then in issue_metrics.py, guard your computation the same way other metrics are guarded. Currently your code (around line 163 of the PR branch) does:

            if pull_request:
                issue_with_metrics.time_to_first_review = measure_time_to_first_review(

But every other metric has an env var check. Compare with how time_to_first_response is guarded just a few lines below:

            if env_vars.hide_time_to_first_response is False:
                issue_with_metrics.time_to_first_response = (
                    measure_time_to_first_response(

Your code should follow the same pattern:

            if not env_vars.hide_time_to_first_review and pull_request:
                issue_with_metrics.time_to_first_review = measure_time_to_first_review(

2. Compute stats in main() inside issue_metrics.py

Your get_stats_time_to_first_review function is defined in time_to_first_review.py but is never imported or called anywhere. In issue_metrics.py, the import on line 42 currently only imports measure_time_to_first_review:

from time_to_first_review import measure_time_to_first_review

You'd expand this to also import get_stats_time_to_first_review.

Then in the main() function, there's a block where every metric computes its stats. It currently looks like:

    stats_time_to_first_response = get_stats_time_to_first_response(issues_with_metrics)
    stats_time_to_close = None
    if num_issues_closed > 0:
        stats_time_to_close = get_stats_time_to_close(issues_with_metrics)

    stats_time_to_answer = get_stats_time_to_answer(issues_with_metrics)
    stats_time_in_draft = get_stats_time_in_draft(issues_with_metrics)
    stats_pr_comments = get_stats_pr_comments(issues_with_metrics)

You'd add stats_time_to_first_review = get_stats_time_to_first_review(issues_with_metrics) in that block.
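For orientation: the stats helpers in this codebase return a dict keyed "avg", "med", and "90p" (that's what the writers read, e.g. stats_time_to_close["avg"]). Here's a hedged standalone sketch of the shape such a helper follows — the attribute name matches the PR, but the real helper may compute the 90th percentile differently than the nearest-rank method used here:

```python
import math
from datetime import timedelta
from statistics import median


def get_stats_time_to_first_review_sketch(issues):
    """Return {"avg", "med", "90p"} over non-None review times, or None."""
    times = sorted(
        issue.time_to_first_review
        for issue in issues
        if issue.time_to_first_review is not None
    )
    if not times:
        return None
    average = sum(times, timedelta()) / len(times)
    # Nearest-rank 90th percentile; the real helper may interpolate instead.
    p90 = times[math.ceil(0.9 * len(times)) - 1]
    return {"avg": average, "med": median(times), "90p": p90}
```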


3. Add the metric to markdown output in markdown_writer.py

There are three places in this file that need changes:

a) Column header — In the get_non_hidden_columns() function, each metric conditionally adds its column. For example:

    hide_time_to_close = env_vars.hide_time_to_close
    if not hide_time_to_close:
        columns.append("Time to close")

    hide_time_to_answer = env_vars.hide_time_to_answer
    if not hide_time_to_answer:
        columns.append("Time to answer")

You'd add a similar block for "Time to first review".
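Concretely, the new block would mirror the existing ones. A self-contained sketch — the function here is a trimmed stand-in for the real get_non_hidden_columns(), which handles many more columns:

```python
def get_non_hidden_columns_sketch(env_vars):
    # Trimmed stand-in for markdown_writer.get_non_hidden_columns().
    columns = ["Title", "URL"]

    # ... existing metric columns elided ...

    hide_time_to_first_review = env_vars.hide_time_to_first_review
    if not hide_time_to_first_review:
        columns.append("Time to first review")

    return columns
```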

b) Summary stats table — In the write_overall_metrics_tables() function, each metric writes a row to the summary table. For example:

        if "Time to close" in columns:
            if stats_time_to_close is not None:
                file.write(
                    f"| Time to close "
                    f"| {stats_time_to_close['avg']} "
                    f"| {stats_time_to_close['med']} "
                    f"| {stats_time_to_close['90p']} |\n"
                )
            else:
                file.write("| Time to close | None | None | None |\n")

You'd add an equivalent block for "Time to first review". This also means write_to_markdown() and write_overall_metrics_tables() need a new parameter for the stats, and the caller in main() needs to pass stats_time_to_first_review.

c) Per-issue row — In the write_to_markdown() function, the per-issue table loop writes each metric cell:

                    if "Time to close" in columns:
                        file.write(f" {issue.time_to_close} |")
                    if "Time to answer" in columns:
                        file.write(f" {issue.time_to_answer} |")

Add the equivalent for time_to_first_review.
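Pulled out of the loop, the addition might look like this (a runnable sketch with a stand-in writer function; the real loop writes more columns than shown):

```python
def write_issue_row_sketch(file, issue, columns):
    # Sketch of the per-issue loop body with the new cell appended.
    file.write(f"| {issue.title} |")
    if "Time to close" in columns:
        file.write(f" {issue.time_to_close} |")
    if "Time to answer" in columns:
        file.write(f" {issue.time_to_answer} |")
    # New cell, mirroring the existing ones:
    if "Time to first review" in columns:
        file.write(f" {issue.time_to_first_review} |")
    file.write("\n")
```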


4. Add the metric to JSON output in json_writer.py

In json_writer.py, each metric appears in two places:

a) Summary stats — Near the top of the write_to_json function, each metric extracts its stats. For example:

    # time to close
    average_time_to_close = None
    med_time_to_close = None
    p90_time_to_close = None
    if stats_time_to_close is not None:
        average_time_to_close = stats_time_to_close["avg"]
        med_time_to_close = stats_time_to_close["med"]
        p90_time_to_close = stats_time_to_close["90p"]

And then they're added to the metrics dict:

        "average_time_to_close": str(average_time_to_close),
        "median_time_to_close": str(med_time_to_close),
        "90_percentile_time_to_close": str(p90_time_to_close),

You'd add the same pattern for time_to_first_review.

b) Per-issue data — In the loop that builds the issues list, each issue's metrics are serialized:

                "time_to_first_response": str(issue.time_to_first_response),
                "time_to_close": str(issue.time_to_close),
                "time_to_answer": str(issue.time_to_answer),

You'd add "time_to_first_review": str(issue.time_to_first_review), here.

The write_to_json() function signature also needs a new stats_time_to_first_review parameter, and the caller in main() needs to pass it.


5. Add tests for get_stats_time_to_first_review

This function is currently untested. You can follow the pattern used for get_stats_time_to_first_response in test_time_to_first_response.py. Key cases to cover: a normal list of issues with review times, a list where all review times are None (should return None), and an empty list.

Also consider adding a test that exercises the ready_for_review_at path (where it's used as the start time instead of issue.created_at) and one that verifies ignore_users actually filters out a matching reviewer.
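The ready_for_review_at case comes down to simple datetime arithmetic, which makes the expected values easy to pin down in tests. A hedged sketch — the function below is an illustrative stand-in capturing only the start-time selection, not the real measure_time_to_first_review's review filtering:

```python
from datetime import datetime, timedelta, timezone


def first_review_delta_sketch(created_at, ready_for_review_at, first_review_at):
    # Stand-in showing only the start-time selection: prefer
    # ready_for_review_at (so draft time is excluded), else created_at.
    if first_review_at is None:
        return None
    start = ready_for_review_at if ready_for_review_at is not None else created_at
    return first_review_at - start
```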


6. Small improvement: add context to the except TypeError

In time_to_first_review.py line 44, the bare except TypeError: return None silently swallows errors. The analogous code in time_to_first_response.py (inside measure_time_to_first_response) prints a helpful message:

            except TypeError as e:
                print(
                    f"An error occurred processing review comments. Perhaps the review contains a ghost user. {e}"
                )

Adding a similar message would help users debug issues in production.


Summary

The hardest part — getting the measurement logic right — is already done well. What remains is the integration plumbing to surface the metric to users. I know it's a lot of files to touch, but the patterns are very consistent across the codebase, so each step is essentially copy-and-adapt from an existing metric like time_to_close.

Happy to answer any questions if you get stuck on any of these steps! 🙌

@jmeridth jmeridth dismissed their stale review March 7, 2026 19:53

Pending changes/fixes

@jmeridth
Copy link
Collaborator

jmeridth commented Mar 8, 2026

Kudos @zkoppert for catching the items you did. 🤦

@meoyushi meoyushi force-pushed the feat-time-to-first-review branch from 806829c to 7317684 on March 10, 2026 05:46


Successfully merging this pull request may close these issues.

Feature: Add 'Time to First/incorporated Review' metric for all pull requests

3 participants