LCORE-1206: add tests for too long question#1232

Open
radofuchs wants to merge 1 commit into lightspeed-core:main from radofuchs:LCORE_1206_OLS_test_alignment

Conversation


@radofuchs radofuchs commented Feb 27, 2026

Description

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement
  • Benchmarks improvement

Tools used to create PR

Identify any AI code assistants used in this PR (for transparency and review context)

  • Assisted-by: Cursor
  • Generated by: (e.g., tool name and version; N/A if not used)

Related Tickets & Documents

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • Tests
    • Expanded end-to-end coverage for long-input queries (with and without shields), including scenarios asserting HTTP 413 and streamed error responses.
    • Added streaming-response handling and new steps to validate streamed SSE error messages and too-long query behavior.
  • Chores
    • Automated shield management for test scenarios tagged to disable shields (unregister/re-register around tests).
    • Improved diagnostics emitting container status, health checks, and recent logs for restoration and failure debugging.


coderabbitai bot commented Feb 27, 2026

Walkthrough

Unregisters/re-registers Llama Stack shields in server-mode E2E scenarios tagged @disable-shields, adds a Llama Stack shield helper module, enhances streaming-response parsing to capture SSE errors, adds tests for too-long queries (with/without shields), and emits Docker/stack diagnostics on restoration failures.

Changes

Cohort / File(s) — Summary

  • Shield management utilities (tests/e2e/utils/llama_stack_shields.py): New helper module exposing unregister_shield() and register_shield(), which call an AsyncLlamaStackClient, return/store provider IDs for restoration, and handle connection/status errors and cleanup.
  • Test environment & diagnostics (tests/e2e/features/environment.py): Before/after scenario hooks to unregister/re-register "llama-guard" when a scenario is tagged @disable-shields (server mode only); adds _print_llama_stack_diagnostics() to emit container state, health, and last logs; extends error/restore handling and stdout logging.
  • Streaming & query step logic (tests/e2e/features/steps/llm_query_response.py): Adds _read_streamed_response() to aggregate SSE streams, extends parsing to capture stream_error, updates ask flows to attach the full streamed body, and adds steps ask_question_too_long_authorized() and check_streamed_response_error_message().
  • Feature scenarios (tests/e2e/features/query.feature, tests/e2e/features/streaming_query.feature): Adds scenarios that submit too-long queries with shields enabled/disabled to assert 413 vs. streamed-error behavior; one scenario is marked skipped.

Sequence Diagram(s)

sequenceDiagram
  participant Runner as Test Runner
  participant Env as environment.py hooks
  participant Shields as llama_stack_shields
  participant API as Llama Stack API
  participant Docker as Docker Engine

  Runner->>Env: start scenario (tag `@disable-shields`)
  Env->>Shields: unregister_shield("llama-guard")
  Shields->>API: list/delete shield
  API-->>Shields: provider IDs or 404/400
  Shields-->>Env: return provider IDs (store on context)

  Runner->>API: execute test (query / streaming)
  alt streaming SSE contains "error" event
    API-->>Runner: SSE stream with error event
    Runner->>Steps: parse stream, attach stream_error
  end

  alt restoration health-check fails or command error
    Env->>Docker: inspect containers / health / logs
    Docker-->>Env: container state + last logs
    Env-->>Runner: print diagnostics
  end

  Runner->>Env: end scenario
  Env->>Shields: register_shield(shield_id, provider_ids)
  Shields->>API: shields.register
  API-->>Shields: success or error
  Shields-->>Env: result
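The restore step at the end of the diagram only makes sense when a shield was actually unregistered, which is the crux of the review's inline comment below. A hypothetical helper capturing that decision (the `(provider_id, provider_shield_id)` tuple shape is assumed from the walkthrough, not taken from the real module):

```python
# Hypothetical restore-decision logic; the saved tuple shape is an
# assumption based on the PR summary, not the actual helper's API.
from typing import Callable, Optional, Tuple

Saved = Optional[Tuple[str, str]]


def should_restore(saved: Saved) -> bool:
    """Only re-register the shield when unregister actually found one."""
    return saved is not None and all(saved)


def restore_shield(saved: Saved, register: Callable[[str, str], None]) -> bool:
    """Invoke register(provider_id, provider_shield_id) only when a shield
    existed before the scenario; return True if a restore happened."""
    if not should_restore(saved):
        return False
    register(*saved)
    return True
```

Keeping this check separate from the hooks makes the "skip restore when nothing was unregistered" rule easy to test in isolation.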

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • tisnik
  • are-ces
🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Linked Issues check — ⚠️ Warning: The PR adds end-to-end tests for query handling, but the linked issue #1103 objectives describe 'token quotas' and 'token metrics tracking', which differ from the 'too long question' focus of this PR. Resolution: Clarify whether the PR addresses token quota tests as specified in #1103 or whether the linked issue reference is incorrect. The current changes focus on prompt length validation, not token metrics.
  • Out of Scope Changes check — ❓ Inconclusive: All changes are focused on end-to-end testing infrastructure for query handling; however, shield management functionality (unregister/re-register shields) appears tangential to the stated objective of testing 'too long questions'. Resolution: Verify whether the shield disable/enable functionality is necessary for the 'too long question' tests or whether it represents scope creep. Consider separating the shield management changes if they are not directly required.
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed: The title focuses on adding tests for 'too long question', which directly aligns with the main changes: new end-to-end test scenarios for handling too-long queries with and without shields.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which meets the required threshold of 80.00%.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
tests/e2e/features/steps/llm_query_response.py (1)

193-193: Remove raw streamed body dump from assertion step.

Line 193 prints full streamed content on every run. This can create noisy logs and expose payload content unnecessarily in CI output.

🔧 Suggested fix
 `@then`("The streamed response contains error message {message}")
 def check_streamed_response_error_message(context: Context, message: str) -> None:
@@
     assert context.response is not None, "Request needs to be performed first"
-    print(context.response.text)
     parsed = _parse_streaming_response(context.response.text)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/e2e/features/steps/llm_query_response.py` at line 193, Remove the raw
dump call print(context.response.text) from the assertion step (the print
statement that outputs context.response.text); either delete it or replace it
with a non-sensitive debug/log call that only emits a short, masked summary
(e.g., status and length) behind a verbosity flag or logger.debug, ensuring the
assertion logic (in the step handling LLM responses) remains unchanged.
tests/e2e/features/query.feature (1)

220-225: Consider validating token metrics in the new active long-query query scenario.

Line 220-225 checks status/body only. Adding token metric assertions here would strengthen coverage for quota-related regressions on too-long query failures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/e2e/features/query.feature` around lines 220 - 225, Add assertions to
the "Check if query with shields returns 413 when question is too long for model
context" scenario to validate token metrics in the too-long-query response;
after the step "When I use \"query\" to ask question with too-long query and
authorization header" and the existing status/body checks, assert that the
response includes token-related fields (e.g., token_count/token_usage or
relevant headers) and that those values reflect the query exceeded the
model/context limit (e.g., token_count > model_limit or a flagged usage field).
Update the scenario's expectations so the step verifying the 413 and "Prompt is
too long" message also checks the presence and correctness of these token metric
indicators returned by the query handler.
tests/e2e/features/streaming_query.feature (1)

182-195: Add explicit token-metrics assertions to new long-query streaming scenarios.

Line 182 and Line 190 validate status/error behavior, but they don’t assert token metric behavior. Please add metric capture/assertions so these paths also protect quota accounting regressions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/e2e/features/streaming_query.feature` around lines 182 - 195, Add
explicit token-metrics capture and assertions around the two long-query
streaming scenarios that use the "streaming_query" step (the scenario with
shields that returns 413 and the `@disable-shields` scenario that returns 200).
Before invoking the "streaming_query" step, record current token-related metrics
(e.g., prompt_tokens, completion_tokens, tokens_consumed) from the same metrics
endpoint or helper your test suite uses (call the existing getMetrics / metrics
helper), then call "streaming_query" and re-fetch metrics and assert the deltas
match expected behavior for each path (ensure the 413 path still records prompt
token accounting if expected, and the `@disable-shields` path records streamed
error token usage); add these metric assertions into the scenarios in
tests/e2e/features/streaming_query.feature adjacent to the existing status/body
checks.
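The metric-delta check the reviewer asks for above can be reduced to a small pure helper. This is a hedged sketch — the metric names (`prompt_tokens`, `tokens_consumed`, etc.) and the idea of snapshotting a metrics endpoint before and after the call are assumptions from the review comment, not the suite's actual getMetrics helper:

```python
# Hypothetical before/after metrics comparison; metric names are
# assumptions taken from the review comment, not the real endpoint.
def metric_deltas(before: dict, after: dict) -> dict:
    """Return the per-metric change between two counter snapshots."""
    keys = set(before) | set(after)
    return {k: after.get(k, 0) - before.get(k, 0) for k in keys}
```

A scenario would snapshot the counters, run the streaming step, then assert on `metric_deltas(before, after)` so each path (413 vs. streamed error) pins down its expected token accounting.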
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/e2e/features/environment.py`:
- Around line 176-190: When restoring shields after a `@disable-shields` scenario,
don't unconditionally call register_shield with defaults if unregister_shield
returned no previous provider; change the restore logic to check the saved tuple
from unregister_shield (the values stored on context.llama_guard_provider_id and
context.llama_guard_provider_shield_id) and only call register_shield to
re-create the shield when saved is truthy (i.e., the shield existed before). If
saved is falsy/None, skip register_shield so you don't create a new default
shield and thus preserve scenario isolation; update any code paths that assume
register_shield will always run to use this presence check instead.

---

Nitpick comments:
In `@tests/e2e/features/query.feature`:
- Around line 220-225: Add assertions to the "Check if query with shields
returns 413 when question is too long for model context" scenario to validate
token metrics in the too-long-query response; after the step "When I use
\"query\" to ask question with too-long query and authorization header" and the
existing status/body checks, assert that the response includes token-related
fields (e.g., token_count/token_usage or relevant headers) and that those values
reflect the query exceeded the model/context limit (e.g., token_count >
model_limit or a flagged usage field). Update the scenario's expectations so the
step verifying the 413 and "Prompt is too long" message also checks the presence
and correctness of these token metric indicators returned by the query handler.

In `@tests/e2e/features/steps/llm_query_response.py`:
- Line 193: Remove the raw dump call print(context.response.text) from the
assertion step (the print statement that outputs context.response.text); either
delete it or replace it with a non-sensitive debug/log call that only emits a
short, masked summary (e.g., status and length) behind a verbosity flag or
logger.debug, ensuring the assertion logic (in the step handling LLM responses)
remains unchanged.

In `@tests/e2e/features/streaming_query.feature`:
- Around line 182-195: Add explicit token-metrics capture and assertions around
the two long-query streaming scenarios that use the "streaming_query" step (the
scenario with shields that returns 413 and the `@disable-shields` scenario that
returns 200). Before invoking the "streaming_query" step, record current
token-related metrics (e.g., prompt_tokens, completion_tokens, tokens_consumed)
from the same metrics endpoint or helper your test suite uses (call the existing
getMetrics / metrics helper), then call "streaming_query" and re-fetch metrics
and assert the deltas match expected behavior for each path (ensure the 413 path
still records prompt token accounting if expected, and the `@disable-shields` path
records streamed error token usage); add these metric assertions into the
scenarios in tests/e2e/features/streaming_query.feature adjacent to the existing
status/body checks.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9a6024a and 51e8477.

📒 Files selected for processing (5)
  • tests/e2e/features/environment.py
  • tests/e2e/features/query.feature
  • tests/e2e/features/steps/llm_query_response.py
  • tests/e2e/features/streaming_query.feature
  • tests/e2e/utils/llama_stack_shields.py

@radofuchs radofuchs requested a review from tisnik February 27, 2026 08:59

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
tests/e2e/features/environment.py (1)

176-195: ⚠️ Potential issue | 🟠 Major

Reset shield restore state before unregister to prevent stale re-registration.

Line 187 can throw before context values are overwritten, and Line 247/Line 248 then read whatever was left from a previous scenario. That can trigger register_shield(...) for a scenario that never successfully unregistered a shield.

🔧 Suggested fix
 def before_scenario(context: Context, scenario: Scenario) -> None:
@@
     if "disable-shields" in scenario.effective_tags:
+        context.llama_guard_restore_required = False
+        context.llama_guard_provider_id = None
+        context.llama_guard_provider_shield_id = None
+
         if context.is_library_mode:
             scenario.skip(
                 "Shield unregister/register only applies in server mode (Llama Stack as a "
                 "separate service). In library mode the app's shields cannot be disabled from e2e."
             )
             return
         try:
             saved = unregister_shield("llama-guard")
-            context.llama_guard_provider_id = saved[0] if saved else None
-            context.llama_guard_provider_shield_id = saved[1] if saved else None
-            print("Unregistered shield llama-guard for this scenario")
+            if saved:
+                context.llama_guard_provider_id = saved[0]
+                context.llama_guard_provider_shield_id = saved[1]
+                context.llama_guard_restore_required = True
+                print("Unregistered shield llama-guard for this scenario")
+            else:
+                print("Shield llama-guard was not registered; nothing to restore")
         except Exception as e:  # pylint: disable=broad-exception-caught
             scenario.skip(
                 f"Could not unregister shield (is Llama Stack reachable?): {e}"
             )
             return
@@
-    if "disable-shields" in scenario.effective_tags:
+    if (
+        "disable-shields" in scenario.effective_tags
+        and not context.is_library_mode
+        and getattr(context, "llama_guard_restore_required", False)
+    ):
         provider_id = getattr(context, "llama_guard_provider_id", None)
         provider_shield_id = getattr(context, "llama_guard_provider_shield_id", None)
-        if provider_id is not None and provider_shield_id is not None:
-            try:
-                register_shield(
-                    "llama-guard",
-                    provider_id=provider_id,
-                    provider_shield_id=provider_shield_id,
-                )
-                print("Re-registered shield llama-guard")
-            except Exception as e:  # pylint: disable=broad-exception-caught
-                print(f"Warning: Could not re-register shield: {e}")
+        try:
+            register_shield(
+                "llama-guard",
+                provider_id=provider_id,
+                provider_shield_id=provider_shield_id,
+            )
+            context.llama_guard_restore_required = False
+            print("Re-registered shield llama-guard")
+        except Exception as e:  # pylint: disable=broad-exception-caught
+            print(f"Warning: Could not re-register shield: {e}")

Also applies to: 245-259

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/e2e/features/environment.py` around lines 176 - 195, Reset the
shield-related context state before attempting to unregister so stale values
from prior scenarios can't be read if unregister_shield("llama-guard") raises;
specifically, set context.llama_guard_provider_id = None and
context.llama_guard_provider_shield_id = None immediately before calling
unregister_shield, then in the try branch assign the saved tuple into those
fields (saved[0]/saved[1] or None) and in the except ensure they remain None;
apply the same defensive-reset pattern to the corresponding restore/register
code that reads these context fields (the register_shield/restore block).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@tests/e2e/features/environment.py`:
- Around line 176-195: Reset the shield-related context state before attempting
to unregister so stale values from prior scenarios can't be read if
unregister_shield("llama-guard") raises; specifically, set
context.llama_guard_provider_id = None and
context.llama_guard_provider_shield_id = None immediately before calling
unregister_shield, then in the try branch assign the saved tuple into those
fields (saved[0]/saved[1] or None) and in the except ensure they remain None;
apply the same defensive-reset pattern to the corresponding restore/register
code that reads these context fields (the register_shield/restore block).

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 51e8477 and 5ddefe5.

📒 Files selected for processing (1)
  • tests/e2e/features/environment.py

@radofuchs radofuchs force-pushed the LCORE_1206_OLS_test_alignment branch from 5ddefe5 to 7935999 on March 1, 2026 13:04

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/e2e/features/environment.py (1)

309-323: ⚠️ Potential issue | 🟠 Major

Retry loop is bypassed on non-zero health check exit.

When curl exits with a non-zero status (indicating service unhealthy), line 322's check=True raises CalledProcessError. The inner exception handler at line 327 only catches TimeoutExpired, so CalledProcessError propagates to the outer handler at line 339, immediately exiting the retry loop instead of attempting the remaining retries.

Additionally, line 324's if result.returncode == 0: check is unreachable dead code when check=True is set, since non-zero exits raise an exception rather than returning.

🔧 Proposed retry-safe pattern
                 result = subprocess.run(
                     [
                         "docker",
                         "exec",
                         "llama-stack",
                         "curl",
                         "-f",
                         f"http://{context.hostname_llama}:{context.port_llama}/v1/health",
                     ],
                     capture_output=True,
                     timeout=5,
-                    check=True,
+                    check=False,
                 )
                 if result.returncode == 0:
                     print("✓ Llama Stack connection restored successfully")
                     break
+                print(
+                    f"Health check failed on attempt {attempt + 1}/6 "
+                    f"(exit={result.returncode})"
+                )
             except subprocess.TimeoutExpired:
                 print(f"⏱ Health check timed out on attempt {attempt + 1}/6")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/e2e/features/environment.py` around lines 309 - 323, The retry loop in
tests/e2e/features/environment.py currently uses subprocess.run(..., check=True)
inside the "for attempt in range(6)" loop which causes CalledProcessError to
escape the inner handler and short-circuit retries; also the "if
result.returncode == 0" check is dead when check=True. Change the subprocess.run
invocation to use check=False (or remove check=True), so failures return a
CompletedProcess you can inspect; then update the inner exception handling to
still catch subprocess.TimeoutExpired and after the call inspect
result.returncode (use result.returncode == 0 to break/return success, otherwise
continue the retry loop and log the failure). Ensure references to
subprocess.run, result.returncode, TimeoutExpired (and optionally
CalledProcessError if you choose to catch it) are updated accordingly.
🧹 Nitpick comments (2)
tests/e2e/features/steps/llm_query_response.py (1)

91-91: Remove debug print() calls from test step paths.

At Line 91 and Line 193, these prints add noisy logs and may expose full streamed payloads without improving assertions.

🧹 Proposed cleanup
-    print(f"Request: query length={len(long_query)}, model={context.default_model}")
     ask_question_authorized(context, endpoint)
@@
-    print(context.response.text)
     parsed = _parse_streaming_response(context.response.text)

Also applies to: 193-193

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/e2e/features/steps/llm_query_response.py` at line 91, Remove the debug
print statements that leak full payloads and clutter test logs—specifically the
print(f"Request: query length={len(long_query)}, model={context.default_model}")
call in the llm_query_response step and the other print at the later step;
delete these prints or replace them with a low-verbosity logger.debug call
(e.g., using the test suite's logger) that does not print full payloads, keeping
only minimal safe info if needed and ensuring no assertions rely on the print
output.
tests/e2e/utils/llama_stack_shields.py (1)

55-58: Broaden error handling for shields.delete() responses to cover both 400 and 404 status codes.

At line 57, the code only tolerates status_code == 400 with "not found" text. However, shields.delete() can return either 400 or 404 depending on whether the server exposes the DELETE endpoint and error message formatting varies across API versions. If the API returns 404 or uses different error text, the exception will be unnecessarily raised.

🔧 Recommended hardening
         except APIStatusError as e:
             # 400 "not found": shield already absent, scenario can proceed
-            if e.status_code == 400 and "not found" in str(e).lower():
+            err = str(e).lower()
+            if e.status_code in {400, 404} and (
+                "not found" in err or "does not exist" in err
+            ):
                 return None
             raise
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/e2e/utils/llama_stack_shields.py` around lines 55 - 58, The except
block catching APIStatusError from shields.delete() is too strict: change the
condition in the except APIStatusError as e handler so it tolerates both
status_code 400 and 404 (e.g., check if e.status_code in (400, 404)) and keep
the existing "not found" text check for 400 while allowing 404 to return None
even if message differs; locate the except block handling APIStatusError around
shields.delete() and update the condition that currently checks `e.status_code
== 400 and "not found" in str(e).lower()`.
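The hardened condition from the suggestion above can be isolated into a predicate. This is a sketch under the review's assumptions — the status codes and message fragments tolerated here come from the comment, not from a documented llama-stack-client error contract:

```python
# Hypothetical "shield already absent" predicate mirroring the suggested
# hardening; status codes and message text are assumptions.
def is_shield_absent_error(status_code: int, message: str) -> bool:
    """Treat 400/404 "not found"-style errors from shields.delete()
    as "shield already absent" so the scenario can proceed."""
    text = message.lower()
    return status_code in (400, 404) and (
        "not found" in text or "does not exist" in text
    )
```

Factoring the check out this way also makes it cheap to extend if another API version surfaces a third status code or message variant.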
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/e2e/features/environment.py`:
- Around line 186-195: Reset context.llama_guard_provider_id and
context.llama_guard_provider_shield_id to None before calling unregister_shield
in before_scenario so stale values can't be used later by the restore logic;
specifically, set those context fields to None immediately before the try that
calls unregister_shield (the block using unregister_shield and setting
context.llama_guard_provider_id / context.llama_guard_provider_shield_id), and
apply the same defensive reset in the symmetric block that handles restore (the
other before_scenario/after_scenario block around the restore logic) so both
paths clear prior state on failure.

---

Outside diff comments:
In `@tests/e2e/features/environment.py`:
- Around line 309-323: The retry loop in tests/e2e/features/environment.py
currently uses subprocess.run(..., check=True) inside the "for attempt in
range(6)" loop which causes CalledProcessError to escape the inner handler and
short-circuit retries; also the "if result.returncode == 0" check is dead when
check=True. Change the subprocess.run invocation to use check=False (or remove
check=True), so failures return a CompletedProcess you can inspect; then update
the inner exception handling to still catch subprocess.TimeoutExpired and after
the call inspect result.returncode (use result.returncode == 0 to break/return
success, otherwise continue the retry loop and log the failure). Ensure
references to subprocess.run, result.returncode, TimeoutExpired (and optionally
CalledProcessError if you choose to catch it) are updated accordingly.

---

Nitpick comments:
In `@tests/e2e/features/steps/llm_query_response.py`:
- Line 91: Remove the debug print statements that leak full payloads and clutter
test logs—specifically the print(f"Request: query length={len(long_query)},
model={context.default_model}") call in the llm_query_response step and the
other print at the later step; delete these prints or replace them with a
low-verbosity logger.debug call (e.g., using the test suite's logger) that does
not print full payloads, keeping only minimal safe info if needed and ensuring
no assertions rely on the print output.

In `@tests/e2e/utils/llama_stack_shields.py`:
- Around line 55-58: The except block catching APIStatusError from
shields.delete() is too strict: change the condition in the except
APIStatusError as e handler so it tolerates both status_code 400 and 404 (e.g.,
check if e.status_code in (400, 404)) and keep the existing "not found" text
check for 400 while allowing 404 to return None even if message differs; locate
the except block handling APIStatusError around shields.delete() and update the
condition that currently checks `e.status_code == 400 and "not found" in
str(e).lower()`.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5ddefe5 and 7935999.

📒 Files selected for processing (5)
  • tests/e2e/features/environment.py
  • tests/e2e/features/query.feature
  • tests/e2e/features/steps/llm_query_response.py
  • tests/e2e/features/streaming_query.feature
  • tests/e2e/utils/llama_stack_shields.py
✅ Files skipped from review due to trivial changes (1)
  • tests/e2e/features/streaming_query.feature
