
fix(components): use strict equality for null check in ChatFireworks #5948

Open
badhra-ajaz wants to merge 2 commits into FlowiseAI:main from badhra-ajaz:fix/strict-equality-null-checks

Conversation

@badhra-ajaz

What

Replace loose equality (!=) with strict equality (!==) for the null check in ChatFireworks.toJSON().

Why

The preceding typeof result.kwargs === 'object' check already filters out undefined (since typeof undefined === 'undefined'), but it does not filter out null (since typeof null === 'object'). Using strict equality !== null makes the intent clearer and avoids relying on the implicit coercion of != null.

The rest of the codebase consistently uses !== null for null checks.
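The pitfall can be seen in a short standalone sketch (sanitizeKwargs is a hypothetical helper written for illustration, not the actual ChatFireworks code):

```typescript
// typeof null is 'object', so a typeof check alone does not exclude null.
// sanitizeKwargs is a hypothetical helper illustrating the guard.
function sanitizeKwargs(result: { kwargs?: unknown }): void {
    // Without the explicit `!== null` check, the delete below would
    // throw a TypeError whenever kwargs is null.
    if ('kwargs' in result && typeof result.kwargs === 'object' && result.kwargs !== null) {
        delete (result.kwargs as Record<string, unknown>).openai_api_key
    }
}

const withSecret = { kwargs: { openai_api_key: 'sk-secret', model: 'x' } as Record<string, unknown> }
sanitizeKwargs(withSecret)
// withSecret.kwargs no longer contains openai_api_key; model survives

const withNull = { kwargs: null }
sanitizeKwargs(withNull) // safe: the strict null check skips the delete
```

Note that `!= null` happens to behave the same here (it excludes both null and undefined via coercion); the change is about making the intent explicit rather than changing behavior.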

Changes

  • packages/components/nodes/chatmodels/ChatFireworks/core.ts: changed != null to !== null on line 84

Replace loose equality (!=) with strict equality (!==) for the null
check in toJSON(). The preceding typeof check already handles the
undefined case, so strict null comparison is more precise.
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines a null check in the ChatFireworks.toJSON() method by transitioning from loose to strict equality. The primary goal is to enhance code robustness and maintain consistency with established coding practices across the project, ensuring more predictable behavior when handling null values.

Highlights

  • Strict Equality for Null Check: Replaced loose equality (!=) with strict equality (!==) for a null check within the ChatFireworks.toJSON() method. This change improves code clarity and consistency with the rest of the codebase, avoiding reliance on implicit type coercion.
Changelog
  • packages/components/nodes/chatmodels/ChatFireworks/core.ts
    • Updated the null check from != null to !== null on line 84.
Activity
  • No human activity has been recorded for this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment
Code Review

This pull request aims to improve code clarity and consistency by updating a loose equality check (!= null) to a strict one (!== null). While this is a positive change, a critical security vulnerability has been identified: the ChatFireworks.toJSON() method's secret removal logic is incomplete. It fails to remove apiKey and fireworksApiKey from the serialized output, potentially exposing sensitive API keys. Additionally, there's an opportunity to further simplify the conditional logic for improved readability.

Comment on lines +84 to 87
if ('kwargs' in result && typeof result.kwargs === 'object' && result.kwargs !== null) {
delete result.kwargs.openai_api_key
delete result.kwargs.configuration
}

security-high (high severity)

The toJSON method's primary purpose is to sanitize sensitive information. Currently, it fails to remove apiKey and fireworksApiKey from result.kwargs, which are critical for preventing the leakage of sensitive API keys in exported flows or UI responses. While the change from != to !== improves clarity, the conditional logic also needs to be expanded to ensure all sensitive keys are properly deleted. This is a high-severity security concern.

        if ('kwargs' in result && typeof result.kwargs === 'object' && result.kwargs !== null) {
          delete result.kwargs.openai_api_key
          delete result.kwargs.openAIApiKey
          delete result.kwargs.apiKey
          delete result.kwargs.fireworksApiKey
          delete result.kwargs.configuration
        }

…ging requests

Currently, ChatOpenAI only sets a timeout if explicitly provided by the user.
When no timeout is configured, LangChain/OpenAI SDK calls can hang indefinitely
on network issues, server unresponsiveness, or slow model responses.

This causes:
- Chat flows stuck waiting forever for LLM responses
- Resource exhaustion when multiple chat nodes run concurrently
- Poor user experience (no error feedback, hanging UI)
- Wasted server resources on hung HTTP requests

This changes the timeout handling to always set a timeout: either the user-provided value or a default of 60 seconds (60000 ms).

**PeakInfer Issue:** Missing default timeout on LLM API calls
**Impact:** Prevents indefinite hangs and improves reliability
**Category:** Reliability + Latency

Changes:
- Set default timeout=60000ms (60s) in ChatOpenAI node (line 288)
- Preserves existing streaming=true default
- Users can still override via the Timeout input field in UI
- Applies to all OpenAI chat completions in Flowise workflows

This follows PeakInfer best practices for production LLM systems:
- Prevents resource exhaustion from hung requests
- Enables faster error detection and recovery
- Improves system resilience under network issues
- 60s timeout balances patience for long responses vs system health

🤖 Generated with PeakInfer LLM inference optimization
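The "always set a timeout" pattern described in this commit can be sketched as follows (resolveOptions, ChatOptions, and DEFAULT_TIMEOUT_MS are hypothetical names; the actual Flowise node wiring differs):

```typescript
// Sketch of the default-timeout pattern, under assumed names.
const DEFAULT_TIMEOUT_MS = 60_000 // 60s default, as described above

interface ChatOptions {
    timeout: number
    streaming: boolean
}

function resolveOptions(userInput: { timeout?: string }): ChatOptions {
    // UI inputs typically arrive as strings; parse if present.
    const parsed = userInput.timeout !== undefined ? parseInt(userInput.timeout, 10) : undefined
    return {
        // The user-provided value wins; otherwise fall back to the 60s
        // default so requests can never hang indefinitely.
        timeout: parsed !== undefined && Number.isFinite(parsed) ? parsed : DEFAULT_TIMEOUT_MS,
        streaming: true // preserve the existing streaming default
    }
}
```

With this shape, `resolveOptions({})` yields the 60s default while `resolveOptions({ timeout: '30000' })` keeps the user's 30s override, matching the behavior the commit message describes.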
