fix(components): use strict equality for null check in ChatFireworks #5948

badhra-ajaz wants to merge 2 commits into FlowiseAI:main from
Conversation
Replace loose equality (`!=`) with strict equality (`!==`) for the null check in `toJSON()`. The preceding `typeof` check already handles the `undefined` case, so a strict null comparison is more precise.
Code Review
This pull request aims to improve code clarity and consistency by updating a loose equality check (`!= null`) to a strict one (`!== null`). While this is a positive change, a critical security vulnerability has been identified: the `ChatFireworks.toJSON()` method's secret removal logic is incomplete. It fails to remove `apiKey` and `fireworksApiKey` from the serialized output, potentially exposing sensitive API keys. Additionally, there's an opportunity to further simplify the conditional logic for improved readability.
```typescript
if ('kwargs' in result && typeof result.kwargs === 'object' && result.kwargs !== null) {
    delete result.kwargs.openai_api_key
    delete result.kwargs.configuration
}
```
The `toJSON` method's primary purpose is to sanitize sensitive information. Currently, it fails to remove `apiKey` and `fireworksApiKey` from `result.kwargs`, which are critical for preventing the leakage of sensitive API keys in exported flows or UI responses. While the change from `!=` to `!==` improves clarity, the conditional logic also needs to be expanded to ensure all sensitive keys are properly deleted. This is a high-severity security concern.
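The expanded deletion the reviewer asks for could be expressed compactly with a deny-list loop. This is a sketch, not the repository's actual code; the key list and the `sanitize` helper name follow this review and are assumptions, not confirmed against the Flowise source.

```typescript
// Sketch: strip known secret fields from a serialized chat-model object.
// The key names are taken from this review thread (assumptions).
const SECRET_KEYS = ['openai_api_key', 'openAIApiKey', 'apiKey', 'fireworksApiKey', 'configuration']

function sanitize(result: { kwargs?: unknown }): void {
    // Same guard as the diff: typeof filters out undefined, the strict
    // !== null comparison filters out null (typeof null === 'object').
    if ('kwargs' in result && typeof result.kwargs === 'object' && result.kwargs !== null) {
        const kwargs = result.kwargs as Record<string, unknown>
        for (const key of SECRET_KEYS) {
            delete kwargs[key]
        }
    }
}

const serialized = { kwargs: { apiKey: 'secret', temperature: 0.7 } }
sanitize(serialized)
console.log(serialized.kwargs) // { temperature: 0.7 } — secrets removed, settings kept
```

Keeping the keys in one array makes it harder for a future credential field to be forgotten than a hand-written run of `delete` statements.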
```typescript
if ('kwargs' in result && typeof result.kwargs === 'object' && result.kwargs !== null) {
    delete result.kwargs.openai_api_key
    delete result.kwargs.openAIApiKey
    delete result.kwargs.apiKey
    delete result.kwargs.fireworksApiKey
    delete result.kwargs.configuration
}
```

…ging requests

Currently, ChatOpenAI only sets a timeout if one is explicitly provided by the user. When no timeout is configured, LangChain/OpenAI SDK calls can hang indefinitely on network issues, server unresponsiveness, or slow model responses. This causes:

- Chat flows stuck waiting forever for LLM responses
- Resource exhaustion when multiple chat nodes run concurrently
- Poor user experience (no error feedback, hanging UI)
- Wasted server resources on hung HTTP requests

This changes the timeout handling to always set a timeout: either the user-provided value or a default of 60 seconds (60000 ms).

**PeakInfer Issue:** Missing default timeout on LLM API calls
**Impact:** Prevents indefinite hangs and improves reliability
**Category:** Reliability + Latency

Changes:

- Set default timeout=60000ms (60s) in the ChatOpenAI node (line 288)
- Preserves the existing streaming=true default
- Users can still override via the Timeout input field in the UI
- Applies to all OpenAI chat completions in Flowise workflows

This follows PeakInfer best practices for production LLM systems:

- Prevents resource exhaustion from hung requests
- Enables faster error detection and recovery
- Improves system resilience under network issues
- A 60 s timeout balances patience for long responses against system health

🤖 Generated with PeakInfer LLM inference optimization
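The timeout fallback described above can be sketched as a small helper. `resolveTimeout` and `DEFAULT_TIMEOUT_MS` are illustrative names for this sketch, not Flowise or LangChain APIs.

```typescript
// Sketch of the "always set a timeout" pattern: use the user-provided
// value when given, otherwise fall back to a 60 s default so SDK calls
// can never wait indefinitely.
const DEFAULT_TIMEOUT_MS = 60_000

function resolveTimeout(userTimeoutMs?: number): number {
    // Treat undefined and non-positive values as "not configured";
    // whether 0 should mean "no timeout" is a design choice.
    return userTimeoutMs !== undefined && userTimeoutMs > 0 ? userTimeoutMs : DEFAULT_TIMEOUT_MS
}

console.log(resolveTimeout())       // 60000 — default applied
console.log(resolveTimeout(15000))  // 15000 — user override wins
```

The resolved value would then be passed to the chat-model constructor's timeout option instead of being set only when the user supplied one.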
What
Replace loose equality (`!=`) with strict equality (`!==`) for the null check in `ChatFireworks.toJSON()`.

Why

The preceding `typeof result.kwargs === 'object'` check already filters out `undefined` (since `typeof undefined === 'undefined'`), but it does not filter out `null` (since `typeof null === 'object'`). Using strict equality `!== null` makes the intent clearer and avoids relying on the implicit coercion of `!= null`.

The rest of the codebase consistently uses `!== null` for null checks.

Changes

- `packages/components/nodes/chatmodels/ChatFireworks/core.ts`: `!= null` → `!== null` on line 84
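The `null` vs `undefined` distinction driving this change can be demonstrated directly in a few lines:

```typescript
// typeof alone cannot distinguish null from a real object: this is a
// long-standing JavaScript quirk, so the typeof guard needs a separate
// null check.
console.log(typeof undefined) // 'undefined' — already excluded by the typeof check
console.log(typeof null)      // 'object'    — slips through the typeof check

const kwargs: object | null = null

// Loose equality conflates null and undefined via implicit coercion:
console.log(kwargs != null)  // false
// Strict equality states the intent exactly — "not null", nothing more:
console.log(kwargs !== null) // false
```

Both comparisons behave the same here, which is why the change is about clarity and consistency rather than behavior: after the `typeof` check, `!= null` and `!== null` are equivalent, but only the strict form says exactly what it tests.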