Added support for custom LLM provider URLs for OpenAI and Anthropic, allowing use of OpenAI-compatible providers such as LM Studio, EXO, and LiteLLM #9731

dpage wants to merge 1 commit into pgadmin-org:master
Conversation
…allowing use of OpenAI-compatible providers such as LM Studio, EXO, and LiteLLM. pgadmin-org#9703

- Add configurable API URL fields for the OpenAI and Anthropic providers
- Make API keys optional when using custom URLs (for local providers)
- Auto-clear the model dropdown when provider settings change
- Make the refresh button use the current unsaved form values
- Update documentation and release notes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Walkthrough

Adds support for custom API URLs for the Anthropic and OpenAI LLM providers, enabling use of compatible endpoints. The implementation includes new configuration constants, updated provider client constructors that accept an `api_url` parameter, enhanced model-fetching flows, and UI changes for preference dependency tracking.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User
    participant Pref as Preferences UI
    participant Schema as SchemaView
    participant LLMModule as LLM Module
    participant Provider as Provider Client
    participant API as Custom API Endpoint
    User->>Pref: Set custom API URL
    Pref->>Pref: Emit depchange event
    Schema->>Schema: Detect dependency change
    Schema->>LLMModule: Trigger model refresh with api_url
    LLMModule->>LLMModule: Get api_url from preferences/config
    LLMModule->>Provider: Initialize with api_url
    Provider->>Provider: Build endpoint from api_url
    Provider->>API: Fetch models from custom endpoint
    API-->>Provider: Return models
    Provider-->>LLMModule: Return models
    LLMModule-->>Schema: Update model list
    Schema-->>Pref: Render updated models
```
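The URL-resolution step in the flow above ("Get api_url from preferences/config", then "Build endpoint from api_url") can be sketched in Python. The `preferences` dict, `resolve_models_endpoint` helper, and default base URLs below are illustrative assumptions, not pgAdmin's actual internals:

```python
# Sketch of resolving the models endpoint from an optional custom API URL.
# The `preferences` dict and the default base URLs are assumptions for
# illustration; pgAdmin's real preference lookup differs.

DEFAULT_BASE_URLS = {
    'openai': 'https://api.openai.com/v1',
    'anthropic': 'https://api.anthropic.com/v1',
}

def resolve_models_endpoint(provider, preferences):
    """Build the /models endpoint, preferring a user-configured api_url."""
    api_url = (preferences.get('api_url') or '').strip()
    # Fall back to the provider's default base URL when no custom URL is set.
    base_url = (api_url or DEFAULT_BASE_URLS[provider]).rstrip('/')
    return f'{base_url}/models'

print(resolve_models_endpoint('openai', {}))
# With a custom LM Studio-style local URL:
print(resolve_models_endpoint('openai', {'api_url': 'http://localhost:1234/v1/'}))
```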
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed
🧹 Nitpick comments (5)
web/pgadmin/static/js/components/SelectRefresh.jsx (1)
182-188: Add missing PropTypes for new props.

The component now receives `options`, `optionsReloadBasis`, and `onChange` as props (destructured at line 64), but these are not declared in PropTypes.

🔧 Proposed fix

```diff
 SelectRefresh.propTypes = {
   required: PropTypes.bool,
   label: PropTypes.string,
   className: CustomPropTypes.className,
   helpMessage: PropTypes.string,
   testcid: PropTypes.string,
   controlProps: PropTypes.object,
+  options: PropTypes.oneOfType([PropTypes.array, PropTypes.func]),
+  optionsReloadBasis: PropTypes.any,
+  onChange: PropTypes.func,
 };
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@web/pgadmin/static/js/components/SelectRefresh.jsx` around lines 182-188, SelectRefresh's propTypes are missing declarations for the newly used props: add PropTypes entries for options (array or arrayOf/object as appropriate), optionsReloadBasis (string/number/oneOfType depending on usage), and onChange (func) to the SelectRefresh.propTypes object so the destructured props at the top (options, optionsReloadBasis, onChange) are validated; update the SelectRefresh.propTypes block to include these three keys matching the types used by the component.

web/pgadmin/llm/client.py (2)
149-153: Consider updating the error message to reflect that a custom URL is now an alternative.

The error message still says "Anthropic API key not configured", but a custom API URL is now also a valid configuration path. Consider updating it to clarify both options:

💡 Suggested improvement

```diff
 if not api_key and not api_url:
     raise LLMClientError(LLMError(
-        message="Anthropic API key not configured",
+        message="Anthropic API key or custom API URL not configured",
         provider="anthropic"
     ))
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@web/pgadmin/llm/client.py` around lines 149-153, the error raised when both api_key and api_url are missing uses the message "Anthropic API key not configured", which is no longer accurate; update the LLMError message in the block that raises LLMClientError (the check using api_key and api_url) to state that either an Anthropic API key or a custom API URL must be provided (e.g., "Anthropic API key or custom API URL not configured") and keep provider="anthropic" and the same exception types (LLMError, LLMClientError) unchanged.
163-167: Same suggestion for the OpenAI error message.

For consistency, update the OpenAI error message as well:

💡 Suggested improvement

```diff
 if not api_key and not api_url:
     raise LLMClientError(LLMError(
-        message="OpenAI API key not configured",
+        message="OpenAI API key or custom API URL not configured",
         provider="openai"
     ))
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@web/pgadmin/llm/client.py` around lines 163-167, the OpenAI error raised when both api_key and api_url are missing should state the configuration options more clearly; modify the LLMClientError instantiation that wraps LLMError so the message clearly says which configuration is missing (e.g., "OpenAI API key or custom API URL not configured"), without exposing secrets; update the code that constructs LLMError (the branch checking api_key and api_url) to produce the improved message while keeping provider="openai" and raising LLMClientError(LLMError(...)).

web/pgadmin/llm/__init__.py (2)
635-658: Consider validating the URL scheme for security hardening.

The `urllib.request.urlopen` call accepts arbitrary URL schemes, including `file://`. While this is user-configured and represents low risk, you could add scheme validation to restrict it to `http://` and `https://` only.

Additionally, the exception at lines 656-658 should use `raise ... from e` to preserve the exception chain.

💡 Suggested improvement

```diff
 def _fetch_anthropic_models(api_key, api_url=''):
     ...
     base_url = (api_url or 'https://api.anthropic.com/v1').rstrip('/')
+
+    # Validate URL scheme for security
+    if not base_url.startswith(('http://', 'https://')):
+        raise ValueError('API URL must use http:// or https:// scheme')
+
     url = f'{base_url}/models'
     ...
     except urllib.error.URLError as e:
-        raise ConnectionError(
+        raise ConnectionError(
             f'Cannot connect to Anthropic API: {e.reason}'
-        )
+        ) from e
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@web/pgadmin/llm/__init__.py` around lines 635-658, validate the constructed base_url/url scheme before creating the Request and calling urllib.request.urlopen: parse base_url (or url) and ensure the scheme is either "http" or "https", and raise a ValueError if not, then proceed to build urllib.request.Request and call urllib.request.urlopen with SSL_CONTEXT; also update the exception handling in the except blocks that currently raise ConnectionError or ValueError to use "raise ... from e" so the original urllib.error.HTTPError/URLError (variable e) is preserved in the exception chain (refer to base_url, url, urllib.request.Request, urllib.request.urlopen, SSL_CONTEXT).
702-725: The same suggestions apply to OpenAI model fetching.

Apply the same URL scheme validation and exception chaining improvements to `_fetch_openai_models`.

💡 Suggested improvement

```diff
 def _fetch_openai_models(api_key, api_url=''):
     ...
     base_url = (api_url or 'https://api.openai.com/v1').rstrip('/')
+
+    # Validate URL scheme for security
+    if not base_url.startswith(('http://', 'https://')):
+        raise ValueError('API URL must use http:// or https:// scheme')
+
     url = f'{base_url}/models'
     ...
     except urllib.error.URLError as e:
-        raise ConnectionError(
+        raise ConnectionError(
             f'Cannot connect to OpenAI API: {e.reason}'
-        )
+        ) from e
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@web/pgadmin/llm/__init__.py` around lines 702-725, the _fetch_openai_models function currently builds base_url from api_url without validating the URL scheme and re-raises new exceptions without chaining; update it to parse and validate api_url's scheme (using urllib.parse.urlparse) and only allow 'https' (or 'http' if you accept it), otherwise raise a ValueError referencing api_url, and modify the exception handlers for urllib.error.HTTPError and urllib.error.URLError to re-raise ConnectionError/ValueError using exception chaining (raise ... from e) so the original error is preserved; reference symbols: _fetch_openai_models, base_url, api_url, urllib.parse.urlparse, urllib.error.HTTPError, urllib.error.URLError, SSL_CONTEXT.
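Taken together, the two hardening suggestions (scheme validation plus `raise ... from e` chaining) could look roughly like the following standalone sketch. The function names here are illustrative, not the actual pgAdmin code, and the sketch omits auth headers and SSL_CONTEXT handling:

```python
import urllib.error
import urllib.parse
import urllib.request

def validate_base_url(base_url):
    """Reject schemes other than http/https (e.g. file://) before fetching."""
    scheme = urllib.parse.urlparse(base_url).scheme
    if scheme not in ('http', 'https'):
        raise ValueError(f'API URL must use http:// or https://, got: {base_url}')
    return base_url.rstrip('/')

def fetch_models(api_url):
    """Fetch the raw /models response from a validated base URL."""
    base_url = validate_base_url(api_url)
    url = f'{base_url}/models'
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read()
    except urllib.error.URLError as e:
        # `from e` keeps the original URLError in __cause__ for debugging.
        raise ConnectionError(f'Cannot connect to API: {e.reason}') from e
```

With chaining in place, a caller who logs the `ConnectionError` still sees the underlying DNS or TLS failure via the exception's `__cause__`.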
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c51fd4b0-8590-46be-9af7-2330e380a96e
📒 Files selected for processing (13)

- docs/en_US/ai_tools.rst
- docs/en_US/preferences.rst
- docs/en_US/release_notes_9_14.rst
- web/config.py
- web/pgadmin/llm/__init__.py
- web/pgadmin/llm/client.py
- web/pgadmin/llm/providers/anthropic.py
- web/pgadmin/llm/providers/openai.py
- web/pgadmin/llm/utils.py
- web/pgadmin/preferences/static/js/components/PreferencesHelper.jsx
- web/pgadmin/static/js/SchemaView/MappedControl.jsx
- web/pgadmin/static/js/components/FormComponents.jsx
- web/pgadmin/static/js/components/SelectRefresh.jsx
Just to add my 2c: note that OpenAI-compatible API servers (for example, vLLM) may not require an API key.
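A minimal sketch of how a client could handle such servers by making the Authorization header conditional; `build_openai_headers` is a hypothetical helper for illustration, not pgAdmin's actual code:

```python
def build_openai_headers(api_key=''):
    """Attach Authorization only when a key is configured; local
    OpenAI-compatible servers such as vLLM or LM Studio may not need one."""
    headers = {'Content-Type': 'application/json'}
    if api_key:
        headers['Authorization'] = f'Bearer {api_key}'
    return headers
```

This pairs naturally with the PR's "API keys optional when using custom URLs" change: a blank key simply yields a request without credentials.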