54 changes: 54 additions & 0 deletions .github/workflows/bun.yml
@@ -0,0 +1,54 @@
name: Bun CI

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
  merge_group:

jobs:
  test:
    strategy:
      fail-fast: false
      matrix:
        os: ['ubuntu-latest', 'macos-latest']

    name: Test (Bun, ${{ matrix.os }})
    runs-on: ${{ matrix.os }}

    concurrency:
      group: bun-${{ github.workflow }}-#${{ github.event.pull_request.number || github.head_ref || github.ref }}-(${{ matrix.os }})
      cancel-in-progress: true

    steps:
      - uses: actions/checkout@v6

      - uses: oven-sh/setup-bun@v2
        with:
          bun-version: latest

      - uses: actions/setup-node@v6
        with:
          node-version: '22'

      - uses: pnpm/action-setup@v4

      - run: pnpm install --frozen-lockfile

      - name: Patch vitest imports
        run: |
          for f in test/*.test.ts; do
            [ -e "$f" ] || exit 0
            sed --version >/dev/null 2>&1 && \
              sed -i "s@'vite-plus/test'@'vitest'@g" "$f" || \
              sed -i '' "s@'vite-plus/test'@'vitest'@g" "$f"
          done
          sed --version >/dev/null 2>&1 && \
            sed -i "s@'vite-plus'@'vite'@g" vite.config.ts || \
            sed -i '' "s@'vite-plus'@'vite'@g" vite.config.ts
          jq 'del(.overrides) | del(.pnpm.overrides)' package.json > package.json.tmp && mv package.json.tmp package.json
          pnpm install -D vitest@latest vite@latest
Comment on lines +27 to +51

Copilot AI Mar 28, 2026

CI uses floating versions (bun-version: latest and pnpm install -D vitest@latest vite@latest). This makes the workflow non-deterministic and can introduce sudden failures unrelated to repo changes. Consider pinning Bun (and the patched vitest/vite versions) to known-good versions (or a stable semver range) so CI results are reproducible.

Copilot uses AI. Check for mistakes.
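A sketch of the pinned variant this comment suggests — the version numbers below are placeholders, not verified known-good releases:

```yaml
      - uses: oven-sh/setup-bun@v2
        with:
          bun-version: 1.2.5   # placeholder: pin to a release CI has actually passed on

      # ...and pin the swapped-in test deps instead of @latest, e.g.:
      # pnpm install -D vitest@3.0.5 vite@6.1.0   # placeholder versions
```

Pinning trades automatic upgrades for reproducibility; a scheduled job or Renovate can then bump the pins deliberately.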

      - name: Run tests with Bun
        run: bun --bun node_modules/vitest/vitest.mjs run --reporter=dot
23 changes: 23 additions & 0 deletions src/FormData.ts
@@ -36,4 +36,27 @@ export class FormData extends _FormData {

    return contentDisposition;
  }

  /**
   * Convert FormData to Buffer by consuming the CombinedStream.
   * This is needed for Bun compatibility since Bun's undici
   * doesn't support Node.js Stream objects as request body.
   *
   * Note: CombinedStream (which form-data extends) requires
   * resume() to start data flow, unlike standard Readable streams.
   */
  async toBuffer(): Promise<Buffer> {
    return new Promise<Buffer>((resolve, reject) => {
      const chunks: Buffer[] = [];
      this.on('data', (chunk: Buffer | string) => {
        // CombinedStream emits boundary/header strings alongside Buffer data
        chunks.push(typeof chunk === 'string' ? Buffer.from(chunk) : chunk);
      });
      this.on('end', () => resolve(Buffer.concat(chunks)));
      this.on('error', reject);
Comment on lines +51 to +56

Copilot AI Mar 28, 2026

FormData.toBuffer() attaches data/end/error listeners with .on(...) but never removes them. If the same FormData instance is re-used or toBuffer() is called more than once, listeners can accumulate and cause leaks or duplicate resolution attempts. Prefer once(...) handlers and/or explicitly remove listeners in both resolve and reject paths.

Suggested change
this.on('data', (chunk: Buffer | string) => {
// CombinedStream emits boundary/header strings alongside Buffer data
chunks.push(typeof chunk === 'string' ? Buffer.from(chunk) : chunk);
});
this.on('end', () => resolve(Buffer.concat(chunks)));
this.on('error', reject);
this.once('data', (chunk: Buffer | string) => {
// CombinedStream emits boundary/header strings alongside Buffer data
chunks.push(typeof chunk === 'string' ? Buffer.from(chunk) : chunk);
});
this.once('end', () => resolve(Buffer.concat(chunks)));
this.once('error', reject);

      // CombinedStream pauses by default and only starts
      // flowing when piped or explicitly resumed
      this.resume();
    });
Comment on lines +48 to +60

Copilot AI Mar 28, 2026

toBuffer() attaches data/end/error listeners with on(...) but never removes them. If toBuffer() is called more than once on the same instance, listeners will accumulate and the method may resolve with duplicated data or leak memory. Consider using once(...) (and/or removing listeners on resolve/reject) to make this method safe against repeated calls.

  }
Comment on lines +48 to +61
medium

This method buffers the entire FormData stream into memory. While the comment explains this is for Bun compatibility, it's important to be aware of the memory implications. For large forms with large files, this could lead to high memory usage and potential performance problems. This is a significant trade-off for compatibility.

}
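The two listener-leak comments can be addressed without the literal `once('data')` edit from the suggestion block, which would capture only the first chunk. A sketch under the assumption that CombinedStream behaves like the stand-in below (paused until `resume()`, mixed string/Buffer chunks): keep `on('data')`, but detach it once the stream settles.

```typescript
import { EventEmitter } from 'node:events';

// Stand-in for CombinedStream: paused until resume(), emits mixed chunks, then 'end'.
class FakeCombinedStream extends EventEmitter {
  resume(): void {
    queueMicrotask(() => {
      this.emit('data', '--boundary\r\n');       // boundary/header arrives as a string
      this.emit('data', Buffer.from('payload')); // file data arrives as a Buffer
      this.emit('end');
    });
  }
}

// 'data' must stay an on() listener (many chunks), but it is removed explicitly
// when the stream ends or errors, so repeated calls cannot accumulate listeners.
function toBuffer(stream: FakeCombinedStream): Promise<Buffer> {
  return new Promise<Buffer>((resolve, reject) => {
    const chunks: Buffer[] = [];
    const onData = (chunk: Buffer | string) => {
      chunks.push(typeof chunk === 'string' ? Buffer.from(chunk) : chunk);
    };
    const cleanup = () => stream.removeListener('data', onData);
    stream.on('data', onData);
    stream.once('end', () => { cleanup(); resolve(Buffer.concat(chunks)); });
    stream.once('error', (err: Error) => { cleanup(); reject(err); });
    stream.resume();
  });
}
```

This keeps every chunk while still making a second `toBuffer()` call on the same instance safe.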
89 changes: 74 additions & 15 deletions src/HttpClient.ts
@@ -2,6 +2,7 @@ import diagnosticsChannel from 'node:diagnostics_channel';
import type { Channel } from 'node:diagnostics_channel';
import { EventEmitter } from 'node:events';
import { createReadStream } from 'node:fs';
import { readFile } from 'node:fs/promises';
import { STATUS_CODES } from 'node:http';
import type { LookupFunction } from 'node:net';
import { basename } from 'node:path';
@@ -111,8 +112,18 @@ export type ClientOptions = {
};

export const VERSION: string = 'VERSION';
export const isBun: boolean = !!process.versions.bun;

function getRuntimeInfo(): string {
if (isBun) {
return `Bun/${process.versions.bun}`;
}
return `Node.js/${process.version.substring(1)}`;
}

// 'node-urllib/4.0.0 Node.js/18.19.0 (darwin; x64)'
export const HEADER_USER_AGENT: string = `node-urllib/${VERSION} Node.js/${process.version.substring(1)} (${process.platform}; ${process.arch})`;
// 'node-urllib/4.0.0 Bun/1.2.5 (darwin; x64)'
export const HEADER_USER_AGENT: string = `node-urllib/${VERSION} ${getRuntimeInfo()} (${process.platform}; ${process.arch})`;
Comment on lines +117 to +126
⚠️ Potential issue | 🟡 Minor

Keep the canonical Node.js/... User-Agent token.

This replaces the repo-wide Node.js/<version> token with Bun/<version>, which is a behavior change for servers/tests that parse the existing UA format. If Bun needs to be surfaced, add it separately instead of changing the canonical value.

As per coding guidelines, **/*.ts: Set User-Agent header to node-urllib/{version} Node.js/{version} ({platform}; {arch}).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/HttpClient.ts` around lines 117 - 126, The User-Agent construction
currently swaps the canonical "Node.js/..." token for "Bun/..." when running in
Bun; change getRuntimeInfo so it always returns the canonical Node.js token
(e.g. `Node.js/${process.version.substring(1)}`) and adjust HEADER_USER_AGENT to
append a separate Bun token only when running under Bun (e.g. add `
Bun/${process.versions.bun}` when isBun is true) so HEADER_USER_AGENT preserves
`node-urllib/${VERSION} Node.js/{version} (...)` and surfaces Bun separately;
update references in getRuntimeInfo and HEADER_USER_AGENT (and use VERSION)
accordingly.
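The guideline-preserving variant described in this comment can be sketched as follows; `'4.0.0'` stands in for the real `VERSION` constant:

```typescript
// Keep the canonical Node.js token and append a Bun token only when running
// under Bun (process.versions.bun is only defined there).
const bunVersion = (process.versions as any).bun as string | undefined;
const runtime = `Node.js/${process.version.substring(1)}${bunVersion ? ` Bun/${bunVersion}` : ''}`;
const userAgent = `node-urllib/4.0.0 ${runtime} (${process.platform}; ${process.arch})`;
// e.g. 'node-urllib/4.0.0 Node.js/22.1.0 Bun/1.2.5 (darwin; arm64)' under Bun
```

Existing User-Agent parsers keep matching the `Node.js/{version}` token, and Bun is still surfaced.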


function getFileName(stream: Readable): string {
const filePath: string = (stream as any).path;
@@ -427,16 +438,21 @@ export class HttpClient extends EventEmitter {
let maxRedirects = args.maxRedirects ?? 10;

try {
// Bun's undici doesn't honor headersTimeout/bodyTimeout,
// use AbortSignal.timeout() as fallback
let requestSignal = args.signal;
if (isBun) {
const bunTimeoutSignal = AbortSignal.timeout(headersTimeout + bodyTimeout);
requestSignal = args.signal ? (AbortSignal as any).any([bunTimeoutSignal, args.signal]) : bunTimeoutSignal;

Copilot AI Mar 28, 2026

On Bun, args.signal is allowed to be an EventEmitter (per RequestOptions.signal), but this code unconditionally feeds it into AbortSignal.any(...). If a caller passes an EventEmitter signal under Bun, this will throw synchronously (since AbortSignal.any expects AbortSignal instances), changing behavior from a request rejection to a runtime error. Consider detecting non-AbortSignal values when isBun and either (a) ignore/omit the user signal, or (b) throw a clear, early validation error explaining that Bun only supports AbortSignal for signal.

Suggested change
requestSignal = args.signal ? (AbortSignal as any).any([bunTimeoutSignal, args.signal]) : bunTimeoutSignal;
if (args.signal == null) {
// No user-provided signal: use only the timeout-based AbortSignal
requestSignal = bunTimeoutSignal;
} else if (args.signal instanceof AbortSignal) {
// User provided an AbortSignal: safely combine it with the timeout signal
requestSignal = (AbortSignal as any).any([bunTimeoutSignal, args.signal]);
} else {
// On Bun, RequestOptions.signal may be an EventEmitter or other type.
// Pass it through as-is to avoid AbortSignal.any throwing at runtime.
requestSignal = args.signal;
}

}
const requestOptions: IUndiciRequestOption = {
method,
// disable undici auto redirect handler
// maxRedirections: 0,
headersTimeout,
headers,
bodyTimeout,
opaque: internalOpaque,
dispatcher: args.dispatcher ?? this.#dispatcher,
signal: args.signal,
signal: requestSignal,
reset: false,
};
if (typeof args.highWaterMark === 'number') {
@@ -500,14 +516,24 @@
let value: any;
if (typeof file === 'string') {
fileName = basename(file);
value = createReadStream(file);
// Bun's CombinedStream can't pipe file streams
value = isBun ? await readFile(file) : createReadStream(file);
} else if (Buffer.isBuffer(file)) {
Comment on lines +519 to 521
⚠️ Potential issue | 🟠 Major

Please don't silently materialize every Bun multipart upload in memory.

This path reads file inputs with readFile(), drains readable uploads into Buffer.concat(), and then materializes the full multipart body via toBuffer(). Large uploads lose streaming backpressure and can OOM the process. A safer fallback is to reject unsupported Bun streaming uploads or enforce a size cap instead of buffering unboundedly.

Based on learnings, urllib is built as a Node.js HTTP client library on top of undici with features including basic/digest authentication, redirections, timeout handling, gzip/brotli compression, file uploads, and HTTP/2 support.

Also applies to: 528-535, 548-550

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/HttpClient.ts` around lines 521 - 523, The code currently forces Bun
multipart streams into memory via readFile / Buffer.concat / toBuffer which can
OOM for large uploads; update the HttpClient upload handling (check isBun and
the multipart file/stream branches in the upload/multipart logic) to avoid
silently materializing streams: detect when a file input is a Readable/stream
and on Bun either (a) throw a clear, documented error rejecting streaming
uploads on Bun with guidance to provide a file path or a Buffer, or (b) enforce
a configurable size cap (e.g., maxUploadBytes) and reject if the stream exceeds
it; do not call readFile or Buffer.concat unconditionally for stream
inputs—either require file paths (use createReadStream) or write to a temporary
file on disk with streaming I/O before building the multipart body so
backpressure is preserved; ensure the change touches the
isBun/readFile/createReadStream/toBuffer/Buffer.concat code paths used for
multipart handling.

fileName = customFileName || `bufferfile${index}`;
value = file;
} else if (file instanceof Readable || isReadable(file as any)) {
fileName = getFileName(file) || customFileName || `streamfile${index}`;
isStreamingRequest = true;
value = file;
if (isBun) {
// Bun's CombinedStream can't pipe Node.js streams
const streamChunks: Buffer[] = [];
for await (const chunk of file) {

Copilot AI Mar 28, 2026

In the Bun path for args.files, you buffer non-Buffer inputs by iterating with for await (const chunk of file). This will throw if file is only “readable-like” (passes isReadable) but is not actually async-iterable. To avoid runtime errors for legacy/old-style streams, wrap non-Readable streams into a real Readable (e.g., via new Readable().wrap(...)) or check for Symbol.asyncIterator before using for await.

Suggested change
for await (const chunk of file) {
const sourceStream: any =
typeof (file as any)[Symbol.asyncIterator] === 'function'
? file
: new Readable({ read() {} }).wrap(file as any);
for await (const chunk of sourceStream) {

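The wrap-based guard can be sketched against a hypothetical old-style stream that starts flowing on its own, as pre-streams2 streams did:

```typescript
import { EventEmitter } from 'node:events';
import { Readable } from 'node:stream';

// Old-style stream: emits 'data'/'end' but has no Symbol.asyncIterator.
class LegacyStream extends EventEmitter {
  constructor() {
    super();
    // old-style streams flowed as soon as data was available
    setImmediate(() => {
      this.emit('data', Buffer.from('legacy'));
      this.emit('end');
    });
  }
}

// Guard from the comment: wrap only when async iteration is unavailable.
function asAsyncIterable(stream: any): AsyncIterable<Buffer | string> {
  return typeof stream[Symbol.asyncIterator] === 'function'
    ? stream
    : new Readable({ read() {} }).wrap(stream);
}
```

Modern Readables pass through unchanged; only readable-likes get wrapped before `for await` consumes them.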
streamChunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
}
value = Buffer.concat(streamChunks);
Comment on lines +526 to +532
medium

This implementation buffers the entire stream into memory before sending the request when running on Bun. This introduces a risk of high memory consumption for large file uploads, which could lead to performance issues or out-of-memory errors in memory-constrained environments.

} else {
isStreamingRequest = true;
value = file;
}
}
const mimeType = mime.lookup(fileName) || '';
formData.append(field, value, {
@@ -517,17 +543,26 @@
debug('formData append field: %s, mimeType: %s, fileName: %s', field, mimeType, fileName);
}
Object.assign(headers, formData.getHeaders());
requestOptions.body = formData;
if (isBun) {
// Bun's undici can't consume Node.js streams as request body
requestOptions.body = await formData.toBuffer();
} else {
requestOptions.body = formData;
}
} else if (args.content) {
if (!isGETOrHEAD) {
// handle content
requestOptions.body = args.content;
if (isBun && args.content instanceof FormData) {
requestOptions.body = await (args.content as FormData).toBuffer();
} else {
requestOptions.body = args.content;
}
if (args.contentType) {
headers['content-type'] = args.contentType;
Comment on lines +555 to 561
⚠️ Potential issue | 🔴 Critical

Buffering Bun FormData needs to copy the multipart headers too.

Unlike the args.files branch, this path turns args.content into a bare Buffer without copying getHeaders(). Once the body is just bytes, the multipart boundary metadata is gone unless content-type is carried over here as well.

🐛 Proposed fix
           if (isBun && args.content instanceof FormData) {
-            requestOptions.body = await (args.content as FormData).toBuffer();
+            const formHeaders = args.content.getHeaders();
+            if (!headers['content-type'] && formHeaders['content-type']) {
+              headers['content-type'] = formHeaders['content-type'];
+            }
+            requestOptions.body = await args.content.toBuffer();
           } else {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (isBun && args.content instanceof FormData) {
requestOptions.body = await (args.content as FormData).toBuffer();
} else {
requestOptions.body = args.content;
}
if (args.contentType) {
headers['content-type'] = args.contentType;
if (isBun && args.content instanceof FormData) {
const formHeaders = args.content.getHeaders();
if (!headers['content-type'] && formHeaders['content-type']) {
headers['content-type'] = formHeaders['content-type'];
}
requestOptions.body = await args.content.toBuffer();
} else {
requestOptions.body = args.content;
}
if (args.contentType) {
headers['content-type'] = args.contentType;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/HttpClient.ts` around lines 557 - 563, When running in Bun and
args.content is a FormData, the code currently buffers it to a Buffer but
forgets to copy multipart headers (boundary) from FormData.getHeaders(), so the
content-type/boundary is lost; update the isBun branch where args.content is
handled (look for isBun, args.content, requestOptions.body and FormData casting)
to call (args.content as FormData).getHeaders() and merge its content-type into
the headers map before/after setting requestOptions.body so the multipart
boundary is preserved (ensure headers['content-type'] is set from getHeaders()
when present).
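The header merge from the proposed fix, sketched with a stub in place of form-data's real `getHeaders()` (which returns the boundary-bearing content-type):

```typescript
// Stub standing in for form-data's getHeaders() on the instance being buffered.
function getFormHeaders(): Record<string, string> {
  return { 'content-type': 'multipart/form-data; boundary=----stub' };
}

// Mirror of the proposed fix: carry the boundary over before the body becomes
// bare bytes, but never clobber an explicitly-set content-type.
function mergeContentType(headers: Record<string, string>): Record<string, string> {
  const formHeaders = getFormHeaders();
  if (!headers['content-type'] && formHeaders['content-type']) {
    headers['content-type'] = formHeaders['content-type'];
  }
  return headers;
}
```

Without this merge, the server receives a multipart payload it cannot split, since the boundary only lives in that header.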

} else if (typeof args.content === 'string' && !headers['content-type']) {
headers['content-type'] = 'text/plain;charset=UTF-8';
}
isStreamingRequest = isReadable(args.content);
isStreamingRequest = !isBun && isReadable(args.content);

Copilot AI Mar 28, 2026

On Bun, args.content can still be a Node.js Readable (you later convert requestOptions.body via Readable.toWeb). However isStreamingRequest is currently forced to false on Bun (!isBun && isReadable(args.content)), which means retries/socketErrorRetry/redirects may stay enabled for a non-rewindable streaming body. That can break retries/redirect handling (subsequent attempts will re-use an already-consumed stream). Consider marking isStreamingRequest whenever the body is stream-like regardless of runtime (and disabling retry/redirect accordingly), even if Bun requires a Web stream conversion.

Suggested change
isStreamingRequest = !isBun && isReadable(args.content);
isStreamingRequest = isReadable(args.content);

Copilot uses AI. Check for mistakes.
⚠️ Potential issue | 🔴 Critical

Bun content streams still need the streaming retry/redirect guard.

Line 567 forces Bun args.content streams to look replayable, but Lines 620-621 still send them as a one-shot web stream. That means Lines 609-613 never disable redirect/socketErrorRetry for this path, so a 30x or retry can end up reusing a consumed body from options.content.

🔒 Proposed fix
-          isStreamingRequest = !isBun && isReadable(args.content);
+          isStreamingRequest = isReadable(args.content);
As per coding guidelines, `**/*.ts`: Disable retry and redirect functionality for streaming requests.

Also applies to: 609-613, 620-622

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/HttpClient.ts` at line 567, The code treats Bun stream bodies as
replayable by setting isStreamingRequest = !isBun && isReadable(args.content);
instead, detect readable streams regardless of isBun and ensure you disable
redirect and socketErrorRetry for those cases: update the isStreamingRequest
logic to consider isReadable(args.content) even when isBun, and in the
request-paths (the blocks referencing redirect and socketErrorRetry around where
options.content/args.content are used) explicitly turn off redirect and
socketErrorRetry when args.content or options.content is a readable stream so
streaming requests are not retried or redirected.

}
} else if (args.data) {
const isStringOrBufferOrReadable =
@@ -579,6 +614,11 @@
args.socketErrorRetry = 0;
}

// Bun's undici can't consume Node.js Readable as request body
if (isBun && requestOptions.body instanceof Readable) {
requestOptions.body = Readable.toWeb(requestOptions.body) as any;
Comment on lines +618 to +619

Copilot AI Mar 28, 2026

The Bun body conversion only runs when requestOptions.body instanceof Readable. For “old-style” streams that satisfy isReadable(...) but are not instanceof Readable (which this codebase already supports elsewhere via isReadable/Readable().wrap(...)), Bun will still receive a Node stream and likely fail. Consider changing this guard to use isReadable(requestOptions.body) and wrapping non-Readable streams before calling Readable.toWeb(...).

Suggested change
if (isBun && requestOptions.body instanceof Readable) {
requestOptions.body = Readable.toWeb(requestOptions.body) as any;
if (isBun && isReadable(requestOptions.body)) {
const nodeReadable =
requestOptions.body instanceof Readable
? requestOptions.body
: new Readable({ read() {} }).wrap(requestOptions.body as any);
requestOptions.body = Readable.toWeb(nodeReadable) as any;

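The conversion itself can be exercised standalone; `Readable.toWeb` has shipped since Node.js 17:

```typescript
import { Readable } from 'node:stream';

// A Node Readable becomes a Web ReadableStream, which Bun's fetch path can
// consume as a request body.
const nodeBody = Readable.from([Buffer.from('hello, '), Buffer.from('bun')]);
const webBody = Readable.toWeb(nodeBody);
```

Note the conversion is one-shot: once the Web stream is read, the underlying Node stream is consumed, which is why retries on such bodies are unsafe.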
}

debug(
'Request#%d %s %s, headers: %j, headersTimeout: %s, bodyTimeout: %s, isStreamingRequest: %s, isStreamingResponse: %s, maxRedirections: %s, redirects: %s',
requestId,
@@ -659,18 +699,20 @@
}
}

// Bun's undici auto-decompresses response body, so skip decompression on Bun
const needDecompress = isCompressedContent && !isBun;
let data: any = null;
if (args.dataType === 'stream') {
// only auto decompress on request args.compressed = true
if (args.compressed === true && isCompressedContent) {
if (args.compressed === true && needDecompress) {
// gzip or br
const decoder = contentEncoding === 'gzip' ? createGunzip() : createBrotliDecompress();
res = Object.assign(pipeline(response.body, decoder, noop), res);
} else {
res = Object.assign(response.body, res);
}
} else if (args.writeStream) {
if (args.compressed === true && isCompressedContent) {
if (args.compressed === true && needDecompress) {
const decoder = contentEncoding === 'gzip' ? createGunzip() : createBrotliDecompress();
await pipelinePromise(response.body, decoder, args.writeStream);
} else {
Expand All @@ -679,7 +721,7 @@ export class HttpClient extends EventEmitter {
} else {
// buffer
data = Buffer.from(await response.body.arrayBuffer());
if (isCompressedContent && data.length > 0) {
if (needDecompress && data.length > 0) {
try {
data = contentEncoding === 'gzip' ? gunzipSync(data) : brotliDecompressSync(data);
} catch (err: any) {
@@ -769,9 +811,19 @@
err = new HttpClientRequestTimeoutError(bodyTimeout, { cause: err });
} else if (err.name === 'InformationalError' && err.message.includes('stream timeout')) {
err = new HttpClientRequestTimeoutError(bodyTimeout, { cause: err });
} else if (isBun && err.name === 'TimeoutError') {
// Bun's undici throws TimeoutError instead of HeadersTimeoutError/BodyTimeoutError
err = new HttpClientRequestTimeoutError(headersTimeout || bodyTimeout, { cause: err });
} else if (isBun && err.name === 'TypeError' && /timed?\s*out|timeout/i.test(err.message)) {
// Bun may wrap timeout as TypeError
err = new HttpClientRequestTimeoutError(headersTimeout || bodyTimeout, { cause: err });
Comment on lines +816 to +819
medium

The timeout value passed to HttpClientRequestTimeoutError is headersTimeout || bodyTimeout, which can be misleading. For Bun, you are creating a single timeout of headersTimeout + bodyTimeout. The error message should reflect this total timeout duration to avoid confusion. For example, if headersTimeout is 400 and bodyTimeout is 500, the actual timeout is 900ms, but the error will state 400ms.

I suggest using headersTimeout + bodyTimeout here to be consistent.

Note that this will require updating the corresponding test in test/options.timeout.test.ts to expect the correct total timeout value.

Suggested change
err = new HttpClientRequestTimeoutError(headersTimeout || bodyTimeout, { cause: err });
} else if (isBun && err.name === 'TypeError' && /timed?\s*out|timeout/i.test(err.message)) {
// Bun may wrap timeout as TypeError
err = new HttpClientRequestTimeoutError(headersTimeout || bodyTimeout, { cause: err });
err = new HttpClientRequestTimeoutError(headersTimeout + bodyTimeout, { cause: err });
} else if (isBun && err.name === 'TypeError' && /timed?\s*out|timeout/i.test(err.message)) {
// Bun may wrap timeout as TypeError
err = new HttpClientRequestTimeoutError(headersTimeout + bodyTimeout, { cause: err });

Comment on lines +814 to +819
⚠️ Potential issue | 🟡 Minor

Report the timeout Bun actually enforced.

Line 818 wraps Bun timeouts with headersTimeout || bodyTimeout, but the fallback signal above aborts after headersTimeout + bodyTimeout. With split values, callers will see the wrong timeout in HttpClientRequestTimeoutError.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/HttpClient.ts` around lines 816 - 821, The Bun timeout branches (checks
on isBun and err.name) currently construct HttpClientRequestTimeoutError using
headersTimeout || bodyTimeout, which hides the actual timeout when the code uses
the fallback abort signal that fires after headersTimeout + bodyTimeout; update
those branches to compute and pass the real enforced timeout (e.g., use
headersTimeout + bodyTimeout when both are set, otherwise use the one that
exists) into HttpClientRequestTimeoutError so the error reports the correct
timeout value (refer to isBun, the err.name checks,
HttpClientRequestTimeoutError, headersTimeout, and bodyTimeout to locate the
changes).

} else if (err.code === 'UND_ERR_CONNECT_TIMEOUT') {
err = new HttpClientConnectTimeoutError(err.message, err.code, { cause: err });
} else if (err.code === 'UND_ERR_SOCKET' || err.code === 'ECONNRESET') {
} else if (
err.code === 'UND_ERR_SOCKET' ||
err.code === 'ECONNRESET' ||
(isBun && (err.code === 'ConnectionClosed' || err.message?.includes('socket')))
) {
// auto retry on socket error, https://github.com/node-modules/urllib/issues/454
if (args.socketErrorRetry > 0 && requestContext.socketErrorRetries < args.socketErrorRetry) {
requestContext.socketErrorRetries++;
Expand All @@ -783,12 +835,19 @@ export class HttpClient extends EventEmitter {
return await this.#requestInternal(url, options, requestContext);
}
}
// Some errors (e.g. DOMException in Bun) may not be extensible
if (!Object.isExtensible(err)) {
const wrappedErr: any = new Error(err.message, { cause: err });
wrappedErr.name = err.name;
wrappedErr.code = err.code;
wrappedErr.stack = err.stack;
err = wrappedErr;
}
err.opaque = originalOpaque;
err.status = res.status;
err.headers = res.headers;
err.res = res;
if (err.socket) {
// store rawSocket
err._rawSocket = err.socket;
}
err.socket = socketInfo;
8 changes: 7 additions & 1 deletion src/fetch.ts
@@ -243,7 +243,13 @@
}

// get undici internal response
const state = getResponseState(res!);
// Bun's Response doesn't have the same internal state as npm undici's Response
let state: any;
try {
state = getResponseState(res!);
} catch {
state = {};
Comment on lines +250 to +251

Copilot AI Mar 28, 2026

This try/catch around getResponseState(res) is unconditional, so any unexpected failure in Node (e.g., undici internal API changes) will now be silently swallowed and timingInfo will be dropped without any signal. To keep Node behavior fail-fast while still supporting Bun, consider gating the fallback to Bun only (or catching only the expected error type and logging/debugging when it happens).

Suggested change
} catch {
state = {};
} catch (err) {
// Only fall back silently on Bun, where getResponseState is expected to fail.
const isBun = typeof (globalThis as any).Bun !== 'undefined';
if (isBun) {
state = {};
} else {
debug('getResponseState failed unexpectedly: %o', err);
throw err;
}

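The Bun-gated fallback from the suggestion can be sketched with a stub `getResponseState` that throws the way Bun's missing undici internals would:

```typescript
// Stub: on Bun there are no npm-undici response internals to read.
function getResponseState(_res: unknown): { timingInfo?: unknown } {
  throw new Error('no undici internals on this runtime');
}

const isBun = typeof (globalThis as any).Bun !== 'undefined';

// Swallow the failure only where it is expected; Node stays fail-fast.
function readState(res: unknown): { timingInfo?: unknown } {
  try {
    return getResponseState(res);
  } catch (err) {
    if (isBun) return {};
    throw err;
  }
}
```
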
}
updateSocketInfo(socketInfo, internalOpaque);

urllibResponse.headers = convertHeader(res!.headers);
3 changes: 2 additions & 1 deletion test/HttpClient.connect.rejectUnauthorized.test.ts
@@ -2,6 +2,7 @@ import { strict as assert } from 'node:assert';

import { describe, it, beforeAll, afterAll } from 'vite-plus/test';

import { isBun } from '../src/HttpClient.js';
import { HttpClient } from '../src/index.js';
import { startServer } from './fixtures/server.js';

@@ -60,7 +61,7 @@ describe('HttpClient.connect.rejectUnauthorized.test.ts', () => {
);
});

it('should 200 on rejectUnauthorized = false', async () => {
it.skipIf(isBun)('should 200 on rejectUnauthorized = false', async () => {
const httpclient = new HttpClient({
connect: {
rejectUnauthorized: false,