Python: fix: use workflow factory to avoid RuntimeError under parallel requests#5212

Open
LEDazzio01 wants to merge 1 commit into microsoft:main from LEDazzio01:fix/4766-workflow-parallel-requests-v2

Conversation

@LEDazzio01
Contributor

Summary

Fixes #4766 — Supersedes #4772 (rebased onto current main).

The hosted agent sample at writer_reviewer_agents_in_workflow/main.py passes a pre-built workflow agent to from_agent_framework(). When the endpoint receives parallel requests, the shared Workflow instance attempts to run concurrently, raising:

RuntimeError: Workflow is already running. Concurrent executions are not allowed.

Fix

     async with create_agents() as (writer, reviewer):
-        agent = create_workflow(writer, reviewer)
-        await from_agent_framework(agent).run_async()
+        # Use a factory lambda so each incoming request gets a fresh Workflow
+        # instance, avoiding RuntimeError from concurrent executions (#4766).
+        await from_agent_framework(lambda: create_workflow(writer, reviewer)).run_async()

from_agent_framework() accepts either a pre-built agent or a factory callable. When given a factory, it creates a fresh agent per request, avoiding shared state.
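The failure mode and the fix can be reproduced without the Azure SDKs. Below is a minimal, self-contained sketch (FakeWorkflow and handle_request are hypothetical stand-ins, not agent_framework APIs) showing why a single shared workflow-like object breaks under parallel requests while a factory does not:

```python
import asyncio

class FakeWorkflow:
    """Hypothetical stand-in for a Workflow: rejects concurrent runs,
    mirroring the RuntimeError described in #4766."""

    def __init__(self):
        self._running = False

    async def run(self):
        if self._running:
            raise RuntimeError(
                "Workflow is already running. Concurrent executions are not allowed."
            )
        self._running = True
        try:
            await asyncio.sleep(0.01)  # simulate an in-flight run
            return "done"
        finally:
            self._running = False

async def handle_request(workflow_factory):
    # The server calls the factory once per request, like
    # from_agent_framework(factory) does when given a callable.
    return await workflow_factory().run()

async def main():
    # Shared instance: all three requests hit the same object.
    shared = FakeWorkflow()
    shared_results = await asyncio.gather(
        *(handle_request(lambda: shared) for _ in range(3)),
        return_exceptions=True,
    )
    shared_errors = sum(isinstance(r, RuntimeError) for r in shared_results)

    # Factory: each request constructs a fresh instance.
    factory_results = await asyncio.gather(
        *(handle_request(FakeWorkflow) for _ in range(3)),
        return_exceptions=True,
    )
    factory_errors = sum(isinstance(r, RuntimeError) for r in factory_results)
    return shared_errors, factory_errors

shared_errors, factory_errors = asyncio.run(main())
print(shared_errors, factory_errors)  # the shared instance errors; the factory does not
```

The same reasoning applies to the sample: `lambda: create_workflow(writer, reviewer)` is the factory, while the writer and reviewer agents (which are stateless per request) can still be created once and captured by the closure.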

Contribution Checklist

  • The code builds clean without any errors or warnings
  • The PR follows the Contribution Guidelines
  • Is this a breaking change? No

Copilot AI review requested due to automatic review settings April 10, 2026 21:24
Contributor

Copilot AI left a comment


Pull request overview

Updates the Python hosted-agent workflow sample to avoid reusing a single Workflow instance across parallel requests, which can raise RuntimeError: Workflow is already running in hosted environments.

Changes:

  • Pass a factory callable into from_agent_framework(...) so each request gets a fresh workflow instance.
  • Introduce a small workflow-construction helper (create_workflow) and async agent setup (create_agents) for the sample.

Comment on lines +7 to +10
from agent_framework import Agent, WorkflowBuilder
from agent_framework.foundry import FoundryChatClient
from azure.ai.agentserver.agentframework import from_agent_framework
from azure.identity.aio import AzureCliCredential, ManagedIdentityCredential

Copilot AI Apr 10, 2026


agent_framework.foundry/FoundryChatClient doesn't exist in this repo's Python packages (there is no agent_framework/foundry module), so this sample will fail at import time. Please switch to a supported client (e.g., agent_framework.azure.AzureOpenAIResponsesClient using project_endpoint=... for Foundry, or another existing chat client) and update the env var names accordingly.

Comment on lines +51 to +53
return Agent(
client=workflow,
)

Copilot AI Apr 10, 2026


create_workflow() wraps a Workflow inside Agent(client=workflow), but Workflow is not a chat client and doesn't implement the get_response API expected by Agent. Use workflow.as_agent(...) (returns WorkflowAgent) to expose a workflow as an agent for from_agent_framework(...).

Suggested change
return Agent(
client=workflow,
)
return workflow.as_agent(name="WriterReviewerWorkflow")

# Read the explicit variables present in the .env file
FOUNDRY_PROJECT_ENDPOINT = os.getenv(
"FOUNDRY_PROJECT_ENDPOINT"
) # e.g., "https://<project>.services.ai.azure.com/api/projects/<project-name>"

Copilot AI Apr 10, 2026


FOUNDRY_PROJECT_ENDPOINT is required, but os.getenv(...) allows it to be None and will pass None into the client constructor, leading to a harder-to-diagnose failure later. Prefer os.environ["..."] or validate and raise a clear error when it's missing.

Suggested change
) # e.g., "https://<project>.services.ai.azure.com/api/projects/<project-name>"
) # e.g., "https://<project>.services.ai.azure.com/api/projects/<project-name>"
if not FOUNDRY_PROJECT_ENDPOINT:
raise RuntimeError(
"FOUNDRY_PROJECT_ENDPOINT environment variable is required. "
'Set it to your Foundry project endpoint, for example: '
'"https://<project>.services.ai.azure.com/api/projects/<project-name>".'
)

@LEDazzio01
Contributor Author

@copilot This import is identical to what's already on main. The file was copied verbatim from the current main branch, with only the factory-lambda change applied.

FoundryChatClient is defined in python/packages/foundry/agent_framework_foundry/_chat_client.py and is re-exported via the agent_framework.foundry namespace. It's used across 231 files in the repo (including 8+ other hosted agent samples in the same directory structure). The import is correct.

The only change in this PR is on line 73:

-        agent = create_workflow(writer, reviewer)
-        await from_agent_framework(agent).run_async()
+        await from_agent_framework(lambda: create_workflow(writer, reviewer)).run_async()

Development

Successfully merging this pull request may close these issues.

Python: [Bug]: Hosted agent sample reuses a single workflow instance and breaks under parallel requests

3 participants