Python: fix: use workflow factory to avoid RuntimeError under parallel requests #5212

LEDazzio01 wants to merge 1 commit into microsoft:main from
Conversation
Pull request overview
Updates the Python hosted-agent workflow sample to avoid reusing a single `Workflow` instance across parallel requests, which can raise `RuntimeError: Workflow is already running` in hosted environments.

Changes:
- Pass a factory callable into `from_agent_framework(...)` so each request gets a fresh workflow instance.
- Introduce a small workflow-construction helper (`create_workflow`) and async agent setup (`create_agents`) for the sample.
```python
from agent_framework import Agent, WorkflowBuilder
from agent_framework.foundry import FoundryChatClient
from azure.ai.agentserver.agentframework import from_agent_framework
from azure.identity.aio import AzureCliCredential, ManagedIdentityCredential
```
`agent_framework.foundry`/`FoundryChatClient` doesn't exist in this repo's Python packages (there is no `agent_framework/foundry` module), so this sample will fail at import time. Please switch to a supported client (e.g., `agent_framework.azure.AzureOpenAIResponsesClient` using `project_endpoint=...` for Foundry, or another existing chat client) and update the env var names accordingly.
```python
return Agent(
    client=workflow,
)
```
`create_workflow()` wraps a `Workflow` inside `Agent(client=workflow)`, but `Workflow` is not a chat client and doesn't implement the `get_response` API expected by `Agent`. Use `workflow.as_agent(...)` (returns `WorkflowAgent`) to expose a workflow as an agent for `from_agent_framework(...)`.
Suggested change:

```diff
- return Agent(
-     client=workflow,
- )
+ return workflow.as_agent(name="WriterReviewerWorkflow")
```
```python
# Read the explicit variables present in the .env file
FOUNDRY_PROJECT_ENDPOINT = os.getenv(
    "FOUNDRY_PROJECT_ENDPOINT"
)  # e.g., "https://<project>.services.ai.azure.com/api/projects/<project-name>"
```
`FOUNDRY_PROJECT_ENDPOINT` is required, but `os.getenv(...)` allows it to be `None` and will pass `None` into the client constructor, leading to a harder-to-diagnose failure later. Prefer `os.environ["..."]` or validate and raise a clear error when it's missing.
Suggested change:

```diff
 ) # e.g., "https://<project>.services.ai.azure.com/api/projects/<project-name>"
+if not FOUNDRY_PROJECT_ENDPOINT:
+    raise RuntimeError(
+        "FOUNDRY_PROJECT_ENDPOINT environment variable is required. "
+        'Set it to your Foundry project endpoint, for example: '
+        '"https://<project>.services.ai.azure.com/api/projects/<project-name>".'
+    )
```
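The fail-fast behavior suggested above can be shown with the standard library alone; a minimal sketch, where `require_env` is a hypothetical helper (not part of the sample) and the endpoint URL is a placeholder:

```python
import os

def require_env(name: str, hint: str = "") -> str:
    """Return a required environment variable's value, or raise a clear error."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(
            f"{name} environment variable is required." + (f" {hint}" if hint else "")
        )
    return value

# os.getenv silently returns None when the variable is missing:
os.environ.pop("FOUNDRY_PROJECT_ENDPOINT", None)
assert os.getenv("FOUNDRY_PROJECT_ENDPOINT") is None

# The helper fails fast instead of passing None into a client constructor:
try:
    require_env("FOUNDRY_PROJECT_ENDPOINT", "Set it to your Foundry project endpoint.")
except RuntimeError as exc:
    print(exc)

# With the variable set, the value is returned unchanged:
os.environ["FOUNDRY_PROJECT_ENDPOINT"] = "https://example.invalid/api/projects/demo"
print(require_env("FOUNDRY_PROJECT_ENDPOINT"))
```

Raising at startup surfaces the misconfiguration immediately, rather than as an opaque error deep inside the client.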
@copilot This import is identical to what's already on

The only change in this PR is on line 73:

```diff
- agent = create_workflow(writer, reviewer)
- await from_agent_framework(agent).run_async()
+ await from_agent_framework(lambda: create_workflow(writer, reviewer)).run_async()
```
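Why passing a factory instead of a pre-built instance matters can be sketched without any SDK. `FakeWorkflow` below is illustrative, not the real `Workflow` class; it only mimics the "already running" guard described in this PR:

```python
import asyncio

class FakeWorkflow:
    """Toy stand-in that refuses to run concurrently, like the shared Workflow."""

    def __init__(self) -> None:
        self._running = False

    async def run(self) -> str:
        if self._running:
            raise RuntimeError("Workflow is already running")
        self._running = True
        try:
            await asyncio.sleep(0.01)  # simulate work while another request arrives
            return "done"
        finally:
            self._running = False

async def handle_request(factory) -> str:
    # Each request builds its own workflow, so parallel requests never collide.
    workflow = factory()
    return await workflow.run()

async def main() -> None:
    # Shared instance: one of the two parallel runs hits the guard.
    shared = FakeWorkflow()
    results = await asyncio.gather(shared.run(), shared.run(), return_exceptions=True)
    assert any(isinstance(r, RuntimeError) for r in results)

    # Factory: every request gets a fresh instance, so both succeed.
    results = await asyncio.gather(
        handle_request(FakeWorkflow), handle_request(FakeWorkflow)
    )
    assert results == ["done", "done"]
    print("factory avoids the shared-state RuntimeError")

asyncio.run(main())
```

The lambda in the diff above plays the role of `factory` here: it defers construction until each request is handled.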
Summary

Fixes #4766. Supersedes #4772 (rebased onto current `main`).

The hosted agent sample at `writer_reviewer_agents_in_workflow/main.py` passes a pre-built workflow agent to `from_agent_framework()`. When the endpoint receives parallel requests, the shared `Workflow` instance attempts to run concurrently, raising `RuntimeError: Workflow is already running`.

Fix

`from_agent_framework()` accepts either a pre-built agent or a factory callable. When given a factory, it creates a fresh agent per request, avoiding shared state.

Contribution Checklist