Power Platform Community / Forums / Copilot Studio / Multi-Agent Orchestrat...
Copilot Studio
Answered

Multi-Agent Orchestration Problems – Multi turn conversation issue, Conversation Boosting

Posted by akash.talole
Hey everyone,
I'm running into a frustrating set of issues with a multi-agent setup in Microsoft Copilot Studio and wanted to see if anyone else has hit these or found workarounds.

My Setup
  • Orchestrator / Main Agent
  • Connected Agents (fully published, connected to the main agent)
  • Channels: MS Teams / Copilot Studio test pane

Issues I'm Seeing
1. Conversation Boosting Topic firing during multi-turn conversations
Even when the conversation history clearly contains context for a follow-up question (i.e., the user is mid-conversation and asking a clarifying question), the orchestrator keeps falling back to the Conversation Boosting system topic instead of continuing the active topic or delegating to the appropriate connected agent. It looks like the LLM-based orchestration isn't properly using conversation history to maintain context.

2. Connected Agent not getting triggered
Some of the time, even when the user's intent clearly maps to what one of my connected agents is designed to handle, the orchestrator doesn't invoke that connected agent at all – it either tries to answer itself or falls back to Conversation Boosting again. My connected agents are published and the agent instructions in the orchestrator do describe when to hand off to each connected agent.

3. Escalation topic triggering unexpectedly
The Escalation system topic is triggering far too often – in situations where the query should have been handled either by the orchestrator or a connected agent. This is happening even on straightforward questions.

Key Observation – Model-Specific Behaviour
This is the most important clue I've found so far: all three issues above only occur when the orchestrator is using GPT-5-chat. When I switch the orchestrator model to GPT-4.1, the behaviour is stable – conversation boosting doesn't fire inappropriately, connected agents get triggered correctly, and escalations are rare.
This strongly suggests the issue is related to how the newer model interprets orchestrator instructions and conversation history in a multi-agent context, rather than a configuration problem on my end. Has anyone else observed this model-specific behaviour?

What I've Already Tried
  • Verified connected agents are published before being connected to the main agent
  • Reviewed agent instructions on the orchestrator to ensure clear handoff conditions are described
  • Checked that conversation history passing is enabled between orchestrator and connected agents
  • Tested connected agents in isolation – they work fine independently
  • Switched orchestrator model from GPT-5-chat → GPT-4.1 → issues disappear
My Questions
  • Is there a known compatibility issue between the newer GPT model GPT-5-chat and multi-agent orchestration in Copilot Studio?
  • Does the newer model require different instruction patterns or prompt formatting to reliably delegate to connected agents across multi-turn conversations?
  • Is there a way to suppress or deprioritize system topics (Conversation Boosting, Escalation) so the LLM orchestration layer takes precedence?
  • Is this a platform-level bug that Microsoft is tracking? Worth raising a support ticket?
Environment Info
  • Copilot Studio version: [Latest as of your date]
  • Agent type: Connected Agents (not child agents)
  • Orchestration mode: Generative (AI-based orchestration)
  • Model where issue occurs: GPT-5-chat
  • Model where it works fine: GPT-4.1

Any help, documentation pointers, or workarounds would be hugely appreciated. For now I've rolled back to GPT-4.1 on the orchestrator, but I'd love to use the newer model without these instabilities.
Thanks!
  • Verified answer
    Valantis:
     
    Your GPT-5 vs GPT-4.1 observation is the key here and it makes sense.
     
    GPT-5 Chat has been generally available in Copilot Studio since November 2025, but it uses a different reasoning and routing mechanism than GPT-4.1. In multi-agent setups this causes exactly what you are seeing: Conversation Boosting firing when it should not, connected agents not getting triggered, and unexpected escalations. GPT-4.1 remains the recommended model for stable orchestration in production.
     
    One confirmed signal: Microsoft docs state that with generative orchestration on, Conversation Boosting should never fire when the orchestrator handles the query. When it does fire, it means the orchestrator failed to match the intent, which tells you GPT-5 is dropping context or misrouting in your setup.
     
    What to do:
     
    1. Keep the orchestrator on GPT-4.1 for now. That is the right call.
    2. If you want GPT-5 capabilities, run it in the connected agents not the orchestrator. GPT-5 works better for deep reasoning within a focused agent scope rather than as the routing layer.
    3. Tighten the connected agent descriptions in the orchestrator. GPT-5 is more sensitive to overlapping or vague descriptions than GPT-4.1.
    4. Open a Microsoft support ticket with your model comparison evidence. This is exactly the data they need to tune GPT-5 for multi-agent orchestration.
     
     

     

    Best regards,

    Valantis

     

    ✅ If this helped solve your issue, please Accept as Solution so others can find it quickly.

    ❤️ If it didn’t fully solve it but was still useful, please click “Yes” on “Was this reply helpful?” or leave a Like :).

    🏷️ For follow-ups  @Valantis.

    📝 https://valantisond365.com/

     

  • akash.talole:
    Thanks @Valantis for the suggestion of using GPT-4.1 on the main agent and GPT-5-chat on the connected agents – I can see why that would work in a standard setup, but unfortunately it doesn’t fit my architecture.

    Here’s why: I’ve deliberately disabled the direct response from connected agents by setting `ContinueResponse = False`. Instead of letting each connected agent reply independently, I’m capturing their output into a global variable and passing it back to the main orchestrator agent. The orchestrator then reads that global variable and generates a single, consolidated response to the user in Microsoft Teams.

    This approach is intentional – when a user asks a question that spans multiple agents in a single turn, I don’t want the user to receive multiple fragmented responses in the Teams channel. I want one clean, unified answer.
    Because the main agent is the one doing the final synthesis and response generation – reading global variable values from one or more connected agents and composing the final reply – it needs to be capable enough to handle that reasoning. That’s exactly why I need GPT-5-chat on the main agent as well, not just on the connected agents.
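
    Conceptually, the fan-out-and-consolidate pattern described above looks like the following sketch (plain Python pseudologic, NOT Copilot Studio syntax; `ask_agent` and `orchestrate` are hypothetical stand-ins for the connected-agent calls and the global-variable handoff):

```python
# Illustrative sketch only -- NOT Copilot Studio code.
# ask_agent stands in for invoking a connected agent whose direct reply
# is suppressed (ContinueResponse = False): its output is captured
# instead of being posted to the Teams channel.

def ask_agent(agent_name: str, question: str) -> str:
    """Hypothetical stand-in: return the connected agent's raw answer."""
    return f"[{agent_name}] answer to: {question}"

def orchestrate(question: str, agents: list[str]) -> str:
    # Fan out: collect each connected agent's output into one buffer
    # (the "global variable" in the Copilot Studio setup).
    collected = {name: ask_agent(name, question) for name in agents}
    # Synthesize: compose ONE consolidated reply for the user
    # instead of several fragmented per-agent responses.
    parts = [f"{name}: {answer}" for name, answer in collected.items()]
    return "Consolidated answer:\n" + "\n".join(parts)

print(orchestrate("Reset my VPN and open an IT ticket",
                  ["VPN Agent", "Ticketing Agent"]))
```

    The point of the sketch is only that the synthesis step runs in the orchestrator, which is why the orchestrator's model quality matters here.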

    So the issue remains: GPT-5-chat on the orchestrator breaks multi-turn conversation handling, but I can’t simply downgrade the orchestrator to GPT-4.1 without losing the response quality I need for this consolidated output pattern. Would love to know if anyone has found a way to make GPT-5-chat more stable for multi-turn orchestration in this kind of setup.

    As per the known limitations listed at https://learn.microsoft.com/en-us/microsoft-copilot-studio/advanced-generative-actions#known-limitations-for-generative-orchestration
     
    You are right: per https://learn.microsoft.com/en-us/microsoft-copilot-studio/advanced-generative-actions#knowledge, Conversation Boosting should never be triggered when generative orchestration is enabled.
     
    Regarding conversation history and follow-up questions: in my case, the agent is not able to answer a follow-up based on the previous message (n-1).
     
    Previous conversation context
    An agent that uses generative orchestration has access to the recent conversation with the user, which provides context for making decisions about which tools to call or filling inputs with values. The amount of conversation history is currently limited, which means that sometimes the agent can't see or use the information in earlier parts of the conversation. In these cases, it might be necessary to collect some information again from the user, or ensure that key information is included in the transcript at regular intervals.
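
    The docs' suggestion to "ensure that key information is included in the transcript at regular intervals" can be sketched generically as a rolling window with periodic re-injection (plain Python for illustration, not Copilot Studio; the window size and interval are assumptions):

```python
# Generic illustration of surviving a truncated history window:
# restate key facts every few turns so they stay inside the window
# the model can actually see.
from collections import deque

WINDOW = 6          # assumed size of the history window the model sees
REINJECT_EVERY = 4  # assumed interval for restating key context

def build_transcript(turns: list[str], key_facts: str) -> list[str]:
    """Keep only the last WINDOW entries, but restate key_facts every
    REINJECT_EVERY turns so it survives the truncation."""
    history: deque[str] = deque(maxlen=WINDOW)
    for i, turn in enumerate(turns, start=1):
        if i % REINJECT_EVERY == 0:
            history.append(f"[context] {key_facts}")
        history.append(turn)
    return list(history)

transcript = build_transcript([f"turn {i}" for i in range(1, 9)],
                              "topic=VPN reset; user already tried restart")
print(transcript)  # a [context] line is still inside the window
```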
     
  • akash.talole:
    Now I am also facing the same issue with the GPT-4.1 model. So it is very unstable and unreliable to build a multi-agent solution in Copilot Studio for multi-turn conversations; it feels not ready for a production-grade, complex enterprise environment. Very disappointed with Copilot Studio as a product from Microsoft.
