Power Platform Community / Forums / Copilot Studio / Generative Answers Nod...
Copilot Studio
Unanswered

Generative Answers Node: "We are experiencing higher than expected demand, please try again later."

Posted on by

Some of our users are seeing the response "We are experiencing higher than expected demand, please try again later." from the generative answers node.


I assume this relates to the stipulation in the generative answers FAQ that "This capability may be subject to usage limits or capacity throttling." Try as I might, I can find no formal statement of what this usage limit is, whether it can be raised, what the cost implications are, and so on. Currently only around 15 people use the Copilot, which acts as a Q&A source. That number will obviously grow over time, so we need to understand the limitations of the current service. This will let us decide whether to stay the course with the current generative answers node, or write our own endpoint to replace its functionality (i.e. an API endpoint backed by Azure OpenAI Service, which wouldn't have such limitations).
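For what it's worth, the capacity-control part of such a replacement endpoint needn't be complicated. A minimal Python sketch, assuming a hypothetical `llm_call` that wraps an Azure OpenAI chat deployment (the class and callable names are illustrative, not a real Copilot Studio or Azure API):

```python
import threading

class CapacityLimitedAnswerer:
    """Sketch: wrap our own LLM call behind our own endpoint so that
    concurrency is under our control, rather than the shared
    generative-answers capacity. `llm_call` is a hypothetical callable
    that sends a question to an Azure OpenAI deployment."""

    def __init__(self, llm_call, max_concurrent=4):
        self._llm_call = llm_call
        # BoundedSemaphore caps how many requests are in flight at once,
        # so any throttling policy is ours to tune.
        self._slots = threading.BoundedSemaphore(max_concurrent)

    def answer(self, question):
        with self._slots:  # blocks when max_concurrent calls are in flight
            return self._llm_call(question)
```

With a wrapper like this, "capacity" becomes a single tunable number rather than an opaque shared limit.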

  • citron-truc

    Good morning!
    I hope you are doing well. I had the same message this morning. I'm the only one using my copilot, so I think it has to do with overall load on the GPT models across all users rather than anything tenant-related. I resent my questions and they worked fine.
    It has happened two or three times in the few months I have been using Copilot Studio.

  • _Kevin_

    Thanks, that helps to narrow it down a bit (and the users approached the problem in the same way you did, by resubmitting their questions).


    However, we still need clarification on what those capacity limits are and whether anything can be done about them. For example:

    - Could we associate an LLM we have deployed ourselves (i.e. Azure OpenAI Service) with the generative answers node for generating responses, rather than using the communal resource? That would let us control capacity.

    - Could the generative answers node automatically retry on capacity failure, rather than surfacing the error message above?

    - Could the generative answers node return a success indicator other than text, so we can trap the error programmatically rather than matching on the message text (which isn't a robust approach)?
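The retry-and-trap behaviour asked for in the last two bullets can be sketched as follows. This is a Python illustration only; `ask` is a hypothetical callable standing in for the generative answers call, and since the node exposes no structured success flag today, the only detection available is matching the error text, with exactly the fragility the bullets complain about:

```python
import time

# The throttling message reported by the generative answers node.
CAPACITY_ERROR = ("We are experiencing higher than expected demand, "
                  "please try again later.")

def ask_with_retry(ask, question, retries=3, base_delay=2.0):
    """Call `ask(question)` and retry with exponential backoff whenever the
    reply matches the known capacity-throttling text."""
    for attempt in range(retries + 1):
        reply = ask(question)
        if reply != CAPACITY_ERROR:
            return reply  # a real answer (or at least a different error)
        if attempt < retries:
            time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...
    return reply  # still throttled after all retries
```

A proper error code on the node would make the `reply != CAPACITY_ERROR` comparison unnecessary, which is exactly the point of the request above.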
