Power Platform Community / Forums / Copilot Studio / Open AI model Token Limit

Open AI model Token Limit


Hello everyone,

We have deployed an agent within our team to help people answer their specific questions. The agent is deployed to Teams.

The agent is powered by the GPT‑4.1 model, but unexpectedly, some users can no longer use it and are seeing the following error message during conversations:

 

“An error has occurred. Error code: OpenAIModelTokenLimit Conversation ID: … ”

 

I tried clearing the conversation, and it works for the first question, but the error appears again right after.

 

From what I understand, each model has a token limit per conversation? Does retrieving many SharePoint files also consume tokens on each request?



Should I upgrade to GPT‑5 to increase this limit, or optimize the knowledge sources?

 

EDIT: After some research, my understanding is that each request consumes tokens and that my model is allowed some x thousand tokens.

But is that limit per agent? Per model? How can I know how many tokens are left?

 

Thank you!

  • JH-08081156-0
    We started encountering the same behavior on a similar type of agent; see the thread "Copilot Studio Agents Begin Encountering 'OpenAIModelTokenLimit' Errors" from 4 February 2026. That said, the errors seem to be less frequent today than yesterday. YMMV.
  • JL-01070040-0
    Did you end up switching to GPT‑5? Did that help?
  • Suggested answer by rezarizvii
    Hi, hope you are doing well.
     
    You’re hitting the conversation token limit, not a quota you can “monitor” or “top up”. With every message you send, the model does not receive just your prompt. It receives the entire chat history (your messages and the AI's responses, so it can answer with the context of the conversation), all of the retrieved content such as SharePoint files, and its own configured system message and instructions telling it how to behave. All of that counts toward the tokens for that one request. The next message in the chat will cost even more, because your previous message and the AI's response have just been added to the history.
    To put it together:
    • The limit is per request (context window), not per agent or per tenant
    • Each turn includes:
      • Chat history
      • Retrieved content (SharePoint, files, etc.)
      • System + instructions
    Retrieval does consume tokens, and often a lot of them.
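    To make the per-request accounting concrete, here is a minimal Python sketch. The helper names (approx_tokens, request_tokens) are invented for illustration, and the ~4 characters/token estimate is a crude heuristic; the real tokenizer and Copilot Studio's internal budgets will differ, but the growth pattern is the same:

```python
# Sketch: what the model "sees" on each turn. Every request resends the
# system prompt, the full chat history, and all retrieved content, so
# the total grows turn by turn even if each question is short.

def approx_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English)."""
    return max(1, len(text) // 4)

def request_tokens(system_prompt, history, retrieved_chunks, user_message):
    """Approximate tokens sent to the model for ONE turn."""
    total = approx_tokens(system_prompt)
    for user_msg, assistant_msg in history:        # full chat history
        total += approx_tokens(user_msg) + approx_tokens(assistant_msg)
    for chunk in retrieved_chunks:                 # SharePoint/file content
        total += approx_tokens(chunk)
    return total + approx_tokens(user_message)     # the new question

# Simulate three turns: the retrieved chunks dominate, and the history
# keeps adding on top of them until the context window is exceeded.
system = "You are a helpful internal support agent."
history = []
retrieved = ["x" * 8000, "y" * 8000]   # two ~2,000-token document chunks
for turn in range(1, 4):
    question = f"Question number {turn} about our internal process?"
    used = request_tokens(system, history, retrieved, question)
    print(f"turn {turn}: ~{used} tokens sent to the model")
    history.append((question, "A fairly long answer " * 50))
```

    This is why clearing the conversation helps once and then the error comes back: the retrieved content alone can already sit near the limit, so even a short history pushes the next request over it.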
     
    Switching models is a temporary fix: it WILL increase your conversational token limit, but you will still hit the limit eventually as the context and history grow.
     
    What to actually do
    • Reduce retrieved content
      • Fewer sources
      • Smaller documents
      • Avoid large pages/files
    • Limit conversation memory
      • Disable or shorten history if possible
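    The "limit conversation memory" idea above can be sketched as a simple budget-based trim: keep only the most recent turns that fit a token budget and drop the oldest first. The budget number and the trim_history helper are illustrative assumptions; Copilot Studio does not expose its real limit directly:

```python
# Sketch: trim chat history to a token budget, newest turns first.
# approx_tokens and the budget value are illustrative, not real limits.

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def trim_history(history, budget_tokens):
    """Keep the most recent (user, assistant) turns within the budget."""
    kept, used = [], 0
    for user_msg, assistant_msg in reversed(history):   # newest first
        cost = approx_tokens(user_msg) + approx_tokens(assistant_msg)
        if used + cost > budget_tokens:
            break                                       # oldest turns dropped
        kept.append((user_msg, assistant_msg))
        used += cost
    return list(reversed(kept))                         # restore order

# Usage: six verbose turns, trimmed to whatever fits ~400 tokens.
history = [("What is policy X?", "Policy X says ... " * 30)] * 6
print(len(trim_history(history, 400)), "turns kept")
```

    The same budgeting logic applies to retrieved content: fewer, smaller chunks leave more room for the history.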
     
    ====================================================================
    If this reply helped you in any way, please mark 'Yes' for "Was this reply helpful?" and give it a Like 💜
    In case it resolved your issue, please mark it as the Verified Answer ✅.

