Power Platform Community / Forums / Copilot Studio / Copilot Studio Agent O...
Copilot Studio
Suggested Answer

Copilot Studio Agent OpenAIModelTokenLimit error

Posted on by 66
Hi All,
 
We have created a Copilot Studio agent that uses the "Get Emails (V3)" connector. The issue we are facing is that whenever an email has an attachment larger than about 500 KB,
 
we get the "toomuchdatatohandle OpenAIModelTokenLimit" error.
 
Even a simple user prompt such as "how many emails were received today" triggers the same OpenAIModelTokenLimit error.
 
Any suggestions? If anyone has faced the same issue and found a solution, please share it here.
 
Regards,
Avinash
  • Suggested answer
    11manish (2,286 points)
    Do NOT allow Microsoft Copilot Studio to process raw Outlook email payloads directly.
     
    The correct architecture is:
    • Use Power Automate as a preprocessing layer
    • Strip attachments/body HTML
    • Return lightweight structured JSON only
    • Process attachments separately if needed
    This is the standard and scalable solution for avoiding OpenAIModelTokenLimit issues.
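The preprocessing step above can be sketched in code. This is a minimal illustration (not an official connector API): the input dict mimics the shape of a raw email item with an HTML body and base64 attachments, and the function names and field names are assumptions you would adapt to your actual Power Automate payload.

```python
import json
import re

def slim_email(raw):
    """Reduce a raw email item to lightweight structured JSON.

    Field names (subject, from, receivedDateTime, body, attachments)
    mimic a typical Get Emails (V3) item; adjust to your payload shape.
    """
    body_html = raw.get("body", "")
    # Strip HTML tags, collapse whitespace, keep only a short preview.
    text = re.sub(r"<[^>]+>", " ", body_html)
    preview = " ".join(text.split())[:200]
    return {
        "subject": raw.get("subject", ""),
        "from": raw.get("from", ""),
        "receivedDateTime": raw.get("receivedDateTime", ""),
        "bodyPreview": preview,
        # Keep attachment names for reference, but drop base64 contentBytes.
        "attachmentNames": [a.get("name", "") for a in raw.get("attachments", [])],
    }

raw_email = {
    "subject": "Q3 report",
    "from": "alice@example.com",
    "receivedDateTime": "2025-06-01T09:30:00Z",
    "body": "<html><body><p>Hi, please find the report attached.</p></body></html>",
    "attachments": [{"name": "report.pdf", "contentBytes": "JVBERi0..."}],
}

print(json.dumps(slim_email(raw_email), indent=2))
```

Only the slimmed JSON ever reaches the agent's context, so attachment bytes and HTML markup can no longer blow past the token limit.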
  • Nivedipa-MSFT (Microsoft Employee)
    Hello,

    Root cause: The Get Emails (V3) action returns the full email body along with base64 attachments, which exceeds the model's context window. This large output remains in context, so even simple queries like "how many emails today" can hit the limit.

    Recommended steps:

    1. In the Get Emails action, set Include Attachments to No. This is the most effective change.
    2. Set Top to a smaller value, such as 10–25.
    3. Use $select to retrieve only essential fields: subject, from, receivedDateTime, and bodyPreview (not the full body).
    4. Apply $filter server-side to limit by date range.
    5. Instead of passing the entire connector output to the LLM, extract only the required fields into a variable and use that.
    6. In Settings → Generative AI, reduce the number of conversation history turns (e.g., to 3).
    7. If possible, switch to GPT-4o / GPT-4 Turbo (128K).

    Setting Include Attachments to No and using $select for minimal fields will resolve this issue in about 95% of cases.
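Steps 2–4 above boil down to an OData query string. As a rough sketch, the parameter names ($select, $top, $filter) follow standard OData conventions used by the Office 365 Outlook connector; the exact field list and date value here are illustrative assumptions.

```python
from urllib.parse import urlencode

def build_get_emails_query(since_iso, top=10):
    """Build an OData query string matching steps 2-4 above."""
    params = {
        "$select": "subject,from,receivedDateTime,bodyPreview",  # minimal fields, no full body
        "$top": str(top),                                        # cap the number of messages
        "$filter": f"receivedDateTime ge {since_iso}",           # server-side date filter
    }
    return urlencode(params)

print(build_get_emails_query("2025-06-01T00:00:00Z", top=10))
```

In the Power Automate action you would enter these values in the corresponding fields rather than building the URL by hand; the sketch just makes the shape of the query explicit.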

     

  • Suggested answer
    Haque (2,952 points)
    Hi @AD-01081101-0,
     
    Exclude attachments from the prompt, or set a low size limit: do not include attachment content or large base64 data in the prompt sent to the model unless it is actually needed, or at minimum cap the payload so that only a small slice of it (well under 20/30/50 MB) ever reaches the prompt.
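The size-capping idea can be sketched as a simple guard applied before text reaches the prompt. The character budget below is an illustrative assumption, not an official limit; a common rule of thumb is roughly four characters per token for English text.

```python
def cap_payload(text, max_chars=8000):
    """Truncate text to a rough character budget before prompting.

    max_chars is an illustrative budget (assumption), chosen to stay
    comfortably inside a typical model context window.
    """
    if len(text) <= max_chars:
        return text
    # Keep the head of the message and mark the truncation explicitly.
    return text[:max_chars] + "\n[truncated]"

print(len(cap_payload("x" * 20000, max_chars=8000)))
```

Marking the cut with an explicit "[truncated]" tag lets the model know the content is incomplete instead of silently losing the tail.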
     
     

    I hope these clues help resolve the issue. If they do, please don't forget to check the box "Does this answer your question?"

