Copilot Studio - General
Unanswered

Copilot Studio Gen AI outputs different from Copilot for Microsoft 365 for the same set of uploaded documents

Posted on 12 Mar 2024 14:06:46

Seeking advice from community experts on Copilot Studio Gen AI vs. Copilot for M365

I have created a Copilot Studio bot with the Gen AI capability enabled. I manually uploaded multiple purchasing agreements / contracts into the Copilot Studio bot (Content Moderation set to Medium), and also uploaded the same set of documents into my Copilot for M365 interface.

When I submit the same prompt to both interfaces (e.g. "What are the standard payment terms in all the documents?"), I get different results, with the ones from my Copilot Studio chatbot much more limited.
Even with the same prompt and data source, the Copilot Studio chatbot's responses seem to always come from only one or two documents that Gen AI selects, while Copilot for M365's answers are usually more extensive.

Are there any technical workarounds / functionality / prompt writing techniques to make my Copilot Studio chatbot more "intelligent" and comprehensive, especially in being able to process and compare/summarize multiple documents for the same prompt?

Any advice would be helpful, at least to improve my understanding of the limitations of Gen AI in Copilot Studio.

Replies:
  • SS-14080712-0 on 14 Aug 2024 at 07:31:43
    Here's a simple test that the Studio-based copilots seem to fail on... or perhaps my understanding of them does...
     
    In the regular Copilot (from the MS Store), ask "referring only to VendorA.com and VendorB.com, list some products", and provided the list of options is small enough, it will respond with products from both vendors.
     
    Now in a custom copilot from Copilot Studio (Gen AI enabled, any Content Moderation value), add VendorA.com and VendorB.com as knowledge sources and ask "list some products" (or ALL products), and it will usually respond with products from only one vendor or the other.
     
    The same result happens when uploading VendorA.pdf and VendorB.pdf to the Copilot Studio version. I haven't tried this in the store-based Copilot, but I presume it also gets it right.
     
    So is this a factor of the subscription level one is on for Copilot Studio? Mine is on the MS Teams channel (via M365 licenses).
     
    Kantasit's suggestion about using better prompts and topics to direct the bot seems useful for steering VendorX queries toward or away from specific sources, but in the above example the query needs to read evenly across all sources.
     
    I've done something similar with 10-20 PDFs together in a copilot, and the problem is even more noticeable... the query "Which vendors have a certain capability?" seemed to max out at 2-3 sources.
     
    If this is a genuine limitation of the current Copilot Studio bots, then I fear they're only useful for guided query/search scenarios.
  • Kantasit on 16 Jul 2024 at 11:24:42

    I've experienced similar issues with Copilot Studio struggling to provide more extensive answers based on the context and content of my documents/knowledge, particularly with complex structures.

    One approach that improved its performance for me was creating better prompts. Additionally, I found that using the "Topic" function to divide complex documents and knowledge into different sections or topics helped significantly. This approach seems to aid the model in chunking the documents and understanding the content more effectively. It is also possible to set up triggers and prompts for these topics, which can help the model locate the relevant content more accurately.

    This might be a current limitation of Copilot compared to models like GPT-4 or Gemini, which tend to handle complex documents more effectively. I assume it may have to do with the models used in Copilot and the context window limitations of the current model, making it challenging for the Retrieval-Augmented Generation (RAG) process to produce better answers.
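    To make the retrieval point above concrete, here is a minimal sketch of how top-k retrieval over chunked documents can end up answering from only one or two sources even when many are indexed. It is a toy illustration, not Copilot's actual pipeline; the chunking scheme, the similarity scoring, and the value of k are assumptions made purely for illustration.

```python
# A minimal, self-contained sketch of top-k retrieval over chunked documents.
# This is NOT Copilot's actual pipeline: the chunk size, the bag-of-words
# similarity, and the value of k are assumptions made purely for illustration.
from collections import Counter
import math


def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word chunks (illustrative only)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def retrieve(query: str, documents: dict[str, str], k: int = 3):
    """Score every chunk of every document and keep only the k best overall."""
    q = Counter(query.lower().split())
    scored = [
        (cosine(q, Counter(piece.lower().split())), name, piece)
        for name, text in documents.items()
        for piece in chunk(text)
    ]
    scored.sort(key=lambda item: item[0], reverse=True)
    return scored[:k]


# With 10-20 sources, the k highest-scoring chunks can easily all come from
# one or two documents, so the grounded answer quietly ignores the rest --
# which matches the behaviour described in this thread.
docs = {
    "VendorA.pdf": "VendorA standard payment terms are net 30 days from invoice.",
    "VendorB.pdf": "VendorB standard payment terms are net 60 days from delivery.",
}
for score, name, piece in retrieve("What are the standard payment terms?", docs):
    print(f"{score:.2f}  {name}: {piece[:60]}")
```

    In a toy setup like this, raising k or keeping the best chunk per document (instead of the best k overall) is what spreads an answer evenly across all sources; whether and how Copilot Studio exposes that kind of control is exactly the open question in this thread.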

  • TN-16070546-0 on 16 Jul 2024 at 05:54:30
    I am facing a similar issue. For the same set of source documents and the exact same prompt, the responses generated by the Studio bot and the Teams Copilot (Office 365 Copilot) are totally different (the Teams bot has the better response). Can someone help me understand this behavior and suggest any workaround to get similar output?
     
    I have a SharePoint source with 10-15 documents as input, and as of now I see a 5-document limit in Teams Copilot.
