Regular Microsoft Copilot (free tier, quick answers) gives a comprehensive and accurate answer to a query such as "According to [website], [question]". The response is well-crafted and faithfully represents the [website] information source.
An equivalent Copilot Studio agent, with Knowledge set to the same [website] and asked the exact same [question], consistently fails to answer at all (i.e. it routes to the Fallback topic).
Elaborating on the [question] by adding a keyword known to appear in the knowledge source does produce a response, but the response is so heavily abbreviated relative to the knowledge source that it's useless to the user. Even this workaround is inconsistent: one particular keyword yields an abridged response, while a different keyword taken from the exact same section of [website] content still routes to Fallback.
Knowledge source = single [website]
Generative AI = Enabled
Moderation = High
Foundational Knowledge = Disabled
In an effort to get a more comprehensive response, I've also tried setting the Response Instructions to:
1. Focus on providing detailed and complete answers, using content verbatim from the knowledge source.
2. Highlight key points and relevant information from the knowledge source.
3. If multiple sections of the document are relevant to a question, provide all sections.
I'd be happy to share the specific [website] and [question] in a direct conversation; it just doesn't seem appropriate to post them publicly here, as they're work-related.
Any suggestions welcome. I'm keen to promote Copilot Studio agents internally by demonstrating a PoC, but I need to resolve this basic issue if I'm to get business buy-in.