Hello,
I'm grappling with some puzzling behavior from a Copilot bot deployed through Azure OpenAI Studio.
In my project within Azure OpenAI Studio, I've trained the model on a mix of documents, seven in German and one in English, sourced from an Azure Blob Storage container. During testing in the Azure playground, the bot gave excellent responses to questions about both the German and the English documents, showcasing its versatility.
After deploying the solution as a web app, the results remained consistent and satisfactory. However, when I deploy the same solution to Copilot, the bot struggles: it consistently responds with "I’m sorry, I’m not sure how to help with that. Can you try rephrasing?" even to straightforward prompts like "What information can you provide me?"
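To help narrow things down, here is a minimal sketch of how the same deployment and search index could be queried directly via the chat completions "on your data" extension, outside both the web app and Copilot. The resource, deployment, and index names are placeholders, and the request shape assumes a recent API version:

```python
import os
from openai import AzureOpenAI  # assumes the openai Python SDK, v1.x or later

# Placeholders: replace with the actual resource, deployment, and index names.
client = AzureOpenAI(
    azure_endpoint="https://<my-resource>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # an API version that supports data_sources; adjust as needed
)

response = client.chat.completions.create(
    model="<my-deployment>",  # chat model deployment name
    messages=[{"role": "user", "content": "What information can you provide me?"}],
    extra_body={
        # Attach the same Azure AI Search index the Studio project uses.
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": "https://<my-search>.search.windows.net",
                    "index_name": "<my-index>",
                    "authentication": {
                        "type": "api_key",
                        "key": os.environ["AZURE_SEARCH_KEY"],
                    },
                },
            }
        ]
    },
)

print(response.choices[0].message.content)
```

If a direct call like this answers questions about the English document, the index itself is presumably fine and the issue lies somewhere in the Copilot channel.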
What perplexes me is this: when I deploy to Copilot a bot trained solely on English documents, it performs well. However, when the dataset includes the German documents alongside the one English document, the bot fails to respond coherently, even to prompts about the English document.
Does anyone in the community have insights into this behavior? Any help or suggestions would be greatly appreciated.