Hi everyone,
I'm currently rolling out GenAI to my company via Teams.
I have released two apps so far:
First App: A simple GPT-4 connection for 1-to-1 chat via Teams and Azure OpenAI. For this, I utilized Azure OpenAI Service, Copilot Studio, and Power Automate to manage aspects like chat history and session-based responses.
Second App: An Azure Prompt Flow Endpoint that utilizes the "Chat with your own data" service.
Both apps employ Power Automate to establish the connection with the LLM.
However, I'm facing two issues:
- The responses sometimes take more than 2 minutes, leading to a timeout.
- I would like to implement streaming text responses to enhance the user experience.
I have already found the documentation on enabling streaming for the Azure ML endpoint: https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-enable-streaming-mode?view=azureml-api-2
Setting this up is not a big issue, but how do I adapt Copilot Studio and Power Automate to handle the streamed text as it arrives?
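For context on what the consumer side would need to do: according to the linked article, a streaming Prompt Flow endpoint returns Server-Sent Events when the request carries an `Accept: text/event-stream` header. Here is a minimal sketch of parsing such a stream in Python; the sample payload and the `answer` field name are illustrative assumptions, not actual output from my endpoint:

```python
import json

def parse_sse(stream_text):
    """Yield the JSON payload of each Server-Sent Event in a response body.

    SSE frames are lines of the form "data: <payload>" separated by blank
    lines; "[DONE]" is used here as an assumed end-of-stream sentinel.
    """
    for line in stream_text.splitlines():
        line = line.strip()
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload == "[DONE]":
                break
            yield json.loads(payload)

# Illustrative response body (made up, not real endpoint output):
sample = (
    'data: {"answer": "Hello"}\n\n'
    'data: {"answer": " world"}\n\n'
    'data: [DONE]\n\n'
)

# Concatenate the streamed chunks into the full answer.
tokens = [event["answer"] for event in parse_sse(sample)]
print("".join(tokens))  # -> Hello world
```

The difficulty, as far as I can tell, is that a standard Power Automate HTTP action waits for the complete response rather than consuming events incrementally like this, which is exactly the gap I'm asking about.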
Has anyone already done this, or is there another way to achieve it?
I look forward to your answers, and thank you very much!
Stefan