Hi all,
The support for generative answers is really great, but it seems to have a major flaw: conversation history is not factored in. Take a simple example:
Me: How do I do x when y is true?
Answer: [ good answer ]
Me: What about when z is true?
Answer: [ totally incorrect answer because the initial question and response don't seem to be included: the follow-up question is used without any context ]
Because the conversational UI/UX so strongly implies that you're conducting a multi-turn conversation, the lack of context can lead to results that are both terrible and very confusing for the user. It feels like a fatal flaw in the implementation.
Is there some kind of workaround to somehow include conversation history in fallback generative answers?
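The kind of workaround I have in mind, as a rough sketch: keep a transcript on the calling side and fold prior turns into each query, so the follow-up carries its context with it. Here's the idea in Python (generate_answer is a hypothetical stand-in for whatever actually produces the answer; none of this is Copilot Studio syntax):

```python
def generate_answer(query: str) -> str:
    # Placeholder: the real call would hit the bot / generative answers step.
    return f"[answer grounded only on: {query!r}]"

transcript: list[tuple[str, str]] = []  # (user message, bot answer) pairs

def ask(question: str) -> str:
    # Fold prior turns into the query so the follow-up brings its own context.
    context = "\n".join(f"User: {u}\nBot: {b}" for u, b in transcript)
    query = f"{context}\nUser: {question}" if context else question
    answer = generate_answer(query)
    transcript.append((question, answer))
    return answer

print(ask("How do I do x when y is true?"))
print(ask("What about when z is true?"))  # now carries the first turn along
```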
Please let me know.
Thank you!
Chris
Conversation history is still working fine for us.
Example scenario: the second and third messages check whether the bot can refer back to previous turns. Tested this morning; all good. 🤖
Something that can disrupt the conversation is other 'topics'. You might accidentally trigger another topic and find it hard to get back to your gen AI conversation. If you trigger the 'Multiple topics matched' topic, you may end up in the 'Fallback' topic (with the default topic configuration), and to the user it feels like the AI just can't answer the question, when actually it might have been able to.
I guess if we want to focus on generative AI conversations, maybe we should avoid using other topics, or keep their trigger phrases conceptually narrow, to reduce the risk of firing them when they're not needed?
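To illustrate the failure mode with a toy sketch (this is not Copilot Studio's actual matcher, just the general idea): an over-broad trigger list can collide with a legitimate topic, land the user in 'Multiple topics matched', and the query never reaches the generative fallback.

```python
import re

# Toy trigger lists; 'general_help' is deliberately over-broad.
TOPIC_TRIGGERS = {
    "order_status": ["order", "status"],
    "general_help": ["help", "how", "question"],
}

def route(user_message: str) -> str:
    words = set(re.findall(r"\w+", user_message.lower()))
    matched = [t for t, kws in TOPIC_TRIGGERS.items() if words & set(kws)]
    if len(matched) > 1:
        return "Multiple topics matched"  # disambiguation instead of an answer
    return matched[0] if matched else "generative answers fallback"

print(route("how do I check my order status?"))    # -> Multiple topics matched
print(route("what is the airspeed of a swallow?")) # -> generative answers fallback
```

Narrowing or removing the 'general_help' triggers lets the first kind of question reach its topic cleanly and everything else fall through to generative answers as intended.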
We still seem to be having related issues as of 5/28/2024. Is anybody else still having issues with conversation history? Any workarounds?
I am working on a project that is using Copilot Studio.
In our experience, Copilot temporarily lost the ability to use conversation history roughly 6 to 8 weeks ago. It was answering every question from scratch, with no knowledge of a 'previous answer'.
Then, it regained conversation history around 2 weeks ago.
Perhaps it was briefly disabled, while MS made some fixes? It was possible for the bot to go quite far off-topic before, but the functionality does seem a bit better since they re-enabled the conversation history.
It would be nice to understand what was going on.
Maybe, maybe not... these things are highly buggy and unreliable at the moment.
For example, the answer returned by a dynamically chained plugin is something different from a classic message sent to the user, so you have to test what exactly is included in the conversation history sent to the Generative answers node.
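A quick harness like the one below shows the kind of test I mean (every name here is hypothetical; the real payload has to be observed, not assumed): tag each transcript entry with its source, then vary which sources your history-building step forwards and compare the bot's answers.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    source: str  # "user", "classic_message", "plugin_response", ...
    text: str

transcript = [
    Entry("user", "How do I do x when y is true?"),
    Entry("classic_message", "Enable the y flag, then ..."),
    Entry("plugin_response", '{"result": "..."}'),  # may or may not count as history
    Entry("user", "What about when z is true?"),
]

def history_payload(entries, include_sources=frozenset({"user", "classic_message"})):
    # Assumption under test: plugin responses are excluded from history.
    return [e for e in entries if e.source in include_sources]

for e in history_payload(transcript):
    print(f"{e.source}: {e.text}")
```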
@HenryJammes If I use custom data from the Jira Service Management API (to get the knowledge base) via the Conversational boosting topic, do the last 10 turns still ground follow-up questions?
Hi @chrislrobert, the behavior has indeed changed since then, and we do use the last 10 turns to ground follow-up questions 🙂
If you have examples to share with a public website data source (the easiest one to repro), feel free to share them in IM.
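For anyone wondering what 'the last 10 turns' means mechanically, here's a minimal sketch of a bounded turn window (the 10-turn figure is the real behavior; everything else is just illustrative Python):

```python
from collections import deque

# A bounded window: once full, the oldest turn falls out automatically.
history: deque[tuple[str, str]] = deque(maxlen=10)  # (question, answer) turns

def grounding_context(question: str) -> str:
    turns = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
    return f"{turns}\nQ: {question}"

for i in range(12):
    history.append((f"question {i}", f"answer {i}"))

# Grounded on turns 2..11 only; turns 0 and 1 have fallen out of the window.
print(grounding_context("a follow-up"))
```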
So Henry, I've continued to have inconsistent results here. And I found this post that explicitly says "Single versus multi-turn answers: The boost conversation capability in PVA does not remember context across multiple questions in the conversation. Each boosted conversation post will consider as context only the URL and the latest query, not on the entire conversation. Make sure to provide sufficient context in each query instead of assuming the feature knows about the prior turn."
That was from earlier last year, so maybe this behavior has changed? But like I said, in my own testing, results are quite inconsistent, with context sometimes seeming to be included and sometimes not.
Thanks,
Chris
I just re-tested and you're right! I'm sorry. Is this something that recently changed? I'm using the same sequence I'd tested with three weeks ago, but perhaps there was something subtle that had reset the conversation in that earlier case.
Thanks for the quick clarification!
Chris
That's odd. Generative answers actually keeps the context of the last 10 turns, so follow-up questions are interpreted in the context of previous questions and answers.