Copilot Studio

Improving Chatbot Accuracy and Handling Uncertain Answers


Hi All,

I have created a chatbot with a link to a knowledge base as its data source. The chatbot works well; however, in many cases it makes up answers that are not correct, which makes it less reliable. I have set the moderation level to high (the default), and in the "Create Generative Answers" topic's "Content Instructions" field I have added a prompt instructing the bot NOT to make up answers when it is not 100% sure, but this is not working.

I'm reaching out to the community for advice on improving the accuracy of the chatbot's answers. If you've had success with the "Content Instructions" feature, could you share which prompts worked effectively for you?

If the bot is not 100% sure of an answer, I would like it to tell the user it cannot answer that question.

Thanks in advance for your help!

Reply from JT-28081628-0:

I'll let others answer with more specifics, but it's my understanding that 100% accuracy is not achievable with a generative AI model, even with RAG and custom instructions.

Reply from citron-truc:

Hello!

I hope you are doing well. If you want your chatbot to give more deterministic answers, you should set the temperature and top_p parameters to low values (this works only for Azure OpenAI models, unfortunately not for copilots).
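
As a rough illustration, here is how those parameters look with the Azure OpenAI Python SDK (a minimal sketch; the endpoint, key, deployment name, and question are placeholders, not anything specific to your setup):

    from openai import AzureOpenAI

    # Placeholders: replace with your own Azure resource details.
    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
        api_key="YOUR-API-KEY",
        api_version="2024-02-01",
    )

    response = client.chat.completions.create(
        model="YOUR-DEPLOYMENT-NAME",  # Azure deployment name, not the base model
        messages=[{"role": "user", "content": "What is the warranty period?"}],
        temperature=0.0,  # low temperature -> near-deterministic token choices
        top_p=0.1,        # low top_p -> sample only from the most likely tokens
    )
    print(response.choices[0].message.content)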

From what I understand, content instructions are used to give context and to ask for a specific answer format ("Pretend you are a pirate and start every answer with 'shiver me timbers'"). Giving a few example questions and answers might also help your chatbot (see https://platform.openai.com/docs/guides/prompt-engineering/strategy-write-clear-instructions).
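
For example, a pair of demonstrations in the instructions might look like this (hypothetical wording; adapt it to your knowledge base):

    Q: What is the warranty period?
    A: According to the knowledge base, the warranty period is 12 months.
    Q: Who won the 2022 World Cup?
    A: I can only answer questions about our products.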

When you use generative AI on your data, there is an intermediate step where an indexer ranks documents by how relevant they are to the question. A hidden prompt then asks the generative AI: "given these five documents and this question, find a suitable answer". The AI doesn't know the probability of each document actually matching the question, so it always "feels" 100% sure. Telling it not to answer when it isn't sure won't help here.
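
To make that concrete, the hidden step follows roughly this pattern (a sketch of the general RAG flow, not Copilot Studio's actual internals; all names here are illustrative):

    def build_grounding_prompt(question: str, ranked_docs: list[str]) -> str:
        # The indexer has already selected the top-ranked documents; the model
        # sees only this prompt, never the ranking scores themselves.
        context = "\n\n".join(
            f"Document {i + 1}:\n{doc}" for i, doc in enumerate(ranked_docs)
        )
        return (
            "Given the following documents and this question, "
            "find a suitable answer using only the documents.\n\n"
            f"{context}\n\nQuestion: {question}\nAnswer:"
        )

Since the ranking scores never reach the model, it has no way to judge how weak the match was.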

Hope it helps, have a great day.

Reply from CleanFox:

Telling an LLM bot something like "Don't", "Do not", or "Never" is generally considered bad practice. It is better to enforce the "do's" instead of the "don'ts". A "do" is like telling you to do a task.

Using "don'ts" is like telling you to do a task but NOT to think about trains (now you will probably think about trains).

I can't guarantee better results, but I think this should at least improve your model's accuracy.

