Copilot Studio
Training my Agent

My direct question is: can you train your agent during the testing phase, or in another area? I'm not interested in adding new knowledge articles or adding to the instructions. I'm curious whether I can interact with my agent and, when it's wrong, tell it that it's wrong and what it should have said instead. I'm not sure if this is possible, but one example: if you have a question about XX department, call this number, but the agent gives the wrong number. I have over a dozen knowledge articles, so I'm not sure where it's pulling the incorrect number from, but I would rather just tell the agent it's wrong and what to say instead.
 
Thanks! 
  • Suggested answer
    David_MA (Super User 2026 Season 1)
    To my knowledge, you cannot train your agent the way you have described (or at least how I am interpreting what you are saying). When you are testing, does the test pane not show the knowledge sources the agent searched to generate the response? And in the response itself, does it not show the reference where it got the information from?
    In everything I've done so far in Copilot Studio, this is the behavior I have seen. I didn't set up the environment I use, so maybe there is something that needs to be enabled. To ensure references are provided, in the agent instructions I usually include something like this: Share warm, conversational observations that are grounded entirely in the text, using quotes or references to support every point.
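    For example, a fuller version of that instruction block (the wording here is illustrative, not an official Copilot Studio template) might read:

        Share warm, conversational observations that are grounded entirely in
        the provided knowledge sources. Support every point with a quote or a
        reference to the source article. If the sources do not contain the
        answer, say that you could not find it rather than guessing.

    Instructions like the last line tend to reduce the "confident wrong answer" behavior you're describing, though results vary by model and knowledge setup.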
     
    When it provided the wrong information, did you ask the agent what its source was for its prior response? That tends to work.
    As for telling the agent that it's wrong, that will not work: it will forget this by the next session. This is why the knowledge you provide to your agent needs to be maintained and accurate. It goes back to the old IT acronym GIGO: garbage in, garbage out. This is the primary reason AI projects fail: the knowledge provided to the agent is not correct.

    You may find this helpful: Garbage In, Garbage Out? Trust In The Data Behind AI Is Vanishing. AI has no concept of right or wrong. Telling the AI it was wrong will not work, because the data provided to it says otherwise. That is why you need to update the knowledge rather than tell the agent it was wrong.

