Power Platform Community / Forums / Copilot Studio / Generative Answer Node...

Generative Answer Nodes Disregarding Custom Instructions Across Agents - Please Help

Posted by JT-29071526-0
Hello,
 
I'm experiencing some very concerning issues with generative answer nodes, and would love any guidance or suggestions you might provide. I suspect it's something on Microsoft's end, but I really don't know how I can get support from them with this product other than posting here.
 
In short, it seems like more often than not, Copilot Studio now completely disregards any custom instructions associated with a generative answers node. For example, I have an agent that is designed to reason over resources associated with Virginia standardized tests. It should respond to the user's question by serving up a link to the appropriate resource, and the page # or slide # where the answer can be found. Sometimes, this works amazingly well. Other times, it seems like the agent is deciding to completely ignore my custom directions. Screenshots of the directions and the test response are attached.
 
As you can see in the screenshot, the first time I ask a specific question, the agent answers perfectly. Then when I ask the exact same question again, it instead responds as if the only text sent to the LLM was the user's question. 
 
I'm really excited about the possibilities that Copilot Studio creates for building custom educational and training experiences. I am trying to serve as an advocate for the tool within my organization, and will be giving a demonstration at a large district event at the end of this month. However, it's a really big concern that I can have an agent working perfectly one day, and then, despite no changes on my end, it will randomly start behaving erratically and ignoring instructions. Since I can't be confident that my agent will act in a predictable way, at this point I'm concerned my presentation will have to be: "Here's a video of the time it actually worked how it was supposed to, to show you what's possible; maybe we should explore this tool if Microsoft gets it to work in a few years."
 
This issue seems to be consistent and appears in multiple agents. I recently submitted a paper for publication that explores AI in education, with a focus on Copilot Studio. I created a tutoring agent that uses generative answer nodes to answer user questions, and recently recorded a demo video of it for the paper. The attached screenshot shows the expected output of the agent, which aligns with the specific instructions about how to answer that I placed in the generative answers node. But when recording my video, it seems the only input the LLM received was "B" (the value of the variable passed as input), because its answer was just rambling about how B is an important letter of the alphabet. Each time I test this agent, it's a roll of the dice as to whether it will work or not.
 
On a possibly related note, over the last few weeks it seems like the tool is less effective in general. I have a very simple tool that answers questions about my district's website. It was created through the "agent creation wizard" process, basically just associating the website link and nothing else. It used to work great, but now it returns fake URLs that lead nowhere, and it insists that I am not mentioned anywhere on the district site even though I know I am.
Attachments: Generative Response.jpg, Generative Answer Instructions.jpg, TutorScreenshot.jpg
  • Suggested answer
    Michael E. Gernaey (Moderator)
     
    So this is where I wish I could simply say, "No, no, here are some options, it works perfectly." Honestly, I run into this myself, and that's when I am using a single data source (a file) and don't even have many custom requirements.
     
    That being said, one thing I have noticed, and while I cannot promise it will solve your issue, it worked better for me, was to add the instructions at the agent level. Yes, I would also put things in the action/tool/step, but it never fully worked as I wanted, or at least not as often as I wanted.
     
    In some cases, I essentially added very specific and detailed instructions on the agent's main Instructions page, as well as instructions in the steps, playing with them until it responded MOST of the time exactly how I wanted. However, I was never able to get it to 100%.
     
    That being said, I don't know the sources you are using; can you tell me exactly what type? Just an internal website? Is it SharePoint or the web? Or are these public sites?
     
    Now, as for it thinking you only typed "B": that would need more debugging, as I have not had that issue. Unless it's an issue with your session(s) or conversation management (ending them, redirects, losing variables across the session, etc.), it's just not something I can easily answer from the details above.
     
    Try first adding some specific instructions at the agent level; explain the knowledge source in detail, more in-depth than in your other instructions. Let's at least see if we can get you closer.
     
    As for reaching Microsoft: unless you have an account with the ability to open a case, or you are willing to pay the per-case fee, there isn't really free Microsoft support as such. But you can try going to the M365 portal (portal.office.com) for your tenant and opening a case there. That's your starting point.
  • Suggested answer
    RH-08070332-0
    I have the same problem; I manage 3 agents and two of them are giving me this issue. :(
  • ShaneMeisnerTH
    What we have found here is that enabling "Tenant graph grounding with semantic search" in the agent's settings can cause all kinds of strange responses or non-responses.

    The majority of the time that users report their agent is not providing correct answers from their grounded knowledge sources, we have found disabling that setting solves the issue.
     
    It has gotten to the point that I wish Microsoft would make that setting disabled by default; we now tell everyone to disable it for all agents.
     
    Not only because results can be inaccurate, but also because there is an extra cost associated with its use.
  • JT-29071526-0
     
    Thanks so much for the responses!
     
    In case others have this problem, I want to share a workaround I discovered. Instead of adding custom directions within the generative answers node, I tried adding the instructions to the node's input, which worked! For example, the default input might be Activity.Text (the last message from the user). Before the Generative Answers node, I added a "Set variable value" node to create a new custom variable (e.g., set Topic.YourVarHere to Activity.Text). I then incorporated this variable into the input of the generative answers node using an expression like this:
     
    "<instructions>
    Custom instructions here. 
    </instructions>
     
    <user_question>
    " & Topic.YourVarHere & "
    </user_question>"
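For anyone who wants the whole pattern in one place, here is a minimal Power Fx sketch of the two steps above. The variable name Topic.YourVarHere and the instruction text are placeholders, not anything Copilot Studio requires:

```powerfx
// Step 1: "Set variable value" node, placed before the Generative Answers node:
//   Topic.YourVarHere = Activity.Text   (the user's last message)

// Step 2: formula used as the Generative Answers node's input:
"<instructions>
Custom instructions here.
</instructions>

<user_question>
" & Topic.YourVarHere & "
</user_question>"
```

The tags themselves are arbitrary delimiters; the point is that the instructions travel inside the text the node actually sends to the model.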
     
    @ShaneMeisnerTH
     
    Thanks for the suggestion. That is definitely easier than what I did. I will give this a try! 
     
    @RH-08070332-0 
     
    It's good to know I'm not the only one. If Shane's suggestion doesn't work, using a formula for the generative answers input might be worth a try as described above. 
     
    @Michael E. Gernaey
     
    All of my sources are currently PDFs. I have tried adding directions for expected outputs at the agent level as you described. That seems to work/help, and for a while was quite successful. But at some point something changed, and it no longer helps. For my use case, I also want the directions to be different for each node. For example, with my tutoring agent, I want to give custom instructions for each question like:
     
    ----
    The Context:
     
    The user has incorrectly answered a multiple choice question.
     
    The question was: "The smallest unit of biological structure that meets the functional requirements of 'living' is the __."
     
    The choices were:
    A. organ
    B. organelle
    C. cell
    D. macromolecule
     
    The correct answer is C. cell. The student selected this answer:
     $quizAnswer
     
    Your task:
     
    In your response, explain the following things to the user:
     
    1. Explain that the correct answer is "cell", and explain why.
    2. Explain why the answer they chose ($quizAnswer) is incorrect.
    3. Finally, tell the user what page of their Openstax Textbook Concepts of Biology they can go to learn more. 
    ---------
     
    But with the instructions above, the agent's answer is just about the input variable, which is a letter. It's like it doesn't register at all that we're dealing with biological structure or any of the terms listed in the instruction. "C is the third letter of the alphabet. What would you like to know about C?"
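Applying the input workaround described earlier in the thread to this template, the Generative Answers input could be built as a single expression. This is just a sketch, and it assumes the student's choice is stored in a topic variable such as Topic.quizAnswer (the $quizAnswer placeholder above):

```powerfx
"<instructions>
The user has incorrectly answered a multiple-choice question.
The question was: 'The smallest unit of biological structure that meets
the functional requirements of living is the __.'
The choices were: A. organ, B. organelle, C. cell, D. macromolecule.
The correct answer is C. cell. The student selected: " & Topic.quizAnswer & "
In your response: (1) explain that the correct answer is cell and why,
(2) explain why the answer they chose (" & Topic.quizAnswer & ") is incorrect, and
(3) tell the user what page of their OpenStax Concepts of Biology textbook
they can go to for more.
</instructions>

<user_question>
" & Topic.quizAnswer & "
</user_question>"
```

Because the context rides along in the input text, the model sees the whole scenario instead of just the letter stored in the variable.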
  • Michael E. Gernaey (Moderator)
    Hi,
     
    Ugh, this is so annoying; I understand completely. Let me think on this a little more and come back with suggestions.
  • JT-29071526-0
    I tried @ShaneMeisnerTH's suggestion of disabling "Tenant graph grounding with semantic search". Unfortunately, that did not solve the problem for me. I'm attaching two new screenshots showing how the agent responds to my question when instructions are provided as input vs when they are added to a generative answers node. In both circumstances, the instructions are identical. This is with "tenant graph grounding with semantic search" disabled. 
    DirectionsAsInput.png
    DirectionsInGenAnswersNode.png
  • JT-29071526-0
    I apologize for the multiple replies.
     
    I'm also now realizing that even when my agent follows instructions, it will only do so once.
     
    For example, I ask it a question. It answers as intended, only providing the slide # and not actually summarizing the answer. I then ask it the exact same question, and it answers by summarizing (screenshot attached). In both cases, I can see on the left that the topic has been triggered, since it's presenting my canned message for this topic. I don't understand why it would behave differently.
     
    What's even weirder is that this was occurring when the final node of my topic was "end current topic". If I change it to "end conversation", then the agent works as intended. This is really puzzling. 
     
    Edit: Now it's happening regardless of how the topic ends. Sometimes, and sometimes not; we're back to dice-roll territory. I recently started experimenting with Google Gems. In the current state of both products, a Gem is better at following directions for very specific/complex behaviors provided in natural language than Copilot is at following directions presented in a way that should not be ambiguous at all, using expressions, topic flows, and variables.
    2025-07-09_08-26-56.jpg
  • Michael E. Gernaey (Moderator)
     
    Apologies, too many surgeries and I'm catching up.
     
    Were you able to resolve this, or are you still having issues?
     
    I notice that in one of my tenants, where I have different environments on different release cadences, I also see different results.
     
     
  • JT-29071526-0
     
    I am still having issues unfortunately. Sometimes it works just fine - but other times not.
     
    The best workaround for me so far has been to add the custom instructions to the input of the generative answers node, alongside the variable.

