Power Platform Community / Forums / Copilot Studio / Invalid Reference Link...
Suggested Answer

Invalid Reference Links Using GPT-5 Chat

I've created an HR assistant that answers Human Resources questions using the documentation and knowledge stored on a SharePoint site, with GPT-5 Chat as the agent model.
 
Everything was working perfectly until about a week ago, when I began noticing strange behavior. Although the assistant answers questions correctly, the reference links it provides at the end of the response, when clicked, take the user to a location that doesn't exist at all! Beyond the URL not existing, it's even referencing files in Jira! For the record, I only have one knowledge source (the HR SharePoint site) and have never referenced Jira in the instructions or anywhere else. This occurs in test mode as well as in the published agent. I haven't changed my instructions or anything else; this started happening out of nowhere.
 
I've tried looking around online to see if there's a known outage or reported issue, but cannot find anything concrete. The worst part is that when I switch the agent model to Sonnet or Opus, it works perfectly. However, I can't use those models because they take far too long to respond, which is why I'm using GPT-5 Chat, the only one that responds in a timely fashion.
 
Anybody have any ideas or insight? Is this a Microsoft issue? An OpenAI issue? Was a patch or update deployed within the past week that could have affected it?
  • CU29091925-0

    Hi @AA-01051611-0. What you're dealing with is actually a known pain point with GPT-5 Chat in Copilot Studio, and honestly, the Jira thing is the biggest clue here. You never configured Jira and never mentioned it anywhere, yet it's showing up in citations. That's classic hallucination behavior. The model is answering your questions correctly because it really is pulling content from SharePoint just fine, but when it goes to generate the reference links, it "makes them up" based on what enterprise URLs typically look like. And Jira is everywhere in enterprise environments, so it fits a pattern the model has seen in training.

    The timing you mentioned, about a week ago, lines up with Microsoft pushing backend changes to how SharePoint knowledge retrieval works in Copilot Studio. You didn't change anything, but they did. That's almost certainly what flipped the switch here.

    The reason Sonnet and Opus work fine makes total sense too. Those models handle citation grounding differently. The tradeoff is obviously the speed issue, which is a real problem for an HR assistant where people expect quick answers.

    Here's what I'd actually do:

    First, go check the "Work IQ" setting in the agent. It's a SharePoint retrieval feature that requires authentication to be set to "Authenticate with Microsoft." If something changed there, it could be silently breaking citation resolution.

    Second, double-check how your SharePoint URL is entered as a knowledge source. It sounds minor, but Microsoft changed some things around how those URLs get resolved, and a URL that worked before might not resolve the same way now.

    Third, and this is just to stop the bleeding for your users: you can suppress the citations entirely with a simple formula in your message node. The broken links are arguably worse than no links at all, so hiding them while you sort this out is a reasonable call.
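As a rough illustration of that stop-gap (the citation format here is an assumption for the example, not the actual Copilot Studio output shape, and `strip_citations` is a hypothetical helper), stripping numbered citation markers from a response string might look like:

```python
import re

def strip_citations(answer: str) -> str:
    """Remove inline [n] markers and trailing reference-list lines of the
    form '[n]: url' from a model response (assumed citation format)."""
    # Drop reference-list lines like "[1]: https://..."
    answer = re.sub(r"^\[\d+\]:\s*\S+\s*$", "", answer, flags=re.MULTILINE)
    # Drop inline [1], [2] markers left in the body text
    answer = re.sub(r"\s*\[\d+\]", "", answer)
    return answer.strip()

raw = "See the leave policy for details [1].\n\n[1]: https://example.atlassian.net/wiki/page"
print(strip_citations(raw))  # -> See the leave policy for details.
```

In Copilot Studio itself you'd do the equivalent with a Power Fx expression in the message node, but the idea is the same: present the answer text and drop the reference block.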

    But honestly, the most important thing is to open a support ticket with Microsoft. This isn't something you can fully fix on your side. The citation generation pipeline broke on their end, and you need them to look at it. Log it through the Power Platform admin center. Also worth checking your Microsoft 365 service health dashboard sometimes these regressions show up there quietly.

    You're not going crazy and you didn't break anything. This is on Microsoft.

  • Suggested answer
    11manish
    The issue you’re experiencing is most likely due to GPT-5 Chat generating (hallucinating) citation links instead of strictly using the retrieved SharePoint sources.
     
    This can happen due to recent model behavior changes, and it explains why the answers remain correct while the reference links are invalid or unrelated (such as Jira).
     
    Since other models like Sonnet or Opus work correctly, the problem is model-specific rather than configuration-related.
     
    The recommended approach is to enforce grounded citations in your prompt, avoid letting the model generate URLs freely, or post-process references so that only valid SharePoint links are returned.

