
Azure AI Foundry Inference connector?

Posted by chrisadriane
Does anyone have experience with this connector? When trying to use it I get "Error Message: The connector 'Azure AI Foundry Inference' returned an HTTP error with code 404. Error Code: ConnectorRequestFailure". When I connect to the connector it looks like the image; I am not sure if this is how it is supposed to look, as it doesn't seem to actually be connected.
 
For the connection, can someone clarify which URI should be set here? In my Foundry playground I have tried all of the endpoints for the project (image below) as well as the direct URI to my specific model (for example, gpt-4o), and none of them seem to work.
 
  • Romain The Low-Code Bearded Bear (Super User 2025 Season 2):
    Following this too. Since I prefer to use the "Deploy to Copilot Studio" option from the AI Foundry playground, I don't use this connector, so I'm curious :)
  • chrisadriane:
    It seems I don't have that option. When I open my model in the playground, it looks like I can only deploy it as a web app. Do you experience the same issue, or does it allow you to deploy to Copilot Studio?
  • Romain The Low-Code Bearded Bear (Super User 2025 Season 2):
    @chrisadriane Many things could have changed since MS Build Monday, but I made a demo on May 7th and I had to change the URL from Azure AI Foundry to https://oai.azure.com/, since it was in preview. Probably many things have changed since MS Build.
     
    But it's a different thing; the inference connector is not the same. I was just mentioning that case because I have never tried your way :) so I'm curious about the answer :)
  • Suggested answer
    MM-18061923-0:
    I got it to work by fiddling with the endpoint URL. It should end with the GPT deployment name, like this:
     
    where xyz is the name of your Azure AI Foundry instance. It is in the default URL. And use the API version.
    (The general URL shape is sketched just below.)
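    To make that concrete (placeholder values only; this is an illustration based on this answer and on RMIC's fuller answer further down the thread, not an official format): take the model's target URI from Foundry and cut it right after the deployment name.
     
    Target URI from Foundry:  https://xyz.cognitiveservices.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2025-01-01-preview
    URL for the connector:    https://xyz.cognitiveservices.azure.com/openai/deployments/gpt-4o
    API version (own field):  2025-01-01-preview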
  • WCarterMVUSD:
    I'm getting a similar issue; whenever I try to use "Create a Chat Completion" I get:
    "error": {
        "code": "404",
        "message": "Resource not found"
    }
     
    I'm not sure what to do with this. I've tried many permutations of the target URI and of parameters like the model name, and nothing seems to work. Any help would be appreciated!
  • Suggested answer
    RMIC:
    This worked for me. 
     
    Go to your model in Azure AI Foundry and copy the URL and Key of your model:
    The URL must be modified to be accepted:
    https://<xyz>.cognitiveservices.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2025-01-01-preview
     
    You need to delete everything after your model name, so the URL ends at the deployment: https://<xyz>.cognitiveservices.azure.com/openai/deployments/gpt-4o
     
    With this information you can add a new "Azure AI Foundry Inference" connection:
    Display Name: <name of your connection>
    API Key: the key from your model (see above)
    Model deployment name: the name of your model deployment. I used gpt-4o
    Base Model name: I used the same value as before: gpt-4o
     
    Next, in Power Automate, add the "Create a chat completion" action:
    API version: 2025-01-01-preview (see also the URL above after "api-version")
    Body/Message: 
    [
      {
        "role": "system",
        "content": [
          {
            "type": "text",
            "text": "You are a helpful assistant."
          }
        ]
      },
      {
        "role": "assistant",
        "content": [
          {
            "type": "text",
            "text": "I am going to Paris, what should I do?"
          }
        ]
      }
    ]
    Body/temperature: I tested with: 0.1
    Body/top: I tested with: 0.1
    Body/max tokens: I tested with: 100
    Body/model: gpt-4o
    (A quick way to sanity-check these values outside of Power Automate is sketched just below.)
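    A minimal sketch of the same call made directly against the REST endpoint (assuming Python with the requests library; "xyz", the key, and the deployment name are placeholders you replace with your own values). If this returns 200, the same URL, key, and deployment name should work in the connector; a 404 usually means the URL or deployment name is wrong, and a 401 points at the key.
     
    import requests
     
    RESOURCE = "xyz"                    # placeholder: your Azure AI Foundry / Azure OpenAI resource name
    DEPLOYMENT = "gpt-4o"               # placeholder: your model deployment name
    API_VERSION = "2025-01-01-preview"
    API_KEY = "<key-from-your-model>"   # placeholder: the key copied in the step above
     
    # Base URL truncated at the deployment name, as described above;
    # the /chat/completions path and api-version are added for the request itself.
    url = (
        f"https://{RESOURCE}.cognitiveservices.azure.com"
        f"/openai/deployments/{DEPLOYMENT}/chat/completions"
    )
     
    response = requests.post(
        url,
        params={"api-version": API_VERSION},
        headers={"api-key": API_KEY, "Content-Type": "application/json"},
        json={
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "I am going to Paris, what should I do?"},
            ],
            "temperature": 0.1,
            "top_p": 0.1,
            "max_tokens": 100,
        },
        timeout=30,
    )
     
    # Print the status and payload so you can compare against the connector's error.
    print(response.status_code)
    print(response.json())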
  • CW-07110951-0:
    I encountered this issue too.
     
    Based on @MM-18061923-0's answer, here is my final configuration. It should work if you follow this screenshot.
     
    Good luck.