Hi, has the "Create text with GPT on Azure OpenAI Service" function lost the ability to adjust the model parameters like temperature and token limit? I believe I was able to set these parameters two weeks ago, but I no longer see the ability to do so.
It is! Thanks, Ashish.
The fix should be available now. Can you please confirm if your scenario is working fine?
The fix we have deployed should be available early next week. I would suggest trying the scenario on Tuesday.
Hi, any updates on this? Thanks.
Okay, thanks. The issue existed before the ability to edit parameters was removed. Hopefully you’ll consider restoring the ability to edit them! Adjusting temperature and other parameters is also very important for this type of model usage.
There is a bug with the change we made; we are resolving it as soon as possible.
Thanks, understood. The max_tokens parameter is very helpful for GPT text generation, since the default completion token allowance far exceeds what is actually needed. For instance, one of my API calls requires only a two-to-three-word response from GPT, but it says it used 3,000 tokens for the completion. Any pointers? Below is my error.
"
Invalid prompt input. This model's maximum context length is 4097 tokens, however you requested 4118 tokens (1152 in your prompt; 2966 for the completion). Please reduce your prompt; or completion length..\",\"properties\":{\"BackendErrorCode\":\"InvalidInferenceInput\",\"DependencyHttpStatusCode\":\"400\"},\"innerErrors\":[{\"scope\":\"Generic\",\"target\":null,\"code\":\"TooManyInputTokens\",\"type\":\"Error\",\"properties\":{\"MlIssueCode\":\"TooManyInputTokens\"}}]},\"predictionId\":null}"}}}
Yes, the model parameters have been deprecated.