Hello everyone,
I have a Custom Vision model trained in Azure, and it works well with the AI Builder component in Power Apps and with the native connector in Power Automate, which I use to call the model and get results for the image sent or taken by the user. The image shows bounding boxes around the detected objects, and I can work with the model's outputs to show information to the user. Everything works fine so far.
What I want to do now is let the user upload an image and manually draw bounding boxes around objects in it, so the image can be sent to the Custom Vision service as labeled training data and enrich the model in future training iterations. The user should be able to draw the boxes with the mouse or by touching the screen, and then submit the labeled image to Azure.
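For the Azure side, if I understand the docs correctly, the Custom Vision Training API can accept an image together with labeled regions in a single call. Here is a rough Python sketch of what I have in mind (the endpoint, project ID, training key, and tag ID are placeholders you'd replace with your own; region coordinates must be normalized to 0..1 relative to the image size):

```python
import base64
import json
import urllib.request

# Placeholder values -- replace with your own Custom Vision resource details.
ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"
PROJECT_ID = "YOUR-PROJECT-GUID"
TRAINING_KEY = "YOUR-TRAINING-KEY"


def to_region(tag_id, box, img_w, img_h):
    """Convert a pixel-space box (x, y, w, h) into the normalized
    0..1 coordinates the Custom Vision Training API expects."""
    x, y, w, h = box
    return {
        "tagId": tag_id,
        "left": x / img_w,
        "top": y / img_h,
        "width": w / img_w,
        "height": h / img_h,
    }


def build_payload(name, image_bytes, regions):
    """Build an ImageFileCreateBatch body for the
    POST .../images/files endpoint: one image, base64-encoded,
    with its user-drawn regions attached."""
    return {
        "images": [
            {
                "name": name,
                "contents": base64.b64encode(image_bytes).decode("ascii"),
                "regions": regions,
            }
        ]
    }


def upload_labeled_image(payload):
    """Send the labeled image to the Custom Vision Training API
    (CreateImagesFromFiles). Not executed here -- requires real credentials."""
    req = urllib.request.Request(
        f"{ENDPOINT}/customvision/v3.3/Training/projects/{PROJECT_ID}/images/files",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Training-Key": TRAINING_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example: a box the user drew at pixel (100, 50), 200x100 px,
# on a 1000x500 image, tagged with a hypothetical tag GUID.
region = to_region("tag-guid", (100, 50, 200, 100), 1000, 500)
payload = build_payload("user_upload.jpg", b"raw-image-bytes-here", [region])
```

In Power Apps the drawing part might be doable with the Pen Input control layered over the image, or by capturing touch coordinates, and then Power Automate (or a custom connector) could make the HTTP call above. I'd appreciate any pointers on whether this approach is sound.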
Is this possible? Does anyone have an idea of how to make this work?
Thanks.