I have a question about adding knowledge to a single custom copilot. Are there any recommended best practices for this? I understand that there is a storage limit tied to Dataverse capacity, but I am curious whether there is a point at which having too much knowledge causes performance degradation.
For example, if I have a document library filled with 500+ documents and I feed all of that data to the copilot, is there a point at which the additional knowledge starts to hurt the accuracy and quality of the results?
I have noticed that when I point a copilot at a single SharePoint site as a knowledge source, the results are not as good as when I feed it individual files. My understanding is that content referenced from SharePoint is not indexed, vectorized, or chunked the way uploaded files are, so the answers are less accurate.
As a result, I decided to upload an entire document library's worth of files to my copilot individually, and the results were definitely better. However, I am hesitant to feed it too many files, as I am worried that performance may degrade.
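In case it helps to see how I got the files out of the library in the first place, below is a minimal sketch using the Microsoft Graph API to list and download every file in a document library so they can be uploaded to the copilot individually. The GRAPH_TOKEN environment variable and DRIVE_ID placeholder are assumptions; substitute whatever auth flow and drive ID apply to your tenant.

```python
import os
import requests

# Assumptions: a valid Microsoft Graph access token in GRAPH_TOKEN and the
# document library's drive ID in DRIVE_ID. Replace with your own values.
GRAPH_TOKEN = os.environ["GRAPH_TOKEN"]
DRIVE_ID = "<your-drive-id>"
HEADERS = {"Authorization": f"Bearer {GRAPH_TOKEN}"}

def list_files(url):
    """Yield every file (driveItem) in the library, following paging and subfolders."""
    while url:
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        data = resp.json()
        for item in data.get("value", []):
            if "folder" in item:
                # Recurse into subfolders
                yield from list_files(
                    f"https://graph.microsoft.com/v1.0/drives/{DRIVE_ID}/items/{item['id']}/children"
                )
            else:
                yield item
        url = data.get("@odata.nextLink")  # next page, if any

root_url = f"https://graph.microsoft.com/v1.0/drives/{DRIVE_ID}/root/children"
for item in list_files(root_url):
    # driveItems expose a pre-authenticated download URL as an annotation
    download_url = item.get("@microsoft.graph.downloadUrl")
    if not download_url:
        continue
    content = requests.get(download_url).content
    with open(item["name"], "wb") as f:
        f.write(content)
    print(f"Downloaded {item['name']} ({item.get('size', 0)} bytes)")
```

The downloaded files can then be added to the copilot one by one, which is what gave me the better results described above.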
Any advice or recommendations would be greatly appreciated. Thank you.