The bot needs to answer only from two uploaded sources (an XML hierarchy file and a PDF manual).
- With **model knowledge OFF**, the bot sticks to the sources but struggles to interpret queries; it often replies “not sure” even when a matching entry clearly exists.
- With **model knowledge ON** plus strict instructions to answer only from the sources, it still sometimes ignores the constraint and blends in hallucinated results.

What I want the bot to do instead:
1. Use LLM reasoning only to interpret and normalize the user’s query (synonyms, typos, related terms).
2. Always pull actual answers strictly from my uploaded XML/PDF.
3. If there’s no exact match, go up one level in the XML hierarchy and try again.
4. If still nothing, say “no match found” — never fabricate.
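The four steps above can be sketched in plain code to show how little of the flow actually needs the LLM. This is a minimal sketch, not a definitive implementation: it assumes the XML is a nested hierarchy of `<node name="...">` elements (your schema will differ), and `normalize_query` is a stub standing in for the hypothetical LLM call used only for step 1 — the lookup itself never touches model knowledge.

```python
import xml.etree.ElementTree as ET

# Toy stand-in for the uploaded XML hierarchy (assumption: real schema differs).
XML_SOURCE = """
<catalog>
  <node name="hardware">
    <node name="printers">
      <node name="laserjet 400">Toner model: CF280A</node>
    </node>
  </node>
</catalog>
"""

def normalize_query(raw: str) -> str:
    # Step 1: in production this would be the LLM call that fixes typos and
    # expands synonyms; stubbed here as simple lowercasing.
    return raw.strip().lower()

def find_path(root, query):
    """Return the list of ancestors ending at the first node whose name matches."""
    def walk(node, path):
        for child in node.findall("node"):
            if child.get("name") == query:
                return path + [child]
            hit = walk(child, path + [child])
            if hit:
                return hit
        return None
    return walk(root, [])

def answer(raw_query: str) -> str:
    query = normalize_query(raw_query)
    root = ET.fromstring(XML_SOURCE)
    path = find_path(root, query)
    # Steps 2-3: take the exact match if it has content; otherwise climb
    # one level at a time up the hierarchy and retry.
    while path:
        text = (path[-1].text or "").strip()
        if text:
            return text
        path = path[:-1]  # go up one level
    # Step 4: never fabricate.
    return "no match found"
```

With this split, the only place hallucination can creep in is `normalize_query`, and its output is just a lookup key, so a bad normalization degrades to “no match found” rather than an invented answer.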
I'd appreciate any patterns, prompt-engineering tips, or configuration tricks that have worked for you.
