I am using Perplexity Deep Research, but when I check the sources of a response, the information is not correct. Is there anything I can do to prevent this?
3 Likes
I have had this same issue when using Perplexity. It sounds strange, but the easiest way to reduce hallucination is to tell the AI not to make anything up and to give only information it is confident is correct.
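If you use an LLM through an API rather than the web UI, the same advice can be applied as a system message prepended to every request. A minimal sketch, assuming a chat-completions-style message format; the function name and the exact instruction wording are my own, not a Perplexity feature:

```python
# Hypothetical helper: prepend a grounding instruction to a chat-style
# message list before sending it to an LLM API. The instruction text is
# illustrative; tune the wording for your own use case.

GROUNDING_INSTRUCTION = (
    "Answer using only information you can verify from your sources. "
    "If you are not certain, say so instead of guessing, and do not "
    "make anything up."
)

def with_grounding(messages):
    """Return a new message list with the grounding instruction first.

    Any existing system messages are dropped so the grounding
    instruction is the single system message in the request.
    """
    system = {"role": "system", "content": GROUNDING_INSTRUCTION}
    return [system] + [m for m in messages if m.get("role") != "system"]

# Example: wrap a user question before passing it to a chat-completions call.
msgs = with_grounding(
    [{"role": "user", "content": "What sources support this claim?"}]
)
print(msgs[0]["role"])
```

This keeps the anti-hallucination instruction in one place instead of repeating it in every user prompt.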
3 Likes
Thanks, funny that it worked.
4 Likes
Yes, AI is incredible and can do many helpful things — but large language models are still in their infancy, so they sometimes need a bit of guidance along the way.
3 Likes