Hacker News

Can they also perform fact-checking?


The fact-checking-with-prompt-chaining[0] repository does something along those lines (its README has an example).

It is not perfect, and it is costly in the number of language-model calls it makes, but it helps a lot.

[0]: https://github.com/jagilley/fact-checker
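The chaining idea is roughly: one call to split the text into atomic claims, then one call per claim to verify it, which is where the extra cost comes from. A minimal sketch of that pattern (the `llm` callable is a hypothetical stand-in for a real language-model API; the linked repo's actual prompts and structure differ):

```python
def extract_claims(text, llm):
    """Step 1: one call asking the model to list the factual claims."""
    response = llm(f"List the factual claims in: {text}")
    return [line.strip("- ").strip() for line in response.splitlines() if line.strip()]

def verify_claim(claim, llm):
    """Step 2: one call per claim, verifying it in isolation."""
    verdict = llm(f"Is the following claim true? Answer TRUE or FALSE: {claim}")
    return verdict.strip().upper().startswith("TRUE")

def fact_check(text, llm):
    """Chain the steps: cost is 1 + (number of claims) model calls."""
    claims = extract_claims(text, llm)
    return {claim: verify_claim(claim, llm) for claim in claims}

# Demo with a canned fake model, just to show the data flow:
def fake_llm(prompt):
    if prompt.startswith("List"):
        return "- The sky is blue\n- 2 + 2 = 5"
    return "TRUE" if "sky" in prompt else "FALSE"

print(fact_check("The sky is blue and 2 + 2 = 5.", fake_llm))
# → {'The sky is blue': True, '2 + 2 = 5': False}
```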


One idea: if the model assigns high probability to making a QA API call, but the API returns nothing or returns conflicting results, the model could learn to say it isn't sure about the answer.
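That abstention rule could be sketched like this. Everything here is hypothetical (the `qa_api` callable, the `call_prob` signal, and the threshold are all assumptions, not any real system's interface); it only illustrates the decision logic:

```python
def answer(question, qa_api, call_prob, threshold=0.5):
    """Abstain when the model 'wanted' external facts (high call_prob)
    but the QA API came back empty or self-contradictory."""
    if call_prob < threshold:
        return None  # model would answer from its own knowledge instead
    results = qa_api(question)  # hypothetical: returns a list of candidate answers
    if not results:
        return "I'm not sure."
    if len(set(results)) > 1:
        return "I'm not sure; sources conflict."
    return results[0]

# Demo with fake QA backends:
print(answer("Capital of France?", lambda q: ["Paris", "Paris"], call_prob=0.9))
# → Paris
print(answer("Obscure fact?", lambda q: [], call_prob=0.9))
# → I'm not sure.
```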




