
Truth-O-Meter: Fact-checking content generated by an LLM

Text generated by a large language model (LLM) such as GPT-4 often suffers from factual errors and hallucinations. We build a fact-checking system, 'Truth-O-Meter', which identifies incorrect facts by comparing the generated text with the web and other sources of information, and suggests corrections. Text-mining and web-mining techniques are leveraged to identify correct corresponding sentences; a syntactic and semantic generalization procedure is also adapted to the content-improvement task. To handle inconsistent sources while fact-checking, we rely on argumentation analysis in the form of defeasible logic programming. We compare our fact-checking engine with competitive approaches based on reinforcement learning on top of an LLM and on token-based hallucination detection. Our approach is an instance of what we call a "shaped-charge learning architecture", which is intended to combine an efficient LLM with explainable inductive learning. We observe that LLM content can be substantially improved in factual correctness and meaningfulness.
https://github.com/bgalitsky/Truth-O-Meter-Making-ChatGPT-Truthful
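To give a flavor of the comparison step the abstract describes, here is a minimal illustrative sketch (not code from the Truth-O-Meter repository): generated sentences are checked for lexical support against trusted source sentences, and sentences whose best overlap falls below a threshold are flagged as candidate errors. The function names, the word-overlap (Jaccard) similarity measure, and the 0.5 threshold are all hypothetical choices for this example; the actual system uses richer syntactic and semantic generalization.

```python
# Hypothetical sketch of sentence-level fact flagging via word overlap.
# All names and the threshold are illustrative, not from the repository.

def tokens(sentence: str) -> set[str]:
    """Lowercased word set with surrounding punctuation stripped."""
    return {w.strip(".,;:!?") for w in sentence.lower().split()}

def support(claim: str, source: str) -> float:
    """Jaccard similarity between the word sets of claim and source."""
    a, b = tokens(claim), tokens(source)
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_unsupported(claims, sources, threshold=0.5):
    """Return claims whose best support across all sources is below threshold."""
    return [c for c in claims
            if max((support(c, s) for s in sources), default=0.0) < threshold]

claims = ["Paris is the capital of France.", "The moon is made of cheese."]
sources = ["Paris is the capital of France."]
print(flag_unsupported(claims, sources))  # → ['The moon is made of cheese.']
```

A flagged sentence would then be passed to the correction stage, where a matching source sentence suggests the replacement content.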

Speaker: Boris Galitsky

See the website link below to register.

Editor's Note: This event was originally scheduled for June 26 and is now a hybrid event.  Attend in person or online.

Tuesday, 06/27/23

Cost:

Free


Hacker Dojo

855 Maude Avenue
Mountain View, CA 94043
