Tue Aug 30 2022
Faithful Reasoning Using Large Language Models
Artificial Intelligence
Natural Language Processing
Machine Learning
Improving performance on multi-step problems
Generating humanly interpretable reasoning traces
Demonstrates the effectiveness of the approach on multi-step logical deduction and scientific question answering, producing humanly interpretable reasoning traces whose validity can be checked by the user.
This research shows how large language models can be chained together to perform faithful multi-step reasoning, significantly improving performance on inherently multi-step problems. It also introduces a method for generating humanly interpretable reasoning traces that the user can check for validity.
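The core idea of recording each reasoning step so the user can audit the trace can be sketched as a toy loop. This is a hypothetical illustration, not the paper's implementation: the `select` and `infer` functions below are rule-based stand-ins for what the paper realizes with fine-tuned language models.

```python
# Toy sketch of a step-wise reasoning loop that builds a checkable trace.
# Hypothetical: `select` and `infer` stand in for learned model components.
def select(context, trace):
    """Pick the next unused fact from the context (stand-in for a selection model)."""
    for fact in context:
        if fact not in trace:
            return fact
    return None

def infer(selection):
    """Derive a new statement from the selected fact (stand-in for an inference model)."""
    return f"Therefore, {selection.lower()}"

def reason(context, question, max_steps=3):
    """Build a step-by-step reasoning trace the user can inspect for validity."""
    trace = []
    for _ in range(max_steps):
        selection = select(context, trace)
        if selection is None:
            break
        trace.append(selection)          # record which fact was used
        trace.append(infer(selection))   # record what was derived from it
    return trace

context = ["All birds can fly", "Penguins are birds"]
trace = reason(context, "Can penguins fly?")
for step in trace:
    print(step)
```

Because every selection and every inference is appended to the trace, a user can inspect each intermediate step rather than trusting a single opaque answer.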