Language Models (Mostly) Know What They Know
Studies whether LMs can evaluate the validity of their own claims and predict which questions they will be able to answer correctly.
Provides insights into language models' self-evaluation on open-ended sampling tasks and their ability to predict the probability of knowing the answer to a question, which could be used for training more honest models.
Inner Monologue: Embodied Reasoning through Planning with Language Models
Closed-loop language feedback significantly improves high-level instruction completion.
Recommends using closed-loop language feedback to improve high-level instruction completion in robotic control scenarios.
HelixFold: An Efficient Implementation of AlphaFold2 using PaddlePaddle
HelixFold cuts training time roughly in half compared with the original AlphaFold2 and OpenFold when using hybrid parallelism.
Suggests using HelixFold, implemented in PaddlePaddle, to improve training and inference speed and reduce memory consumption for protein structure prediction, which could accelerate research in the life sciences.