Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs
LLMs can translate informal proofs into formal proof sketches that improve performance on formal math problems
This method helps automate the formalization of mathematical proofs, a task that is typically time-consuming and difficult. It also demonstrates that large language models can produce well-structured formal sketches that boost the performance of automated proof-search tools.
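To illustrate the idea (the paper itself works in Isabelle; this Lean snippet is only a schematic analogue), a formal sketch mirrors the structure of an informal proof while leaving low-level gaps for an automated prover to close:

```lean
-- Informal proof: n² + n is even because n(n + 1) is a product of
-- two consecutive integers, one of which must be even.
theorem even_sq_add_self (n : ℕ) : Even (n ^ 2 + n) := by
  -- Sketch step taken from the informal proof:
  -- rewrite n² + n as the product n * (n + 1).
  have h : n ^ 2 + n = n * (n + 1) := by ring
  rw [h]
  -- Gap deliberately left open; in the paper's pipeline an automated
  -- prover would be called here to discharge it.
  sorry
```

The `sorry` placeholder marks exactly the kind of conjectured intermediate gap that the method delegates to off-the-shelf automation.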
Instruction-Following Agents with Jointly Pre-Trained Vision-Language Models
InstructRL outperforms state-of-the-art pre-trained and trained-from-scratch methods on vision-based instruction-following tasks
InstructRL is a simple yet effective model that lets robots solve instruction-following tasks in vision-based environments. It uses a pre-trained multimodal transformer to encode visual observations and language instructions into generic cross-modal representations, and a transformer-based policy that conditions on the full history of observations and actions to predict actions autoregressively. InstructRL outperforms state-of-the-art pre-trained and trained-from-scratch methods in both single-task and multi-task settings, while also showing better scalability and generalization than prior work.
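The two-stage design described above can be sketched in a few lines. This is a minimal, illustrative stand-in, not InstructRL's actual code: `encode` fakes the frozen pre-trained multimodal encoder with a hash-seeded pseudo-embedding, and mean-pooling over the history stands in for the causal transformer policy; all names and sizes here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_ACTIONS = 16, 4  # embedding size and action count (illustrative)

def encode(observation, instruction):
    """Stand-in for the frozen pre-trained multimodal transformer:
    maps an (observation, instruction) pair to one cross-modal vector.
    A real system would run the transformer on image pixels and text."""
    seed = abs(hash((observation, instruction))) % (2 ** 32)
    return np.random.default_rng(seed).standard_normal(D)

# Untrained linear policy head, purely for illustration.
W = rng.standard_normal((D, N_ACTIONS))

def act(history, instruction):
    """Autoregressive policy: condition on the full history of
    observations (mean-pooled here as a stand-in for causal
    self-attention) and score the next discrete action."""
    H = np.stack([encode(obs, instruction) for obs in history])
    context = H.mean(axis=0)
    logits = context @ W
    return int(np.argmax(logits))

# Roll out a short episode: each step conditions on everything so far.
history = []
for obs in ["frame0", "frame1", "frame2"]:
    history.append(obs)
    action = act(history, "pick up the red block")
```

The key structural point the sketch preserves is that the policy re-encodes and attends over the *entire* observation history at every step rather than a single frame, which is what lets the transformer policy act autoregressively.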