
Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback

External knowledge and automated feedback
Natural Language Processing
Machine Learning
Task-oriented dialog
Open-domain question answering

LLM-Augmenter significantly reduces ChatGPT’s hallucinations without sacrificing the fluency and informativeness of its responses.

Implement LLM-Augmenter to improve the performance of large language models in real-world mission-critical applications such as task-oriented dialog and question answering.
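For orientation, here is a minimal sketch of the paper's check-then-revise loop: consolidate external evidence, prompt the LLM, score the candidate response with an automated utility check, and feed critique back until the response is grounded. All helpers below (`retrieve_evidence`, `call_llm`, `utility_score`) are illustrative stand-ins, not the paper's actual components or API.

```python
def retrieve_evidence(query: str) -> str:
    # Stand-in for the knowledge consolidator (e.g. web or KB search).
    return "ChatGPT was released by OpenAI in November 2022."

def call_llm(prompt: str) -> str:
    # Placeholder for any chat-completion API call.
    return "ChatGPT was released in November 2022."

def utility_score(response: str, evidence: str) -> float:
    # Naive proxy for the paper's utility module: fraction of response
    # tokens that also appear in the retrieved evidence.
    tokens, ev = response.lower().split(), set(evidence.lower().split())
    return sum(tok in ev for tok in tokens) / max(len(tokens), 1)

def augmented_respond(query: str, max_rounds: int = 3, threshold: float = 0.8) -> str:
    evidence = retrieve_evidence(query)
    feedback, response = "", ""
    for _ in range(max_rounds):
        prompt = f"Evidence:\n{evidence}\n\nQuestion: {query}\n{feedback}"
        response = call_llm(prompt)
        if utility_score(response, evidence) >= threshold:
            break  # response is sufficiently grounded; stop revising
        feedback = "Feedback: ground every claim in the evidence above."
    return response

print(augmented_respond("When was ChatGPT released?"))
```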

Decoupling Human and Camera Motion from Videos in the Wild

Data-driven human motion priors
Computer Vision
Machine Learning
PoseTrack

The proposed optimization method decouples camera and human motion, making it possible to place the people in a video within a shared world coordinate frame.

Use the proposed optimization method to reconstruct global human trajectories from challenging in-the-wild videos and to improve the performance of downstream tracking on PoseTrack.
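As a toy illustration of why decoupling helps, the sketch below (assuming PyTorch, with synthetic data) optimizes the unknown scale of a monocular camera trajectory so that the composed world-frame human trajectory is smooth. Trajectory smoothness here is a crude stand-in for the paper's learned human motion prior, not its actual objective.

```python
import torch

T = 60
t = torch.linspace(0, 2, T)
zeros = torch.zeros(T)
# Synthetic ground truth: a shaky camera moving along x, a person walking along y.
cam_t_true = torch.stack([t + 0.2 * torch.sin(5 * t), zeros, zeros], dim=1)
person_world = torch.stack([zeros, 0.8 * t, zeros], dim=1)
person_cam = person_world - cam_t_true      # person in camera coordinates (identity rotation)

cam_t_slam = cam_t_true / 3.0               # monocular SLAM output: right shape, unknown scale
scale = torch.nn.Parameter(torch.tensor(1.0))
opt = torch.optim.Adam([scale], lr=0.05)

for _ in range(400):
    opt.zero_grad()
    traj = scale * cam_t_slam + person_cam  # compose camera + person -> world trajectory
    accel = traj[2:] - 2 * traj[1:-1] + traj[:-2]
    loss = (accel ** 2).sum()               # smoothness as a motion-prior stand-in
    loss.backward()
    opt.step()

print(f"recovered scale: {scale.item():.2f} (true scale: 3.00)")
```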

Language-Driven Representation Learning for Robotics

Language-driven representations
Robotics
Machine Learning
Grasp affordance prediction
Language-conditioned imitation learning
Intent scoring for human-robot collaboration

Voltron's language-driven representations strictly outperform the prior state of the art.

Implement Voltron to learn from human videos and their associated captions for language-conditioned imitation learning, intent scoring for human-robot collaboration, and a diverse set of other robot learning problems.
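A common way to consume representations like Voltron's is to freeze the pretrained encoder and train only a small task head on top. The sketch below (assuming PyTorch) shows that recipe for a binary grasp-affordance probe; the backbone is a random stand-in, not the released Voltron model.

```python
import torch
import torch.nn as nn

class FrozenBackbone(nn.Module):
    """Random stand-in for a pretrained visuo-lingual encoder."""
    def __init__(self, dim: int = 384):
        super().__init__()
        self.proj = nn.Linear(3 * 64 * 64, dim)
        for p in self.parameters():
            p.requires_grad = False  # the representation stays frozen

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.proj(images.flatten(1))

backbone = FrozenBackbone()
head = nn.Linear(384, 1)                      # trainable grasp-affordance probe
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(8, 3, 64, 64)            # toy batch of frames
labels = torch.randint(0, 2, (8, 1)).float()  # graspable or not

for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(head(backbone(images)), labels)
    loss.backward()
    opt.step()
```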

Modulating Pretrained Diffusion Models for Multimodal Image Synthesis

Diffusion models
Computer Vision
Image generation and synthesis

Multimodal conditioning modules (MCMs) enable conditional image synthesis from pretrained diffusion models without retraining them.

Gives users control over the spatial layout of generated images and improves alignment with the conditioning inputs; the modules are cheap to train, even with limited examples.
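The underlying pattern is a small trainable module that modulates intermediate features of a frozen diffusion backbone. Below is a minimal FiLM-style sketch of that pattern (assuming PyTorch); the frozen block and module shapes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FrozenBlock(nn.Module):
    """Stand-in for one block of a pretrained diffusion UNet."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        for p in self.parameters():
            p.requires_grad = False  # pretrained weights are never updated

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.conv(x))

class ConditioningModule(nn.Module):
    """Trainable module mapping a spatial condition to scale/shift modulation."""
    def __init__(self, cond_ch: int = 1, ch: int = 64):
        super().__init__()
        self.net = nn.Conv2d(cond_ch, 2 * ch, 3, padding=1)

    def forward(self, feats: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        scale, shift = self.net(cond).chunk(2, dim=1)
        return feats * (1 + scale) + shift  # modulate the frozen features

block, mcm = FrozenBlock(), ConditioningModule()
feats = torch.randn(2, 64, 32, 32)  # intermediate diffusion features
cond = torch.randn(2, 1, 32, 32)    # e.g. a segmentation or sketch map
out = block(mcm(feats, cond))       # only `mcm` parameters receive gradients
```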

MUX-PLMs: Pre-training Language Models with Data Multiplexing

Transformers
Machine Learning
Natural Language Processing
Language modeling

Pre-trained multiplexed language models (MUX-PLMs) improve inference efficiency on downstream tasks.

Achieves a 2x/5x inference speedup with only a minimal drop in performance on GLUE and token-level tasks; pre-trained checkpoints are released for different multiplexing configurations.
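In outline, data multiplexing combines N inputs into a single sequence, runs one encoder pass for all of them, and then demultiplexes per-instance outputs. The sketch below (assuming PyTorch) is a simplified stand-in for the paper's multiplexer/demultiplexer, using additive instance keys and a vanilla Transformer encoder layer.

```python
import torch
import torch.nn as nn

N, L, D = 4, 16, 64  # instances multiplexed together, sequence length, hidden dim

mux_keys = nn.Parameter(torch.randn(N, D))  # instance-specific transforms (toy: additive keys)
encoder = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
demux = nn.ModuleList(nn.Linear(D, D) for _ in range(N))  # per-instance recovery heads

x = torch.randn(N, L, D)                                      # N separate input sequences
muxed = (x + mux_keys[:, None, :]).mean(dim=0, keepdim=True)  # (1, L, D): one combined sequence
h = encoder(muxed)                                            # single forward pass serves all N inputs
outputs = [demux[i](h) for i in range(N)]                     # demultiplex back into N outputs
print(len(outputs), outputs[0].shape)                         # 4 torch.Size([1, 16, 64])
```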
