FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance
FrugalGPT proposes an LLM cascade strategy to reduce the cost and improve the accuracy of using large language models
FrugalGPT can reduce the cost of using large language models by up to 98% while matching their performance, or improve accuracy over GPT-4 by 4% at the same cost
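The core of the cascade strategy is to send a query through a sequence of increasingly capable (and expensive) models and stop as soon as a scorer judges the answer reliable. Below is a minimal illustrative sketch, not the authors' implementation: `call_model`, `score_answer`, the model names, and the thresholds are all placeholders.

```python
# Illustrative LLM cascade in the spirit of FrugalGPT (not the paper's code).
# Models are ordered from cheapest to most expensive; `score_answer` stands in
# for a learned scoring function that predicts whether an answer is reliable.

def llm_cascade(query, models, call_model, score_answer, thresholds):
    """Try models in order of cost; return the first answer whose score
    clears that model's acceptance threshold."""
    for model, threshold in zip(models, thresholds):
        answer = call_model(model, query)                # query the current model
        if score_answer(query, answer, model) >= threshold:
            return answer                                # accept and stop early
    return answer                                        # otherwise keep the last (strongest) model's answer


# Example usage with hypothetical model names and stub functions.
if __name__ == "__main__":
    models = ["cheap-model", "mid-model", "expensive-model"]
    thresholds = [0.9, 0.8, 0.0]                         # last threshold is 0: always accept
    call_model = lambda m, q: f"answer from {m}"
    score_answer = lambda q, a, m: 0.95 if m == "mid-model" else 0.5
    print(llm_cascade("What is the capital of France?", models,
                      call_model, score_answer, thresholds))
```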
NerfAcc: Efficient Sampling Accelerates NeRFs
NerfAcc provides efficient sampling techniques that accelerate NeRF training
NerfAcc can reduce the training time of several recent NeRF methods by 1.5x to 20x with minimal modifications to existing codebases
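The speedups come largely from spending samples only where the scene has content, e.g. skipping empty space with a coarse occupancy grid. The sketch below illustrates that idea in plain PyTorch; it is not the nerfacc API, and the grid resolution, bounds, and sampling scheme are illustrative choices.

```python
import torch

# Simplified illustration of occupancy-grid based sampling (the "skip empty
# space" idea behind efficient NeRF sampling); this is NOT the nerfacc API.

def sample_along_rays(rays_o, rays_d, occ_grid, aabb_min, aabb_max,
                      n_samples=128, near=0.0, far=1.0):
    """Uniformly place samples along each ray, then keep only those that
    fall inside occupied cells of a coarse boolean occupancy grid."""
    t = torch.linspace(near, far, n_samples, device=rays_o.device)       # (S,)
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t[None, :, None]     # (R, S, 3)

    # Map points to grid indices and look up occupancy.
    res = occ_grid.shape[0]
    norm = (pts - aabb_min) / (aabb_max - aabb_min)                      # normalized to [0, 1]^3
    idx = (norm * res).long().clamp_(0, res - 1)                         # (R, S, 3)
    keep = occ_grid[idx[..., 0], idx[..., 1], idx[..., 2]]               # (R, S) bool

    return pts[keep], keep                                               # only samples in occupied cells


# Toy usage: 4 rays, a 32^3 grid where only the central region is occupied.
occ = torch.zeros(32, 32, 32, dtype=torch.bool)
occ[12:20, 12:20, 12:20] = True
rays_o = torch.zeros(4, 3)
rays_d = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
pts, keep = sample_along_rays(rays_o, rays_d, occ,
                              aabb_min=torch.tensor([-1.0, -1.0, -1.0]),
                              aabb_max=torch.tensor([1.0, 1.0, 1.0]),
                              near=0.0, far=2.0)
print(pts.shape, keep.float().mean())   # fraction of samples actually evaluated
```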
Recommender Systems with Generative Retrieval
The paper proposes a single-stage generative retrieval paradigm for recommender systems, in which the model directly generates the identifiers of items to recommend
The proposed paradigm improves on current SOTA models on the Amazon dataset and generalizes better to cold-start items
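A toy sketch of the generative-retrieval idea follows: each item is represented as a short tuple of discrete codes (a "semantic ID"), and a sequence model is trained to generate the next item's codes from the user's history. The codes, vocabulary sizes, and the GRU stand-in below are illustrative placeholders, not the paper's model.

```python
import torch
import torch.nn as nn

CODEBOOK_SIZE, CODES_PER_ITEM, EMBED_DIM = 256, 4, 128

# Hypothetical semantic IDs (in practice these come from quantizing item
# content embeddings; here they are hand-picked for illustration).
item_semantic_ids = {
    "item_a": [12, 7, 201, 45],
    "item_b": [12, 7, 88, 3],      # items with similar content can share code prefixes
    "item_c": [199, 54, 2, 130],
}

class NextItemGenerator(nn.Module):
    """Autoregressive decoder over code tokens; a GRU is used here as a
    stand-in for a transformer decoder."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(CODEBOOK_SIZE, EMBED_DIM)
        self.rnn = nn.GRU(EMBED_DIM, EMBED_DIM, batch_first=True)
        self.head = nn.Linear(EMBED_DIM, CODEBOOK_SIZE)

    def forward(self, code_tokens):                      # (B, T) int64
        h, _ = self.rnn(self.embed(code_tokens))
        return self.head(h)                              # (B, T, CODEBOOK_SIZE)

# Training pair: flattened history codes -> codes of the next item (teacher forcing).
history = torch.tensor([item_semantic_ids["item_a"] + item_semantic_ids["item_b"]])
target = torch.tensor([item_semantic_ids["item_c"]])

model = NextItemGenerator()
logits = model(torch.cat([history, target], dim=1))
pred = logits[:, history.shape[1] - 1 : -1, :]           # predict each target code from its prefix
loss = nn.functional.cross_entropy(pred.reshape(-1, CODEBOOK_SIZE), target.reshape(-1))
print(loss.item())
```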
Large Language Model Programs
A methodology for embedding LLM calls within a classic program to perform more complex tasks
Can expand the capabilities of LLMs at a much lower cost than finetuning, providing a 6.4% improvement over the baseline for question-answering tasks.
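The idea is that ordinary program control flow orchestrates several focused LLM calls instead of asking one model to solve everything in a single prompt. The sketch below is a generic illustration, not the paper's exact pipeline; `llm` is a placeholder for any completion API, and the paragraph-filtering decomposition is an assumed example.

```python
# Illustrative "LLM program": Python control flow coordinates focused LLM calls.

def llm(prompt: str) -> str:
    """Stub standing in for a call to a language model."""
    return "<model output for: " + prompt[:40] + "...>"

def answer_question(question: str, paragraphs: list[str], top_k: int = 3) -> str:
    # Step 1: ask the model to rate each paragraph's usefulness; the filtering
    # itself is done by the program, not the model.
    scored = []
    for p in paragraphs:
        score = llm(f"Rate 0-10 how useful this paragraph is for answering "
                    f"'{question}':\n{p}\nScore:")
        scored.append((score, p))
    # Program-level selection logic (sorting by parsed score omitted for brevity).
    evidence = [p for _, p in scored[:top_k]]

    # Step 2: answer using only the selected evidence.
    context = "\n".join(evidence)
    return llm(f"Using only the evidence below, answer the question.\n"
               f"Evidence:\n{context}\nQuestion: {question}\nAnswer:")

print(answer_question("Who wrote the paper?",
                      ["para one", "para two", "para three", "para four"]))
```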
SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models
Proposes a parameter-efficient fine-tuning approach that improves the ability of pre-trained text-to-image diffusion models to handle concise narrative prompts
Enables diffusion models to understand and reason over concise natural language prompts without degrading image quality, making text-to-image diffusion models easier to use and improving the user experience
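Conceptually, a small trainable adapter injects knowledge from a large language model into the frozen text-encoder features that condition the diffusion model. The sketch below is a simplified illustration of that adapter idea; the dimensions, gating structure, and module names are assumptions, not the exact SUR-adapter architecture.

```python
import torch
import torch.nn as nn

class SimpleAdapter(nn.Module):
    """Toy adapter: project frozen LLM features into the text-encoder space and
    blend them into the conditioning via a learned per-token gate."""
    def __init__(self, clip_dim=768, llm_dim=4096, hidden=256):
        super().__init__()
        self.project_llm = nn.Linear(llm_dim, clip_dim)   # map LLM features into text-encoder space
        self.gate = nn.Sequential(                        # per-token blending weight in [0, 1]
            nn.Linear(2 * clip_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, text_feats, llm_feats):
        # text_feats: (B, T, clip_dim) from the frozen diffusion text encoder
        # llm_feats:  (B, T, llm_dim) from a frozen large language model
        llm_proj = self.project_llm(llm_feats)
        g = self.gate(torch.cat([text_feats, llm_proj], dim=-1))
        return text_feats + g * llm_proj                  # enriched conditioning for the U-Net

adapter = SimpleAdapter()
cond = adapter(torch.randn(2, 77, 768), torch.randn(2, 77, 4096))
print(cond.shape)  # torch.Size([2, 77, 768])
```

Only the adapter's parameters would be trained in such a setup, which is what keeps the approach parameter-efficient while the text encoder, LLM, and diffusion backbone stay frozen.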