Self-conditioned Embedding Diffusion for Text Generation
Proposes a continuous diffusion mechanism that operates on token embeddings, enabling flexible and scalable diffusion models for text generation. Shows that these models generate samples comparable to those produced by standard autoregressive language models while being more efficient on accelerator hardware at inference time (a minimal sketch of the sampling idea follows this entry).
Can be used to improve text generation models for business operations such as chatbots, automated customer service, and content creation. Adopting this method can also reduce inference cost and latency on accelerator hardware.
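To make the idea concrete, here is a minimal sketch of continuous diffusion over token embeddings with self-conditioning: embeddings are repeatedly noised and denoised, the model's previous estimate of the clean embeddings is fed back in as an extra input, and the final embeddings are rounded to the nearest vocabulary entries. All names and numbers below (`denoise_fn`, `q_sample`, the toy schedule and dimensions) are illustrative placeholders, not the paper's implementation.

```python
# Illustrative sketch only: a toy embedding-diffusion sampling loop.
import numpy as np

rng = np.random.default_rng(0)

VOCAB, EMB_DIM, SEQ_LEN, STEPS = 100, 16, 8, 50
embedding_table = rng.normal(size=(VOCAB, EMB_DIM)).astype(np.float32)

def alpha_bar(t):
    # Cosine-like schedule in [0, 1]: fraction of signal kept at step t.
    return float(np.cos(0.5 * np.pi * t / STEPS) ** 2)

def q_sample(x0, t):
    # Forward process: mix clean embeddings with Gaussian noise.
    a = alpha_bar(t)
    noise = rng.normal(size=x0.shape).astype(np.float32)
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * noise

def denoise_fn(x_t, t, x0_prev):
    # Placeholder for a trained transformer denoiser. Self-conditioning:
    # the previous estimate of the clean embeddings is an extra input.
    return 0.5 * x_t + 0.5 * x0_prev  # toy update, not a trained model

def sample(seq_len=SEQ_LEN):
    x_t = rng.normal(size=(seq_len, EMB_DIM)).astype(np.float32)
    x0_hat = np.zeros_like(x_t)            # self-conditioning starts at zero
    for t in reversed(range(1, STEPS + 1)):
        x0_hat = denoise_fn(x_t, t, x0_hat)
        x_t = q_sample(x0_hat, t - 1)      # re-noise the estimate to step t-1
    # Map final embeddings back to the nearest token in the embedding table.
    dists = ((x0_hat[:, None, :] - embedding_table[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

tokens = sample()
print(tokens)  # all positions decoded in parallel, unlike autoregressive LMs
```

Because every position is denoised in parallel at each step, the total number of model calls is fixed by the step count rather than the sequence length, which is the source of the claimed inference efficiency on accelerators.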
Tell Your Story: Task-Oriented Dialogs for Interactive Content Creation
Introduces a new dataset, C3 (Conversational Content Creation), consisting of 10k multi-turn dialogs conditioned on media montages simulated from a large media collection. Proposes task-oriented dialogs for montage creation as an interactive tool for seamlessly searching, compiling, and editing montages from a media collection (a sketch of a possible record layout follows this entry).
Can be used to improve content creation workflows and enhance user experiences. The proposed method can help automate the manual and time-consuming process of montage creation and improve storytelling capabilities.
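A task-oriented dialog for montage creation can be pictured as a sequence of turns, each pairing a user utterance with a system action and the resulting montage state. The schema below is a hypothetical illustration; the field names (`turns`, `system_action`, `montage`) and the action set are assumptions, not the published C3 format.

```python
# Hypothetical sketch of a C3-style dialog record; not the actual dataset schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Turn:
    user_utterance: str          # e.g. "Add the beach photos from July"
    system_action: str           # assumed action set: search / compile / edit
    montage: List[str] = field(default_factory=list)  # media item ids after the turn

@dataclass
class Dialog:
    dialog_id: str
    turns: List[Turn] = field(default_factory=list)

# Example multi-turn dialog driving montage creation.
dialog = Dialog(
    dialog_id="c3-demo-0001",
    turns=[
        Turn("Show me photos from last summer's trip.", "search", []),
        Turn("Start a montage with the beach sunset shots.", "compile",
             ["img_2041", "img_2042"]),
        Turn("Drop the second one and add the bonfire video.", "edit",
             ["img_2041", "vid_0113"]),
    ],
)
print(len(dialog.turns), "turns,", len(dialog.turns[-1].montage), "items in montage")
```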