Thu Aug 25 2022
MaskCLIP: Masked Self-Distillation Advances Contrastive Language-Image Pretraining
Self-Distillation
Natural Language Processing (NLP)
Computer Vision (CV)
Language-Image Pretraining
Linear Probing
Finetuning
Zero-shot Performance
This paper presents a simple yet effective framework, MaskCLIP, which incorporates a newly proposed masked self-distillation objective into contrastive language-image pretraining. It achieves superior results in linear probing and finetuning, as well as in zero-shot performance.
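To make the combined objective concrete, here is a minimal sketch of how a CLIP-style contrastive loss can be paired with a masked self-distillation term: the contrastive part is the standard symmetric InfoNCE loss over image-text pairs, while the distillation part has a student (fed a masked image) match the features an EMA teacher produces from the full image, on the masked patches only. The cosine-similarity distillation loss, the weight `lam`, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Symmetric InfoNCE over L2-normalized image/text embeddings [B, D].
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2


def masked_self_distillation_loss(student_feats, teacher_feats, mask):
    # Student sees the masked image; the (frozen, EMA) teacher sees the
    # full image. Match per-patch features [B, N, D] on masked patches only.
    per_patch = 1 - F.cosine_similarity(
        student_feats, teacher_feats.detach(), dim=-1)      # [B, N]
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)


def maskclip_loss(image_emb, text_emb, student_feats, teacher_feats,
                  mask, lam=1.0):
    # Combined objective; the weight lam is an assumption for illustration.
    return (clip_contrastive_loss(image_emb, text_emb)
            + lam * masked_self_distillation_loss(
                student_feats, teacher_feats, mask))


if __name__ == "__main__":
    # Smoke test with random tensors: batch of 4, 196 patches, dim 512.
    B, N, D = 4, 196, 512
    loss = maskclip_loss(
        torch.randn(B, D), torch.randn(B, D),
        torch.randn(B, N, D), torch.randn(B, N, D),
        (torch.rand(B, N) > 0.25).float(),
    )
    print(loss.item())
```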
Businesses that work with language and image data can use MaskCLIP to improve pretraining and achieve better results on downstream tasks.