Tue Jul 19 2022

Is Integer Arithmetic Enough for Deep Learning Training?

Quantization
Deep Learning
Machine Learning
Reducing energy consumption and improving computational efficiency in deep learning models

Proposes a method for replacing floating-point arithmetic with low-bit integer arithmetic during training, with minimal performance degradation.

Implementing integer arithmetic in deep learning models can reduce energy consumption, memory footprint, and latency. The proposed method provides a fully functional integer training pipeline without the need for quantization, gradient clipping, or special hyper-parameter tuning. It achieves results comparable to floating-point training on classification, object detection, and semantic segmentation tasks.
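
For intuition, here is a minimal NumPy sketch of what integer-only compute in a linear layer can look like: floats are mapped to 8-bit integers with a per-tensor scale, the matrix multiply runs entirely in integer arithmetic with int32 accumulation, and the scales are folded back in afterwards. The symmetric scaling scheme and bit widths here are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def quantize(x: np.ndarray, bits: int = 8):
    """Map a float array to signed integers with a per-tensor scale.

    Symmetric per-tensor scaling is an illustrative assumption,
    not necessarily the mapping used in the paper.
    """
    qmax = 2 ** (bits - 1) - 1                           # e.g. 127 for 8 bits
    scale = max(float(np.max(np.abs(x))) / qmax, 1e-8)   # avoid a zero scale
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def int_linear(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Linear-layer forward pass using only integer multiplies.

    The matmul accumulates in int32; the two float scales are
    folded back in at the end to recover a real-valued output.
    """
    qx, sx = quantize(x)
    qw, sw = quantize(w)
    acc = qx.astype(np.int32) @ qw.astype(np.int32)  # integer-only matmul
    return acc.astype(np.float32) * (sx * sw)        # undo the scaling

# The integer result should closely track the float reference.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16)).astype(np.float32)
w = rng.standard_normal((16, 8)).astype(np.float32)
print(np.max(np.abs(int_linear(x, w) - x @ w)))      # small rounding error
```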
