1-Bit FQT: Pushing the Limit of Fully Quantized Training to 1-bit
Abstract
Fully quantized training (FQT) accelerates the training of deep neural networks by quantizing activations, weights, and gradients into lower precision. To explore the ultimate limit of FQT (the lowest achievable precision), we make a first attempt at 1-bit FQT. We provide a theoretical analysis of FQT based on Adam and SGD, revealing that gradient variance influences the convergence of FQT. Building on these theoretical results, we introduce an Activation Gradient Pruning (AGP) strategy. The strategy leverages the heterogeneity of gradients by pruning less informative gradients and enhancing the numerical precision of the remaining gradients to mitigate gradient variance. Additionally, we propose Sample Channel joint Quantization (SCQ), which applies different quantization strategies to the computation of weight gradients and activation gradients, ensuring that the method is friendly to low-bitwidth hardware. Finally, we present a framework to deploy our algorithm. When fine-tuning VGGNet-16 and ResNet-18 on multiple datasets, our algorithm achieves an average accuracy improvement of approximately 6% over per-sample quantization. Moreover, it delivers a training speedup of up to 5.13x compared to full-precision training.
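The following is a minimal PyTorch sketch of the prune-then-quantize idea behind AGP as described above: drop low-magnitude activation gradients and spend the saved precision on the survivors. The magnitude-based threshold, the keep ratio, and the sign-and-scale 1-bit quantizer are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def agp_quantize(grad: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Illustrative Activation Gradient Pruning (AGP) sketch.

    Prunes the lowest-magnitude activation gradients and represents the
    survivors with a 1-bit sign plus a shared scale. The threshold rule
    and scale choice are assumptions for illustration only.
    """
    # Keep only the k largest-magnitude gradient entries (prune the rest).
    k = max(1, int(keep_ratio * grad.numel()))
    thresh = grad.abs().flatten().kthvalue(grad.numel() - k + 1).values
    mask = grad.abs() >= thresh
    kept = grad * mask

    # 1-bit (sign + scale) quantization of the surviving gradients.
    scale = kept.abs().sum() / mask.sum().clamp(min=1)  # mean |g| of survivors
    return torch.sign(kept) * scale                     # values in {-s, 0, +s}
```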