Quantized SG-MCMC for Bayesian deep posterior compression

Author
Hernández, Sergio
López-Cortes, Xaviera
Date
2025
Abstract
In this paper, we propose a novel quantization technique for Bayesian deep learning aimed at enhancing efficiency without compromising performance. Our approach leverages post-training quantization to significantly reduce the memory footprint of stochastic gradient samplers, particularly Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) methods. This technique achieves a level of compression comparable to optimal thinning, which traditionally necessitates not only the original samples in single precision floating-point representation but also the gradients, resulting in substantial computational overhead. In contrast, our quantization method requires only the original samples and can accurately recover posterior modes through a simple affine transformation. This process incurs minimal additional memory or computational costs, making it a highly efficient alternative for Bayesian deep learning applications.
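The affine recovery described in the abstract can be sketched as follows. This is an illustrative example, not the authors' implementation: the function names (`quantize`, `dequantize`) and the synthetic bimodal stand-in for SG-MCMC draws are assumptions. It shows how float32 posterior samples can be stored as 8-bit integers (a 4x memory reduction) and approximately recovered with a simple affine transformation, with reconstruction error bounded by half a quantization step.

```python
# Sketch: affine post-training quantization of SG-MCMC posterior samples.
# Hypothetical helper names; not the paper's exact implementation.
import numpy as np

def quantize(samples, num_bits=8):
    """Map float samples to unsigned integers via an affine transform."""
    qmax = 2**num_bits - 1
    lo, hi = float(samples.min()), float(samples.max())
    scale = (hi - lo) / qmax
    zero_point = lo
    q = np.round((samples - zero_point) / scale).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate samples: theta ~= scale * q + zero_point."""
    return scale * q.astype(np.float32) + zero_point

rng = np.random.default_rng(0)
# Synthetic stand-in for SG-MCMC draws: a bimodal posterior over one weight.
samples = np.concatenate([rng.normal(-1.0, 0.1, 500),
                          rng.normal(2.0, 0.1, 500)]).astype(np.float32)

q, scale, zp = quantize(samples)          # stored as uint8: 4x smaller
recovered = dequantize(q, scale, zp)      # affine recovery of the samples
print("max reconstruction error:", np.abs(samples - recovered).max())
```

Because the quantized samples preserve the locations of high-density regions up to the quantization step, posterior modes survive the round trip, which is the property the paper's compression argument relies on.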
Source
Communications in Computer and Information Science, 2270, 158-169
DOI
doi.org/10.1007/978-3-031-80084-9_11