Langevin dynamics (LD) has proven to be a powerful technique for optimizing non-convex objectives: it is an efficient algorithm for finding local minima and eventually visits a global minimum on longer time scales. LD is based on the first-order (overdamped) Langevin diffusion, which is reversible in time. We consider non-reversible variants of the Langevin diffusion: underdamped Langevin dynamics (ULD), stochastic gradient Hamiltonian Monte Carlo (SGHMC), and Langevin dynamics with a non-symmetric drift (NLD). For non-convex stochastic optimization problems, we study recurrence times, escape times, and expected exit times, and show that acceleration over the first-order dynamics is possible.
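For concreteness, here is a minimal sketch of the diffusions in question, written in standard notation that is my assumption rather than the abstract's (f is the objective, beta > 0 the inverse temperature, gamma > 0 the friction coefficient, B_t a Brownian motion, and J an antisymmetric matrix):

\[
\text{(overdamped LD)} \qquad dX_t = -\nabla f(X_t)\,dt + \sqrt{2\beta^{-1}}\,dB_t,
\]
\[
\text{(ULD)} \qquad dV_t = -\gamma V_t\,dt - \nabla f(X_t)\,dt + \sqrt{2\gamma\beta^{-1}}\,dB_t, \qquad dX_t = V_t\,dt,
\]
\[
\text{(NLD)} \qquad dX_t = -(I + J)\nabla f(X_t)\,dt + \sqrt{2\beta^{-1}}\,dB_t, \qquad J^\top = -J.
\]

SGHMC can be viewed as a discretization of ULD in which the gradient is replaced by a stochastic estimate. The overdamped diffusion is reversible with respect to its Gibbs stationary distribution proportional to \(e^{-\beta f}\), whereas ULD and NLD are non-reversible while preserving the same marginal in x.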
This is based on joint work with Xuefeng Gao and Mert Gurbuzbalaban.