Wavelets in the deep learning era

Abstract

Sparsity-based methods, such as wavelets, were state-of-the-art for inverse problems for more than 20 years before being overtaken by neural networks. In particular, U-nets have proven extremely effective. Their main ingredients are highly non-linear processing, massive learning made possible by advances in optimization algorithms and GPU computing, and the use of large datasets for training. It is far from obvious which of these three ingredients has the biggest impact on performance. While the many stages of non-linearity are intrinsic to deep learning, learning from training data can also be exploited by sparsity-based approaches. The aim of our study is to push the limits of sparsity by using, as U-nets do, massive learning and large datasets, and then to compare the results with U-nets. We present a new network architecture, called learnlets, which preserves the properties of sparsity-based methods, such as exact reconstruction and good generalization, while leveraging the power of neural networks for learning and fast computation. We evaluate the model on image denoising tasks. Our conclusion is that U-nets perform better than learnlets, while learnlets have better generalization properties.
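As context for the abstract, here is a minimal sketch of the classical sparsity-based pipeline that learnlets build on: transform the signal, soft-threshold the detail coefficients, then invert the transform. This is illustrative code only (a single-level Haar transform in NumPy), not the learnlet implementation; it shows the exact-reconstruction property the abstract refers to, since with the threshold set to zero the inverse transform recovers the input exactly.

```python
import numpy as np

def haar_decompose(x):
    """One level of the orthonormal Haar transform of an even-length 1-D signal."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert haar_decompose exactly (perfect reconstruction)."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    """Shrink coefficients toward zero by t: the sparsity-promoting step."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x, t=0.5):
    """Transform, threshold the details, invert. With t=0 this is the identity."""
    approx, detail = haar_decompose(x)
    return haar_reconstruct(approx, soft_threshold(detail, t))
```

Learnlets replace the fixed analysis/synthesis filters and thresholds of such a scheme with learned ones, trained on large datasets, while keeping this overall structure.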

Publication
Journal of Mathematical Imaging and Vision
Kevin Michalewicz
Research Postgraduate

Some of my main interests include Machine Learning, Image and Signal Processing, and Physics in general.