Learning with Banach Spaces and Sparse Approximation Methods
Learning with Banach Spaces. In machine learning, the dominant approach extracts features by mapping patterns into a Hilbert space. Hilbert spaces constitute a special and limited class
of Banach spaces. By reaching out to other Banach spaces, one gains a greater variety of geometric structures and norms that are potentially useful for learning and approximation.
For instance, the l1 norm is commonly employed for sparsity pursuit. The purpose of this ongoing project is to explore Banach space methods in machine learning.
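As a concrete illustration (notation ours, not taken from the cited works), the contrast between the squared-norm penalty of Hilbert-space learning and sparsity-promoting l1 coefficient regularization can be sketched as:

```latex
% Kernel ridge regression in an RKHS H_K (squared-norm penalty):
\min_{f \in \mathcal{H}_K} \; \frac{1}{n} \sum_{i=1}^{n} \bigl( f(x_i) - y_i \bigr)^2
  + \lambda \, \| f \|_{\mathcal{H}_K}^{2} .

% l1 coefficient regularization (sparsity pursuit): restrict
% f = \sum_j c_j K(\cdot, x_j) and penalize the coefficient vector:
\min_{c \in \mathbb{R}^n} \; \frac{1}{n} \sum_{i=1}^{n}
  \Bigl( \sum_{j=1}^{n} c_j \, K(x_i, x_j) - y_i \Bigr)^2
  + \lambda \, \| c \|_{1} .
```

The l1 penalty tends to drive many coefficients c_j exactly to zero, which is the sparsity effect referred to above.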
- Reproducing kernel Banach spaces with the discrete l1 norm that satisfy a linear representer theorem were constructed in . The linear representer theorem
improves the learning rate of l1 norm coefficient regularization, which was shown for the square loss in .
- By making use of semi-inner products, we proposed the notion of reproducing kernel Banach spaces (RKBS) and
investigated regularized learning schemes in the RKBS framework .
- The representer theorem for various regularized learning schemes in RKBS was proved in  and .
These results extend the classical representer theorem for reproducing kernel Hilbert spaces (RKHS).
- The classical theory of frames and Riesz bases extends naturally to Banach spaces via semi-inner products ,
where a Shannon sampling theorem for RKBS was also established. By contrast, the RKHS of the Gaussian kernel does not admit a Shannon sampling theorem.
- In some applications, for instance when the Lp norm is used for regularization, the standard semi-inner product may not be convenient. All possible generalizations of
semi-inner products were explored in .
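For reference, the standard concrete example of a semi-inner product is the Giles construction on the real space Lp(mu) for 1 < p < infinity (the formula below is a standard sketch, not drawn from the cited works):

```latex
% Giles semi-inner product on real L^p(\mu), 1 < p < \infty:
[f, g]_{L^p} \;=\; \frac{1}{\|g\|_{p}^{\,p-2}}
  \int f \, |g|^{p-1} \operatorname{sgn}(g) \, d\mu ,
\qquad \text{so that} \qquad
[g, g]_{L^p} = \|g\|_{p}^{2} .
```

It is linear in its first argument and recovers the ordinary inner product when p = 2; for p different from 2 it is neither symmetric nor linear in the second argument, which is precisely what distinguishes semi-inner products from inner products.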