
Continuity of approximation by neural networks in \(L_p\) spaces. (English) Zbl 1055.41020

Summary: Devices such as neural networks typically approximate the elements of some function space \(X\) by elements of a nontrivial finite union \(M\) of finite-dimensional spaces. It is shown that if \(X = L^p(\Omega)\) with \(1 < p < \infty\) and \(\Omega \subset \mathbb{R}^d\), then for any positive constant \(\Gamma\) and any continuous function \(\phi\) from \(X\) to \(M\), there exists some \(f\) in \(X\) with \(\|f - \phi(f)\| > \|f - M\| + \Gamma\), where \(\|f - M\|\) denotes the distance from \(f\) to \(M\). Thus, no continuous finite neural network approximation scheme can stay within any fixed positive constant of a best approximation in the \(L^p\)-norm.
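The obstruction comes from the non-convexity of a finite union of subspaces: the best-approximation map onto such a union is inherently discontinuous. A minimal finite-dimensional analogue (not from the paper; a hypothetical illustration in \(\mathbb{R}^2\) with \(M\) the union of the two coordinate axes) shows points arbitrarily close to each other whose nearest points in \(M\) are far apart:

```python
import numpy as np

def project_union(f):
    """Nearest point to f in M = span{e1} U span{e2} in R^2:
    project onto each line and keep the closer projection."""
    p1 = np.array([f[0], 0.0])  # orthogonal projection onto the x-axis
    p2 = np.array([0.0, f[1]])  # orthogonal projection onto the y-axis
    return p1 if np.linalg.norm(f - p1) <= np.linalg.norm(f - p2) else p2

# Points just above and just below the diagonal are arbitrarily close,
# yet their best approximations jump between the two axes.
eps = 1e-9
above = project_union(np.array([1.0, 1.0 + eps]))  # lands on the y-axis
below = project_union(np.array([1.0 + eps, 1.0]))  # lands on the x-axis
print(above, below)
```

Inputs differing by about \(\varepsilon\sqrt{2}\) produce best approximations roughly \(\sqrt{2}\) apart, so no continuous selection of near-best approximations exists; the paper's theorem upgrades this phenomenon to a uniform gap \(\Gamma\) in infinite-dimensional \(L^p(\Omega)\).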

MSC:

41A65 Abstract approximation theory (approximation in normed linear spaces and other abstract spaces)
49K40 Sensitivity, stability, well-posedness
92B20 Neural networks for/in biological studies, artificial life and related topics