A noninformative prior for neural networks. (English) Zbl 1033.68088
Summary: While many implementations of Bayesian neural networks rely on large, complex hierarchical priors, noninformative (flat) priors are common throughout much of modern Bayesian statistics. This paper introduces a noninformative prior for feed-forward neural networks and describes several theoretical and practical advantages of this approach. In particular, a simpler prior permits a simpler Markov chain Monte Carlo algorithm. Details of the MCMC implementation are included.
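The key computational point, that a flat prior makes the posterior proportional to the likelihood alone and so admits a simple sampler, can be illustrated with a minimal sketch. The following random-walk Metropolis sampler for a tiny one-hidden-layer network on synthetic data is an illustration of the general idea only; the network size, proposal scale, and all variable names are assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): random-walk
# Metropolis for a one-hidden-layer feed-forward network under a flat
# (noninformative) prior, so log-posterior = log-likelihood + const.

rng = np.random.default_rng(0)

# Synthetic 1-D regression data (purely illustrative).
X = np.linspace(-2.0, 2.0, 40)
y = np.tanh(1.5 * X) + 0.1 * rng.standard_normal(40)

H = 3                    # hidden units (assumed, for illustration)
n_params = 3 * H + 1     # input weights, hidden biases, output weights, output bias

def forward(theta, x):
    """Evaluate the network at inputs x given a flat parameter vector."""
    w1 = theta[:H]           # input-to-hidden weights
    b1 = theta[H:2 * H]      # hidden biases
    w2 = theta[2 * H:3 * H]  # hidden-to-output weights
    b2 = theta[3 * H]        # output bias
    h = np.tanh(np.outer(x, w1) + b1)
    return h @ w2 + b2

def log_post(theta, sigma=0.1):
    # Flat prior: the log-posterior is the Gaussian log-likelihood
    # up to an additive constant.
    resid = y - forward(theta, X)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def metropolis(n_iter=2000, step=0.05):
    """Plain random-walk Metropolis over all network parameters."""
    theta = 0.1 * rng.standard_normal(n_params)
    lp = log_post(theta)
    samples, accepted = [], 0
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(n_params)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
            accepted += 1
        samples.append(theta.copy())
    return np.array(samples), accepted / n_iter

samples, acc_rate = metropolis()
```

Because the prior contributes nothing to the acceptance ratio, the sampler touches only the likelihood, which is the simplification the summary alludes to; a hierarchical prior would add hyperparameter updates on top of this loop.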
MSC:
68T05 | Learning and adaptive systems in artificial intelligence |