On learning μ-perceptron networks on the uniform distribution

M Golea, M Marchand, TR Hancock - Neural Networks, 1996 - Elsevier
We investigate the learnability, under the uniform distribution, of neural concepts that can be represented as simple combinations of nonoverlapping perceptrons (also called μ-perceptrons) with binary weights and arbitrary thresholds. Two perceptrons are said to be nonoverlapping if they do not share any input variables. Specifically, we investigate, within the distribution-specific PAC model, the learnability of μ-perceptron unions, decision lists, and generalized decision lists. In contrast to most neural network learning algorithms, we do not assume that the architecture of the network is known in advance. Rather, it is the task of the algorithm to find both the architecture of the net and the weight values necessary to represent the function to be learned. We give polynomial time algorithms for learning these restricted classes of networks. The algorithms work by estimating various statistical quantities that yield enough information to infer, with high probability, the target concept. Because the algorithms are statistical in nature, they are robust against large amounts of random classification noise.
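The abstract's core idea, learning by estimating statistical quantities under the uniform distribution, can be illustrated with a minimal sketch. The specific target function, noise rate, and correlation threshold below are illustrative assumptions, not the paper's actual algorithm: it estimates the correlation of each input bit with the (noisy) label to identify the relevant variables of a union of two nonoverlapping perceptrons with ±1 weights.

```python
import random

def target(x):
    # Hypothetical target: union (OR) of two nonoverlapping perceptrons
    # with +/-1 weights over the disjoint variable sets {0,1,2} and {3,4,5}.
    p1 = (x[0] + x[1] - x[2]) >= 1
    p2 = (-x[3] + x[4] + x[5]) >= 1
    return 1 if (p1 or p2) else -1

def estimate_correlations(n_vars=8, n_samples=20000, noise=0.1, seed=0):
    """Estimate E[f(x) * x_i] under the uniform distribution on {-1,+1}^n.

    Random classification noise only shrinks each correlation by a factor
    (1 - 2*noise), so relevant variables remain detectable -- the kind of
    noise robustness the abstract attributes to statistical estimation.
    """
    rng = random.Random(seed)
    corr = [0.0] * n_vars
    for _ in range(n_samples):
        x = [rng.choice([-1, 1]) for _ in range(n_vars)]
        y = target(x)
        if rng.random() < noise:  # flip the label with probability `noise`
            y = -y
        for i in range(n_vars):
            corr[i] += y * x[i]
    return [c / n_samples for c in corr]

corr = estimate_correlations()
# Variables whose correlation is clearly nonzero are inferred to be relevant;
# the 0.05 threshold is an illustrative choice, well above sampling error here.
relevant = [i for i, c in enumerate(corr) if abs(c) > 0.05]
print(relevant)  # variables 0..5 are relevant; 6 and 7 are not
```

In this toy setting each relevant variable has true correlation of magnitude 0.25, shrunk to 0.2 by the 10% label noise, while irrelevant variables concentrate near zero, so a simple threshold separates them with high probability. Recovering the full architecture (which relevant variables group into which perceptron, and the thresholds) requires the further statistics developed in the paper.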