Fuzzy polynomial neurons as neurofuzzy processing units

Original Article · Neural Computing & Applications

Abstract

In this study, we introduce and study fuzzy polynomial neurons (FPNs), regarded as generic processing units in neurofuzzy computing. The underlying topology of an FPN is formed through fuzzy rules, fuzzy inference, and polynomials. Each polynomial offers a nonlinear mapping and is centred at the modal value of the corresponding membership function defined in the input space of the neuron. The adjustable order of the polynomial is essential in addressing the level of nonlinearity to be handled in the approximation problem. We demonstrate that fuzzy polynomial neurons form a certain class of functional neurons, and we then discuss their properties and the overall design process. Finally, these neurons are examined in the context of universal approximation.



Acknowledgments

This work was supported by the Korea Research Foundation Grant funded by the Korea Government (MOEHRD, Basic Research Promotion Fund) (M01-2004-000-20175-0). Support from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canada Research Chair (CRC) Program (W. Pedrycz) is gratefully acknowledged.

Author information

Corresponding author

Correspondence to Witold Pedrycz.

Appendices

Appendix 1

The adjustment of a modal value \(v_i\) is carried out through standard gradient-based learning. As the observed learning error we consider the squared difference between the target and the actual output

$$E_{p} = (y_{p} - \hat{y}_{p})^{2} $$
(15)

where \(E_p\) is the error for the pth data point, \(y_p\) is the pth target output (desired response), and \(\hat{y}_{p}\) stands for the actual output of the FPN for this specific data point.
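To make these quantities concrete, the following minimal sketch computes the FPN output \(\hat{y}\) and the error of (15) for a single-input neuron with triangular membership functions spanning adjacent modal values, as assumed throughout this appendix. The code and its names (memberships, phi, fpn_output) are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch (ours, not the authors' code) of a single-input FPN:
# triangular membership functions A_i with modal values v_1 < ... < v_c,
# each paired with a polynomial conclusion phi_i centred at v_i.

def memberships(x, v):
    """Triangular memberships A_i(x); at most two are nonzero."""
    c = len(v)
    A = np.zeros(c)
    if x < v[0]:                        # case i): left of the first modal value
        A[0] = 1.0
    elif x >= v[-1]:                    # case ii): right of the last modal value
        A[-1] = 1.0
    else:                               # case iii): v_i <= x < v_{i+1}
        i = np.searchsorted(v, x, side='right') - 1
        A[i] = (v[i + 1] - x) / (v[i + 1] - v[i])
        A[i + 1] = 1.0 - A[i]
    return A

def phi(x, v_i, a_i):
    """Polynomial conclusion phi_i(x) = sum_j a_ji * (x - v_i)**j."""
    return sum(a * (x - v_i) ** j for j, a in enumerate(a_i))

def fpn_output(x, v, coeffs):
    """FPN output y_hat = sum_i A_i(x) * phi_i(x)."""
    A = memberships(x, v)
    return sum(A[i] * phi(x, v[i], coeffs[i]) for i in range(len(v)))

def squared_error(y, y_hat):
    """E_p = (y_p - y_hat_p)**2, Eq. (15)."""
    return (y - y_hat) ** 2
```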

Next we get

$$ - \frac{{\partial E_{p}}}{{\partial v_{i}}} = - \frac{{\partial E_{p}}}{{\partial \hat{y}_{p}}} \cdot \frac{{\partial \hat{y}_{p}}}{{\partial v_{i}}}$$
(16)

which leads to the detailed expression of the form

$$ - \frac{{\partial E_{p}}}{{\partial \hat{y}_{p}}} = - \frac{\partial}{{\partial \hat{y}_{p}}}(y_{p} - \hat{y}_{p})^{2} = - 2(y_{p} - \hat{y}_{p})(- 1) = 2(y_{p} - \hat{y}_{p})$$

Depending upon the location of x, we distinguish several cases over which different forms of the derivative \(\frac{\partial \hat{y}_{p}}{\partial v_{i}}\) arise:

i)
    $$\begin{aligned} \, & {\text{For}}\;x < v_{1} \quad \left\{ {\begin{array}{*{20}l} {{A_{1} (x):1}} \\ {{{\rm others}:0}} \\ \end{array} } \right. \\ \, & \frac{{\partial \hat{y}_{p} }}{{\partial v_{i} }} = \frac{\partial }{{\partial v_{1} }}{\left({A_{1}(x) \cdot \varphi _{1}(x)} \right)} = \frac{{\partial \varphi _{1}(x)}}{{\partial v_{1} }} \\ \, & \therefore \frac{{\partial \hat{y}_{p} }}{{\partial v_{i} }} = \frac{{\partial \varphi _{1}(x)}}{{\partial v_{1} }} \\ \end{aligned} $$
ii)
    $$\begin{aligned} & {\text{For}}\;v_{c} \leq x\quad \left\{ {\begin{array}{*{20}l} {{A_{c} (x):1}} \\ {{{\rm others}:0}} \\ \end{array} } \right. \\ & \frac{{\partial \hat{y}_{p} }}{{\partial v_{i} }} = \frac{\partial }{{\partial v_{c} }}{\left({A_{c}(x) \cdot \varphi _{c}(x)} \right)} = \frac{{\partial \varphi _{c} (x)}}{{\partial v_{c} }} \\ & \therefore \frac{{\partial \hat{y}_{p} }}{{\partial v_{i} }} = \frac{{\partial \varphi _{c}(x)}}{{\partial v_{c} }} \\ \end{aligned} $$
iii)
    $$\begin{aligned} & {\text{For}}\;v_{i} \leq x < v_{{i + 1}} \left\{ {\begin{array}{*{20}l} {{A_{i} (x):\frac{{v_{i} - x}} {{v_{{i + 1}} - v_{i} }} + 1}} \\ {{A_{{i + 1}}(x):1 - A_{i}(x) = \frac{{x - v_{i} }}{{v_{{i + 1}} - v_{i} }}}} \\ {{{\rm others}:0}} \\ \end{array} } \right. \\ & \frac{{\partial \hat{y}_{p} }} {{\partial v_{i} }} = \frac{\partial }{{\partial v_{i} }}{\left({A_{i}(x)\varphi _{i}(x) + A_{{i + 1}}(x)\varphi _{{i + 1}}(x)} \right)} \\ & \quad \quad = \frac{{\partial A_{i}(x)}}{{\partial v_{i} }}\varphi _{i}(x) + \frac{{\partial \varphi _{i}(x)}}{{\partial v_{i} }}A_{i}(x) + \frac{{\partial A_{{i + 1}}(x)}}{{\partial v_{i} }}\varphi _{{i + 1}}(x) \\ & \frac{{\partial A_{i}(x)}} {{\partial v_{i} }} = \frac{\partial }{{\partial v_{i} }}{\left({\frac{{v_{i} - x}}{{v_{{i + 1}} - v_{i} }} + 1} \right)} = \frac{{(v_{{i + 1}} - v_{i}) + (v_{i} - x)}}{{(v_{{i + 1}} - v_{i})^{2} }} = \frac{{v_{{i + 1}} - x}}{{(v_{{i + 1}} - v_{i})^{2} }} = \frac{{A_{i} }}{{v_{{i + 1}} - v_{i} }} \\ & \because \frac{{v_{{i + 1}} - x}}{{v_{{i + 1}} - v_{i} }} = \frac{{v_{i} - x}}{{v_{{i + 1}} - v_{i} }} + 1 = A_{i} \\ & \frac{{\partial A_{{i + 1}}(x)}}{{\partial v_{i} }} = \frac{\partial }{{\partial v_{i} }}{\left({\frac{{x - v_{i} }}{{v_{{i + 1}} - v_{i} }}} \right)} = \frac{{ - (v_{{i + 1}} - v_{i}) + (x - v_{i})}}{{(v_{{i + 1}} - v_{i})^{2} }} = \frac{{x - v_{{i + 1}} }}{{(v_{{i + 1}} - v_{i})^{2} }} = - \frac{{A_{i} }}{{v_{{i + 1}} - v_{i} }} \\ & \because \frac{{x - v_{{i + 1}} }}{{v_{{i + 1}} - v_{i} }} = \frac{{x - v_{i} }}{{v_{{i + 1}} - v_{i} }} - 1 = - A_{i} \\ \end{aligned} $$

Therefore,

$$\frac{{\partial \hat{y}_{p}}}{{\partial v_{i}}} = \frac{{A_{i}}}{{v_{{i + 1}} - v_{i}}}\varphi _{i} + \frac{{\partial \varphi _{i}}}{{\partial v_{i}}}A_{i} - \frac{{A_{i}}}{{v_{{i + 1}} - v_{i}}}\varphi _{{i + 1}} = {\left({\frac{{\varphi _{i} - \varphi _{{i + 1}}}}{{v_{{i + 1}} - v_{i}}} + \frac{{\partial \varphi _{i}}}{{\partial v_{i}}}} \right)}A_{i} $$

For any input, the process of learning involves at most two modal values, \(v_i\) and \(v_{i+1}\). For \(v_i \le x < v_{i+1}\), \(\frac{\partial\hat{y}_{p}}{\partial v_{i+1}}\) comes in the form

$$\begin{aligned} \frac{{\partial \hat{y}_{p} }}{{\partial v_{{i + 1}} }} & = \frac{\partial }{{\partial v_{{i + 1}} }}{\left({A_{i} (x)\varphi _{i} (x) + A_{{i + 1}} (x)\varphi _{{i + 1}} (x)} \right)} \\ & = \frac{{\partial A_{i} (x)}}{{\partial v_{{i + 1}} }}\varphi _{i} (x) + \frac{{\partial A_{{i + 1}} (x)}}{{\partial v_{{i + 1}} }}\varphi _{{i + 1}} (x) + \frac{{\partial \varphi _{{i + 1}} (x)}}{{\partial v_{{i + 1}} }}A_{{i + 1}} (x) \\ \frac{{\partial A_{i} (x)}}{{\partial v_{{i + 1}} }} & = \frac{\partial }{{\partial v_{{i + 1}} }}{\left({\frac{{v_{i} - x}}{{v_{{i + 1}} - v_{i} }} + 1} \right)} = \frac{{ - (v_{i} - x)}}{{(v_{{i + 1}} - v_{i})^{2} }} = \frac{{x - v_{i} }}{{(v_{{i + 1}} - v_{i})^{2} }} = \frac{{A_{{i + 1}} }}{{v_{{i + 1}} - v_{i} }} \\ \because \frac{{x - v_{i} }}{{v_{{i + 1}} - v_{i} }} & = A_{{i + 1}} \\ \frac{{\partial A_{{i + 1}} (x)}}{{\partial v_{{i + 1}} }} & = \frac{\partial }{{\partial v_{{i + 1}} }}{\left({\frac{{x - v_{i} }}{{v_{{i + 1}} - v_{i} }}} \right)} = \frac{{ - (x - v_{i})}}{{(v_{{i + 1}} - v_{i})^{2} }} = - \frac{{A_{{i + 1}} }}{{v_{{i + 1}} - v_{i} }} \\ \end{aligned} $$

Therefore,

$$\frac{{\partial \hat{y}_{p} }}{{\partial v_{{i + 1}} }} = \frac{{A_{{i + 1}} }}{{v_{{i + 1}} - v_{i} }}\varphi _{i} - \frac{{A_{{i + 1}} }}{{v_{{i + 1}} - v_{i} }}\varphi _{{i + 1}} + \frac{{\partial \varphi _{{i + 1}} }}{{\partial v_{{i + 1}} }}A_{{i + 1}} = {\left({\frac{{\varphi _{i} - \varphi _{{i + 1}} }}{{v_{{i + 1}} - v_{i} }} + \frac{{\partial \varphi _{{i + 1}} }}{{\partial v_{{i + 1}} }}} \right)}A_{{i + 1}} $$

Depending on the order of the polynomial, \(\partial \varphi_i(x)/\partial v_i\) is specified as follows:

$$\begin{aligned} 0{{{\text{th}}}} \;{\text{order}}:\;\frac{{\partial \varphi _{i}(x)}}{{\partial v_{i} }} & = \frac{\partial }{{\partial v_{i} }}a_{{0i}} = 0 \\ 1{{{\text{st}}}} \;{\text{order}}:\;\frac{{\partial \varphi _{i}(x)}}{{\partial v_{i} }} & = \frac{\partial }{{\partial v_{i} }}{\left({a_{{0i}} + a_{{1i}} (x - v_{i})} \right)}\, = - a_{{1i}} \\ 2{{{\text{nd}}}} \;{\text{order}}:\;\frac{{\partial \varphi _{i}(x)}}{{\partial v_{i} }} & = \frac{\partial }{{\partial v_{i} }}{\left({a_{{0i}} + a_{{1i}} (x - v_{i}) + a_{{2i}} (x - v_{i})^{2} } \right)}\, = - a_{{1i}} - 2a_{{2i}} (x - v_{i}) \\ 3{{{\text{rd}}}} \;{\text{order}}:\;\frac{{\partial \varphi _{i}(x)}}{{\partial v_{i} }} & = \frac{\partial }{{\partial v_{i} }}{\left({a_{{0i}} + a_{{1i}} (x - v_{i}) + a_{{2i}} (x - v_{i})^{2} + a_{{3i}} (x - v_{i})^{3} } \right)} \\ & = - a_{{1i}} - 2a_{{2i}} (x - v_{i}) - 3a_{{3i}} (x - v_{i})^{2} \\ 4{{{\text{th}}}} \;{\text{order}}:\;\frac{{\partial \varphi _{i}(x)}}{{\partial v_{i} }} & = \frac{\partial }{{\partial v_{i} }}{\left({a_{{0i}} + a_{{1i}} (x - v_{i}) + a_{{2i}} (x - v_{i})^{2} + a_{{3i}} (x - v_{i})^{3} + a_{{4i}} (x - v_{i})^{4} } \right)} \\ & = - a_{{1i}} - 2a_{{2i}} (x - v_{i}) - 3a_{{3i}} (x - v_{i})^{2} - 4a_{{4i}} (x - v_{i})^{3} \\ 5{{{\text{th}}}} \;{\text{order}}:\;\frac{{\partial \varphi _{i}(x)}}{{\partial v_{i} }} & = \frac{\partial }{{\partial v_{i} }}{\left({a_{{0i}} + a_{{1i}} (x - v_{i}) + a_{{2i}} (x - v_{i})^{2} + a_{{3i}} (x - v_{i})^{3} + a_{{4i}} (x - v_{i})^{4} + a_{{5i}} (x - v_{i})^{5} } \right)} \\ & = - a_{{1i}} - 2a_{{2i}} (x - v_{i}) - 3a_{{3i}} (x - v_{i})^{2} - 4a_{{4i}} (x - v_{i})^{3} - 5a_{{5i}} (x - v_{i})^{4} \\ \end{aligned} $$
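These cases follow a single pattern; for an \(n\)th-order conclusion, the derivative can be written compactly as

$$\frac{\partial \varphi_{i}(x)}{\partial v_{i}} = - \sum\limits_{j = 1}^{n} j\,a_{ji} (x - v_{i})^{j - 1}$$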

Here, \(\varphi_i\) and \(\varphi_{i+1}\) denote the polynomials of the given order standing in the conclusions of the ith and (i+1)th rules, respectively.

Finally, the expressions for \(\Delta v_i\) are formed as

i)

For \(x < v_1\) or \(v_c \le x\) (the value of \(A_i\) is 1 while the others are equal to 0),

    $$\Delta v_{i} = - \eta\frac{{\partial E_{p}}}{{\partial v_{i}}} = 2\eta(y_{p} - \hat{y}_{p})\frac{{\partial \varphi _{i}}}{{\partial v_{i}}}\quad (i=1\;\hbox{or}\;c)$$
ii)

For \(v_i \le x < v_{i+1}\),

    $$\begin{aligned} \Delta v_{i} &= - \eta\frac{{\partial E_{p}}}{{\partial v_{i}}} = 2\eta(y_{p} - \hat{y}_{p}){\left({\frac{{\varphi _{i} - \varphi _{{i + 1}}}}{{v_{{i + 1}} - v_{i}}} + \frac{{\partial \varphi _{i}}}{{\partial v_{i}}}} \right)}A_{i} \\ \Delta v_{{i + 1}} &= - \eta\frac{{\partial E_{p}}}{{\partial v_{{i + 1}}}} = 2\eta(y_{p} - \hat{y}_{p}){\left({\frac{{\varphi _{i} - \varphi _{{i + 1}}}}{{v_{{i + 1}} - v_{i}}} + \frac{{\partial \varphi _{{i + 1}}}}{{\partial v_{{i + 1}}}}} \right)}A_{{i + 1}} \\ \end{aligned}$$

Quite commonly, to accelerate convergence, a momentum term is added to the learning formula. The complete update formulas combining the momentum components arise in the form

i)

For \(A_i = 1\) (i.e., \(x < v_1\) or \(v_c \le x\), so i = 1 or i = c),

    $$\Delta v_{i} (t + 1) = 2\eta(y_{p} - \hat{y}_{p})\frac{{\partial \varphi _{i}}}{{\partial v_{i}}} + \alpha\Delta v_{i} (t)$$
ii)

For \(v_i \le x < v_{i+1}\),

    $$\begin{aligned} \Delta v_{i} (t + 1) &= 2\eta(y_{p} - \hat{y}_{p}){\left({\frac{{\varphi _{i} - \varphi _{{i + 1}}}}{{v_{{i + 1}} - v_{i}}} + \frac{{\partial \varphi _{i}}}{{\partial v_{i}}}} \right)}A_{i} + \alpha\Delta v_{i} (t)\\ \Delta v_{{i + 1}} (t + 1) &= 2\eta \cdot (y_{p} - \hat{y}_{p}){\left({\frac{{\varphi _{i} - \varphi _{{i + 1}}}}{{v_{{i + 1}} - v_{i}}} + \frac{{\partial \varphi _{{i + 1}}}}{{\partial v_{{i + 1}}}}} \right)}A_{{i + 1}} + \alpha\Delta v_{{i + 1}} (t)\\ \end{aligned}$$

where \(\Delta v_i(t) = v_i(t) - v_i(t-1)\), \(\eta\) is the learning rate, and \(\alpha\) denotes the momentum coefficient; we confine the values of both to the unit interval.
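By way of illustration, the following sketch performs one such online update of the modal values with momentum, reusing the helpers from the sketch earlier in this appendix; again, the function names and structure are our assumptions rather than the authors' code.

```python
def dphi_dv(x, v_i, a_i):
    """d(phi_i)/d(v_i) = -(a_1i + 2*a_2i*(x - v_i) + ...), as listed above."""
    return -sum(j * a * (x - v_i) ** (j - 1) for j, a in enumerate(a_i) if j >= 1)

def update_modal_values(x, y, v, coeffs, prev_dv, eta=0.01, alpha=0.5):
    """One gradient step on the modal values for a single pattern (x, y)."""
    y_hat = fpn_output(x, v, coeffs)
    e = y - y_hat
    A = memberships(x, v)
    dv = np.zeros_like(v)
    if x < v[0] or x >= v[-1]:
        # boundary cases: A_1 = 1 or A_c = 1, so only one modal value moves
        i = 0 if x < v[0] else len(v) - 1
        dv[i] = 2 * eta * e * dphi_dv(x, v[i], coeffs[i]) + alpha * prev_dv[i]
    else:
        # interior case v_i <= x < v_{i+1}: exactly two modal values move
        i = np.searchsorted(v, x, side='right') - 1
        common = (phi(x, v[i], coeffs[i]) - phi(x, v[i + 1], coeffs[i + 1])) \
                 / (v[i + 1] - v[i])
        dv[i] = 2 * eta * e * (common + dphi_dv(x, v[i], coeffs[i])) * A[i] \
                + alpha * prev_dv[i]
        dv[i + 1] = 2 * eta * e * (common + dphi_dv(x, v[i + 1], coeffs[i + 1])) * A[i + 1] \
                    + alpha * prev_dv[i + 1]
    return v + dv, dv   # updated modal values and Delta v(t+1) for the next step
```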

Appendix 2

The determination of the parameters of the conclusion is completed through gradient-based learning and follows a general scheme similar to that outlined in Appendix 1.

For (12), we have

$$\begin{aligned} - \frac{{\partial E_{p}}}{{\partial \user2{a}_{i}}} &= - \frac{{\partial E_{p}}}{{\partial \hat{y}_{p}}} \cdot \frac{{\partial \hat{y}_{p}}}{{\partial \varphi _{i}}} \cdot \frac{{\partial \varphi _{i}}}{{\partial \user2{a}_{i}}}\\ - \frac{{\partial E_{p}}}{{\partial \hat{y}_{p}}} &= - \frac{\partial}{{\partial \hat{y}_{p}}}(y_{p} - \hat{y}_{p})^{2} = - 2(y_{p} - \hat{y}_{p})(- 1) = 2(y_{p} - \hat{y}_{p})\\ \frac{{\partial \hat{y}_{p}}}{{\partial \varphi _{i}}} &= \frac{\partial}{{\partial \varphi _{i}}}(A_{i}\varphi _{i} + A_{{i + 1}}\varphi _{{i + 1}}) = A_{i}\\ \end{aligned}$$

For the polynomial \(\varphi_i(x) = a_{0i} + a_{1i}(x - v_i) + a_{2i}(x - v_i)^2 + \cdots + a_{5i}(x - v_i)^5\), the following relationships hold

$$\begin{aligned} \frac{{\partial \varphi _{i} }}{{\partial {\user2{a}}_{i} }}:\quad \frac{{\partial \varphi _{i} }}{{\partial a_{{0i}} }} & = \frac{\partial }{{\partial a_{{0i}} }}(a_{{0i}} + a_{{1i}} (x - v_{i}) + a_{{2i}} (x - v_{i})^{2} + \cdots + a_{{5i}} (x - v_{i})^{5}) = 1 \\ \frac{{\partial \varphi _{i} }}{{\partial a_{{1i}} }} & = \frac{\partial }{{\partial a_{{1i}} }}(a_{{0i}} + a_{{1i}} (x - v_{i}) + a_{{2i}} (x - v_{i})^{2} + \cdots + a_{{5i}} (x - v_{i})^{5}) = (x - v_{i}) \\ \frac{{\partial \varphi _{i} }}{{\partial a_{{2i}} }} & = \frac{\partial }{{\partial a_{{2i}} }}(a_{{0i}} + a_{{1i}} (x - v_{i}) + a_{{2i}} (x - v_{i})^{2} + \cdots + a_{{5i}} (x - v_{i})^{5}) = (x - v_{i})^{2} \\ \frac{{\partial \varphi _{i} }}{{\partial a_{{3i}} }} & = \frac{\partial }{{\partial a_{{3i}} }}(a_{{0i}} + a_{{1i}} (x - v_{i}) + a_{{2i}} (x - v_{i})^{2} + \cdots + a_{{5i}} (x - v_{i})^{5}) = (x - v_{i})^{3} \\ \frac{{\partial \varphi _{i} }}{{\partial a_{{4i}} }} & = \frac{\partial }{{\partial a_{{4i}} }}(a_{{0i}} + a_{{1i}} (x - v_{i}) + a_{{2i}} (x - v_{i})^{2} + \cdots + a_{{5i}} (x - v_{i})^{5}) = (x - v_{i})^{4} \\ \frac{{\partial \varphi _{i} }}{{\partial a_{{5i}} }} & = \frac{\partial }{{\partial a_{{5i}} }}(a_{{0i}} + a_{{1i}} (x - v_{i}) + a_{{2i}} (x - v_{i})^{2} + \cdots + a_{{5i}} (x - v_{i})^{5}) = (x - v_{i})^{5} \\ \end{aligned} $$
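In compact form, for \(j = 0, 1, \ldots, 5\),

$$\frac{\partial \varphi_{i}}{\partial a_{ji}} = (x - v_{i})^{j}$$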

Depending upon the order of the polynomial, the detailed update expressions are obtained as

$$\begin{aligned} \Delta a_{{0i}} & = - \eta\frac{{\partial E_{p} }}{{\partial a_{{0i}} }} = 2\eta(y_{p} - \hat{y}_{p})A_{i} \\ \Delta a_{{1i}} & = - \eta\frac{{\partial E_{p} }}{{\partial a_{{1i}} }} = 2\eta(y_{p} - \hat{y}_{p})A_{i}(x - v_{i}) \\ \Delta a_{{2i}} & = - \eta\frac{{\partial E_{p} }}{{\partial a_{{2i}} }} = 2\eta(y_{p} - \hat{y}_{p})A_{i}(x - v_{i})^{2} \\ \Delta a_{{3i}} & = - \eta\frac{{\partial E_{p} }}{{\partial a_{{3i}} }} = 2\eta(y_{p} - \hat{y}_{p})A_{i}(x - v_{i})^{3} \\ \Delta a_{{4i}} & = - \eta\frac{{\partial E_{p} }}{{\partial a_{{4i}} }} = 2\eta(y_{p} - \hat{y}_{p})A_{i}(x - v_{i})^{4} \\ \Delta a_{{5i}} & = - \eta\frac{{\partial E_{p} }}{{\partial a_{{5i}} }} = 2\eta(y_{p} - \hat{y}_{p})A_{i}(x - v_{i})^{5} \\ \end{aligned} $$
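A short sketch of these coefficient updates for a single pattern, with the same caveat that it is illustrative code reusing the helpers from the earlier sketches, not the authors' implementation:

```python
def update_coefficients(x, y, v, coeffs, eta=0.01):
    """Delta a_ji = 2*eta*(y - y_hat)*A_i*(x - v_i)**j for the active rules."""
    y_hat = fpn_output(x, v, coeffs)
    e = y - y_hat
    A = memberships(x, v)
    for i in range(len(v)):
        if A[i] == 0.0:
            continue                 # only the (at most two) active rules change
        for j in range(len(coeffs[i])):
            coeffs[i][j] += 2 * eta * e * A[i] * (x - v[i]) ** j
    return coeffs
```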


About this article

Cite this article

Park, BJ., Pedrycz, W. & Oh, SK. Fuzzy polynomial neurons as neurofuzzy processing units. Neural Comput & Applic 15, 310–327 (2006). https://doi.org/10.1007/s00521-006-0033-2

