
 

The NEFCLASS Model

[Figure 2: 3-layer structure of NEFCLASS]

This section describes the original NEFCLASS model. NEFCLASS can be viewed as a special 3-layer feed-forward neural network (see figure 2). The units in this network use t-norms or t-conorms instead of the activation functions common in neural networks. The first layer represents the input variables, the middle (hidden) layer represents fuzzy rules, and the third layer represents the output variables. Fuzzy sets are encoded as (fuzzy) connection weights. To allow the same linguistic terms to be used in the antecedents of different rules, the weights can be coupled and are then changed together. NEFCLASS always (i.e. before, during and after learning) represents a set of fuzzy classification rules of the form

    if x_1 is μ_1 and x_2 is μ_2 and ... and x_n is μ_n
    then the pattern (x_1, ..., x_n) belongs to class c,

where the μ_i are fuzzy sets.

The inputs are real-valued. Propagating them through the coupled fuzzy weights determines their membership degrees in the individual antecedent fuzzy sets. The rule units combine these membership degrees with a t-norm (usually the minimum). Each rule unit is connected to exactly one output unit, i.e. the consequent of the rule it represents. The output units accumulate the activations of their rules with a t-conorm, usually the maximum. The predicted class is determined by a winner-takes-all decision between the output units.
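To make the propagation more concrete, the following is a minimal sketch in Python (not part of the original NEFCLASS software) of this inference step, assuming triangular membership functions, the minimum as t-norm, the maximum as t-conorm and a winner-takes-all decision; all class and function names are illustrative.

from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class TriangularFuzzySet:
    a: float  # left foot of the triangle
    b: float  # peak
    c: float  # right foot

    def membership(self, x: float) -> float:
        if x <= self.a or x >= self.c:
            return 0.0
        if x <= self.b:
            return (x - self.a) / (self.b - self.a)
        return (self.c - x) / (self.c - self.b)


@dataclass
class Rule:
    antecedent: List[TriangularFuzzySet]  # one fuzzy set per input variable
    consequent: int                       # index of the class in the consequent


def classify(rules: Sequence[Rule], x: Sequence[float], n_classes: int) -> int:
    """Propagate a pattern through the rule and output layers."""
    outputs = [0.0] * n_classes
    for rule in rules:
        # rule unit: t-norm (minimum) over the antecedent membership degrees
        activation = min(fs.membership(xi) for fs, xi in zip(rule.antecedent, x))
        # output unit: t-conorm (maximum) over the activations of its rules
        outputs[rule.consequent] = max(outputs[rule.consequent], activation)
    # winner-takes-all decision between the output units
    return max(range(n_classes), key=lambda c: outputs[c])

Coupled weights are represented here simply by letting several Rule objects reference the same TriangularFuzzySet instance.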

It is possible to create a NEFCLASS rule base from scratch using training data, or to initialize it with prior knowledge in the form of fuzzy rules. If a classifier is learned from data, two phases are needed:

1st learning phase: the rule creation

During rule creation an initial fuzzy partitioning of each variable is created, given by a fixed number of equally spaced triangular membership functions. The combination of these fuzzy sets forms a grid in the data space, i.e. evenly distributed, overlapping hyperboxes. The training data is then processed, and those grid cells that cover areas where data is located are added as rules to the rule base of the classifier [13].
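The following sketch illustrates this first learning phase under the same assumptions, reusing the TriangularFuzzySet and Rule classes from the sketch above; the choice of the consequent class and the restriction to the best rules as described in [13] are simplified here.

def equally_spaced_partition(lo: float, hi: float,
                             n_sets: int) -> List[TriangularFuzzySet]:
    """A fixed number of equally spaced triangular fuzzy sets over [lo, hi]."""
    step = (hi - lo) / (n_sets - 1)
    return [TriangularFuzzySet(lo + (i - 1) * step, lo + i * step, lo + (i + 1) * step)
            for i in range(n_sets)]


def create_rules(partitions: List[List[TriangularFuzzySet]],
                 data: Sequence[Sequence[float]],
                 labels: Sequence[int]) -> List[Rule]:
    """Add a rule for every grid cell (hyperbox) that covers training data."""
    rules: List[Rule] = []
    seen = set()
    for x, label in zip(data, labels):
        # for every variable pick the fuzzy set with the highest membership
        # degree; together these sets identify the hyperbox the pattern lies in
        cell = tuple(
            max(range(len(part)), key=lambda j: part[j].membership(xi))
            for part, xi in zip(partitions, x)
        )
        if cell not in seen:
            seen.add(cell)
            # simplification: the consequent is taken from the first covering
            # pattern; NEFCLASS selects the class that the rule covers best
            rules.append(Rule(antecedent=[part[j] for part, j in zip(partitions, cell)],
                              consequent=label))
    return rules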

2nd learning phase: the fine-tuning of fuzzy sets

After the rule base has been created, the membership functions are fine-tuned by a simple heuristic. For each rule a classification error is determined and used to modify the membership function that is responsible for the rule activation (i.e. the one yielding the minimal membership degree of all fuzzy sets in the rule's antecedent). The modification shifts the fuzzy set and enlarges or reduces its support, so that a larger or smaller membership degree is obtained depending on the current error. The learning procedure takes the semantic properties of the underlying fuzzy system into account, which results in constraints on the modifications that may be applied to the system parameters. NEFCLASS allows different restrictions to be imposed on the learning algorithm, e.g. that membership functions must not pass their neighbors, must stay symmetrical, or that membership degrees must add up to 1 [8].
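The following rough sketch, again reusing the classes from above, illustrates the kind of heuristic update described here. The error signal and the update formulas are simplified assumptions and differ in detail from those of [8], and only the validity of the triangle is enforced as a constraint.

def tune_fuzzy_set(fs: TriangularFuzzySet, x: float, error: float,
                   sigma: float = 0.01) -> None:
    """Shift the fuzzy set and enlarge or reduce its support (rough sketch)."""
    width = fs.c - fs.a
    # shift the peak towards x for a positive error (larger membership wanted),
    # away from x for a negative error
    shift = sigma * error * width * (1.0 if x >= fs.b else -1.0)
    # enlarge the support for a positive error, reduce it for a negative one
    stretch = sigma * error * width
    fs.b += shift
    fs.a += shift - stretch
    fs.c += shift + stretch
    # keep the triangle valid; further restrictions (staying symmetrical, not
    # passing neighbouring fuzzy sets, ...) would be enforced here as well
    fs.a = min(fs.a, fs.b)
    fs.c = max(fs.c, fs.b)


def tune_on_pattern(rules: Sequence[Rule], x: Sequence[float],
                    target_class: int, sigma: float = 0.01) -> None:
    for rule in rules:
        degrees = [fs.membership(xi) for fs, xi in zip(rule.antecedent, x)]
        activation = min(degrees)
        if activation == 0.0:
            continue
        # simplified error signal: rules of the correct class should fire more
        # strongly, all other rules more weakly
        error = activation if rule.consequent == target_class else -activation
        # the fuzzy set yielding the minimal membership degree is responsible
        # for the rule activation and is the one that gets modified; since the
        # fuzzy sets are shared (coupled weights), all rules using the same
        # linguistic term see the change
        i = min(range(len(degrees)), key=lambda k: degrees[k])
        tune_fuzzy_set(rule.antecedent[i], x[i], error, sigma)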

NEFCLASS has recently been extended by different pruning techniques to reduce the number of rules initially found. This can help to keep the rule base small and thus readable and interpretable [9, 4].
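The concrete pruning strategies of [9, 4] are not spelled out here; the following sketch, reusing the classify function from above, merely illustrates the general idea of removing rules whose deletion does not worsen the classification performance on the training data.

def prune_rules(rules: Sequence[Rule], data: Sequence[Sequence[float]],
                labels: Sequence[int], n_classes: int) -> List[Rule]:
    """Greedily drop rules whose removal does not increase the training error."""
    def n_errors(rule_set: Sequence[Rule]) -> int:
        return sum(classify(rule_set, x, n_classes) != y
                   for x, y in zip(data, labels))

    kept = list(rules)
    baseline = n_errors(kept)
    for rule in list(kept):
        candidate = [r for r in kept if r is not rule]
        if not candidate:
            break
        cand_errors = n_errors(candidate)
        if cand_errors <= baseline:
            kept, baseline = candidate, cand_errors
    return kept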

