Figure 3: (a) 37,659 lines extracted from SAR image by Burns' edge detector (b) runway found by production net (c) the 20 lines for the construction of the runway (d) 3,281 lines classified as positive by modified NEFCLASS
For our studies we used a set of 17 SAR images depicting 5 different airports. Each image was analyzed by the production net to detect the runway(s), and the lines were labeled as positive or negative. Four of the 17 images form the training dataset used to train NEFCLASS. The training set contains 253 runway lines and 31,330 negatives.
The lines from the remaining 13 images were used as test data. The assessment of the classification results takes into account the specific requirements of the application. The ideal classifier for this problem should find all edges that were used to assemble a runway, because missing edges can interrupt the construction process, causing essential intermediate results to be lost and the analysis to fail. On the other hand, the classifier should assign as few negatives as possible to the positive class in order to reduce processing time. The behavior of a classifier can therefore be characterized by a detection rate and a reduction rate:
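Under the usual interpretation of these two measures (the detection rate as the fraction of true runway lines classified positive, the reduction rate as the fraction of negative lines filtered out), they can be computed from the confusion counts as in the following sketch; the function names and example counts are illustrative assumptions, not part of the original system:

```python
def detection_rate(tp: int, fn: int) -> float:
    """Fraction of true runway lines that the classifier labels positive."""
    return tp / (tp + fn)

def reduction_rate(tn: int, fp: int) -> float:
    """Fraction of negative lines filtered out before further processing."""
    return tn / (tn + fp)

# Hypothetical counts, chosen only to match the size of the training set
# described above (253 runway lines, 31,330 negatives).
tp, fn = 250, 3
tn, fp = 28_000, 3_330

print(f"detection rate: {detection_rate(tp, fn):.3f}")
print(f"reduction rate: {reduction_rate(tn, fp):.3f}")
```

A perfect classifier for this application would reach a detection rate of 1.0 (no runway line missed) while keeping the reduction rate as high as possible.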
In this application the misclassification costs cannot be specified exactly, as the costs of false positives and false negatives are hardly comparable. We empirically set the cost of a false negative to several hundred times the cost of a false positive. After learning, this results in a detection rate of on the training set.
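The cost asymmetry can be expressed as a weighted misclassification error of the following form; the cost ratio, labels, and function name here are illustrative assumptions rather than the actual NEFCLASS objective:

```python
def weighted_error(y_true, y_pred, c_fn: float = 500.0, c_fp: float = 1.0) -> float:
    """Total misclassification cost, with false negatives weighted
    several hundred times more heavily than false positives.
    The ratio c_fn / c_fp = 500 is an assumed example value."""
    cost = 0.0
    for t, p in zip(y_true, y_pred):
        if t == 1 and p == 0:      # missed runway line (false negative)
            cost += c_fn
        elif t == 0 and p == 1:    # clutter line kept (false positive)
            cost += c_fp
    return cost
```

Minimizing such a weighted error pushes the learner toward the high detection rates the application requires, at the price of letting more negatives through.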
The average detection and reduction rates on the 13 unseen images are and , respectively. The detection rates on the individual images vary from to . Fig. 3d shows the lines NEFCLASS classified as positive in this image (one of the unseen images). Apparently this image allows a rather high detection rate and an extremely good reduction rate. Even for most of the images with lower detection rates, the higher levels of image analysis were successful, as the missed lines are mainly shorter and less important ones.
Using the modified pruning techniques, the number of rules could be decreased from initially over 500 to fewer than 20. We obtained the best result with 16 rules, but even with 7 rules the error increased only slightly. With a reduced number of rules, the difference between the performance on training and test data decreased, showing that pruning improved the generalization ability.