Search Results
Now showing 1 - 4 of 4
- Improving SeNA-CNN by Automating Task Recognition
  Publication. Zacarias, Abel; Alexandre, Luís
  Catastrophic forgetting arises when a neural network is unable to preserve previously learned tasks while learning a new one. Several methods have already been proposed to mitigate this problem in artificial neural networks. In this paper we improve upon our previous state-of-the-art method, SeNA-CNN, so that it automatically recognizes at test time the task to be solved, and we experimentally show that it achieves excellent results. The experiments demonstrate the learning of up to 4 different tasks with a single network, without forgetting how to solve previously learned tasks.
- Image Normalization Influence in Mammographic Classification with CNNs
  Publication. Perre, Ana Catarina; Alexandre, Luís; Freire, Luís C.
  In order to improve the performance of Convolutional Neural Networks (CNNs) in the classification of mammographic images, many researchers apply a normalization method during the pre-processing stage. In this work, we aimed to assess the impact of 6 different normalization methods on the classification performance of 2 CNNs. We also explored 5 classifiers, the first being the CNN itself. The other 4 are Support Vector Machine (SVM), Random Forest (RF), Simple Logistic (SL) and Voted Perceptron (VP) classifiers, all of them fed with features extracted from one of the CNN layers, comprised between the sixteenth and the nineteenth. The last 3 classifiers were tested with different options for test data presentation, according to the Weka software: Supplied Test Set (STS), 10-fold Cross Validation (10-FCV) and Percentage Split (PS). Results indicate that the effect of image normalization on CNN performance depends on which network is chosen to make the classification; moreover, the normalization method that seems to have the most positive impact is the one that subtracts from each image its mean and divides the result by the standard deviation (best AUC mean values were 0.786 for CNN-F and 0.790 for Caffe; the best run AUC values were, respectively, 0.793 and 0.791). Freezing layer 1 decreased the running time and did not harm classification performance. Regarding the different classifiers, the CNNs used alone with softmax yielded the best results, with the exception of the RF and SL classifiers, both using the 10-FCV and PS options; however, with these options, we cannot guarantee that the test-set images are presented to the network for the first time.
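The abstract above describes feeding features taken from an intermediate CNN layer into conventional classifiers such as SVM and Random Forest. A minimal sketch of that second stage, assuming the features have already been extracted (random vectors stand in for the real CNN activations, and the 4096-dimensional size and binary labels are assumptions for illustration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in for CNN features: in the paper these come from one of
# the layers between the sixteenth and nineteenth of the CNN; here random
# vectors are used only to show the pipeline's shape.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4096))        # assumed 4096-d feature vectors
y = rng.integers(0, 2, size=100)        # assumed binary labels (e.g. benign/malignant)

scores = {}
for clf in (SVC(), RandomForestClassifier(n_estimators=50, random_state=0)):
    clf.fit(X, y)                       # train each classifier on the features
    scores[type(clf).__name__] = clf.score(X, y)  # training accuracy only
```

On real data one would of course score on a held-out test set (the paper's STS / 10-FCV / PS options) rather than on the training features.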
- SeNA-CNN: Overcoming Catastrophic Forgetting in Convolutional Neural Networks by Selective Network Augmentation
  Publication. Zacarias, Abel; Alexandre, Luís
  Lifelong learning aims to develop machine learning systems that can learn new tasks while preserving their performance on previously learned tasks. In this paper we present a method to overcome catastrophic forgetting in convolutional neural networks that learns new tasks and preserves the performance on old tasks without accessing the data of the original model, by selective network augmentation. The experimental results showed that SeNA-CNN, in some scenarios, outperforms the state-of-the-art Learning without Forgetting algorithm. Results also showed that in some situations it is better to use SeNA-CNN than to train a neural network using isolated learning.
- The Influence of Image Normalization in Mammographic Classification with CNNs
  Publication. Perre, Ana Catarina; Alexandre, Luís; Freire, Luís
  In order to improve the performance of Convolutional Neural Networks (CNNs) in the classification of mammographic images, many researchers apply a normalization method during the pre-processing stage. In this work, we aim to assess the impact of six different normalization methods on the classification performance of two CNNs. The results allow us to conclude that the effect of image normalization on CNN performance depends on which network is chosen to make the lesion classification; moreover, the normalization method that seems to have the most positive impact is the one that subtracts the image mean and divides it by the corresponding standard deviation (best AUC mean with CNN-F = 0.786 and with Caffe = 0.790; best run AUC result was 0.793 with CNN-F and 0.791 with Caffe).
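The best-performing normalization reported in both abstracts above (subtract each image's mean, divide by its standard deviation) can be sketched as follows; the function name is ours, and the zero-std guard is an assumption for constant images:

```python
import numpy as np

def normalize_image(img):
    """Per-image normalization: subtract the image mean, divide by its std."""
    img = img.astype(np.float64)
    std = img.std()
    # Guard against division by zero for constant (flat) images.
    return (img - img.mean()) / std if std > 0 else img - img.mean()
```

After this step every (non-constant) image has zero mean and unit standard deviation, which removes per-image intensity offsets and contrast differences before the mammogram is fed to the CNN.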