Analysis of Different Ways for Improving the Speed and Accuracy of Image Classification
Abstract: Today, machine learning algorithms increasingly focus on learning features from unlabeled data. As the size and complexity of datasets grow, both the speed and accuracy of learning algorithms become critical. There are different methods and classifiers for image classification. The support vector machine (SVM) is one of the most widely used algorithms for image classification; it constructs hyperplanes that separate data of different labels. However, the time SVM takes for image classification is large, so the use of GPUs is very important for obtaining faster results: with the help of GPUs, the training and classification time is reduced. Another method for image classification is the Extreme Learning Machine (ELM), which contains only three layers: one input layer, one hidden layer, and one output layer. This paper analyzes two classifiers for image classification, SVM and ELM, describes the methodologies used to implement them, and ends with a comparison between them.
Keywords: High Performance Computing, Unsupervised Feature Learning (UFL), Extreme Learning Machine (ELM), Radial Basis Function (RBF), Support Vector Machine (SVM).
1 Introduction
There are different ways to classify images, such as minimum distance, maximum likelihood, neural networks, and support vector machines. There are also unsupervised classifiers that use clustering-based algorithms, such as K-Means, k-NN, K-Medoids, and ISODATA [1]. For image classification with neural networks, applying the appropriate classification technique is very important for obtaining fast results. The kernel-based support vector machine (SVM) is an effective technique for categorizing images, and it is used in many applications, such as recognition in remote sensing. However, an individual classifier used alone gives only modest results, and as the size of the dataset increases, the time required for classification also increases. To obtain better results on large datasets, the current trend is therefore to combine multiple classifiers, for example neural network classifiers together with SVM classifiers.
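As a concrete illustration of kernel-based SVM classification, the following is a minimal scikit-learn sketch; the synthetic random vectors stand in for real image features, and all sizes and parameters are illustrative assumptions, not values from any of the surveyed papers.

```python
# Minimal sketch of RBF-kernel SVM classification on two synthetic
# "image feature" classes (illustrative only).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Pretend feature vectors for two image classes (e.g. produced by some
# feature-extraction step); class means are separated on purpose.
X = np.vstack([rng.normal(0.0, 1.0, (50, 16)),
               rng.normal(3.0, 1.0, (50, 16))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", gamma="scale")  # kernel-based SVM
clf.fit(X, y)

print(clf.score(X, y))  # training accuracy on the toy data
```

On real image data the feature-extraction step (discussed in Section 3) dominates the result; the SVM itself only separates whatever features it is given.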
The aim of this paper, therefore, is to analyze different combinations of classifiers and their use for image classification.
2 Literature Survey
According to Le Hoang Thai, Tran Son Hai, and Nguyen Thanh Thuy [1], ANN and SVM used together produce much better classification results in terms of both speed and accuracy. In their approach, image feature extraction is the fundamental first step. The classification technique consists of two layers: the first layer consists of k ANNs, each of which produces a classification result from the feature vector; the second layer collects the results from the first layer and uses an SVM classifier to integrate them into the final classification result.
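The two-layer scheme described above can be sketched roughly as follows. This is a hedged reconstruction, not the authors' code: the number of ANNs, the feature split, and the use of class probabilities as the fusion input are all assumptions made for illustration.

```python
# Sketch of a two-layer classifier: k small ANNs (layer 1), each trained
# on a slice of the feature vector, with an SVM (layer 2) fusing their
# class-probability outputs into the final decision. Illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 8)), rng.normal(2, 1, (60, 8))])
y = np.array([0] * 60 + [1] * 60)

# Layer 1: k = 2 ANNs, each seeing a different feature subset.
slices = [slice(0, 4), slice(4, 8)]
anns = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000,
                      random_state=0).fit(X[:, s], y) for s in slices]

# Layer 2: an SVM integrates the per-ANN outputs.
meta = np.hstack([ann.predict_proba(X[:, s])
                  for ann, s in zip(anns, slices)])
fused = SVC(kernel="rbf").fit(meta, y)
print(fused.score(meta, y))  # accuracy of the fused classifier
```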
Mahmood Yousefi-Azar and Mark D. McDonnell [2] combine supervised and unsupervised techniques to cluster images using the k-means algorithm; the method is not restricted to RGB colors but also works with the Lab color representation. Their combination of an unsupervised feature learning algorithm with an extreme learning machine outperforms other traditional methods.
According to Dao Lam and Donald Wunsch [3], UFL-ELM classification gives better results than SVM and other approaches. In UFL-ELM, the features are extracted from the data itself rather than hand-designed as in traditional methods, and the classifier is then trained using ELM to obtain the desired solution. This method is easy to use and speeds up training.
Zuo Bai, Guang-Bin Huang, Danwei Wang, Han Wang, and M. Brandon Westover [4] note that traditional classification methods take large storage space and testing time; to reduce them, the sparse ELM method was developed. In this method, a new algorithm trains the data efficiently, reducing time and complexity significantly, and sparse ELM trains faster than other methods.
Dao Lam and Donald Wunsch [5] suggested a better and faster way to perform image classification. An unsupervised feature learning algorithm learns the features, and an RBF-ELM then classifies the data: once the features are derived, they are fed to the RBF-ELM. This approach already gives better results, but to further improve training and testing time a new parallel approach is suggested, implemented as a CUDA kernel. With the help of the CUDA kernel, it runs about 20 times faster than the CPU and other parallel approaches.
3 Methodology
There are different methodologies for classifying images using neural network architectures, such as SVM, sparse ELM, and UFL-ELM. The most promising methodology for image classification, however, is RBF-ELM combined with a parallel architecture such as a CUDA kernel.
The motivation for using UFL is that it gives far better results than traditional methods. A classifier performs well only when it has plenty of data for training and testing; this methodology uses a large amount of data for both, and also uses a GPU architecture for more speed and accuracy in the result [5].
The first task in image classification is thus to take the unlabeled image dataset as input and derive features from it. For deriving the features, a well-known UFL algorithm is used: k-means UFL. Patches are first extracted from the dataset, then preprocessed, and finally the k-means algorithm is applied to obtain the centroids [9].
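The patch-extraction, preprocessing, and k-means steps can be sketched as below. This is a simplified illustration of the k-means UFL pipeline of Coates et al. [9], not the surveyed implementation: patch size, patch count, and k are arbitrary, and whitening is reduced to per-patch contrast normalization for brevity.

```python
# Hedged sketch of k-means unsupervised feature learning:
# extract random patches, normalize them, and learn centroids.
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(images, patch_size=6, n_patches=500):
    """Sample random square patches from a stack of grayscale images."""
    n, h, w = images.shape
    patches = np.empty((n_patches, patch_size * patch_size))
    for i in range(n_patches):
        img = images[rng.integers(n)]
        r = rng.integers(h - patch_size + 1)
        c = rng.integers(w - patch_size + 1)
        patches[i] = img[r:r + patch_size, c:c + patch_size].ravel()
    return patches

def preprocess(patches, eps=1e-8):
    """Per-patch contrast normalization (whitening omitted for brevity)."""
    patches = patches - patches.mean(axis=1, keepdims=True)
    return patches / (patches.std(axis=1, keepdims=True) + eps)

def kmeans_centroids(patches, k=10, iters=10):
    """Plain Lloyd's k-means; returns the learned centroids (features)."""
    centroids = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        d = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = patches[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

images = rng.random((20, 32, 32))              # toy unlabeled dataset
patches = preprocess(extract_patches(images))
centroids = kmeans_centroids(patches, k=10)
print(centroids.shape)                         # (10, 36)
```

The learned centroids then act as a feature dictionary: each new image patch is encoded by its similarity (or distance) to every centroid before being passed to the classifier.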
The RBF-ELM algorithm is used to improve classification performance; a radial basis function (such as the Gaussian function) serves as the activation function of the hidden layer in the neural network. ELM uses only three layers to compute the output: one input layer, a single hidden layer, and one output layer. The input-to-hidden parameters are assigned randomly depending on the dataset, and the output is generated from the hidden-layer output [3, 5].
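A minimal RBF-ELM can be written in a few lines of NumPy: the hidden layer is a set of randomly chosen Gaussian centers, and only the output weights are learned, in closed form via a pseudo-inverse. The centers-from-data choice, the width gamma, and the layer sizes below are illustrative assumptions, not values from the surveyed papers.

```python
# Hedged sketch of an RBF-ELM: random Gaussian hidden layer,
# output weights solved by least squares (no iterative training).
import numpy as np

rng = np.random.default_rng(1)

def train_rbf_elm(X, Y, n_hidden=50, gamma=1.0):
    # Hidden layer is random: centers drawn from the training data.
    centers = X[rng.choice(len(X), n_hidden, replace=False)]
    H = np.exp(-gamma * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
    # Only the output weights are learned, via the Moore-Penrose inverse.
    beta = np.linalg.pinv(H) @ Y
    return centers, beta

def predict_rbf_elm(X, centers, beta, gamma=1.0):
    H = np.exp(-gamma * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
    return H @ beta

# Toy two-class problem with one-hot targets.
X = np.vstack([rng.normal(0, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
Y = np.zeros((80, 2)); Y[:40, 0] = 1; Y[40:, 1] = 1

centers, beta = train_rbf_elm(X, Y)
pred = predict_rbf_elm(X, centers, beta).argmax(axis=1)
print((pred == Y.argmax(axis=1)).mean())  # training accuracy
```

Because training reduces to one pseudo-inverse, ELM avoids the iterative backpropagation that makes conventional network training slow, which is the source of the speed advantage claimed in Section 2.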
With the help of this methodology, image classification gives better results, but it still takes considerable time to train and test the data in the neural network.
Using a GPU for image classification with neural networks gives far better results than the other traditional methods. There are different mechanisms for parallelization: one is to use the multiple cores of the system, and the other is to use an explicit parallel programming architecture.
When using the CUDA architecture, memory management and choosing the right portion of the program to execute on the GPU are very important: the part of the program to be parallelized must be identified, and then the proper parallelization technique applied to improve performance. Finally, this CUDA-kernel RBF-ELM architecture for image classification gives results about 20 times faster than the other approaches.
4 Conclusion
This review paper briefly analyzes different image classification techniques and how they work, and identifies which technique is best among them. RBF-ELM uses only three layers (one input, one hidden, and one output) with randomized input parameters, and gives faster results than traditional methods. There are popular algorithms such as SVM, but RBF-ELM with a CUDA kernel gives better performance with improved speed and accuracy.
References
1. Thai, Le Hoang, et al. "Image Classification Using Support Vector Machine and Artificial Neural Network." International Journal of Information Technology and Computer Science, no. 5, Feb. 2012, pp. 32-38.
2. Yousefi-Azar, Mahmood, and Mark D. McDonnell. "Semi-Supervised Convolutional Extreme Learning Machine." 2017 International Joint Conference on Neural Networks (IJCNN), 2017, pp. 1-7.
3. Lam, Dao, and Donald Wunsch. "Unsupervised Feature Learning Classification Using an Extreme Learning Machine." The 2013 International Joint Conference on Neural Networks (IJCNN), 2013.
4. Bai, Zuo, et al. "Sparse Extreme Learning Machine for Classification." IEEE Transactions on Cybernetics, no. 10, 2014.
5. Lam, Dao, and Donald Wunsch. "Unsupervised Feature Learning Classification With Radial Basis Function Extreme Learning Machine Using Graphic Processors." IEEE Transactions on Cybernetics, no. 1, 2017.
6. Huang, Guang-Bin, et al. "Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks." 2004 IEEE International Joint Conference on Neural Networks, 2004.
7. Ranzato, Marc'Aurelio, et al. "Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition." 2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007.
8. Le, Quoc V. "Building High-Level Features Using Large Scale Unsupervised Learning." 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.
9. Coates, A., A. Y. Ng, and H. Lee. "An Analysis of Single-Layer Networks in Unsupervised Feature Learning." International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, 2011, pp. 215-223.
