List of Articles by Subject: Pattern Recognition


    • Open Access Article

      1 - Facial Images Quality Assessment based on ISO/ICAO Standard Compliance Estimation by HMAX Model
      Azamossadat Nourbakhsh, Mohammad-Shahram Moin, Arash Sharifi
      Facial images are the most popular biometrics in automated identification systems. Different methods have been introduced to evaluate the quality of these images. FICV is a common benchmark for evaluating facial image quality using ISO/ICAO compliance assessment algorithms. In this work, a new model based on brain functionality has been introduced for facial image quality assessment, using the Face Image ISO Compliance Verification (FICV) benchmark. We have used the Hierarchical Max-pooling (HMAX) model to simulate brain functionality and evaluated its performance. ICAO requirements were classified by the Equal Error Rate of their compliance verification, and the nine requirements with the highest error rates in past research were used to assess the compliance of facial image quality with the standard. To evaluate the quality of facial images, image patches were first generated for key and non-key face components using the Viola-Jones algorithm. To simulate brain function, the HMAX method was applied to these patches. In the HMAX model, multi-resolution spatial pooling is used, which encodes local and global spatial information to generate discriminative image signatures. In the proposed model, the way information is stored and fetched is similar to the function of the brain. The AR and PUT databases were used for training and testing the model. The results have been evaluated with the FICV assessment factors, showing a lower Equal Error Rate and rejection rate compared to existing methods.
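The pooling stage the abstract attributes to HMAX can be illustrated as a local max over non-overlapping neighborhoods of a filter-response map. This is a minimal sketch, not the authors' implementation; the function name and pooling size are our own assumptions:

```python
import numpy as np

def c1_max_pool(response, pool=2):
    """Max-pool an S1-like response map over non-overlapping pool x pool
    neighborhoods, in the spirit of the C1 layer of the HMAX model."""
    h, w = response.shape
    h2, w2 = h // pool, w // pool
    # Trim edges so the map divides evenly, then pool each block with max.
    trimmed = response[:h2 * pool, :w2 * pool]
    return trimmed.reshape(h2, pool, w2, pool).max(axis=(1, 3))

resp = np.arange(16, dtype=float).reshape(4, 4)
pooled = c1_max_pool(resp, pool=2)
# pooled is a 2x2 map; pooled[0, 0] == 5.0, the max of the top-left 2x2 block
```

Max-pooling of this kind gives the signature a degree of invariance to small shifts in the underlying patch, which is why HMAX-style models use it between filter layers.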
    • Open Access Article

      2 - An Efficient Method for Handwritten Kannada Digit Recognition based on PCA and SVM Classifier
      Ramesh G, Prasanna G B, Santosh V Bhat, Chandrashekar Naik, Champa H N
      Handwritten digit recognition is one of the classical problems in the field of image classification, a subfield of computer vision. Handwritten digits occur widely, so their recognition using computer vision and machine learning techniques has been a well-studied field, and it has undergone exceptional development since the advent of machine learning techniques. Using Support Vector Machines (SVM) and Principal Component Analysis (PCA), a robust and swift method for handwritten digit recognition in the Kannada language is introduced. In this work, the Kannada-MNIST dataset is used for digit recognition to evaluate the performance of SVM and PCA. Efforts were made previously to recognize handwritten digits of other languages with this approach. However, due to the lack of a standard MNIST dataset for Kannada numerals, Kannada handwritten digit recognition lagged behind. With the introduction of the MNIST dataset for Kannada digits, we move towards solving the problem and show how applying PCA for dimensionality reduction before the SVM classifier increases accuracy with the RBF kernel. 60,000 images are used for training and 10,000 images for testing the model, and an accuracy of 99.02% on validation data and 95.44% on test data is achieved. Performance measures such as precision, recall, and F1-score have been evaluated for the method.
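The PCA dimensionality-reduction step applied before the SVM classifier can be sketched in plain NumPy via the SVD of the centered data. This is an illustrative stand-in, not the paper's code; the toy data and component count are assumptions:

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Project samples (rows of X) onto the top principal components.
    A plain-NumPy stand-in for the PCA step run before an SVM classifier."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are the principal axes, ordered by singular value.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Four toy 2-D samples whose variance lies mostly along the first axis.
X = np.array([[2.0, 0.0], [0.0, 0.1], [-2.0, -0.1], [0.0, 0.0]])
Z = pca_fit_transform(X, n_components=1)
# Z has shape (4, 1): each 2-D sample reduced to its first principal coordinate
```

On image data such as 28x28 digit rasters, the same projection shrinks 784-dimensional pixel vectors to a few dozen components before the (comparatively expensive) RBF-kernel SVM is trained.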
    • Open Access Article

      3 - A Threshold-based Brain Tumour Segmentation from MR Images using Multi-Objective Particle Swarm Optimization
      Katkoori Arun Kumar, Ravi Boda
      The Pareto optimal solution is unique in single-objective Particle Swarm Optimization (SO-PSO) problems, as the emphasis is on the decision variable space. A multi-objective optimization technique called Multi-Objective Particle Swarm Optimization (MO-PSO) is introduced in this paper for image segmentation. MO-PSO extends the principle of optimization by facilitating the simultaneous optimization of several objectives. It is used in solving various image processing problems such as image segmentation and image enhancement. Here, the technique is used to detect tumours of the human brain in MR images. To obtain the threshold, the suggested algorithm uses two fitness (objective) functions: image entropy and image variance. These two objective functions are distinct from each other and are simultaneously optimized to create a sequence of Pareto-optimal solutions. The global best (Gbest) obtained from MO-PSO is treated as the threshold. The MO-PSO technique, tested on various MRI images, demonstrates its efficiency in experimental findings. In terms of the best, worst, mean, median, and standard deviation parameters, the MO-PSO technique is also contrasted with the existing single-objective PSO (SO-PSO) technique. Experimental results show that MO-PSO is 28% better than SO-PSO for the 'best' parameter with reference to the image entropy function and more accurate (92%) than SO-PSO with reference to the image variance function.
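The two objective functions and the swarm search described above can be sketched as follows. This is a simplified, scalarized single-swarm toy (a weighted sum of the two objectives), not the paper's true multi-objective Pareto machinery; all parameter values and function names are assumptions:

```python
import numpy as np

def image_entropy(hist, t):
    """Sum of Shannon entropies of the two histogram partitions at threshold t."""
    def h(p):
        p = p[p > 0]
        p = p / p.sum()
        return -(p * np.log(p)).sum()
    lo, hi = hist[:t], hist[t:]
    if lo.sum() == 0 or hi.sum() == 0:
        return 0.0
    return h(lo) + h(hi)

def between_class_variance(hist, t):
    """Otsu-style between-class variance of the two partitions at threshold t."""
    levels = np.arange(hist.size)
    w0, w1 = hist[:t].sum(), hist[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    m0 = (levels[:t] * hist[:t]).sum() / w0
    m1 = (levels[t:] * hist[t:]).sum() / w1
    return w0 * w1 * (m0 - m1) ** 2 / (w0 + w1) ** 2

def pso_threshold(hist, n_particles=20, iters=40, w=0.7, c1=1.5, c2=1.5,
                  alpha=0.5, seed=0):
    """Toy PSO searching for the threshold that maximizes a weighted sum of
    the two objectives (a scalarized stand-in for the paper's MO-PSO)."""
    rng = np.random.default_rng(seed)
    hist = hist / hist.sum()

    def fitness(t):
        t = int(np.clip(round(t), 1, hist.size - 1))
        return alpha * image_entropy(hist, t) + (1 - alpha) * between_class_variance(hist, t)

    pos = rng.uniform(1, hist.size - 1, n_particles)
    vel = np.zeros(n_particles)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # Standard velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1, hist.size - 1)
        f = np.array([fitness(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()]
    return int(round(gbest))

# Synthetic bimodal histogram: two intensity clusters around 50 and 200.
hist = np.zeros(256)
hist[40:60] = 1.0
hist[190:210] = 1.0
t = pso_threshold(hist)
# t lands in the valley between the two modes (between 60 and 190)
```

The real MO-PSO maintains an archive of non-dominated solutions rather than collapsing the objectives into one weighted sum; the sketch only shows how the two fitness functions drive the swarm toward a separating threshold.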
    • Open Access Article

      4 - Optimized Kernel Nonparametric Weighted Feature Extraction for Hyperspectral Image Classification
      Mohammad Hasheminejad
      Hyperspectral image (HSI) classification is an essential means of analyzing remotely sensed images. Remote sensing of natural resources, astronomy, medicine, agriculture, and food health are examples of possible applications of this technique. Since hyperspectral images contain redundant measurements, it is crucial to identify a subset of efficient features for modeling the classes. Kernel-based methods are widely used in this field. In this paper, we introduce a new kernel-based method that defines the class hyperplane more optimally than previous methods. The presence of noisy data in many kernel-based HSI classification methods causes changes in boundary samples and, as a result, incorrect training of the class hyperplane. We propose optimized kernel nonparametric weighted feature extraction (KNWFE) for hyperspectral image classification. KNWFE is a kernel-based feature extraction method with promising results in classifying remotely sensed image data. However, it does not take into account the closeness or distance of the data to the target classes. To solve this problem, we propose optimized KNWFE, which results in better classification performance. Our extensive experiments show that the proposed method improves the accuracy of HSI classification and is superior to state-of-the-art HSI classifiers.
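Kernel-based feature extractors such as KNWFE build on a kernel Gram matrix over the spectral samples. The following is a minimal sketch of the common RBF kernel computation, only one building block of such methods, with an assumed gamma value:

```python
import numpy as np

def rbf_kernel_matrix(X, Y, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2), the RBF kernel
    that kernel-based feature extractors typically build on."""
    # Expand the squared distance: ||x||^2 + ||y||^2 - 2 x.y, vectorized.
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2 * X @ Y.T
    # Clamp tiny negative values caused by floating-point round-off.
    return np.exp(-gamma * np.maximum(sq, 0.0))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel_matrix(X, X, gamma=0.5)
# K is symmetric with ones on the diagonal; K[0, 1] == exp(-0.5)
```

In a hyperspectral setting each row of X would be one pixel's spectral vector; the feature-extraction step then works with K instead of the raw high-dimensional spectra.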
    • Open Access Article

      5 - A Corpus for Evaluation of Cross Language Text Re-use Detection Systems
      Salar Mohtaj, Habibollah Asghari
      In recent years, the availability of documents through the Internet, along with automatic translation systems, has increased plagiarism, especially across languages. Cross-lingual plagiarism occurs when the source or original text is in one language and the plagiarized or re-used text is in another. Various methods for automatic text re-use detection across languages have been developed whose objective is to assist human experts in analyzing documents for plagiarism cases. Standard evaluation resources are needed to evaluate the performance of these systems and algorithms. In constructing cross-lingual plagiarism detection corpora, the majority of earlier studies have focused on English and other European language pairs and have paid less attention to low-resource languages. In this paper, we investigate a method for constructing an English-Persian cross-language plagiarism detection corpus based on parallel bilingual sentences, from which passages with various degrees of paraphrasing are artificially generated. The plagiarized passages are inserted into topically related English and Persian Wikipedia articles in order to produce more realistic text documents. The proposed approach can be applied to other less-resourced languages. Both intrinsic and extrinsic evaluation methods were employed to evaluate the compiled corpus, so it can suitably be included in an evaluation framework for assessing cross-language plagiarism detection systems. Our corpus is free and publicly available for research purposes.
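A simple baseline for the monolingual side of the text re-use detection that such a corpus is built to evaluate is character n-gram containment. This toy sketch is our own illustration, not the corpus-construction method of the paper:

```python
def char_ngrams(text, n=5):
    """Set of character n-grams of a whitespace-normalized, lowercased text."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def containment(suspicious, source, n=5):
    """Fraction of the suspicious text's n-grams that also occur in the
    source text -- a simple text re-use score in [0, 1]."""
    s, src = char_ngrams(suspicious, n), char_ngrams(source, n)
    return len(s & src) / len(s) if s else 0.0

score = containment("the quick brown fox jumps",
                    "the quick brown fox jumps over the lazy dog")
# score == 1.0: every n-gram of the re-used text appears in the source
```

A cross-language detector would first map both documents into one language (or one shared representation) before applying an overlap measure of this kind; corpora like the one described above supply the labeled plagiarized passages needed to score such detectors.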