• List of Articles


      • Open Access Article

        1 - Deep Transformer-based Representation for Text Chunking
        Parsa Kavehzadeh, Mohammad Mahdi Abdollah Pour, Saeedeh Momtazi
        Text chunking is one of the basic tasks in natural language processing. Most models proposed in recent years were applied to chunking and other sequence labeling tasks simultaneously, and they were mostly based on Recurrent Neural Networks (RNN) and Conditional Random Fields (CRF). In this article, we use state-of-the-art transformer-based models in combination with CRF, Long Short-Term Memory (LSTM)-CRF, and a simple dense layer to study the impact of different pre-trained models on overall text chunking performance. To this aim, we evaluate BERT, RoBERTa, Funnel Transformer, XLM, XLM-RoBERTa, BART, and GPT2 as candidate contextualized models. Our experiments show that all transformer-based models except GPT2 achieved close and high scores on text chunking. Due to its unidirectional architecture, GPT2 performs relatively poorly on text chunking compared to the other, bidirectional transformer-based architectures. Our experiments also revealed that adding an LSTM layer to transformer-based models does not significantly improve the results, since the LSTM extracts no additional information from the input beyond what the deep contextualized models already capture.
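        Whatever encoder and decoder head are used, text chunking ultimately produces per-token BIO labels. The sketch below (illustrative only, not the authors' code) shows how such labels are decoded into chunk spans; the tag names are the conventional NP/VP chunk types.

```python
def bio_to_chunks(tags):
    """Decode per-token BIO labels into (chunk_type, start, end) spans.

    `end` is exclusive, e.g. tags B-NP I-NP B-VP O B-NP yield three chunks.
    """
    chunks = []
    start = ctype = None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last chunk
        continues = tag.startswith("I-") and tag[2:] == ctype
        if start is not None and not continues:
            chunks.append((ctype, start, i))  # close the currently open chunk
            start = ctype = None
        if tag.startswith("B-"):
            start, ctype = i, tag[2:]  # open a new chunk of this type
    return chunks
```

        For example, `bio_to_chunks(["B-NP", "I-NP", "B-VP", "O", "B-NP"])` returns `[("NP", 0, 2), ("VP", 2, 3), ("NP", 4, 5)]`.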
      • Open Access Article

        2 - Deep Learning-based Educational User Profile and User Rating Recommendation System for E-Learning
        Pradnya Vaibhav Kulkarni, Sunil Rai, Rajneeshkaur Sachdeo, Rohini Kale
        In the current era of online learning, a recommendation system for the eLearning process is quite important. Since the COVID-19 pandemic, eLearning has undergone a complete transformation. Existing eLearning recommendation systems rely on collaborative filtering or content-based filtering over historical data, students’ previous grades, results, or user profiles. These systems selected courses based on such parameters in a generalized manner rather than on a personalized basis. Personalized recommendations, information relevancy, choosing the proper course, and recommendation accuracy are some of the open issues in eLearning recommendation systems. In this paper, existing conventional eLearning and course recommendation systems are studied in detail and compared with the proposed approach. We used the User Profile and User Rating dataset for course recommendation. K-Nearest Neighbor, Support Vector Machine, Decision Tree, Random Forest, Naïve Bayes, Linear Regression, Linear Discriminant Analysis, and Neural Network were among the machine learning techniques explored and deployed. The accuracy achieved by these algorithms ranges from 0.81 to 0.97. The proposed algorithm uses a hybrid approach combining collaborative filtering and deep learning. We improved accuracy to 0.98, which indicates that the proposed model can provide personalized and accurate eLearning recommendations for the individual user.
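        The collaborative-filtering half of such a hybrid recommender can be sketched as user-based filtering with cosine similarity. This is a minimal illustration, not the paper's implementation; the users, items, and rating values below are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts {item: rating}."""
    shared = set(u) & set(v)
    num = sum(u[i] * v[i] for i in shared)
    du = math.sqrt(sum(x * x for x in u.values()))
    dv = math.sqrt(sum(x * x for x in v.values()))
    return num / (du * dv) if du and dv else 0.0

def predict(ratings, user, item, k=2):
    """Predict a rating as the similarity-weighted average over the k most
    similar users who have rated the item."""
    neighbours = [(cosine(ratings[user], r), r[item])
                  for uid, r in ratings.items()
                  if uid != user and item in r]
    neighbours.sort(reverse=True)
    top = neighbours[:k]
    denom = sum(sim for sim, _ in top)
    return sum(sim * rating for sim, rating in top) / denom if denom else 0.0
```

        A deep-learning component would replace the hand-crafted similarity with learned user/item embeddings; the weighted-average prediction step stays conceptually the same.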
      • Open Access Article

        3 - Implementation of Machine Learning Algorithms for Customer Churn Prediction
        Manal Loukili, Fayçal Messaoudi, Raouya El Youbi
        Churn prediction is one of the most critical issues in the telecommunications industry. The possibilities of predicting churn have increased considerably due to the remarkable progress made in the field of machine learning and artificial intelligence. In this context, we propose a process consisting of six stages. The first stage is data pre-processing, followed by feature analysis. The third stage is feature selection. The data is then divided into two parts: the training set and the test set. In the prediction stage, the most popular predictive models were adopted, namely random forest, k-nearest neighbor, and support vector machine. In addition, we used cross-validation on the training set for hyperparameter tuning and to avoid model overfitting. The results obtained on the test set were then evaluated using the confusion matrix and the AUC curve. Finally, we found that the models used gave high accuracy values (over 79%). The highest AUC score, 84%, was achieved by the SVM and by a bagging classifier, an ensemble method that surpasses the individual models.
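        The evaluation stage above rests on the confusion matrix. A minimal sketch (illustrative, not the authors' code) of computing it and the derived accuracy for a binary churn label:

```python
def confusion_matrix(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary labels where 1 = churned."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def accuracy(y_true, y_pred):
    """Fraction of correct predictions, derived from the confusion matrix."""
    tp, fp, fn, tn = confusion_matrix(y_true, y_pred)
    return (tp + tn) / (tp + fp + fn + tn)
```

        The AUC additionally needs the model's ranking scores, not just hard labels, which is why the paper reports it alongside plain accuracy.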
      • Open Access Article

        4 - Recognition of Facial and Vocal Emotional Expressions by SOAR Model
        Matin Ramzani Shahrestani, Sara Motamed, Mohammadreza Yamaghani
        Today, facial and vocal emotional expression recognition is considered one of the most important modes of human communication and response to the environment, and an attractive field of machine vision. It can be applied in different cases, including emotion analysis. This article uses the six basic emotional expressions (anger, disgust, fear, happiness, sadness, and surprise), and its main goal is to present a new method in cognitive science based on the functioning of the human brain system. The proposed model comprises four main parts: pre-processing, feature extraction, feature selection, and classification. In the pre-processing stage, facial images and verbal signals are extracted from videos taken from the eNTERFACE’05 dataset, and noise removal and resizing are performed on them. In the feature extraction stage, PCA is applied to the images, and a 3D-CNN network is used to find the best image features. Moreover, MFCC is applied to the emotional verbal signals, and a CNN network is applied to find the best audio features. Fusion is then performed on the resulting features, and finally SOAR classification is applied to the fused features to calculate the recognition rate of emotional expression based on face and speech. The model is compared with competing models to examine its performance. The highest audio-visual recognition rate was obtained for the emotional expression of disgust, at 88.1%, and the lowest for fear, at 73.8%.
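        The fusion step, concatenating the face and speech feature vectors before classification, can be sketched as follows. This is only an illustration of late fusion with a stand-in nearest-centroid classifier; the paper's actual pipeline uses 3D-CNN/CNN features and a SOAR classifier, and the feature values below are invented.

```python
import math

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def fuse(face_feats, audio_feats):
    """Late fusion: concatenate the two modality feature vectors."""
    return list(face_feats) + list(audio_feats)

def classify(fused, centroids):
    """Assign the emotion whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda emotion: dist(fused, centroids[emotion]))
```

        A fused vector close to the "anger" centroid is labeled accordingly, regardless of which modality contributed the discriminative components.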
      • Open Access Article

        5 - Long-Term Software Fault Prediction Model with Linear Regression and Data Transformation
        Momotaz Begum, Jahid Hasan Rony, Md. Rashedul Islam, Jia Uddin
        Validation performance is essential to ensure software reliability by determining the characteristics of an implemented software system. Ensuring software reliability requires not only detecting and solving occurred faults but also predicting future faults, which is done before any actual testing phase initiates. As a result, various works on software fault prediction have been done. In this paper, we present a software fault prediction model in which different data transformation methods are applied to Poisson fault count data. For pre-processing from Poisson data to Gaussian data, the Box-Cox power transformation (Box-Cox_T), Yeo-Johnson power transformation (Yeo-Johnson_T), and Anscombe transformation (Anscombe_T) are used. Linear regression is then applied for long-term software fault prediction; it models the linear relationship between the dependent and independent variables, namely relative error and testing days, respectively. For the analysis, three real software fault count datasets are used, on which we compare the proposed approach with the naïve Gauss method, the exponential smoothing time series forecasting model, and conventional software reliability growth models (SRGMs), both with data transformation (With_T) and without it (Non_T). Our datasets contain days and cumulative software faults represented in (62, 133), (181, 225), and (114, 189) formats, respectively. The Box-Cox power transformation with linear regression (L_Box-Cox_T) method outperformed all other methods with regard to average relative error from the short to the long term.
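        The transform-then-regress core of this approach can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the λ value and the fault-count data below are invented, and a real application would estimate λ from the data.

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform; requires strictly positive inputs."""
    if lam == 0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1) / lam for v in y]

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Toy cumulative fault counts over testing days: transform toward
# Gaussian-like data, then fit the linear trend.
days = [1, 2, 3, 4, 5]
faults = [2, 5, 9, 14, 20]
a, b = fit_line(days, box_cox(faults, lam=0.5))
```

        Predictions for future days are made on the transformed scale and inverse-transformed back to fault counts.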
      • Open Access Article

        6 - A survey on NFC Payment: Applications, Research Challenges, and Future Directions
        Mehdi Sattarivand, Shahram Babaie, Amir Masoud Rahmani
        Near Field Communication (NFC), as a short-range wireless connectivity technology, makes it easier for electronic devices to stay in touch. Thanks to its advantages, such as secure access, compatibility, and ease of use, this technology can be utilized in multiple applications across various domains such as banking, file transfer, reservations, ticket booking, redeeming, entry/exit passes, and payment. In this survey paper, various aspects of this technology, including its operating modes, their protocol stacks, and the standard message format, are investigated. Moreover, future directions for NFC in terms of design, improvement, and user-friendliness are presented for further research. In addition, due to the disadvantages of banknote-based payment methods, such as the high temptation to steal and the need for a safe, mobile payments, which include mobile wallets and mobile money transfers, are explored as a new alternative to these methods. Traditional payment methods and their limitations are also surveyed, along with NFC payment as a prominent application of this technology. Furthermore, the security threats of NFC payment, future research directions for NFC payment and its challenges, including protocols and standards, and NFC payment security requirements are addressed in this paper. It is hoped that effective policies for NFC payment development will be provided by addressing the important challenges and formulating appropriate standards.
      • Open Access Article

        7 - Content-based Retrieval of Tiles and Ceramics Images based on Grouping of Images and Minimal Feature Extraction
        Simin RajaeeNejad, Farahnaz Mohanna
        One of the most important databases in e-commerce is the tile and ceramic database, for which no specific retrieval method has been provided so far. In this paper, a method is proposed for content-based retrieval from digital image databases of tiles and ceramics. First, a database of 520 images is created by photographing different tiles and ceramics on the market from different angles and directions. Then, the query image and the database images are each divided into nine equal sub-images, and all are grouped based on their sub-images. Next, selected color and texture features are extracted from the sub-images of the database images and the query image, so that each image has a feature vector. The selected features are the minimum features required, which reduces the amount of computation and stored information and speeds up retrieval. Average precision is calculated for the similarity measure. Finally, comparing the query feature vector with the feature vectors of all database images yields the retrieved results. According to these results, the accuracy and speed of the proposed method are improved by 16.55% and 23.88%, respectively, compared to the most similar methods.
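        The sub-image step can be sketched as splitting each image into a 3×3 grid and computing one compact feature per block. Here the feature is simply the mean intensity of a grayscale block, a stand-in for the paper's selected color and texture features; image dimensions are assumed divisible by three.

```python
def feature_vector(img):
    """Split a 2D grayscale image into 9 equal blocks (3x3 grid) and
    return the mean intensity of each block as a 9-dimensional vector."""
    h, w = len(img), len(img[0])
    bh, bw = h // 3, w // 3
    feats = []
    for bi in range(3):
        for bj in range(3):
            block = [img[r][c]
                     for r in range(bi * bh, (bi + 1) * bh)
                     for c in range(bj * bw, (bj + 1) * bw)]
            feats.append(sum(block) / len(block))
    return feats

def distance(f1, f2):
    """Euclidean distance between two feature vectors; smaller = more similar."""
    return sum((a - b) ** 2 for a, b in zip(f1, f2)) ** 0.5
```

        Retrieval then amounts to ranking all database images by `distance` to the query's feature vector, restricted to the group the query falls into.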
      • Open Access Article

        8 - Spectrum Sensing of OFDM Signals Utilizing Higher Order Statistics under Noise Uncertainty Environments in Cognitive Radio Systems
        Mousumi Haque, Tetsuya Shimamura
        Cognitive radio (CR) is an important technology for solving the spectrum scarcity problem in modern and forthcoming wireless communication systems. Spectrum sensing is the ability of CR systems to sense the primary user signal and detect an idle portion of the radio spectrum. Spectrum sensing is mandatory to solve the spectrum scarcity problem and the interference problem of the primary user. Accounting for noise uncertainty for orthogonal frequency division multiplexing (OFDM) transmitted signals in severe noise environments is a challenging issue when measuring spectrum sensing performance. This paper proposes a method using higher order statistics (HOS) functions, including skewness and kurtosis, to improve the sensing performance of a cyclic prefix (CP) based OFDM transmitted signal under noise uncertainty. The detection performance of OFDM systems is measured for various CP sizes using a higher order digital modulation technique over a multipath Rayleigh fading channel at low signal-to-noise ratios (SNRs). In the proposed method, the CP-based OFDM transmitted signal sensing performance is measured and compared with conventional methods under noise uncertainty environments. Comprehensive simulation evaluation demonstrates that the sensing performance of this method significantly outperforms conventional schemes under noise uncertainty in severe noise environments.
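        The higher order statistics the method relies on are the third and fourth standardized moments of the received samples. A minimal sketch of computing them (illustrative only, not the authors' detector):

```python
def moments(x):
    """Return (skewness, kurtosis) of a sample sequence.

    Skewness is the third standardized moment; kurtosis is the fourth
    (equal to 3 for a Gaussian), using population (biased) moments.
    """
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m3 / var ** 1.5, m4 / var ** 2
```

        A detector built on these statistics compares them against thresholds: pure Gaussian noise keeps the kurtosis near 3, while a structured OFDM signal component shifts it, which is what makes HOS features robust when the noise power itself is uncertain.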
      • Open Access Article

        9 - Trip Timing Algorithm for GTFS Data with Redis Model to Improve the Performance
        Mustafa Alzaidi, Aniko Vagner
        Access to public transport plays an essential role in the daily productivity of people in urban regions. Therefore, it is necessary to represent the spatiotemporal diversity of transit services to evaluate public transit accessibility appropriately. That can be accomplished by determining the shortest path or the shortest-travel-time trip plan. Many applications, like ArcGIS, provide tools to estimate trip time using GTFS data. They can perform well in finding travel time, but they can be computationally inefficient and impractical as the data dimensions grow, for example when searching over the whole day or over huge datasets. Recently proposed research provides more computationally efficient algorithms to solve the problem. This paper presents a new algorithm to find the timing information for a trip plan between a start point and a destination point. We also introduce RMH (Range Mapping Hash), a new approach using Redis NoSQL to find and calculate the accessibility of a trip plan with a fixed time complexity of O(2) regardless of city size (GTFS size). We evaluated the performance of this approach and compared it with the traditional run-time algorithm using GTFS data of Debrecen and Budapest. This Redis model can be applied to similar problems where the input can be divided into ranges that share the same output.
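        The range-mapping idea, every query time within a range maps to the same precomputed answer, so lookup cost is independent of data size, can be sketched with a plain dict standing in for the Redis hash. The departure times (in minutes) below are invented for illustration; in production each bucket would be a field in a Redis hash, fetched with a single HGET.

```python
def build_rmh(departures, day_end):
    """Precompute a Range Mapping Hash: for every minute of the day,
    store the next departure at or after that minute (None if no more).

    Build cost is paid once offline; each query is then one dict lookup.
    """
    rmh = {}
    deps = sorted(departures)
    j = 0
    for minute in range(day_end):
        while j < len(deps) and deps[j] < minute:
            j += 1  # advance past departures already missed
        rmh[minute] = deps[j] if j < len(deps) else None
    return rmh
```

        Querying `rmh[query_minute]` replaces a run-time scan of the GTFS stop_times with a single constant-time hash access, which is the performance gain the paper measures against the traditional run-time algorithm.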