• Open Access
  • About the journal

     The Journal of Information Systems and Telecommunication (JIST) accepts and publishes papers presenting original research and/or development results that make an effective and novel contribution to knowledge in the area of information systems and telecommunication. Contributions are accepted in the form of Regular papers or Correspondence. Regular papers give a well-rounded treatment of a problem area, whereas Correspondence focuses on a single point within a defined problem area. With the permission of the editorial board, other kinds of papers may be published if they are found to be relevant or of interest to the readers. Responsibility for the content of the papers rests solely with the authors. The Journal is aimed not only at a national community but also at an international audience. For this reason, authors are required to write in English.

    This Journal is published under the scientific support of the Advanced Information Systems (AIS) Research Group and the Digital & Signal Processing Group, ICTRC.

    For further information on Article Processing Charges (APCs) policies, please visit our APC page or contact us at infojist@gmail.com.

     


    Recent Articles

    • Open Access Article

      1 - An Efficient Sentiment Analysis Model for Crime Articles’ Comments using a Fine-tuned BERT Deep Architecture and Pre-Processing Techniques
      Sovon Chakraborty, Muhammad Borhan Uddin Talukdar, Portia Sikdar, Jia Uddin
      Issue 45 , Vol. 12 , Winter 2024
      The prevalence of social media these days allows users to exchange views on a multitude of events. Public comments on the talk-of-the-country crimes can be analyzed to understand how the overall mass sentiment changes over time. In this paper, a specialized dataset has been developed and utilized, comprising public comments about contemporary crime events from various types of online platforms. The comments were manually annotated with one of three polarity values: positive, negative, or neutral. Before feeding the model with the data, some pre-processing tasks are applied to eliminate the dispensable parts of each comment. In this study, a deep Bidirectional Encoder Representations from Transformers (BERT) model is utilized for sentiment analysis of the pre-processed crime data. In order to evaluate the performance of the model, the F1 score, ROC curve, and heatmap are used. Experimental results demonstrate that the model achieves an F1 score of 89% on the tested dataset. In addition, the proposed model outperforms other state-of-the-art machine learning and deep learning models, exhibiting higher accuracy with fewer trainable parameters. Since the model requires fewer trainable parameters, and hence has lower complexity than other models, it is expected to be a suitable option for use in portable IoT devices.

    • Open Access Article

      2 - Fear Recognition Using Early Biologically Inspired Features Model
      Elham Askari
      Issue 45 , Vol. 12 , Winter 2024
      Facial expressions reveal the inner emotional states of people. Different emotional states such as anger, fear, and happiness can be recognized on people's faces. One of the most important emotional states is fear, because it is used in diagnosing many conditions such as panic syndrome and post-traumatic stress disorder. The face is one of the biometrics proposed for fear detection because it contains small features that increase the recognition rate. In this paper, a model inspired by early biological vision is proposed to extract effective features for optimal fear detection. The model is inspired by the structure of the human brain and nervous system, so it exhibits a similar function to the brain. The model uses four computational layers. In the first layer, the input image is decomposed into a pyramid of six scales, from large to small. The whole pyramid then enters the next layer, where a Gabor filter is applied to each image, and the results enter the following layer. In the third layer, the extracted features are reduced. In the last layer, normalization is performed on the images. Finally, the outputs of the model are given to an SVM classifier to perform recognition. Experiments are performed on the JAFFE database images. The experimental results show that the proposed model outperforms competing models such as the BEL and Naive Bayes models, with recognition accuracy, precision, and recall of 99.33%, 99.71%, and 99.5%, respectively.

    • Open Access Article

      3 - Optimization of Query Processing in Versatile Database Using Ant Colony Algorithm
      Hasan Asil
      Issue 45 , Vol. 12 , Winter 2024
      Nowadays, with the advancement of information technology, databases have grown into large-scale distributed databases. Accordingly, database management systems are improved and optimized so that they can respond to customer queries at lower cost. Query processing is one of the important topics in database management systems that attracts attention. To date, many techniques have been implemented for query processing in database systems, all aiming to optimize query processing in the database. A central concern in database query processing is making run-time adjustments to processing, or summarizing queries, using new approaches. The aim of this research is to optimize query processing in the database using adaptive methods. The Ant Colony Optimization (ACO) algorithm is used for solving optimization problems; ACO relies on deposited pheromone to select the optimal solution. In this article, an adaptive hybrid query-processing algorithm is proposed. The proposed algorithm is fundamentally divided into three parts: a separator, a replacement policy, and a query similarity detector. In order to improve optimization, frequent adaptation, and correct selection among queries, the Ant Colony Algorithm has been applied in this research. In this algorithm, based on versatility (adaptability) scheduling, queries sent to the database are collected. The simulation results demonstrate that this method reduces the time spent in the database. One of the advantages of the proposed algorithm is that it identifies frequent queries during high-traffic periods and minimizes execution time. This optimization method reduces the system load during high-traffic periods for adaptive query processing and generally reduces execution time, aiming to minimize cost. The method reduces query cost in the database by 2.7%. Due to the versatility of high-cost queries, this improvement is most visible at high-traffic times. In future studies, distributed databases can be optimized by adopting new system development methods.

    • Open Access Article

      4 - TPALA: Two-Phase Adaptive Algorithm Based on Learning Automata for Job Scheduling in Cloud Environments
      Abolfazl Esfandi, Javad Akbari Torkestani, Abbas Karimi, Faraneh Zarafshan
      Issue 45 , Vol. 12 , Winter 2024
      Due to the completely random and dynamic nature of the cloud environment, as well as the high volume of jobs, one of the significant challenges in this environment is proper online job scheduling. Most existing algorithms are based on heuristic and meta-heuristic approaches, which makes them unable to adapt to the dynamic nature of resources and cloud conditions. In this paper, we present a distributed online algorithm that uses two different learning automata (LA) for each scheduler to schedule jobs optimally. In this algorithm, the workload placed on each virtual machine is proportional to its computational capacity and changes over time based on cloud conditions and the submitted jobs. The proposed algorithm uses two separate phases and two different LA to schedule jobs and allocate each job to the appropriate VM, yielding a two-phase adaptive algorithm based on LA, called TPALA. To demonstrate the effectiveness of our method, several scenarios were simulated in CloudSim, in which main metrics such as makespan, success rate, average waiting time, and degree of imbalance were measured and compared against other existing algorithms. The results show that TPALA performs at least 4.5% better than the closest competing algorithm.

    • Open Access Article

      5 - Image Fake News Detection using EfficientNetB0 Model
      Yasmine Almsrahad, Nasrollah Moghaddam Charkari
      Issue 45 , Vol. 12 , Winter 2024
      Today, social networks have become a prominent source of news, significantly shifting how people obtain news from traditional media to social media. At the same time, social media platforms have been plagued by unauthenticated and fake news in recent years, and the rise of fake news on these platforms has become a challenging issue. Fake news dissemination, especially through visual content, poses a significant threat because people tend to share information in image format. Consequently, detecting and combating fake news has become crucial in the realm of social media. In this paper, we propose an approach for detecting fake image news. Our method incorporates the error level analysis (ELA) technique and the convolutional neural network of the EfficientNet model. Converting the original image into an ELA image effectively highlights any manipulations or discrepancies within the image. The ELA image is then processed by the EfficientNet model, which captures distinctive features used to detect fake image news. The visual features extracted from the model are passed through a dense layer and a sigmoid function to predict the image type. To evaluate the efficacy of the proposed method, we conducted experiments on the CASIA 2.0 dataset, a widely adopted benchmark for fake image detection. The experimental results show an accuracy of 96.11% on the CASIA dataset. The method outperforms similar approaches in accuracy and computational efficiency, with a 6% increase in accuracy and a 5.2% improvement in F-score.

    • Open Access Article

      6 - An Analysis of the Signal-to-Interference Ratio in UAV-based Telecommunication Networks
      Hamid Jafaripour, Mohammad Fathi
      Issue 45 , Vol. 12 , Winter 2024
      One of the most important issues in wireless telecommunication systems is coverage efficiency in urban environments. Coverage efficiency means improving the signal-to-interference ratio (SIR) by providing maximum telecommunication coverage and establishing high-quality communication for users. In this paper, we use unmanned aerial vehicles (UAVs) as aerial base stations (BSs) to investigate and improve coverage maximization with minimal interference. First, we calculate the optimal height of the UAVs for coverage radii of 400, 450, 500, 550, and 600 meters. Then, using simulation, we examine the SIR of UAVs with omnidirectional and directional antennas in symmetric and asymmetric altitude conditions, with and without considering the height of the UAVs. The best SIR is achieved by the UAV system with a directional antenna in asymmetric altitude conditions, where the SIR ranges from 4.44 dB (minimum coverage) to 52.11 dB (maximum coverage). The worst SIR is the UAV system with an omnidirectional antenna in symmetric altitude conditions without considering the height of the UAV; here we estimate the range of SIR changes for different coverage ranges at between 1.39 and 28 dB. The factors affecting SIR, from most to least effective, are the coverage range and antenna type, symmetric versus asymmetric altitude, and finally whether the height of the UAV is considered.

    • Open Access Article

      7 - Proposing an FCM-MCOA Clustering Approach Stacked with Convolutional Neural Networks for Analysis of Customers in Insurance Company
      Motahareh Ghavidel, Meisam Yadollahzadeh Tabari, Mehdi Golsorkhtabaramiri
      Issue 45 , Vol. 12 , Winter 2024
      To create a customer-based marketing strategy, it is necessary to perform a proper analysis of customer data so that customers can be segmented and their future behavior predicted. Customer datasets in any business are usually high-dimensional, with many instances, and include both supervised and unsupervised data. For this reason, companies today try to satisfy their customers as much as possible, which requires careful consideration of customers from several aspects. Data mining algorithms are practical methods for extracting the required knowledge from customers' demographic and behavioral data. This paper presents a hybrid clustering algorithm using the Fuzzy C-Means (FCM) method and the Modified Cuckoo Optimization Algorithm (MCOA). Since customer data analysis plays a key role in ensuring a company's profitability, The Insurance Company (TIC) dataset is used for the experiments and performance evaluation. We compare the convergence of the proposed FCM-MCOA approach with conventional optimization methods such as the Genetic Algorithm (GA) and Invasive Weed Optimization (IWO). Moreover, we propose a customer classifier using Convolutional Neural Networks (CNNs). Simulation results reveal that FCM-MCOA converges faster than conventional clustering methods. In addition, the results indicate that the accuracy of the CNN-based classifier exceeds 98%. The CNN-based classifier converges after a few iterations, which is fast in comparison with conventional classifiers such as Decision Tree (DT), Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Naive Bayes (NB).

    • Open Access Article

      8 - Persian Ezafe Recognition Using Neural Approaches
      Habibollah Asghari Heshaam Faili
      Issue 45 , Vol. 12 , Winter 2024
      Persian Ezafe recognition aims to automatically identify occurrences of Ezafe (the short vowel /e/), which should be pronounced but is usually not orthographically represented. This task is similar to diacritization and vowel restoration in Arabic. Ezafe recognition can be used for spelling disambiguation in text-to-speech (TTS) systems and in various other language processing tasks such as syntactic parsing and semantic role labeling. In this paper, we propose two neural approaches for the automatic recognition of Ezafe markers in Persian texts: a neural sequence labeling method and a neural machine translation (NMT) approach. Some syntactic features are proposed for use in the neural models. We applied various combinations of lexical features, such as word forms, part-of-speech tags, and the ending letters of words, to the models. These features were statistically derived using a large annotated Persian text corpus and were optimized by a forward selection method. To evaluate the performance of our approaches, we examined nine baseline models, including state-of-the-art approaches for recognizing Ezafe markers in Persian text. Our experiments show that the neural approaches, employing optimized features, drastically improve on the baselines; they also achieve better results than the Conditional Random Field method, the best-performing baseline. Although the NMT approach performs better than the other baseline approaches, it cannot outperform the neural sequence labeling method. The best F1-measure, achieved with neural sequence labeling, is 96.29%.

    Most Viewed Articles

    • Open Access Article

      1 - Privacy Preserving Big Data Mining: Association Rule Hiding
      Golnar Assadat Afzali, Shahriyar Mohammadi
      Issue 14 , Vol. 4 , Spring 2016
      Data repositories contain sensitive information which must be protected from unauthorized access. Existing data mining techniques can be considered a privacy threat to sensitive data. Association rule mining is one of the foremost data mining techniques; it tries to uncover relationships between seemingly unrelated data in a database. Association rule hiding is a research area in privacy-preserving data mining (PPDM) which addresses hiding sensitive rules within the data. Much research has been done in this area, but most of it focuses on reducing the undesired side effects of deleting sensitive association rules in static databases. However, in the age of big data, we confront dynamic databases with new data arriving at any time, so most existing techniques are impractical and must be updated to suit these huge databases. In this paper, a data anonymization technique is used for association rule hiding, while parallelization and scalability features are also embedded in the proposed model in order to speed up the big data mining process. In this way, instead of removing some instances of an existing important association rule, generalization is used to anonymize items at an appropriate level, so that, if necessary, important association rules can be updated based on new data arrivals. We conducted experiments using three datasets in order to evaluate the performance of the proposed model in comparison with Max-Min2 and HSCRIL. Experimental results show that the information loss of the proposed model is less than that of existing work in this area, and the model can be executed in parallel for shorter execution times.

    • Open Access Article

      2 - Instance Based Sparse Classifier Fusion for Speaker Verification
      Mohammad Hasheminejad, Hassan Farsi
      Issue 15 , Vol. 4 , Summer 2016
      This paper focuses on the problem of ensemble classification for text-independent speaker verification. Ensemble classification is an efficient method to improve the performance of a classification system by exploiting a set of expert classifiers. A speaker verification system receives an input utterance and an identity claim, then verifies the claim in terms of a matching score. This score determines the resemblance of the input utterance to pre-enrolled target speakers. Since a speech signal contains a variety of information, state-of-the-art speaker verification systems use a set of complementary classifiers to provide a reliable decision about the verification. Such a system receives some scores as input and takes a binary decision: accept or reject the claimed identity. Most recent studies on classifier fusion for speaker verification use a weighted linear combination of the base classifiers, with the corresponding weights estimated using logistic regression. Additional research has been performed on ensemble classification by adding different regularization terms to the logistic regression formula. However, this type of ensemble classification misses two points: the correlation of the base classifiers and the superiority of some base classifiers for each test instance. We address both problems with an instance-based classifier ensemble selection and weight determination method. Our extensive studies on the NIST 2004 speaker recognition evaluation (SRE) corpus, in terms of EER, minDCF, and minCLLR, show the effectiveness of the proposed method.

    • Open Access Article

      3 - Node Classification in Social Network by Distributed Learning Automata
      Ahmad Rahnama Zadeh, Meybodi Meybodi, Masoud Taheri Kadkhoda
      Issue 18 , Vol. 5 , Spring 2017
      The aim of this article is to improve the accuracy of node classification in social networks using Distributed Learning Automata (DLA). In the proposed algorithm, new relations between nodes are created using a local similarity measure; the graph is then partitioned according to the labeled nodes, and a network of Distributed Learning Automata is assigned to each partition. In each partition, the maximal spanning tree is determined using DLA. Finally, nodes are labeled according to the rewards of the DLA. We tested this algorithm on three real social network datasets, and the results show that the expected accuracy of the presented algorithm is achieved.

    • Open Access Article

      4 - COGNISON: A Novel Dynamic Community Detection Algorithm in Social Network
      Hamideh Sadat Cheraghchi, Ali Zakerolhossieni
      Issue 14 , Vol. 4 , Spring 2016
      The problem of community detection has a long tradition in the data mining area and has many challenging facets, especially when it comes to community detection in a time-varying context. While recent studies argue for the usability of social science disciplines in modern social network analysis, we present a novel dynamic community detection algorithm called COGNISON, inspired mainly by social theories. Specifically, we take inspiration from prototype theory and cognitive consistency theory to recognize the best community for each member, formulating the community detection algorithm by analogy with human reasoning. COGNISON belongs to the category of representative-based algorithms and aims to fortify the purely mathematical approach to community detection with established social science disciplines. The proposed model is able to determine the proper number of communities with high accuracy in both weighted and binary networks. Comparison with state-of-the-art algorithms proposed for dynamic community discovery on real datasets shows the higher performance of this method in terms of Accuracy, NMI, and Entropy for detecting communities over time. Finally, our approach motivates the application of human-inspired models in the dynamic community detection context and suggests the fruitfulness of connecting the community detection field and social science theories.

    • Open Access Article

      5 - A Bio-Inspired Self-configuring Observer/ Controller for Organic Computing Systems
      Ali Tarihi, Hassan Haghighi, Fereidoon Shams Aliee
      Issue 15 , Vol. 4 , Summer 2016
      The increase in the complexity of computer systems has led to a vision of systems that can react and adapt to changes. Organic computing is a bio-inspired computing paradigm that applies ideas from nature as solutions to such concerns. This bio-inspiration leads to the emergence of life-like properties, called self-* properties in general, which suit these systems well for pervasive computing. Achieving these properties in organic computing systems is closely related to a proposed general feedback architecture, called the observer/controller architecture, which supports the mentioned properties by interacting with the system components and keeping their behavior under control. As one of these properties, self-configuration is desirable in organic computing systems, as it enables adaptation to environmental changes. However, adaptation at the level of the architecture itself has not yet been studied in the organic computing literature, which limits the achievable level of adaptation. In this paper, a self-configuring observer/controller architecture is presented that takes self-configuration to the architecture level. It enables the system to choose the proper architecture from a variety of possible observer/controller variants available for a specific environment. The validity of the proposed architecture is formally demonstrated. We also show the applicability of the architecture through a known case study.

    • Open Access Article

      6 - Publication Venue Recommendation Based on Paper’s Title and Co-authors Network
      Ramin Safa, Seyed Abolghassem Mirroshandel, Soroush Javadi, Mohammad Azizi
      Issue 21 , Vol. 6 , Winter 2018
      Information overload has always been a remarkable topic in scientific research, and one of the available approaches in this field is employing recommender systems. With the spread of these systems in various fields, studies show the need for more attention to applying them in scientific applications. Applying recommender systems to the scientific domain, for tasks such as paper recommendation, expert recommendation, citation recommendation, and reviewer recommendation, is a new and developing topic. With the significant growth in the number of scientific events and journals, one of the most important issues is choosing the most suitable venue for publishing a paper, and a tool to accelerate this process is necessary for researchers. Despite the importance of such systems in accelerating the publication process and decreasing possible errors, this problem has been little studied in related work. In this paper, therefore, an efficient approach is proposed for recommending related conferences or journals for a researcher's specific paper. In other words, our system recommends the most suitable venues for publishing a written paper, by means of social network analysis and content-based filtering, according to the researcher's preferences and the co-authors' publication history. Evaluation on real-world data shows acceptable accuracy in venue recommendation.

    • Open Access Article

      7 - Using Residual Design for Key Management in Hierarchical Wireless Sensor Networks
      Vahid Modiri, Hamid Haj Seyyed Javadi, Amir Masoud Rahmani, Mohaddese Anzani
      Issue 29 , Vol. 8 , Winter 2020
      Combinatorial designs are powerful structures for key management in wireless sensor networks, addressing both good connectivity and security against external attacks in large-scale networks. Many researchers have used key pre-distribution schemes based on combinatorial structures, in which key-rings are pre-distributed to each sensor node before deployment in a real environment. Given the restricted resources, key distribution is a challenging issue in providing sufficient security in wireless sensor networks. To communicate securely, two nodes must find a unique shared key within their stored key-rings. Most key pre-distribution protocols based on public-key mechanisms cannot support highly scalable networks due to their key storage overhead and communication cost, which increase linearly. In this paper, we introduce a new key distribution approach for hierarchical clustered wireless sensor networks. Each cluster has a construction that either contains new points or reinforces and builds upon that of its cluster head. Based on residual design, a powerful algebraic combinatorial architecture, and a hierarchical network model, our approach guarantees good connectivity between sensor nodes as well as cluster heads. Compared with similar existing schemes, our approach provides sufficient security whether a cluster head or a normal sensor node is compromised.

    • Open Access Article

      8 - Short Time Price Forecasting for Electricity Market Based on Hybrid Fuzzy Wavelet Transform and Bacteria Foraging Algorithm
      Keyvan Borna Sepideh Palizdar
      Issue 16 , Vol. 4 , Autumn 2016
      Predicting the price of electricity is very important because electricity cannot be stored. To this end, parallel methods and adaptive regression have been used in the past, but because of their dependence on the ambient temperature, they did not yield good results. In this study, linear prediction methods, neural networks, and fuzzy logic have been studied and simulated, and an optimized fuzzy-wavelet prediction method is proposed to predict the price of electricity. In this method, to obtain a better prediction, the membership functions of the fuzzy regression, along with the type of the wavelet transform filter, are optimized using the E. coli Bacterial Foraging Optimization Algorithm. To compare this optimized method with other prediction methods, including conventional linear prediction and neural network methods, all were evaluated on the same electricity price data. Our fuzzy-wavelet method yields a more desirable solution than previous methods: by choosing a suitable filter and a multiresolution processing method, the maximum error improved by 13.6% and the mean squared error improved by about 17.9%. Compared with the plain fuzzy prediction method, the proposed method has a higher computational cost due to the use of the wavelet transform as well as the double use of fuzzy prediction. Owing to the large number of layers and neurons it uses, the neural network method has a much higher computational cost than our fuzzy-wavelet method.

    • Open Access Article

      9 - The Innovation Roadmap and Value Creation for Information Goods Pricing as an Economic Commodity
      Hekmat Adelnia Najafabadi Ahmadreza Shekarchizadeh Akbar Nabiollahi Naser Khani Hamid Rastegari
      Issue 26 , Vol. 7 , Spring 2019
      Nowadays, most books and information resources, and even movies and application programs, are produced and reproduced as information goods. Given the characteristics of information goods, their cost structure, and their market, the usual and traditional pricing methods are not useful for such commodities, and information goods pricing has undergone innovative approaches. The purpose of product pricing is to find an optimal point that maximizes manufacturers' profits and consumers' utility. Undoubtedly, achieving this goal requires adopting appropriate strategies and implementing innovative tactics. Innovative strategies and tactics reflect the analysis of market share, changes in customer behavior, cost patterns, customer preferences, quick response to customer needs, market forecasting, appropriate response to market changes, customer retention, discovery of customers' specific requirements, cost reduction, and increased customer satisfaction. In this research, 32 papers were selected from 540 prestigious articles to create a canvas containing more than 20 possible avenues for innovation in the field of information goods pricing, which can be used by companies producing information goods regardless of their size, nationality, or the type of information goods they produce. Among the achievements of this research are key ideas on how to increase both profits and customer satisfaction, as well as three open issues for future research in the field of information goods pricing.

    • Open Access Article

      10 - DBCACF: A Multidimensional Method for Tourist Recommendation Based on Users’ Demographic, Context and Feedback
      Maral Kolahkaj Ali Harounabadi Alireza Nikravan shalmani Rahim Chinipardaz
      Issue 24 , Vol. 6 , Autumn 2018
      With the advent of Web 2.0 applications such as social networks, which allow users to share media, many opportunities have been provided for tourists to recognize and visit attractive and unfamiliar Areas-of-Interest (AOIs). However, finding appropriate areas based on a user's preferences is very difficult due to issues such as the huge number of tourist areas and the limited visiting time. In addition, the available methods have so far failed to provide accurate tourist recommendations based on geo-tagged media because of problems such as data sparsity, the cold-start problem, treating two users with different habits as the same (symmetric similarity), and ignoring users' personal and contextual information. Therefore, in this paper, a method called "Demographic-Based Context-Aware Collaborative Filtering" (DBCACF) is proposed to address these problems and to extend the Collaborative Filtering (CF) method to provide personalized tourist recommendations without users' explicit requests. DBCACF considers demographic and contextual information in combination with users' historical visits to overcome the limitations of CF methods in dealing with multi-dimensional data. In addition, a new asymmetric similarity measure is proposed to overcome the limitations of symmetric similarity methods. The experimental results on a Flickr dataset indicate that the use of demographic and contextual information, together with the addition of the proposed asymmetric scheme to the similarity measure, significantly improves the results compared to other methods that use only user-item ratings and symmetric measures.
    Upcoming Articles


  • Affiliated to
    Iranian Academic Center for Education, Culture and Research
    Director-in-Charge
    Habibollah Asghari (Research Institute for Information and Communication Technology, ACECR)
    Editor-in-Chief
    Masood Shafiei (Amirkabir University)
    Executive Manager
    Shirin Gilaki (Research Institute for Information and Communication Technology, ACECR)
    Editorial Board
    Abdolali Abdipour (Amirkabir University of Technology)
    Aliakbar Jalali (University of Maryland)
    Ali Mohammad Djafari (Le Centre National de la Recherche Scientifique (CNRS))
    Alireza Montazemi (McMaster University)
    Hamidreza Sadegh Mohammadi (ACECR)
    Mahmoud Moghavemi (University of Malaya)
    Mehrnoush Shamsfard (Shahid Beheshti University)
    Omid Mahdi Ebadati (Kharazmi University)
    Ramazan Ali Sadeghzadeh (K. N. Toosi University of Technology)
    Rahim Saeidi (eaglegenomics)
    Saeed Ghazimaghrebi (Islamic Azad University, Shahr-e-Rey)
    Shaban Elahi (Vali-e-asr University of Rafsanjan)
    Shohreh Kasaei (Sharif University of Technology)
    Zabih Ghasemlooy (University of Northumbria)
    Print ISSN: 2322-1437
    Online ISSN: 2345-2773

    Publication period: Quarterly
    Email
    infojist@gmail.com , info.jist@acecr.org
    Address
    No. 5, Saeedi Alley, Kalej Intersection, Enghelab Ave., Tehran, Iran.
    Phone
    +98 21 88930150
    Fax
    +98 21 88930157
    Postal Code
    1599616313



    Statistics

    Number of Volumes 12
    Number of Issues 46
    Printed Articles 347
    Number of Authors 3100
    Article Views 2026716
    Article Downloads 449319
    Number of Submitted Articles 1604
    Number of Rejected Articles 690
    Number of Accepted Articles 362
    Acceptance Rate 19%
    Time to Accept (days) 187
    Reviewer Count 944
    Last Update 6/22/2024