• List of Articles


      • Open Access Article

        1 - A New Non-Gaussian Performance Evaluation Method in Uncompensated Coherent Optical Transmission Systems
        Seyed Sadra Kashef Paeez Azmi
        In this paper, the statistical distribution of the received quadrature amplitude modulation (QAM) signal components is analyzed after propagation over a dispersion-uncompensated coherent optical fiber link. Two Gaussianity tests, the Anderson-Darling and the Jarque-Bera, have been used to measure the distance from the Gaussian distribution. As the launch power increases, the received signal distribution starts to deviate from Gaussian. This deviation can have significant effects on system performance evaluation. The use of the Johnson S_U distribution is proposed for the performance evaluation of orthogonal frequency division multiplexing in an uncompensated coherent optical system. Here, the Johnson S_U is extended to predict the performance of multi-subcarrier and also single-carrier systems with M-QAM signals. In particular, the symbol error rate is derived based on the Johnson S_U distribution, and the performance estimates are verified through accurate Monte-Carlo simulations based on the split-step Fourier method. In addition, a new formulation for the calculation of the signal-to-noise ratio is presented, which is more accurate than those proposed in the literature. In the linear region, the Johnson-based estimates coincide with the Gaussian ones; however, in the nonlinear region, the Johnson S_U prediction is more accurate than the one obtained using the Gaussian approximation, as verified by the numerical results.
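        A rough sketch of the evaluation idea, assuming SciPy is available (not the paper's derivation): fit a Johnson S_U distribution to received signal samples, run the two normality tests named above, and compare the tail probability beyond a decision boundary with a Gaussian fit. The synthetic samples, threshold, and variable names are illustrative placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder for received I-component samples of one QAM level after propagation
samples = rng.normal(1.0, 0.1, 50_000) + 0.02 * rng.standard_normal(50_000) ** 3

# Normality tests mentioned in the abstract (both available in SciPy)
ad_result = stats.anderson(samples, dist="norm")
jb_stat, jb_pvalue = stats.jarque_bera(samples)

# Fit both candidate distributions to the same data
a, b, loc, scale = stats.johnsonsu.fit(samples)
mu, sigma = samples.mean(), samples.std()

# Tail probability beyond a decision boundary (one symbol-error contribution)
threshold = 1.3
p_johnson = stats.johnsonsu.sf(threshold, a, b, loc=loc, scale=scale)
p_gauss = stats.norm.sf(threshold, loc=mu, scale=sigma)
print(f"AD stat {ad_result.statistic:.2f}, JB p-value {jb_pvalue:.3g}")
print(f"Johnson S_U tail {p_johnson:.3e} vs Gaussian tail {p_gauss:.3e}")
```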
      • Open Access Article

        2 - BSFS: A Bidirectional Search Algorithm for Flow Scheduling in Cloud Data Centers
        Hasibeh Naseri Sadoon Azizi Alireza Abdollahpouri
        To support high bisection bandwidth for communication-intensive applications in the cloud computing environment, data center networks usually offer a wide variety of paths. However, optimal utilization of this facility has always been a critical challenge in data center design. Flow-based mechanisms usually suffer from collisions between elephant flows, while packet-based mechanisms encounter the packet re-ordering phenomenon. Both of these challenges lead to severe performance degradation in a data center network. To address these problems, in this paper we propose an efficient mechanism for the flow scheduling problem in cloud data center networks. On one hand, the proposed mechanism makes decisions per flow, thus avoiding the need to reorder packets. On the other hand, thanks to SDN technology and a bidirectional search algorithm, the proposed method is able to distribute elephant flows across the entire network smoothly and at high speed. Simulation results confirm that the proposed method outperforms state-of-the-art algorithms under different traffic patterns. In particular, compared to the second-best result, the proposed mechanism provides about 20% higher throughput for the random traffic pattern. In addition, with regard to flow completion time, the improvement is 12% for the random traffic pattern.
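        The abstract does not give the internals of BSFS, so the following is only a generic bidirectional breadth-first search sketch over a toy topology, illustrating the idea of growing one frontier from the source and one from the destination until they meet; the topology dictionary and node names are hypothetical.

```python
from collections import deque

def bidirectional_search(topology, src, dst):
    if src == dst:
        return [src]
    parents_fwd, parents_bwd = {src: None}, {dst: None}
    q_fwd, q_bwd = deque([src]), deque([dst])

    def expand(queue, parents, other_parents):
        node = queue.popleft()
        for nxt in topology[node]:
            if nxt not in parents:
                parents[nxt] = node
                if nxt in other_parents:      # the two frontiers have met
                    return nxt
                queue.append(nxt)
        return None

    while q_fwd and q_bwd:
        meet = expand(q_fwd, parents_fwd, parents_bwd) or \
               expand(q_bwd, parents_bwd, parents_fwd)
        if meet is not None:
            # Stitch the two half-paths together at the meeting node
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_fwd[n]
            path.reverse()
            n = parents_bwd[meet]
            while n is not None:
                path.append(n)
                n = parents_bwd[n]
            return path
    return None

# Toy example: two hosts connected through edge, aggregation and core switches
topology = {
    "h1": ["e1"], "e1": ["h1", "a1", "a2"],
    "a1": ["e1", "c1"], "a2": ["e1", "c2"],
    "c1": ["a1", "a3"], "c2": ["a2", "a3"],
    "a3": ["c1", "c2", "e2"], "e2": ["a3", "h2"], "h2": ["e2"],
}
print(bidirectional_search(topology, "h1", "h2"))
```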
      • Open Access Article

        3 - Balancing Agility and Stability of Wireless Link Quality Estimators
        MohammadJavad Tanakian Mehri Mehrjoo
        The performance of many wireless protocols is tied to quick Link Quality Estimation (LQE). However, some wireless applications need the estimation to respond quickly only to persistent changes and to ignore transient changes of the channel, i.e., to be agile and stable, respectively. In this paper, we propose an adaptive fuzzy filter that balances the stability and agility of LQE by mitigating its transient variations. The heart of the fuzzy filter is an Exponentially Weighted Moving Average (EWMA) low-pass filter whose smoothing factor is changed dynamically by fuzzy rules. We apply the adaptive fuzzy filter and a non-adaptive one, i.e., an EWMA with a constant smoothing factor, to several types of channels, from short-term to long-term transitive channels. The comparison of the filters' outputs shows that the non-adaptive filter is stable for large values of the smoothing factor and agile for small values, while the proposed adaptive filter outperforms it in terms of balancing agility and stability, measured by the settling time and the coefficient of variation, respectively. Notably, the proposed adaptive fuzzy filter runs in real time and its complexity is low, because it uses a limited number of fuzzy rules and membership functions.
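        A crude stand-in for the adaptation idea (not the paper's fuzzy rule base): the sketch below uses a crisp persistence rule in place of fuzzy rules, so the weight on the new sample stays small for transient spikes and grows only when the deviation persists, keeping the filter stable against glitches yet agile for lasting channel changes. All thresholds and names are illustrative.

```python
def adaptive_ewma(samples, alpha_stable=0.05, alpha_agile=0.5,
                  dev_threshold=10.0, persist_needed=3):
    # Here alpha is the weight placed on the new sample: small = stable, large = agile.
    estimate = samples[0]
    persist = 0
    history = [estimate]
    for x in samples[1:]:
        deviation = abs(x - estimate)
        # Count how many consecutive samples have deviated strongly from the estimate
        persist = persist + 1 if deviation > dev_threshold else 0
        # Transient spike -> keep the small weight; persistent change -> switch to the large one
        alpha = alpha_agile if persist >= persist_needed else alpha_stable
        estimate = alpha * x + (1 - alpha) * estimate
        history.append(estimate)
    return history

# Example: an RSSI-like trace with one transient dip and one persistent level shift
trace = [70] * 20 + [40] + [70] * 19 + [50] * 40
filtered = adaptive_ewma(trace)
```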
      • Open Access Article

        4 - SSIM-Based Fuzzy Video Rate Controller for Variable Bit Rate Applications of Scalable HEVC
        Farhad Raufmehr Mehdi Rezaei
        Scalable High Efficiency Video Coding (SHVC) is the scalable extension of the latest video coding standard, H.265/HEVC. Rate control algorithms are outside the scope of video coding standards; appropriate algorithms are designed for various applications to meet practical constraints such as bandwidth and buffering constraints. In most scalable video applications, such as video on demand (VoD) and broadcasting, encoded bitstreams with variable bit rates are preferred to bitstreams with constant bit rates. In variable bit rate (VBR) applications, the tolerable delay is relatively high; therefore, we utilize a larger buffer to allow more variation in bitrate and provide smooth, high visual quality in the output video. In this paper, we propose a fuzzy video rate controller appropriate for VBR applications of SHVC. A fuzzy controller is used for each layer of the scalable video to minimize the fluctuation of the quantization parameter (QP) at the frame level while the buffering constraint is obeyed for any number of layers received by a decoder. The proposed rate controller utilizes the well-known structural similarity index (SSIM) as a quality metric to increase the visual quality of the output video. The proposed rate control algorithm is implemented in the HEVC reference software, and comprehensive experiments are executed to tune the fuzzy controllers and to evaluate the performance of the algorithm. Experimental results show high performance for the proposed algorithm in terms of rate control, visual quality, and rate-distortion performance.
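        A toy sketch only, assuming scikit-image is installed (this is not the paper's fuzzy controller): measure the SSIM of a reconstructed frame against the original and nudge the QP toward a target quality with a simple proportional rule. The gain, target, and synthetic frames are illustrative; 0 to 51 is simply the HEVC QP range.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def update_qp(original, reconstructed, qp, target_ssim=0.95, gain=20.0):
    quality = ssim(original, reconstructed, data_range=255)
    # Quality above the target leaves room to raise QP (spend fewer bits);
    # quality below the target lowers QP to spend more bits on the next frame.
    qp_new = qp + gain * (quality - target_ssim)
    return int(np.clip(round(qp_new), 0, 51)), quality

rng = np.random.default_rng(0)
orig = rng.integers(0, 256, (64, 64), dtype=np.uint8)
recon = np.clip(orig.astype(int) + rng.integers(-5, 6, (64, 64)), 0, 255).astype(np.uint8)
print(update_qp(orig, recon, qp=30))
```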
      • Open Access Article

        5 - DeepSumm: A Novel Deep Learning-Based Multi-Lingual Multi-Documents Summarization System
        Shima Mehrabi Seyed Abolghassem Mirroshandel Hamidreza Ahmadifar
        With the increasing amount of textual information accessible via the internet, it seems necessary to have a summarization system that can generate a summary of information on user demand. Summarization has long been studied by natural language processing researchers. Today, with improvements in processing power and the development of computational tools, efforts to improve the performance of summarization systems continue, especially by utilizing more powerful learning algorithms such as deep learning. In this paper, a novel multi-lingual multi-document summarization system is proposed that works based on deep learning techniques, and it is among the first Persian summarization systems to use deep learning. The proposed system ranks sentences based on some predefined features using a deep artificial neural network. A comprehensive study of the effect of different features was also carried out to achieve the best possible feature combination. The performance of the proposed system is evaluated on standard baseline datasets in Persian and English. The evaluation results demonstrate the effectiveness and success of the proposed summarization system in both languages; it can be said that the proposed method achieves state-of-the-art performance in Persian and English.
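        A minimal sketch of the sentence-ranking idea, assuming PyTorch: a small feed-forward network maps a vector of hand-crafted sentence features (e.g. position, length, keyword overlap) to a salience score, and the highest-scoring sentences form the summary. The feature count, layer sizes, and selection step are illustrative; the paper's actual feature set and training procedure are not reproduced here.

```python
import torch
import torch.nn as nn

class SentenceScorer(nn.Module):
    def __init__(self, n_features=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid())   # salience score in [0, 1]

    def forward(self, x):
        return self.net(x).squeeze(-1)

scorer = SentenceScorer()
features = torch.rand(100, 8)                  # one feature vector per sentence
with torch.no_grad():
    scores = scorer(features)
summary_indices = scores.topk(5).indices       # pick the 5 highest-ranked sentences
```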
      • Open Access Article

        6 - Social Groups Detection in Crowd by Using Automatic Fuzzy Clustering with PSO
        Ali Akbari Hassan Farsi Sajad Mohammadzadeh
        Detecting social groups is one of the most important and complex problems that has recently attracted attention. Understanding this process and the relations between members of a group will be necessary for human-like robots in the near future. Moving in a group means being a subsystem within the group; in other words, a group of two or more persons can be considered to move in the same direction with the same speed. All datasets contain some information about the trajectories and labels of the members. The aim is to detect social groups containing two or more persons, or to detect the individual motion of a single person. In the proposed method, automatic fuzzy clustering with Particle Swarm Optimization (PSO) is used to detect social groups. The automatic fuzzy clustering with PSO introduced in the proposed method does not need to know the number of groups. First, the locations of all people in successive frames are detected, and the average of these locations is given to the automatic fuzzy clustering with PSO. The proposed method provides reliable results on standard datasets. It is compared with a method that provides better results but needs training data, whereas the proposed method requires no training at all; this characteristic increases its suitability for implementation on robots. The results show that the proposed method can automatically find social groups without knowing the number of groups and without requiring any training data.
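        The clustering core can be sketched as standard fuzzy c-means on the averaged (x, y) position of each tracked person. In the paper this step is wrapped in a PSO search that also selects the number of groups automatically; that outer loop is omitted here, and the fuzzifier m, iteration count, and toy data are illustrative.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / dist ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Average positions of five tracked people over recent frames (toy data)
pts = np.array([[1.0, 1.1], [1.2, 0.9], [5.0, 5.2], [5.1, 4.8], [9.0, 1.0]])
centers, memberships = fuzzy_c_means(pts, n_clusters=3)
groups = memberships.argmax(axis=1)                   # hard group label per person
```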
      • Open Access Article

        7 - Facial Images Quality Assessment based on ISO/ICAO Standard Compliance Estimation by HMAX Model
        Azamossadat Nourbakhsh Mohammad-Shahram Moin Arash Sharifi
        Facial images are the most popular biometrics in automated identification systems, and different methods have been introduced to evaluate the quality of these images. FICV is a common benchmark for evaluating facial image quality using ISO/ICAO compliance assessment algorithms. In this work, a new model based on brain functionality has been introduced for facial image quality assessment, using the Face Image ISO Compliance Verification (FICV) benchmark. We have used the Hierarchical Max-pooling (HMAX) model to simulate brain functionality and evaluated its performance. The ICAO requirements have been classified based on the accuracy of compliance verification (Equal Error Rate), and the nine requirements with the highest error rates in past research have been used to assess the compliance of face image quality with the standard. To evaluate the quality of facial images, image patches are first generated for key and non-key face components using the Viola-Jones algorithm. To simulate brain function, the HMAX method is then applied to these patches. In the HMAX model, multi-resolution spatial pooling is used, which encodes local and global spatial information to generate discriminative image signatures. In the proposed model, the way information is stored and fetched is similar to the function of the brain. The AR and PUT databases were used for training and testing the model. The results have been evaluated with the FICV assessment factors, showing a lower Equal Error Rate and rejection rate compared to existing methods.
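        A sketch of the patch-generation step only, using OpenCV's Viola-Jones (Haar cascade) detectors to cut face and eye patches from an image. The HMAX feature stage and the compliance classifiers of the paper are not reproduced here; the cascade files referenced are the ones shipped with the opencv-python package, and the two-level search (eyes inside the detected face) is an illustrative choice.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_patches(image_path):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    patches = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        face = gray[y:y + h, x:x + w]
        patches.append(("face", face))
        # Key components (eyes) are searched only inside the detected face region
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face, 1.1, 5):
            patches.append(("eye", face[ey:ey + eh, ex:ex + ew]))
    return patches
```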