List of articles (by subject) Signal Processing


    • Open Access Article

      1 - Speech Intelligibility Improvement in Noisy Environments for Near-End Listening Enhancement
      Peyman Goli, Mohammad Reza Karami-Mollaei
      A new speech intelligibility improvement method for near-end listening enhancement in noisy environments is proposed. The method improves intelligibility by optimizing the energy correlation between one-third octave bands of the clean speech and the enhanced noisy speech, without increasing total power. The energy correlation is formulated as a cost function of the frequency-band gains of the clean speech. Because the energy correlation function is nonlinear and complex, an interior-point algorithm, an iterative procedure for nonlinear optimization, is used to find the optimal points of the cost function. Two objective intelligibility measures, the speech intelligibility index and the short-time objective intelligibility measure, are employed to evaluate the enhanced noisy speech. Furthermore, the intelligibility scores are compared with unprocessed speech and a baseline method under various noise conditions. The results show large intelligibility improvements of the proposed method over unprocessed noisy speech.
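      The constrained band-gain optimization described above can be sketched with an off-the-shelf interior-point-style solver. The sketch below is illustrative only: the correlation cost, the band energies, and the equal-power constraint are simplified stand-ins for the paper's exact formulation.

```python
# Minimal sketch of optimizing one-third-octave band gains under a power
# constraint; all quantities are illustrative assumptions, not the paper's model.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

rng = np.random.default_rng(0)
n_bands = 18                                   # one-third octave bands (illustrative)
E_clean = rng.uniform(0.1, 1.0, n_bands)       # clean-speech band energies
E_noise = rng.uniform(0.05, 0.5, n_bands)      # noise band energies

def neg_energy_correlation(g):
    """Negative correlation between clean band energies and the
    enhanced (gain-weighted) noisy band energies."""
    E_enh = g**2 * E_clean + E_noise
    return -np.corrcoef(E_clean, E_enh)[0, 1]

# Equal-power constraint: total enhanced speech power equals the clean power.
power_con = NonlinearConstraint(
    lambda g: np.sum(g**2 * E_clean) - np.sum(E_clean), 0.0, 0.0)

res = minimize(neg_energy_correlation, np.ones(n_bands), method="trust-constr",
               bounds=[(0.1, 4.0)] * n_bands, constraints=[power_con])
print("optimal band gains:", np.round(res.x, 2))
```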
    • Open Access Article

      2 - Multimodal Biometric Recognition Using Particle Swarm Optimization-Based Selected Features
      Sara Motamed, Ali Broumandnia, Azam Sadat Nourbakhsh
      Feature selection is one of the key optimization problems in human recognition: it reduces the number of features, removes noise and redundant data from images, and yields a high recognition rate, so it strongly affects the performance of a recognition system. This paper presents a multimodal biometric verification system based on two modalities, palm and ear, a topic that spans multiple disciplines such as pattern recognition, signal processing, and computer vision. We also present a novel feature selection algorithm based on Particle Swarm Optimization (PSO), a computational paradigm inspired by the collaborative social behavior of bird flocking and fish schooling. Two feature extraction techniques are used: the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The identification process comprises the following phases: capturing the image; pre-processing; extracting and normalizing the palm and ear images; feature extraction; matching and fusion; and finally a decision based on PSO and GA classifiers. The system was tested on a database of 60 people (240 palm and 180 ear images). Experimental results show that the PSO-based feature selection algorithm generates excellent recognition results with a minimal set of selected features.
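      A minimal binary-PSO feature-selection sketch in the spirit of the described approach; the dataset, the fitness function (k-NN accuracy with a small size penalty), and the swarm parameters are illustrative assumptions, not the paper's configuration.

```python
# Binary PSO: each particle is a 0/1 feature mask; velocities are squashed
# through a sigmoid and sampled to binary positions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = load_digits(return_X_y=True)
n_feat, n_particles, n_iter = X.shape[1], 10, 10

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(3), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.mean()            # reward accuracy, penalize subset size

pos = rng.integers(0, 2, (n_particles, n_feat)).astype(float)
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random(pos.shape) < 1 / (1 + np.exp(-vel))).astype(float)  # sigmoid binarization
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", int(gbest.sum()), "of", n_feat)
```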
    • Open Access Article

      3 - Performance Analysis of SVM-Type Per Tone Equalizer Using Blind and Radius Directed Algorithms for OFDM Systems
      Babak Haji Bagher Naeeni
      In this paper, we present Support Vector Machine (SVM)-based blind per-tone equalization for OFDM systems. Blind per-tone equalization using the Constant Modulus Algorithm (CMA) and the Multi-Modulus Algorithm (MMA) is used as the comparison benchmark. The SVM-based cost function uses a CMA-like error function, and the solution is obtained by means of an Iterative Re-Weighted Least Squares (IRWLS) algorithm. Moreover, like CMA, the error function allows the method to be extended to multilevel modulations; in this case, a dual-mode algorithm is proposed. Dual-mode equalization techniques are commonly used in communication systems working with multilevel signals. Practical blind algorithms for multilevel modulation are able to open the eye of the constellation, but they usually exhibit a high residual error. In a dual-mode scheme, once the eye is opened by the blind algorithm, the system switches to another algorithm that achieves a lower residual error given a suitable initial ISI level. Simulation experiments show that blind per-tone equalization using support vector machines outperforms blind per-tone equalization using CMA and MMA in terms of average Bit Error Rate (BER).
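      The CMA-like error function at the heart of the benchmark can be illustrated with a one-tap per-tone update. This is a generic CMA sketch under an assumed flat per-tone channel, not the paper's SVM/IRWLS solution.

```python
# CMA tap update for a single OFDM tone: minimize E[(|y|^2 - R2)^2] by
# stochastic gradient descent on the equalizer tap w.
import numpy as np

rng = np.random.default_rng(2)
n_sym = 2000
qpsk = ((rng.integers(0, 2, n_sym) * 2 - 1) +
        1j * (rng.integers(0, 2, n_sym) * 2 - 1)) / np.sqrt(2)
h = 0.8 * np.exp(1j * 0.9)                       # unknown per-tone channel gain
x = h * qpsk + 0.05 * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))

R2 = np.mean(np.abs(qpsk) ** 4) / np.mean(np.abs(qpsk) ** 2)  # dispersion constant
w, mu = 1.0 + 0j, 0.01
for n in range(n_sym):
    y = w * x[n]
    e = y * (np.abs(y) ** 2 - R2)                # CMA error (constant-modulus criterion)
    w = w - mu * e * np.conj(x[n])               # stochastic-gradient tap update

print("combined gain |w*h| ≈", abs(w * h))       # should approach 1
```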
    • Open Access Article

      4 - High I/Q Imbalance Receiver Compensation and Decision Directed Frequency Selective Channel Estimation in an OFDM Receiver Employing Neural Network
      A. Falahati, Sajjad Nasirpour
      The disparity introduced between the in-phase and quadrature components in a digital communication receiver, known as I/Q imbalance, is a prime concern in direct-conversion architectures. It degrades channel estimation and causes data symbols to be received with errors, and even at low levels it can produce serious signal distortion at the receiver of an OFDM multi-carrier system. In this manuscript, a neural-network-based algorithm is proposed that uses both Long Training Symbols (LTS) and data symbols to jointly estimate the channel and compensate the parameters corrupted by an I/Q-imbalanced receiver. The algorithm involves a trade-off among these parameters: when only the minimum CG mean value is required, it can be chosen on its own, but in the usual case the other parameters must be taken into account as well, and the limits of the target parameters must be known. The first iterations are used to train the system to reach a suitable CG value without an error floor. It is assumed that the correlation between subcarriers is low and that only a few training and data symbols are used. The simulation results show that the proposed algorithm can compensate high I/Q imbalance values and estimate the channel frequency response more accurately than existing methods.
    • Open Access Article

      5 - Cyclic Correlation-Based Cooperative Detection for OFDM-Based Primary Users
      Hamed Sadeghi, Paeez Azmi
      This paper develops a new robust cyclostationarity-based detection technique for spectrum sensing of OFDM-based primary users (PUs). To this end, an asymptotically constant false-alarm rate (CFAR) multi-cycle detector is proposed, and its statistical behavior under the null hypothesis is investigated. Furthermore, to achieve higher detection capability, a soft-decision fusion rule for cooperative spectrum sensing (CSS) in secondary networks is established. The proposed CSS scheme aims to maximize the deflection criterion at the fusion center (FC) while the reporting channels undergo Rayleigh fading. To enable performance evaluation of the cooperative detector, analytic threshold approximations are provided for the cases where the FC does or does not have direct sensing capability. Numerical simulations show that the proposed local and cooperative schemes significantly enhance cognitive radio network performance in terms of detection probability.
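      The statistic underlying such detectors is the cyclic autocorrelation, which is nonzero for CP-OFDM at cycle frequencies tied to the symbol length. A minimal sketch with illustrative OFDM parameters (not the paper's multi-cycle detector or fusion rule):

```python
# Estimate the cyclic autocorrelation of a CP-OFDM signal at its cycle
# frequency 1/(Nu+Ncp) and lag Nu, and compare with noise only.
import numpy as np

def cyclic_autocorr(x, alpha, tau):
    """Estimate |R_x^alpha(tau)|-style statistic (constant phase offset ignored)."""
    n = np.arange(len(x) - tau)
    prod = x[tau:] * np.conj(x[:len(x) - tau])
    return np.mean(prod * np.exp(-2j * np.pi * alpha * n))

rng = np.random.default_rng(3)
Nu, Ncp, n_sym = 64, 16, 200                     # useful length, cyclic prefix, symbols
sym = rng.standard_normal((n_sym, Nu)) + 1j * rng.standard_normal((n_sym, Nu))
tx = np.fft.ifft(sym, axis=1)
tx = np.hstack([tx[:, -Ncp:], tx]).ravel()       # add CP -> cyclostationarity

alpha = 1.0 / (Nu + Ncp)                         # OFDM cycle frequency
print("|R| OFDM      :", abs(cyclic_autocorr(tx, alpha, Nu)))
print("|R| noise only:", abs(cyclic_autocorr(rng.standard_normal(tx.size) + 0j, alpha, Nu)))
```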
    • Open Access Article

      6 - A New Method for Detecting the Number of Coherent Sources in the Presence of Colored Noise
      Shahriar Shirvani Moghaddam, Somaye Jalaei
      In this paper, a new method is proposed for determining the number of coherent/correlated signals in the presence of colored noise, based on the Eigen Increment Threshold (EIT) method. First, we present a new approach that combines the EIT criterion with eigenvalue correction. Simulation results show that the new method estimates the number of noncoherent signals in the presence of colored noise with a higher detection probability than MDL, AIC, EGM, and conventional EIT. In addition, a spatial smoothing preprocessing step is added so that the proposed EIT algorithm can also detect the number of coherent and/or correlated sources. In this case, simulation results show a 100% detection probability for signal-to-noise ratios greater than -5 dB. The final version of the proposed EIT-based method is a simple and efficient way to increase the detection probability of the EIT method in the presence of colored noise, for either coherent/correlated or noncoherent sources.
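      A simplified eigenvalue-gap sketch of the source-counting idea: the boundary between signal and noise eigenvalues is found from the eigen increments. The largest-gap rule below is an illustrative stand-in for the EIT criterion and eigenvalue correction.

```python
# Count sources from the eigenvalue increments of an array covariance matrix.
import numpy as np

rng = np.random.default_rng(4)
M, K, snap = 8, 2, 500                           # sensors, sources, snapshots
angles = np.deg2rad([10.0, 40.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))  # ULA steering
S = (rng.standard_normal((K, snap)) + 1j * rng.standard_normal((K, snap))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, snap)) + 1j * rng.standard_normal((M, snap)))

R = X @ X.conj().T / snap                        # sample covariance
lam = np.sort(np.linalg.eigvalsh(R))[::-1]       # descending eigenvalues
inc = lam[:-1] - lam[1:]                         # eigen increments
n_src = int(np.argmax(inc) + 1)                  # boundary at the largest eigen-gap
print("eigenvalues:", np.round(lam, 3), "-> estimated sources:", n_src)
```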
    • Open Access Article

      7 - Video Transmission Using New Adaptive Modulation and Coding Scheme in OFDM based Cognitive Radio
      Hassan Farsi, Farid Jafarian
      As Cognitive Radio (CR) is used in video applications, the user-perceived video quality experienced by secondary users is an important metric for judging the effectiveness of CR technologies. We propose a new adaptive modulation and coding (AMC) scheme for an OFDM-based CR system compliant with IEEE 802.16. The proposed CR adjusts its modulation and coding rate to provide high system quality. In this scheme, the CR exploits its awareness of various parameters, including knowledge of the white holes in the channel spectrum obtained via channel sensing, SNR, carrier-to-interference-and-noise ratio (CINR), and Modulation order Product code Rate (MPR), to select the optimal modulation and coding rate. The AMC function is modeled with an Artificial Neural Network (ANN); since AMC is inherently a nonlinear function, an ANN is a natural choice for modeling it. To obtain a more accurate model, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are used to optimize the function representing the relationship between the inputs and outputs of the ANN, i.e., the AMC model. The inputs of the ANN are the CR knowledge parameters, and the outputs are the modulation type and coding rate. An advantage of this scheme is that it presents a complete AMC model, considering all influential parameters, including CINR, available bandwidth, SNR, and MPR, when selecting the optimal modulation and coding rate. We also show that, for this application, GA is a better choice of optimization algorithm than PSO.
    • Open Access Article

      8 - An Improved Method for TOA Estimation in TH-UWB System considering Multipath Effects and Interference
      Mahdieh Ghasemlou, Saeid Nader Esfahani, Vahid Tabataba Vakili
      UWB ranging is usually based on time-of-arrival (TOA) estimation of the first path. There are two major challenges in TOA estimation: dealing with multipath channels, especially in indoor environments, and coping with interference from other sources. In this paper, we propose a new TOA estimation method that is very robust against interference. In this method, during the TOA estimation phase, the transmitter sends its pulses at random positions within the frame, which randomizes the position of the interference relative to the main pulse. Consequently, the interference energy is distributed almost uniformly along the frame. In energy-detection methods, a constant interference level along the frame does not affect the detection of the arrival time and only requires adjusting the threshold. Simulation results on IEEE 802.15.4a channels show that, even in the presence of very strong interference, a TOA estimation error of less than 3 ns is achievable with the proposed method.
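      A minimal energy-detection TOA sketch: integrate energy in short windows and pick the first window that crosses a noise-based threshold. The window length, threshold rule, and synthetic channel burst are illustrative assumptions, not the paper's random-position scheme.

```python
# First-crossing energy detection for TOA on a synthetic UWB-like burst.
import numpy as np

rng = np.random.default_rng(5)
frame_len, toa = 4000, 1234                      # samples; true arrival index
sig = np.zeros(frame_len)
sig[toa:toa + 40] = rng.standard_normal(40) * np.exp(-np.arange(40) / 10)
rx = sig + 0.05 * rng.standard_normal(frame_len)

win = 20
energy = np.add.reduceat(rx**2, np.arange(0, frame_len, win))   # windowed energy
noise_floor = np.median(energy)                                 # robust noise estimate
thr = 4.0 * noise_floor
first = int(np.argmax(energy > thr))                            # first crossing
print("estimated TOA ≈", first * win, "true TOA =", toa)
```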
    • Open Access Article

      9 - GoF-Based Spectrum Sensing of OFDM Signals over Fading Channels
      Seyed Sadra Kashef, Paeez Azmi, Hamed Sadeghi
      Goodness-of-fit (GoF) based spectrum sensing of orthogonal frequency-division multiplexing (OFDM) signals is investigated in this paper. To this end, novel local sensing methods based on the Shapiro-Wilk (SW), Shapiro-Francia (SF), and Jarque-Bera (JB) tests are first studied, and a new threshold selection technique is proposed for the SF and SW tests. The three studied methods are then applied to spectrum sensing for the first time and their performance is analyzed. Furthermore, the computational complexities of the above methods are derived and compared. Simulation results demonstrate that the SF detector outperforms existing GoF-based methods over AWGN channels. The results also demonstrate the superiority of the proposed SF method over the conventional energy detector in additive colored Gaussian noise and over fading channels.
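      The local GoF test reduces to a normality test on the received samples: under H0 they are Gaussian noise. A minimal sketch using SciPy's Shapiro-Wilk test; the SF variant, the proposed threshold selection, and a realistic PU waveform are not reproduced.

```python
# GoF spectrum sensing sketch: reject Gaussianity => declare channel occupied.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
N = 256

def occupied(x, alpha=0.05):
    """GoF detector: reject Gaussianity (H0 = noise only) at level alpha."""
    _, p = stats.shapiro(x)
    return p < alpha

noise = rng.standard_normal(N)
tone = np.sqrt(2) * np.cos(2 * np.pi * 0.12 * np.arange(N))  # non-Gaussian PU surrogate
print("noise only ->", occupied(noise))          # ideally False ~95% of the time
print("PU + noise ->", occupied(tone + 0.7 * noise))
```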
    • Open Access Article

      10 - Joint Source and Channel Analysis for Scalable Video Coding Using Vector Quantization over OFDM System
      Farid Jafarian, Hassan Farsi
      Conventional wireless video encoders employ variable-length entropy coding and predictive coding to achieve high compression ratios, but these techniques make the encoded bit-stream extremely sensitive to channel errors, so additional error-correction techniques are needed to prevent error propagation. In contrast, vector quantization (VQ), which does not use variable-length entropy coding, can inherently limit such propagation through its fixed-length code-words. In this paper, we address the joint source and channel analysis of VQ-based scalable video coding (VQ-SVC). We introduce intra-mode VQ-SVC and VQ-3D-DCT SVC, which offer compression performance similar to intra-mode H.264 and 3D-DCT, respectively, while providing inherent error resilience. In intra-mode VQ-SVC a 2D-DCT, and in VQ-3D-DCT SVC a 3D-DCT, is applied to the video frames to extract DCT coefficients, and VQ is then employed to build the codebook of DCT coefficients. In these low-bitrate video codecs, a high level of robustness against wireless channel fluctuations is needed. To achieve such robustness, we propose and calculate the optimal VQ-SVC codebook and the optimal channel code rate using a joint source and channel coding (JSCC) technique. The analysis is then developed for video transmission using an OFDM system over multipath Rayleigh fading and AWGN channels. Finally, we report the performance of these schemes in minimizing end-to-end distortion over the wireless channel.
    • Open Access Article

      11 - Tracking Performance of Semi-Supervised Large Margin Classifiers in Automatic Modulation Classification
      Hamidreza Hosseinzadeh, Farbod Razzazi, Afrooz Haghbin
      Automatic modulation classification (AMC) of detected signals is an intermediate step between signal detection and demodulation, and an essential task for an intelligent receiver in various civil and military applications. In this paper, we propose a semi-supervised large-margin AMC method and evaluate its ability to track changes in the received signal-to-noise ratio (SNR) when classifying signals in a cognitive radio environment. To this end, two structures for self-training of large-margin classifiers are developed for additive white Gaussian noise (AWGN) channels with a priori unknown SNR. A suitable combination of higher-order statistics and instantaneous characteristics of digital modulations is selected as the effective feature set. Simulation results show that adding unlabeled input samples to the training set improves the tracking capacity of the presented system, making it robust against environmental SNR changes.
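      A minimal self-training sketch with an SVM base classifier: high-confidence pseudo-labels from the unlabeled pool are added to the training set each round. The dataset and confidence threshold are illustrative, not the paper's AMC features or structures.

```python
# Self-training a large-margin classifier with confident pseudo-labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X, y = make_classification(n_samples=600, n_features=8, n_informative=6, random_state=0)
labeled = rng.choice(len(X), 60, replace=False)
mask = np.zeros(len(X), bool); mask[labeled] = True
Xl, yl, Xu = X[mask], y[mask], X[~mask]

for _ in range(5):                                   # self-training rounds
    clf = SVC(probability=True).fit(Xl, yl)
    if len(Xu) == 0:
        break
    proba = clf.predict_proba(Xu)
    take = proba.max(axis=1) > 0.95                  # keep only confident pseudo-labels
    Xl = np.vstack([Xl, Xu[take]])
    yl = np.concatenate([yl, clf.classes_[proba[take].argmax(axis=1)]])
    Xu = Xu[~take]

print("final training size:", len(Xl), "accuracy:", clf.score(X, y))
```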
    • Open Access Article

      12 - Online Signature Verification: a Robust Approach for Persian Signatures
      Mohammad Esmaeel Yahyatabar, Yasser Baleghi, Mohammad Reza Karami-Mollaei
      In this paper, the specific traits of Persian signatures are applied to signature verification, and efficient features that can discriminate among Persian signatures are investigated. Compared with signatures in other languages, Persian signatures have more curvature and end in a specific style, and they typically show characteristic speed, acceleration, and pen-pressure patterns while curves are drawn. An experiment was designed to determine the most robust features of Persian signatures, and its results are used in the feature extraction stage. To improve verification performance, a combination of shape-based and dynamic extracted features is applied to Persian signature verification, and a Support Vector Machine (SVM) is used for classification. The proposed method is examined on two common Persian datasets, a new Persian dataset proposed in this paper (the Noshirvani Dynamic Signature Dataset), and an international dataset (SVC2004). For the three Persian datasets the EER values are 3, 3.93, and 4.79, while for SVC2004 the EER value is 4.43.
    • Open Access Article

      13 - Early Detection of Pediatric Heart Disease by Automated Spectral Analysis of Phonocardiogram
      Azra Rasouli Kenari
      Early recognition of heart disease is an important goal in pediatrics. Developing countries have a large population of children living with undiagnosed heart murmurs, and because of an accompanying skills shortage, most of these children will not get the necessary treatment. Given that auscultation remains the dominant method of heart examination in the small health centers of rural areas, and in primary healthcare setups generally, enhancing this technique would aid significantly in the diagnosis of heart disease. Detecting murmurs in phonocardiographic recordings is an interesting problem that has been addressed with a wide variety of techniques. We designed a system for automatically detecting systolic murmurs due to a variety of conditions; it could give healthcare providers in developing countries a tool to screen large numbers of children without expensive equipment or specialist skills. For this purpose, an algorithm was designed and tested to detect heart murmurs in digitally recorded signals. Cardiac auscultatory examinations of 93 children were recorded, digitized, and stored along with the corresponding echocardiographic diagnoses, and automated spectral analysis using discrete wavelet transforms was performed. Patients without heart disease and with either no murmur or an innocent murmur (n = 40) were compared to patients with a variety of cardiac diagnoses and a pathologic systolic murmur (n = 53). A specificity of 100% and a sensitivity of 90.57% were achieved using signal processing techniques and a k-NN classifier.
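      A minimal sketch of DWT sub-band energy features feeding a k-NN classifier; the synthetic heart-sound surrogate, wavelet choice, and features are illustrative, not the study's clinical pipeline.

```python
# Wavelet sub-band energies as features for murmur vs. no-murmur k-NN.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)

def wavelet_features(x, wavelet="db4", level=5):
    """Mean energy of each DWT sub-band as a compact spectral descriptor."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.array([np.sum(c**2) / len(c) for c in coeffs])

def synth(murmur, n=2048):
    x = np.zeros(n)
    x[200:260] += np.hanning(60) * 2.0           # S1-like transient
    x[900:950] += np.hanning(50) * 1.5           # S2-like transient
    if murmur:                                   # broadband systolic noise burst
        x[300:800] += 0.6 * rng.standard_normal(500)
    return x + 0.05 * rng.standard_normal(n)

X = np.array([wavelet_features(synth(m)) for m in [0] * 30 + [1] * 30])
y = np.array([0] * 30 + [1] * 30)
clf = KNeighborsClassifier(3).fit(X[::2], y[::2])    # even indices train
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```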
    • Open Access Article

      14 - A Fast and Accurate Sound Source Localization Method using Optimal Combination of SRP and TDOA Methodologies
      Mohammad Ranjkesh Eskolaki, Reza Hasanzadeh
      This paper presents an automatic sound source localization approach that combines a basic time-delay estimation method, Time Difference of Arrival (TDOA), with Steered Response Power (SRP) methods. The TDOA method is fast but vulnerable when locating sound sources at long distances and in reverberant environments, and it is sensitive to noise; the conventional SRP method, in contrast, is time-consuming but can accurately locate sound sources in noisy and reverberant environments. A further SRP-based method, SRP Phase Transform (SRP-PHAT), has been suggested for better noise robustness and higher localization accuracy. In this paper, two approaches based on the combination of TDOA and SRP methods are proposed. In the first, named Classical TDOA-SRP, the TDOA method finds the approximate direction of the sound source and SRP-based methods then find its accurate location within the Field of View (FOV) obtained by TDOA. In the second, named Optimal TDOA-SRP, a new criterion is proposed for finding the effective FOV obtained through the TDOA method, further reducing the computational time of the SRP-based methods and improving noise robustness. Experiments carried out under different conditions confirm the validity of the proposed approaches.
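      TDOA estimation between two microphones is commonly done with GCC-PHAT, the whitened cross-correlation that also underlies SRP-PHAT. A minimal sketch with an assumed integer sample delay:

```python
# GCC-PHAT: whiten the cross-spectrum, peak of the correlation gives the delay.
import numpy as np

rng = np.random.default_rng(10)
n, true_delay = 4096, 23                         # delay in samples
s = rng.standard_normal(n)
m1 = s + 0.05 * rng.standard_normal(n)
m2 = np.roll(s, true_delay) + 0.05 * rng.standard_normal(n)

def gcc_phat(a, b):
    """Cross-correlation whitened by the PHAT weighting (unit magnitude)."""
    X = np.fft.rfft(a) * np.conj(np.fft.rfft(b))
    cc = np.fft.irfft(X / (np.abs(X) + 1e-12), n=len(a))
    shift = int(np.argmax(np.abs(cc)))
    return shift if shift < len(a) // 2 else shift - len(a)  # wrap to signed lag

print("estimated delay:", gcc_phat(m2, m1), "true:", true_delay)
```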
    • Open Access Article

      15 - Application of Curve Fitting in Hyperspectral Data Classification and Compression
      S. Abolfazl Hosseini
      Owing to the high between-band correlation and large volume of hyperspectral data, feature reduction (either feature selection or extraction) is an important part of the classification process for this data type. A variety of feature reduction methods have been developed using the spectral and spatial domains. In this paper, a feature extraction technique based on rational-function curve fitting is proposed. For each pixel of a hyperspectral image, a specific rational-function approximation is developed to fit the spectral response curve of that pixel, and the coefficients of the numerator and denominator polynomials are taken as the new extracted features. The technique builds on the fact that the sequence discipline, i.e., the ordering of reflectance coefficients along the spectral response curve, contains information that is not considered by statistical-analysis-based methods such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), or their nonlinear versions. We also show that naturally different curves can be approximated by rational functions of equal form but with different coefficient values. Maximum likelihood classification results demonstrate that the Rational Function Curve Fitting Feature Extraction (RFCF-FE) method provides better classification accuracies than competing feature extraction algorithms. The method is also capable of lossy data compression, since the original data can be reconstructed from the fitted curves. In addition, the proposed algorithm can be applied to every pixel of the image individually and simultaneously, unlike PCA and other methods that need the whole data set to compute the transform matrix.
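      A minimal sketch of per-pixel rational-function fitting; the (2,2) polynomial orders and the synthetic spectrum are illustrative assumptions, not the paper's chosen model order.

```python
# Fit R(x) = (a0 + a1 x + a2 x^2) / (1 + b1 x + b2 x^2) to one pixel's
# spectral response; the fitted coefficients become the extracted features.
import numpy as np
from scipy.optimize import curve_fit

def rational(x, a0, a1, a2, b1, b2):
    return (a0 + a1 * x + a2 * x**2) / (1.0 + b1 * x + b2 * x**2)

bands = np.linspace(0, 1, 100)                   # normalized band index
spectrum = (np.exp(-3 * (bands - 0.4) ** 2)
            + 0.02 * np.random.default_rng(11).standard_normal(100))

params, _ = curve_fit(rational, bands, spectrum, p0=[0.1, 0, 0, 0, 0], maxfev=10000)
recon = rational(bands, *params)                 # also enables lossy compression
print("features:", np.round(params, 3),
      "RMSE:", np.sqrt(np.mean((recon - spectrum) ** 2)))
```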
    • Open Access Article

      16 - Acoustic Noise Cancellation Using an Adaptive Algorithm Based on Correntropy Criterion and Zero Norm Regularization
      Mojtaba Hajiabadi
      The least mean square (LMS) adaptive algorithm is widely used in the acoustic noise cancellation (ANC) scenario. In this scenario, speech signals usually have high amplitudes and sudden variations, which are modeled by impulsive noises, and when the additive noise process is non-Gaussian or impulsive, the LMS algorithm performs very poorly. On the other hand, it is well known that acoustic channels usually have sparse impulse responses. When the impulse response changes from non-sparse to highly sparse, conventional algorithms such as LMS-based adaptive filters cannot exploit prior knowledge of system sparsity and thus fail to improve their transient and steady-state performance. Impulsive noise and sparsity are two important features of the ANC scenario that have recently received special attention. Given the poor performance of the LMS algorithm in the presence of impulsive noise and sparse systems, this paper presents a novel adaptive algorithm that can handle both. In order to eliminate impulsive disturbances from the speech signal, an information-theoretic criterion named correntropy is used in the proposed cost function, and a zero-norm penalty is employed to deal with the sparsity of the acoustic channel impulse response. Simulation results indicate the superiority of the proposed algorithm in the presence of impulsive noise along with sparse acoustic channels.
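      A minimal sketch of a correntropy-based (maximum correntropy criterion) adaptive filter with a smooth zero-norm attraction term; the step size, kernel width, and l0 approximation are illustrative choices, not necessarily the paper's exact algorithm.

```python
# MCC adaptive filter: the Gaussian kernel weight suppresses impulsive errors;
# the smooth-l0 gradient pulls small taps toward zero (sparsity).
import numpy as np

rng = np.random.default_rng(12)
N, L = 5000, 32
h = np.zeros(L); h[[3, 11, 25]] = [0.9, -0.5, 0.3]   # sparse acoustic channel
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N]
imp = (rng.random(N) < 0.01) * 10 * rng.standard_normal(N)   # impulsive noise
d += 0.01 * rng.standard_normal(N) + imp

w = np.zeros(L)
mu, sigma, rho, beta = 0.02, 1.0, 1e-4, 5.0
for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]                     # regressor
    e = d[n] - w @ u
    kernel = np.exp(-e**2 / (2 * sigma**2))          # correntropy weight: kills outliers
    l0_grad = beta * np.sign(w) * np.exp(-beta * np.abs(w))  # smooth l0 attraction
    w += mu * kernel * e * u - rho * l0_grad

print("channel estimation error:", np.linalg.norm(w - h))
```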
    • Open Access Article

      17 - Optimization of Random Phase Updating Technique for Effective Reduction in PAPR, Using Discrete Cosine Transform
      Babak Haji Bagher Naeeni
      One of the problems of OFDM systems is the large peak-to-average power ratio (PAPR), and many attempts have been made to reduce it, among which random phase updating is an important technique. Since the power variance is computable before the IFFT block, the complexity of this method is lower than that of other phase-injection methods, which can be an important advantage. Another interesting capability of the random phase updating technique is the possibility of imposing a threshold on the power variance: the phase injection is repeated until the power variance reaches the threshold variance. However, this may also be considered a disadvantage, since reaching the threshold can introduce system delay. In this paper, to solve this problem, a DCT transform is applied to the subcarrier outputs before phase injection. This reduces the number of carriers required to reach the threshold value, and accordingly reduces the system delay.
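      A minimal random-phase-updating sketch with a DCT pre-step: random phases are injected until the PAPR falls below a target. The parameters, the acceptance rule, and the PAPR target (used here in place of the power-variance threshold) are illustrative.

```python
# Random phase updating for PAPR reduction with a DCT applied before injection.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(13)
Nc = 64
sym = (rng.integers(0, 2, Nc) * 2 - 1) + 1j * (rng.integers(0, 2, Nc) * 2 - 1)

def papr_db(freq):
    p = np.abs(np.fft.ifft(freq)) ** 2
    return 10 * np.log10(p.max() / p.mean())

carriers = dct(sym.real, norm="ortho") + 1j * dct(sym.imag, norm="ortho")  # DCT pre-step
target, papr, it = 6.0, papr_db(carriers), 0
while papr > target and it < 200:
    cand = carriers * np.exp(2j * np.pi * rng.random(Nc))   # random phase injection
    cand_papr = papr_db(cand)
    if cand_papr < papr:                                    # keep the update only if it helps
        carriers, papr = cand, cand_papr
    it += 1

print("iterations:", it, "final PAPR:", round(papr, 2), "dB")
```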
    • Open Access Article

      18 - Nonlinear State Estimation Using Hybrid Robust Cubature Kalman Filter
      Behrooz Safarinejadian, Mohsen Taher
      In this paper, a novel filter is presented that estimates the states of any nonlinear system with high accuracy, both in the presence and absence of uncertainty. It is well understood that robust filter design is a compromise between robustness and estimation accuracy: a robust filter is designed to provide accurate and suitable performance in the presence of modeling errors, so in the absence of unknown or time-varying uncertainties it does not deliver the desired performance. The new method, named the hybrid robust cubature Kalman filter (CKF), is constructed by combining a traditional CKF and a novel robust CKF. The robust CKF is designed by merging a traditional CKF with an uncertainty estimator so that it can provide the desired performance in the presence of uncertainty. Since the presence of uncertainty results in a large innovation value, the hybrid robust CKF adapts itself according to the value of the normalized innovation: the CKF and robust CKF run in parallel, and at any time a decision is taken to choose the estimated state of either filter as the final state estimate. Two examples are given that demonstrate the promising performance of the proposed filters.
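      The cubature step at the core of any CKF can be sketched as follows: 2n equally weighted points from the spherical-radial rule propagate the state through the nonlinear dynamics. The dynamics here are illustrative, and process noise is omitted for brevity.

```python
# Cubature prediction step: generate points, propagate, recompute moments.
import numpy as np

def cubature_points(x, P):
    """Generate 2n cubature points for mean x and covariance P."""
    n = len(x)
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # unit directions
    return x[:, None] + S @ xi                             # shape (n, 2n)

def f(x):                                                  # illustrative dynamics
    return np.array([x[0] + 0.1 * np.sin(x[1]), 0.95 * x[1] + 0.1 * x[0]])

x, P = np.array([1.0, 0.5]), np.diag([0.2, 0.1])
pts = cubature_points(x, P)
prop = np.apply_along_axis(f, 0, pts)                      # propagate each point
x_pred = prop.mean(axis=1)                                 # predicted mean
P_pred = (prop - x_pred[:, None]) @ (prop - x_pred[:, None]).T / pts.shape[1]
# (add process noise Q to P_pred in a full filter)
print("predicted state:", x_pred, "\npredicted covariance:\n", P_pred)
```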
    • Open Access Article

      19 - A Global-Local Noise Removal Approach to Remove High Density Impulse Noise
      Ali Mohammad Fotouhi, Samane Abdoli, Vahid Keshavarzi
      Impulse noise removal is one of the most important concerns in digital image processing: noise must be removed in a way that preserves the main, important information of the image. Traditionally, the median filter has been the standard way to deal with impulse noise, but the image quality it yields at high noise densities is not desirable. The aim of this paper is to propose an algorithm that improves the performance of the adaptive median filter in removing high-density impulse noise from digital images. The proposed method consists of two main stages, noise detection and noise removal, each based on a two-phase algorithm. Noise detection includes a global phase, performed by classifying the pixels within each block of the image, and a local phase, performed by automatically determining two threshold values in each block. In the noise removal stage, only the noisy pixels detected in the first stage are processed, by estimating the noise density and applying an adaptive median filter over the noise-free pixels in the neighborhood. Comparison with other proposed methods on standard images proves the success of the proposed algorithm.
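      A minimal adaptive-median-filter sketch showing the window-growing logic that the paper builds on; the synthetic image and noise density are illustrative, and the paper's global/local detection stages are not reproduced.

```python
# Adaptive median filter: grow the window until the median is not an impulse
# extreme, then replace the center pixel only if it is itself an extreme.
import numpy as np

def adaptive_median(img, max_win=7):
    out = img.copy()
    pad = max_win // 2
    p = np.pad(img, pad, mode="reflect")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            for w in range(3, max_win + 1, 2):           # grow the window
                r = w // 2
                win = p[i + pad - r:i + pad + r + 1, j + pad - r:j + pad + r + 1]
                med = np.median(win)
                if win.min() < med < win.max():          # median is not an impulse
                    if not (win.min() < img[i, j] < win.max()):
                        out[i, j] = med                  # replace corrupted pixel
                    break
    return out

rng = np.random.default_rng(14)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))        # synthetic test image
noisy = clean.copy()
m = rng.random(clean.shape)
noisy[m < 0.2] = 0; noisy[m > 0.8] = 255                 # 40% salt & pepper
print("MSE noisy:", np.mean((noisy - clean) ** 2),
      "MSE filtered:", np.mean((adaptive_median(noisy) - clean) ** 2))
```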
    • Open Access Article

      20 - A New Calibration Method for SAR Analog-to-Digital Converters Based on All Digital Dithering
      Ebrahim Farshidi, Shabnam Rahbar
      In this paper, a new digital background calibration method for successive approximation register (SAR) analog-to-digital converters is presented. For calibration, a perturbation signal is added and a digital offset is injected. One of the main advantages of this work is that it is fully digital and, by correcting the relative weights, eliminates the nonlinear errors caused by mismatch between the analog capacitor and the array capacitors. This digital dithering method requires neither extra capacitors nor two independent converters, thereby avoiding the mismatches those added elements would introduce, and it imposes no extra calibration overhead for complicated mathematical calculations. Unlike split calibration, it does not need two independent converters to produce two distinct paths; it has just one capacitor array, which allows a simple architecture. Furthermore, to improve DNL and INL and to correct missing-code errors, a sub-radix-2 structure is used in the converter. The proposed calibration method is implemented in a 10-bit, 1.87-radix SAR converter. Simulation results in MATLAB show great improvement in the static and dynamic characteristics of the converter after calibration, so the method can be used for calibrating SAR analog-to-digital converters.
    • Open Access Article

      21 - Wavelet-based Bayesian Algorithm for Distributed Compressed Sensing
      Razieh Torkamani, Ramezan Ali Sadeghzadeh
      The emerging field of compressive sensing (CS) enables the reconstruction of a signal from a small set of linear projections. Traditional CS deals with a single signal, while multiple signals can be reconstructed jointly via a distributed CS (DCS) algorithm. DCS inversion exploits both the inter- and intra-signal correlations via joint sparsity models (JSM). Since the wavelet coefficients of many signals are sparse, this paper uses the wavelet transform as the sparsifying transform and proposes a new wavelet-based Bayesian DCS algorithm (WB-DCS) that takes into account the inter-scale dependencies among the wavelet coefficients via a hidden Markov tree (HMT) model, as well as the inter-signal correlations. A Bayesian procedure statistically models these correlations via the prior distributions. A type-1 joint sparsity model (JSM-1) is used for the jointly sparse signals, in which each sparse coefficient vector is the sum of a common component and an innovation component. To jointly reconstruct the multiple sparse signals, the centralized DCS approach is used, in which all data are processed at the fusion center (FC), and variational Bayes (VB) inference is used to obtain the posterior distributions of the unknown variables. Simulation results demonstrate that exploiting the structure within the wavelet coefficients yields superior performance in terms of average reconstruction error and structural similarity index.
    • Open Access Article

      22 - Reliability Analysis of the Sum-Product Decoding Algorithm for the PSK Modulation Scheme
      Hadi Khodaei Jooshin, Mahdi Nangir
      Iterative decoding and reconstruction of encoded data has been studied extensively in recent decades. Most of these iterative schemes are based on graphical codes, where messages are passed through sparse graphs to reach a reliable belief about the original data. This paper presents a performance analysis of Low-Density Parity-Check (LDPC) code designs that approach the capacity of the Additive White Gaussian Noise (AWGN) channel model. We investigate the reliability of the system under Phase Shift Keying (PSK) modulation and study the effects of varying the codeword length, the rate of the LDPC parity-check matrix, and the number of iterations in the Sum-Product Algorithm (SPA). By employing an LDPC encoder before the PSK modulation block and the SPA in the decoding part, the Bit Error Rate (BER) performance of a PSK modulation system can be improved significantly; the improvement is measured for a point-to-point communication system in different cases. Our analysis can be applied to any other iterative message-passing algorithm. The code design process and the parameter selection of the encoding and decoding algorithms take the hardware limitations of a communication system into account, and our results help to design codes and select parameters efficiently.
    • Open Access Article

      23 - Denoising and Enhancement of Speech Signals Using Wavelets
      Meriane Brahim
      Speech enhancement aims to improve the quality and intelligibility of speech using various techniques and algorithms. The speech signal is always accompanied by background noise, so speech and communication processing systems must apply effective noise reduction techniques in order to extract the desired speech signal from its corrupted version. In this project, we study the wavelet and the wavelet transform, and the possibility of employing them in the processing and analysis of the speech signal in order to enhance the signal and remove noise from it. We present different algorithms based on the wavelet transform together with the mechanism for applying them to remove noise from speech, and we compare their results with some traditional speech enhancement algorithms. The basic principles of the wavelet transform are presented as an alternative to the Fourier transform and its fixed analysis window. The practical results are based on processing a large database of speech recordings polluted with various noises at many SNRs. This article is intended as an extension of practical research on improving the speech signal for hearing aid purposes; it also examines the main frequencies of letters and their uses in intelligent systems, such as voice control systems.
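      A minimal wavelet-thresholding sketch with PyWavelets, using the common universal-threshold recipe; the wavelet, decomposition level, and test signal are illustrative, not necessarily the article's algorithm.

```python
# Soft-threshold the detail coefficients, keep the approximation, reconstruct.
import numpy as np
import pywt

rng = np.random.default_rng(15)
n = 2048
t = np.arange(n) / 8000.0
clean = np.sin(2 * np.pi * 200 * t) * np.hanning(n)     # speech-like enveloped tone
noisy = clean + 0.3 * rng.standard_normal(n)

coeffs = pywt.wavedec(noisy, "db8", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest detail
thr = sigma * np.sqrt(2 * np.log(n))                    # universal threshold
den = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
recovered = pywt.waverec(den, "db8")[:n]

snr = lambda ref, x: 10 * np.log10(np.sum(ref**2) / np.sum((x - ref) ** 2))
print("input SNR :", round(snr(clean, noisy), 2), "dB")
print("output SNR:", round(snr(clean, recovered), 2), "dB")
```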
    • Open Access Article

      24 - A New High-Capacity Audio Watermarking Based on Wavelet Transform using the Golden Ratio and TLBO Algorithm
      Ali Zeidi Joudaki, Marjan Abdeyazdan, Mohammad Mosleh, Mohammad Kheyrandish
      Digital watermarking is one of the best solutions against copyright infringement, copying, data tampering, and illegal distribution of digital media, and the protection of digital audio signals has recently received much attention as a fascinating topic for researchers and scholars. In this paper, we present a new high-capacity, transparent, and robust audio watermarking scheme that combines the DWT with the advantages of the golden ratio, using the TLBO algorithm. The TLBO algorithm determines the effective frame length and embedding range, and the golden ratio determines the appropriate embedding locations within each frame. First, the main audio signal is decomposed into several sub-bands using a DWT over a specific frequency range. Since the human auditory system is not sensitive to changes in the high-frequency bands, the watermark bits are embedded in these sub-bands to increase transparency and capacity. Moreover, to increase the resistance to common attacks, we frame the high-frequency band and use the average of the frames as a key value. Our main idea is to embed 8 bits simultaneously in the host signal. Experimental results show that the proposed method is free from significant audible distortion (SNR about 29.68 dB) and resists common signal processing attacks such as high-pass filtering, echo, resampling, and MPEG (MP3) compression.
    • Open Access Article

      25 - Nonlinear Regression Model Based on Fractional Bee Colony Algorithm for Loan Time Series
      Farid Ahmadi, Mohammad Pourmahmood Aghababa, Hashem Kalbkhani
      High levels of nonperforming loans have a negative impact on the growth rate of gross domestic product, so predicting the occurrence of nonperforming loans is a vital issue for the financial sector and governments. In this paper, an intelligent nonlinear model is proposed for describing the behavior of nonperforming loans, and a new fractional bee colony algorithm (BCA) based on fractional calculus techniques is proposed to find the optimal parameters of the model. The inputs of the nonlinear model are the loan type, approved amount, refund amount, and economic realm; the output of the regression model is whether the current information describes a nonperforming loan. The model is then modified to detect the status of a loan, so that it predicts the occurrence of a nonperforming loan and determines the loan status: current, overdue, or nonperforming. The proposed procedure is applied to data gathered from an economic institution in Iran. The findings of this study help the managers of banks and financial sectors to forecast the future of loans and, therefore, manage the budget for upcoming loan requests.
    • Open Access Article

      26 - SQP-based Power Allocation Strategy for Target Tracking in MIMO Radar Network with Widely Separated Antennas
      Mohammad Akhondi Darzikolaei, Mohammad Reza Karami-Mollaei, Maryam Najimi
      MIMO radar with widely separated antennas enhances detection and estimation resolution by exploiting the diversity of the propagation paths. Each antenna of this type of radar can steer its beam independently towards any direction as an independent transmitter; however, the joint processing of transmitted and received signals distinguishes this radar from multistatic radar. Many resource optimization problems can improve the performance of MIMO radar, and power allocation is one of the most interesting: it finds an optimal strategy for assigning power to the transmit antennas so as to minimize the target tracking errors under specified transmit power constraints. In this study, the performance of power allocation for target tracking in MIMO radar with widely separated antennas is investigated. A MIMO radar with distributed antennas is configured, and target motion is modeled using the constant velocity (CV) method. The joint Cramer-Rao bound (CRB) for the estimation error of the target parameters (joint position and velocity) is then calculated and used as the objective function of the power allocation problem. Since the proposed power allocation problem is nonconvex, an SQP-based power allocation algorithm is proposed to solve it. In the simulations, the performance of the proposed algorithm is examined under various conditions, such as different numbers of antennas and antenna geometry configurations; the results affirm the accuracy of the proposed algorithm.
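      SciPy's SLSQP implements an SQP method and can sketch the constrained allocation. The objective below is a CRB-like proxy (more weighted power gives less error), not the paper's joint CRB, and the per-path gains are invented for illustration.

```python
# SQP-style power allocation: minimize a tracking-error proxy subject to a
# total-power equality constraint and per-antenna power bounds.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(16)
M, P_total, p_max = 6, 10.0, 3.0
gain = rng.uniform(0.2, 1.0, M)            # per-path "information" gains (illustrative)

def tracking_error(p):
    return 1.0 / np.sum(gain * p)          # CRB-like proxy: more weighted power -> less error

cons = [{"type": "eq", "fun": lambda p: np.sum(p) - P_total}]
res = minimize(tracking_error, x0=np.full(M, P_total / M), method="SLSQP",
               bounds=[(0.0, p_max)] * M, constraints=cons)
print("power allocation:", np.round(res.x, 3), "objective:", res.fun)
```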
    • Open Access Article

      27 - Study and Realization of an Alarm System by Coded Laser Barrier Analyzed by the Wavelet Transform
      Meriane Brahim, Salah Rahmouni, Issam Tifouti
      This article presents the study and realization of a laser barrier alarm system. The laser barrier is driven by an electronic card, and a wireless control system is connected to the control room to announce intrusions in real time. Lasers are used in many application fields, from industry to medicine; this article focuses on industrial applications such as the laser barrier, which uses an alarm system to detect and deter intruders. Basic security includes protecting the perimeter of a military base, a safety distance in unsafe locations, or the surroundings of a government site. The first stage secures surrounding access points such as doors and windows; the second stage consists of internal detection with motion detectors that monitor movements. In this article, we adopt a coded laser barrier transmitted between two units: the received signal is processed and compared against the agreed conditions. For high accuracy, we suggest using the wavelet transform to process the received signal and identify the frequencies that trigger alarm activation, considering that the transmitted signal consists of pulses. After analysis with the proposed algorithm, we can separate the unwanted frequencies generated by differential vibrations, in order to arrive at a practically efficient system.
    • Open Access Article

      28 - Remote Sensing Image Registration based on a Geometrical Model Matching
      Zahra Hossein-Nejad, Hamed Agahi, Azar Mahmoodzadeh
      Remote sensing image registration is the task of aligning two images of the same scene taken under different imaging circumstances, e.g., at different times, from different angles, or with different sensors. The scale-invariant feature transform (SIFT) is one of the most common matching methods previously used in remote sensing image registration, but its drawbacks, a large number of mismatches and a high execution time due to the high dimensionality of the classical SIFT descriptor, reduce the algorithm's efficiency. To enhance the performance of remote sensing image registration, this paper proposes an approach consisting of three steps. First, the keypoints of both the reference and the sensed image are extracted using the SIFT algorithm. Then, to increase the speed of the algorithm and the accuracy of the matching, a SIFT descriptor of length 64 is used to describe the keypoints. Finally, a new matching method is proposed, based on calculating the distances between keypoints and their transformed points. Simulation results on standard databases demonstrate the superiority of this approach over some other existing methods, in terms of root mean square error (RMSE), precision, and running time.
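      A minimal SIFT detection-and-matching sketch with OpenCV; the Lowe ratio test stands in for the paper's distance-based matcher, the synthetic image pair is illustrative, and the 64-element descriptor variant is not reproduced.

```python
# Detect SIFT keypoints in two views of a scene and keep ratio-test matches.
import cv2
import numpy as np

rng = np.random.default_rng(17)
ref = cv2.GaussianBlur((rng.random((256, 256)) * 255).astype(np.uint8), (0, 0), 3)
M = cv2.getRotationMatrix2D((128, 128), 10, 1.0)           # simulated viewpoint change
sensed = cv2.warpAffine(ref, M, (256, 256))

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(sensed, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]                  # Lowe ratio test
print("keypoints:", len(kp1), len(kp2), "good matches:", len(good))
```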
    • Open Access Article

      29 - A High Performance Dual Stage Face Detection Algorithm Implementation using FPGA Chip and DSP Processor
      M. V. Ganeswara Rao, P. Ravi Kumar, T. Balaji
      A dual-stage system architecture for face detection, based on skin tone detection and the Viola-Jones face detection framework, is presented in this paper. The proposed architecture can track down human faces in an image with high accuracy within a time constraint. A nonlinear transformation technique is introduced in the first stage to reduce false alarms in the second stage; moreover, a pipeline technique is used in the second stage to improve the overall throughput of the system. The design is based on a Xilinx Virtex FPGA chip and a Texas Instruments DSP processor, with the dual-port BRAM memory of the FPGA chip and the External Memory Interface (EMIF) of the DSP processor serving as the interface between them. The proposed system exploits the advantages of both computational elements (FPGA and DSP) and system-level pipelining to achieve real-time performance. The implementation focuses on highly accurate, high-speed face detection and is evaluated on the standard BAO image database, which includes images with different poses, orientations, occlusions, and illuminations. The proposed system attains a frame rate of 16.53 FPS at an input spatial resolution of 640x480, detecting faces 23.4 times faster than a MATLAB implementation, 12.14 times faster than a DSP implementation, and 2.1 times faster than an FPGA implementation.
    • Open Access Article

      30 - An Autoencoder based Emotional Stress State Detection Approach by using Electroencephalography Signals
      Jia Uddin
      Identifying hazards arising from human error is critical for industrial safety, since dangerous and reckless actions by industrial workers, as well as a lack of precautions, are directly responsible for human-caused problems. Lack of sleep, poor nutrition, physical deformities, and weariness are some of the key factors contributing to these risky and reckless behaviors, which can put a person in a perilous situation and lead to discomfort, worry, despair, cardiovascular disease, a rapid heart rate, and a slew of other undesirable outcomes. It would therefore be advantageous to recognize people's mental states in order to provide better care for them. In recent years, researchers have been studying electroencephalogram (EEG) signals to determine a person's stress level at work. A full feature analysis across domains is necessary to develop a successful machine learning model from EEG inputs. In this research, a time-frequency-based hybrid bag of features is designed to determine human stress from EEG data, conditioned on sex. This feature collection combines two types of assessment: time-domain statistical analysis and frequency-domain wavelet-based feature assessment. The proposed two-layer autoencoder-based neural network (AENN) is then used to identify the stress level from the hybrid bag of features. The experiments use the freely available DEAP dataset. The proposed method achieves an accuracy of 77.09% for males and 80.93% for females.
    • Open Access Article

      31 - A New Power Allocation Optimization for One Target Tracking in Widely Separated MIMO Radar
      Mohammad Akhondi Darzikolaei, Mohammad Reza Karami-Mollaei, Maryam Najimi
      In this paper, a new power allocation scheme is designed for tracking one target in MIMO radar with widely dispersed antennas. This kind of radar employs multiple antennas deployed widely apart from one another, so a target is observed simultaneously from different uncorrelated angles, which offers spatial diversity. In this radar, the target's radar cross section (RCS) is different in each transmit-receive path, so a random complex Gaussian RCS is assumed for the target. Power allocation assigns the optimal power to each transmit antenna, avoiding the radiation of extra power into the environment and helping to hide the transmission from interception. This manuscript aims to minimize the target tracking error under constraints on the total transmit power and on the power of each transmit antenna. To quantify the tracking error, the joint Cramer-Rao bound for the target's velocity and position is computed and taken as the objective function of the problem. The target RCS is also treated as an unknown parameter and is estimated along with the target parameters, which makes the problem more realistic. After investigating the convexity of the problem, it is solved with the particle swarm optimization (PSO) and sequential quadratic programming (SQP) algorithms. Various scenarios are then simulated to evaluate the proposed scheme; the simulation results validate the accuracy and effectiveness of the power allocation structure for target tracking in MIMO radar with widely separated antennas.