Vensoft IEEE 2014-2015 MATLAB Project Titles: Image Processing, Wireless and Signal Processing


VenSoft Technologies www.ieeedeveloperslabs.in Email: [email protected] Contact: +91 9448847874


IEEE 2014 MATLAB PROJECTS ACADEMIC YEAR 2014-2015 FOR M.Tech/ B.E/B.Tech

VENTM14001 Regularized Simultaneous Forward–Backward Greedy Algorithm for Sparse Unmixing of Hyperspectral Data

Abstract—Sparse unmixing assumes that each observed signature of a hyperspectral image is a linear combination of only a few spectra (endmembers) in an available spectral library. It then estimates the fractional abundances of these endmembers in the scene. The sparse unmixing problem nevertheless remains very difficult because of the usually high correlation of the spectral library. Under such circumstances, this paper presents a novel algorithm termed the regularized simultaneous forward–backward greedy algorithm (RSFoBa) for sparse unmixing of hyperspectral data. The RSFoBa obtains an approximate solution to the l0 problem directly with low computational complexity and can exploit the joint sparsity among all the pixels in the hyperspectral data. In addition, the combination of the forward greedy step and the backward greedy step makes the RSFoBa more stable and less likely to be trapped in a local optimum than conventional greedy algorithms. Furthermore, when updating the solution in each iteration, a regularizer that enforces the spatial-contextual coherence within the hyperspectral image is considered to make the algorithm more effective. We also show that the sublibrary obtained by the RSFoBa can serve as input for any other sparse unmixing algorithm to make it more accurate and time efficient. Experimental results on both synthetic and real data demonstrate the effectiveness of the proposed algorithm.

Published in: Geoscience and Remote Sensing, IEEE Transactions on (Volume: 52, Issue: 9)
Date of Publication: Sept. 2014
Index Terms—Dictionary pruning, greedy algorithm (GA), hyperspectral unmixing, multiple-measurement vector (MMV), sparse unmixing.
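
Illustrative MATLAB sketch (not part of the paper): a plain simultaneous orthogonal matching pursuit (SOMP)-style forward selection over a shared support, i.e., the simplest form of the greedy, joint-sparsity unmixing that RSFoBa refines with its backward step and spatial regularizer. The library A, data Y, and sparsity level k below are made-up placeholders.

```matlab
% Greedy joint-sparse unmixing sketch (plain SOMP-style forward selection,
% NOT the full RSFoBa). All data below are synthetic placeholders.
A = rand(200, 50);
A = A ./ sqrt(sum(A.^2, 1));              % spectral library: 200 bands x 50 endmembers (unit columns)
Xtrue = zeros(50, 30);
Xtrue(randperm(50, 3), :) = rand(3, 30);  % 30 pixels sharing 3 active endmembers
Y = A * Xtrue + 0.001*randn(200, 30);
k = 3;                                    % assumed sparsity level

S = [];                                   % indices of selected endmembers
R = Y;                                    % residual
for t = 1:k
    % joint-sparsity criterion: pick the atom most correlated with all pixel residuals
    [~, j] = max(sum(abs(A' * R), 2));
    S = union(S, j);
    X = zeros(50, 30);
    X(S, :) = A(:, S) \ Y;                % least-squares abundances on the current support
    R = Y - A * X;                        % update residual
end
disp(sort(S(:))');                        % recovered endmember indices
```

As the abstract notes for the sublibrary produced by RSFoBa, the selected columns A(:, S) could then be handed to any other sparse unmixing algorithm.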

VENTM14002 Mixed Noise Removal by Weighted Encoding with Sparse Nonlocal Regularization

Abstract: Mixed noise removal from natural images is a challenging task since the noise distribution usually does not have a parametric model and has a heavy tail. One typical kind of mixed noise is additive white Gaussian noise (AWGN) coupled with impulse noise (IN). Many mixed noise removal methods are detection-based methods: they first detect the locations of IN pixels and then remove the mixed noise. However, such methods tend to generate many artifacts when the mixed noise is strong. In this paper, we propose a simple yet effective method, namely weighted encoding with sparse nonlocal regularization (WESNR), for mixed noise removal. In WESNR, there is no explicit step of impulse pixel detection; instead, soft impulse pixel detection via weighted encoding is used to deal with IN and AWGN simultaneously. Meanwhile, the image sparsity prior and nonlocal self-similarity prior are integrated into a regularization term and introduced into the variational encoding framework. Experimental results show that the proposed WESNR method achieves leading mixed noise removal performance in terms of both quantitative measures and visual quality.


Published in: Image Processing, IEEE Transactions on (Volume: 23, Issue: 6)
Date of Publication: June 2014
Index Terms—Mixed noise removal, weighted encoding, nonlocal, sparse representation.
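
Illustrative MATLAB sketch (not the WESNR algorithm itself): generating the AWGN-plus-impulse-noise degradation that WESNR is designed to remove, useful as test data. The image file, noise level, and impulse ratio are placeholders; any grayscale test image on the MATLAB path can be used.

```matlab
% Simulate the AWGN + impulse-noise mixture targeted by WESNR (test-data
% generation only). 'cameraman.tif' is a placeholder test image.
I = double(imread('cameraman.tif')) / 255; % any 8-bit grayscale test image on the path
sigma = 20/255;                            % AWGN standard deviation
p     = 0.1;                               % fraction of pixels hit by impulse noise

J = I + sigma*randn(size(I));              % additive white Gaussian noise
mask = rand(size(I)) < p;                  % random impulse locations
salt = rand(size(I)) > 0.5;                % half salt (1), half pepper (0)
J(mask) = salt(mask);                      % salt-and-pepper impulses
J = min(max(J, 0), 1);                     % clip back to [0,1]
```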

VENTM14003 Subspace Matching Pursuit for Sparse Unmixing of Hyperspectral Data

Abstract: Sparse unmixing assumes that each mixed pixel in the hyperspectral image can be expressed as a linear combination of only a few spectra (endmembers) in a spectral library known a priori. It then aims at estimating the fractional abundances of these endmembers in the scene. Unfortunately, because of the usually high correlation of the spectral library, the sparse unmixing problem still remains a great challenge. Moreover, most related work focuses on l1 convex relaxation methods, and little attention has been paid to the use of simultaneous sparse representation via greedy algorithms (SGA) for sparse unmixing. SGA has the advantage that it can obtain an approximate solution to the l0 problem directly, without smoothing the penalty term, at low computational complexity, and it can exploit the spatial information of the hyperspectral data. Thus, it is necessary to explore the potential of using such algorithms for sparse unmixing. Inspired by the existing SGA methods, this paper presents a novel greedy algorithm (GA) termed subspace matching pursuit (SMP) for sparse unmixing of hyperspectral data. SMP makes use of the low-degree mixed pixels in the hyperspectral image to iteratively find a subspace with which to reconstruct the hyperspectral data. It is proved that, under certain conditions, SMP can recover the optimal endmembers from the spectral library. Moreover, SMP can serve as a dictionary pruning algorithm; thus, it can boost other sparse unmixing algorithms, making them more accurate and time efficient. Experimental results on both synthetic and real data demonstrate the efficacy of the proposed algorithm.

Published in: Geoscience and Remote Sensing, IEEE Transactions on (Volume: 52, Issue: 6)
Date of Publication: June 2014
Index Terms—Dictionary pruning, greedy algorithm (GA), hyperspectral unmixing, multiple-measurement vector (MMV), simultaneous sparse representation, sparse unmixing, subspace matching pursuit (SMP).

VENTM14004 Sparse Unmixing of Hyperspectral Data Using Spectral a Priori Information

Abstract: Given a spectral library, sparse unmixing aims at finding the optimal subset of endmembers from it to model each pixel in the hyperspectral scene. However, sparse unmixing still remains a challenging task due to the usually high mutual coherence of the spectral library. In this paper, we exploit the spectral a priori information in the hyperspectral image to alleviate this difficulty. It is assumed that some materials in the spectral library are known to exist in the scene; such information can be obtained via field investigation or hyperspectral data analysis. Then, we propose a novel model to incorporate the spectral a priori information into sparse unmixing. Based on the alternating direction method of multipliers, we present a new algorithm, termed sparse unmixing using spectral a priori information (SUnSPI), to solve the model.


Experimental results on both synthetic and real data demonstrate that the spectral a priori information is beneficial to sparse unmixing and that SUnSPI can exploit this information effectively to improve the abundance estimation.

Published in: Geoscience and Remote Sensing, IEEE Transactions on (Volume: 53, Issue: 2)
Date of Publication: Feb. 2015
Index Terms—Hyperspectral unmixing, sparse unmixing, alternating direction method of multipliers (ADMM), spectral a priori information.

VENTM14005 Gradient Histogram Estimation and Preservation for Texture Enhanced Image Denoising

Abstract: Natural image statistics play an important role in image denoising, and various natural image priors, including gradient-based, sparse representation-based, and nonlocal self-similarity-based ones, have been widely studied and exploited for noise removal. In spite of the great success of many denoising algorithms, they tend to smooth the fine-scale image textures when removing noise, degrading the image visual quality. To address this problem, in this paper we propose a texture-enhanced image denoising method that enforces the gradient histogram of the denoised image to be close to a reference gradient histogram of the original image. Given the reference gradient histogram, a novel gradient histogram preservation (GHP) algorithm is developed to enhance the texture structures while removing noise. Two region-based variants of GHP are proposed for the denoising of images consisting of regions with different textures. An algorithm is also developed to effectively estimate the reference gradient histogram from the noisy observation of the unknown image. Our experimental results demonstrate that the proposed GHP algorithm preserves the texture appearance in the denoised images well, making them look more natural.

Published in: Image Processing, IEEE Transactions on (Volume: 23, Issue: 6)
Date of Publication: June 2014
Index Terms—Image denoising, histogram specification, nonlocal similarity, sparse representation.
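
Illustrative MATLAB sketch (a toy, not the GHP algorithm): the histogram-specification idea at the core of GHP, namely remapping the gradient magnitudes of an estimate so that they follow a reference distribution by simple rank matching. The reference-histogram estimation from the noisy image and the iterative variational denoising are omitted; x and r are placeholders of equal size.

```matlab
% Toy gradient-histogram specification by rank matching (core idea behind GHP).
x = rand(64);                         % current denoised estimate (placeholder)
r = abs(randn(64*64, 1));             % reference gradient magnitudes (placeholder)

gx = conv2(x, [1 -1], 'same');        % horizontal gradient of the estimate
g  = abs(gx(:));                      % gradient magnitudes to be re-shaped

[~, idx] = sort(g);                   % rank of each gradient magnitude
rs = sort(r);                         % sorted reference values (same length here)
gMatched = zeros(size(g));
gMatched(idx) = rs;                   % k-th smallest gradient gets k-th smallest reference value
% gMatched now follows the reference histogram exactly; GHP feeds such a
% constraint back into its denoising iterations.
```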

VENTM14006 Image Set based Collaborative Representation for Face Recognition

Abstract: With the rapid development of digital imaging and communication technologies, image set-based face recognition (ISFR) is becoming increasingly important. One key issue of ISFR is how to effectively and efficiently represent the query face image set using the gallery face image sets. Set-to-set distance-based methods ignore the relationship between gallery sets, whereas representing the query set images individually over the gallery sets ignores the correlation between query set images. In this paper, we propose a novel image set-based collaborative representation and classification method for ISFR. By modeling the query set as a convex or regularized hull, we represent this hull collaboratively over all the gallery sets. With the resolved representation coefficients, the distance between the query set and each gallery set can then be calculated for classification.


The proposed model naturally and effectively extends image-based collaborative representation to an image set-based one, and our extensive experiments on benchmark ISFR databases show the superiority of the proposed method to state-of-the-art ISFR methods under different set sizes in terms of both recognition rate and efficiency.

Published in: Information Forensics and Security, IEEE Transactions on (Volume: 9, Issue: 7)
Date of Publication: July 2014
Index Terms—Image set, collaborative representation, set-to-sets distance, face recognition.
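
Illustrative MATLAB sketch (not the paper's method): plain image-based collaborative representation classification with regularized least squares, the building block that the paper extends from single query images to query image sets (the hull modeling over all gallery sets is not shown). Features, labels, and lambda are synthetic placeholders.

```matlab
% Collaborative representation classification over all gallery samples
% (image-based baseline only; the set-based hull model is not reproduced).
d = 100;                                   % feature dimension (placeholder)
X = randn(d, 60);                          % gallery features: 3 classes x 20 samples
labels = [ones(1,20), 2*ones(1,20), 3*ones(1,20)];
y = X(:, 5) + 0.1*randn(d, 1);             % a query feature vector (placeholder)
lambda = 0.01;                             % regularization weight

alpha = (X'*X + lambda*eye(60)) \ (X'*y);  % code the query collaboratively over ALL galleries
res = zeros(1, 3);
for c = 1:3
    idx = (labels == c);
    res(c) = norm(y - X(:, idx)*alpha(idx));  % class-wise reconstruction residual
end
[~, predicted] = min(res);                 % classify by the smallest residual
fprintf('predicted class: %d\n', predicted);
```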

VENTM14007 Fast Compressive Tracking

Abstract: It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter drift problems: as a result of self-taught learning, misaligned samples are likely to be added and to degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity of the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy, and robustness.

Published in: Pattern Analysis and Machine Intelligence, IEEE Transactions on (Volume: PP, Issue: 99)
Date of Publication: April 2014
Index Terms—Visual tracking, random projection, compressive sensing, compressed sensing, feature extraction, image coding, object tracking, robustness, sparse matrices, target tracking.
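
Illustrative MATLAB sketch (only one ingredient of the tracker): building a very sparse, data-independent random measurement matrix and using it to compress a high-dimensional feature vector, as described in the abstract. The dimensions and sparsity factor are placeholders; the multiscale Haar-like features, naive Bayes classifier, and coarse-to-fine search are omitted.

```matlab
% Very sparse random measurement matrix R and compressed features z = R*v.
n = 10000;                                % high-dimensional feature length (placeholder)
m = 50;                                   % compressed feature length
s = 3;                                    % sparsity factor of the matrix

R = zeros(m, n);
vals = [sqrt(s), -sqrt(s)];
for i = 1:m
    nz = rand(1, n) < 1/s;                % each entry nonzero with probability 1/s
    R(i, nz) = vals(randi(2, 1, nnz(nz)));% nonzeros are +/- sqrt(s), equally likely
end
R = sparse(R);                            % stored sparsely; fixed for the whole sequence

v = rand(n, 1);                           % placeholder high-dimensional sample features
z = R * v;                                % compressed feature vector fed to the classifier
```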

VENTM14008 Speech Intelligibility Prediction Based on Mutual Information

Abstract: This paper deals with the problem of predicting the average intelligibility of noisy and potentially processed speech signals, as observed by a group of normal-hearing listeners. We propose a model which performs this prediction based on the hypothesis that intelligibility is monotonically related to the mutual information between critical-band amplitude envelopes of the clean signal and the corresponding noisy/processed signal.


The resulting intelligibility predictor turns out to be a simple function of the mean-square error (MSE) that arises when estimating a clean critical-band amplitude using a minimum mean-square error (MMSE) estimator based on the noisy/processed amplitude. The proposed model predicts that speech intelligibility cannot be improved by any processing of noisy critical-band amplitudes. Furthermore, the proposed intelligibility predictor performs well (ρ > 0.95) in predicting the intelligibility of speech signals contaminated by additive noise and potentially nonlinearly processed using time-frequency weighting.

Published in: Audio, Speech, and Language Processing, IEEE/ACM Transactions on (Volume: 22, Issue: 2)
Date of Publication: Feb. 2014
Index Terms—Instrumental measures, noise reduction, objective distortion measures, speech enhancement, speech intelligibility prediction.

VENTM14009 Super-Resolution Compressed Sensing: An Iterative Reweighted Algorithm for Joint Parameter Learning and Sparse Signal Recovery

Abstract: In many practical applications, such as direction-of-arrival (DOA) estimation and line spectral estimation, the sparsifying dictionary is usually characterized by a set of unknown parameters in a continuous domain. To apply conventional compressed sensing to such applications, the continuous parameter space has to be discretized to a finite set of grid points. Discretization, however, incurs errors and leads to deteriorated recovery performance. To address this issue, we propose an iterative reweighted method which jointly estimates the unknown parameters and the sparse signals. Specifically, the proposed algorithm is developed by iteratively decreasing a surrogate function majorizing a given objective function, which results in a gradual and interweaved iterative process to refine the unknown parameters and the sparse signal. Numerical results show that the algorithm provides superior performance in resolving closely spaced frequency components.

Published in: Signal Processing Letters, IEEE (Volume: 21, Issue: 6)
Date of Publication: June 2014
Index Terms—Compressed sensing, super-resolution, parameter learning, sparse signal recovery.

VENTM14010 Variants of non-negative least-mean-square algorithm and convergence analysis

Abstract: Due to the inherent physical characteristics of systems under investigation, non-negativity is one of the most interesting constraints that can usually be imposed on the parameters to estimate. The non-negative least-mean-square algorithm (NNLMS) was proposed to adaptively find solutions of a typical Wiener filtering problem, but with the side constraint that the resulting weights need to be non-negative. It has been shown to have good convergence properties. Nevertheless, certain practical applications may benefit from the use of modified versions of this algorithm. In this paper, we derive three variants of NNLMS. Each variant aims at improving the NNLMS performance with regard to one of the following aspects: sensitivity to input power, unbalance of convergence rates for different weights, and


computational cost. We study the stochastic behavior of the adaptive weights of these three new algorithms in non-stationary environments. This study leads to analytical models to predict the first and second order moment behaviors of the weights for Gaussian inputs. Simulation results are presented to illustrate the performance of the new algorithms and the accuracy of the derived models.

Published in: Signal Processing, IEEE Transactions on (Volume: 62, Issue: 15)
Date of Publication: Aug. 1, 2014
Keywords: Adaptive signal processing, convergence analysis, exponential algorithm, least-mean-square algorithms, non-negativity constraints, normalized algorithm, sign-sign algorithm.
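
Illustrative MATLAB sketch (the baseline only, not the three variants analyzed in the paper): the basic NNLMS weight update, in which the LMS step is scaled entrywise by the current weights so that a non-negative initialization tends to stay non-negative. The unknown system, step size, and signal lengths are placeholders.

```matlab
% Basic non-negative LMS (NNLMS) identification of a non-negative system.
N  = 8;                                  % filter length
wo = [0.8 0.5 0.3 0.2 0 0 0 0]';         % "true" non-negative system (placeholder)
mu = 0.01;                               % step size
L  = 5000;                               % number of samples

x = randn(L, 1);
w = 0.1*ones(N, 1);                      % non-negative initialization
for k = N:L
    xk = x(k:-1:k-N+1);                  % input regressor
    d  = wo' * xk + 0.01*randn;          % noisy desired output
    e  = d - w' * xk;                    % a priori error
    w  = w + mu * e * (w .* xk);         % NNLMS update: step scaled entrywise by w itself
end
disp(w');                                % estimated weights (compare with wo')
```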

VENTM14011 Training-Free Non-Intrusive Load Monitoring of Electric Vehicle Charging with Low Sampling Rate

Abstract—Non-intrusive load monitoring (NILM) is an important topic in the smart grid and the smart home. Many energy disaggregation algorithms have been proposed to detect various individual appliances from one aggregated signal observation. However, few works have studied the energy disaggregation of plug-in electric vehicle (EV) charging in the residential environment, since EV charging at home has emerged only recently. Recent studies showed that EV charging has a large impact on the smart grid, especially in summer. Therefore, EV charging monitoring has become a more important and urgent missing piece in energy disaggregation. In this paper, we present a novel method to disaggregate EV charging signals from aggregated real power signals. The proposed method can effectively mitigate interference coming from air conditioners (AC), enabling accurate EV charging detection and energy estimation in the presence of AC power signals. Besides, the proposed algorithm requires no training, demands a light computational load, delivers high estimation accuracy, and works well for data recorded at the low sampling rate of 1/60 Hz. When the algorithm is tested on real-world data recorded from 11 houses over about a whole year (125 months' worth of data in total), the averaged error in estimating the energy consumption of EV charging is 15.7 kWh/month (while the true averaged energy consumption of EV charging is 208.5 kWh/month), and the averaged normalized mean square error in disaggregating EV charging load signals is 0.19.

Keywords—Non-intrusive load monitoring (NILM); Electric Vehicle (EV); Smart Grid; Energy Disaggregation


IEEE 2013 MATLAB PROJECTS ACADEMIC YEAR 2014-2015 FOR M.Tech/ B.E/B.Tech

VENTM13001 2-D Wavelet Packet Spectrum for Texture Analysis

Abstract: This brief derives a 2-D spectrum estimator from some recent results on the statistical properties of wavelet packet coefficients of random processes. It provides an analysis of the bias of this estimator with respect to the wavelet order. This brief also discusses the performance of this wavelet-based estimator, in comparison with the conventional 2-D Fourier-based spectrum estimator, on texture analysis and content-based image retrieval. It highlights the effectiveness of wavelet-based spectrum estimation.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 6)
Date of Publication: June 2013
Keywords—2-D Wavelet packet transforms; Random fields; Spectral analysis; Spectrum estimation; Similarity measurements.

VENTM13002 Supervised and Unsupervised Speech Enhancement Using Nonnegative Matrix Factorization

Abstract—Reducing the interference noise in a monaural noisy speech signal has been a challenging task for many years. Compared to traditional unsupervised speech enhancement methods, e.g., Wiener filtering, supervised approaches, such as algorithms based on hidden Markov models (HMM), lead to higher-quality enhanced speech signals. However, the main practical difficulty of these approaches is that for each noise type a model is required to be trained a priori. In this paper, we investigate a new class of supervised speech denoising algorithms using nonnegative matrix factorization (NMF). We propose a novel speech enhancement method that is based on a Bayesian formulation of NMF (BNMF). To circumvent the mismatch problem between the training and testing stages, we propose two solutions. First, we use an HMM in combination with BNMF (BNMF-HMM) to derive a minimum mean square error (MMSE) estimator for the speech signal with no information about the underlying noise type. Second, we suggest a scheme to learn the required noise BNMF model online, which is then used to develop an unsupervised speech enhancement system. Extensive experiments are carried out to investigate the performance of the proposed methods under different conditions. Moreover, we compare the performance of the developed algorithms with state-of-the-art speech enhancement schemes using various objective measures. Our simulations show that the proposed BNMF-based methods outperform the competing algorithms substantially.

Published in: Audio, Speech, and Language Processing, IEEE Transactions on (Volume: 21, Issue: 10)
Date of Publication: Oct. 2013
Index Terms—Nonnegative matrix factorization (NMF), speech enhancement, PLCA, HMM, Bayesian inference.
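
Illustrative MATLAB sketch (generic building block only): plain NMF with multiplicative Euclidean updates on a non-negative matrix such as a magnitude spectrogram. The paper's Bayesian NMF, the BNMF-HMM combination, and the online noise model are not reproduced; the spectrogram V and rank r are placeholders.

```matlab
% Generic NMF with multiplicative (Euclidean) updates: V ~ W*H, all entries >= 0.
V = abs(randn(257, 400));               % placeholder magnitude spectrogram (freq x frames)
r = 20;                                 % number of basis vectors
[F, T] = size(V);

W = rand(F, r);  H = rand(r, T);
for it = 1:200
    H = H .* (W' * V) ./ (W' * W * H + eps);   % update activations
    W = W .* (V * H') ./ (W * (H * H') + eps); % update bases
end
approxErr = norm(V - W*H, 'fro') / norm(V, 'fro');
fprintf('relative approximation error: %.3f\n', approxErr);
```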


VENTM13003 Image Segmentation Using a Sparse Coding Model of Cortical Area V1

Abstract: Algorithms that encode images using a sparse set of basis functions have previously been shown to explain aspects of the physiology of the primary visual cortex (V1), and have been used for applications such as image compression, restoration, and classification. Here, a sparse coding algorithm that has previously been used to account for the response properties of orientation-tuned cells in primary visual cortex is applied to the task of perceptually salient boundary detection. The proposed algorithm is currently limited to using only intensity information at a single scale. However, it is shown to outperform the current state-of-the-art image segmentation method (Pb) when this method is also restricted to using the same information.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 4)

Index Terms—Image Segmentation; Edge detection; Neural Networks; Predictive Coding; Sparse Coding; Primary Visual Cortex

VENTM13004 How to SAIF-ly Boost Denoising Performance

Abstract: Spatial domain image filters (e.g., bilateral filter, non-local means, locally adaptive regression kernel) have achieved great success in denoising. Their overall performance, however, has not generally surpassed the leading transform domain-based filters (such as BM3D). One important reason is that spatial domain filters lack the efficiency to adaptively fine-tune their denoising strength, something that is relatively easy to do in transform domain methods with shrinkage operators. In the pixel domain, the smoothing strength is usually controlled globally by, for example, tuning a regularization parameter. In this paper, we propose spatially adaptive iterative filtering (SAIF), a new strategy to control the denoising strength locally for any spatial domain method. This approach is capable of filtering local image content iteratively using the given base filter, and the type of iteration and the iteration number are automatically optimized with respect to estimated risk (i.e., mean-squared error). In exploiting the estimated local signal-to-noise ratio, we also present a new risk estimator that is different from the often-employed SURE method and exceeds its performance in many cases. Experiments illustrate that our strategy can significantly relax the base algorithm's sensitivity to its tuning (smoothing) parameters, and can effectively boost the performance of several existing denoising filters to generate state-of-the-art results under both simulated and practical conditions.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 4)

Index Terms—Image denoising, spatial domain filter, risk estimator, SURE, pixel aggregation

VENTM13005 Nonlocally Centralized Sparse Representation for Image Restoration

Abstract: Sparse representation models code an image patch as a linear combination of a few atoms chosen from an over-complete dictionary, and they have shown promising results in various image restoration applications. However, due to the degradation of the observed image


(e.g., noisy, blurred, and/or down-sampled), the sparse representations obtained by conventional models may not be accurate enough for a faithful reconstruction of the original image. To improve the performance of sparse representation-based image restoration, in this paper the concept of sparse coding noise is introduced, and the goal of image restoration turns into how to suppress the sparse coding noise. To this end, we exploit the image nonlocal self-similarity to obtain good estimates of the sparse coding coefficients of the original image, and then centralize the sparse coding coefficients of the observed image to those estimates. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model, while our extensive experiments on various types of image restoration problems, including denoising, deblurring, and super-resolution, validate the generality and state-of-the-art performance of the proposed NCSR algorithm.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 4)
Date of Publication: April 2013
Index Terms—Image restoration, nonlocal similarity, sparse representation.

VENTM13006 Sparse Representation Based Image Interpolation With Nonlocal Autoregressive Modeling

Abstract: Sparse representation has proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal patches similar to a given patch can provide a nonlocal constraint on the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 4)
Date of Publication: April 2013
Index Terms—Image interpolation, nonlocal autoregressive model, sparse representation, super-resolution.

VENTM13007 Acceleration of the Shiftable Algorithm for Bilateral Filtering and Nonlocal Means


Abstract: A direct implementation of the bilateral filter requires O(σs²) operations per pixel, where σs is the (effective) width of the spatial kernel. A fast implementation of the bilateral filter that required O(1) operations per pixel with respect to σs was recently proposed. This was done by using trigonometric functions for the range kernel of the bilateral filter, and by exploiting their so-called shiftability property. In particular, a fast implementation of the Gaussian bilateral filter was realized by approximating the Gaussian range kernel using raised cosines. Later, it was demonstrated that this idea could be extended to a larger class of filters, including the popular non-local means filter. As already observed, a flip side of this approach was that the run time depended on the width σr of the range kernel. For an image with dynamic range [0, T], the run time scaled as O(T²/σr²) with σr. This made it difficult to implement narrow range kernels, particularly for images with large dynamic range. In this paper, we discuss this problem and propose some simple steps to accelerate the implementation, in general, and for small σr in particular. We provide some experimental results to demonstrate the acceleration that is achieved using these modifications.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 4)
Date of Publication: April 2013
Index Terms—Bilateral filter, non-local means, shiftability, constant-time algorithm, Gaussian kernel, truncation, running maximum, max filter, recursive filter, O(1) complexity.
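
Illustrative MATLAB sketch (baseline only): the direct O(σs²)-per-pixel Gaussian bilateral filter whose shiftable raised-cosine approximation the paper accelerates. The image, kernel widths, and window size are placeholders; no toolbox functions are used.

```matlab
% Brute-force Gaussian bilateral filter (the slow baseline, for reference).
I = rand(128);                            % placeholder grayscale image in [0,1]
sigma_s = 3;  sigma_r = 0.1;              % spatial and range kernel widths
w = ceil(3*sigma_s);                      % half window size

[X, Y] = meshgrid(-w:w, -w:w);
Gs = exp(-(X.^2 + Y.^2) / (2*sigma_s^2)); % fixed spatial kernel
rows = [ones(1,w), 1:size(I,1), size(I,1)*ones(1,w)];
cols = [ones(1,w), 1:size(I,2), size(I,2)*ones(1,w)];
Ipad = I(rows, cols);                     % replicate padding without toolbox calls

J = zeros(size(I));
for i = 1:size(I,1)
    for j = 1:size(I,2)
        P  = Ipad(i:i+2*w, j:j+2*w);                  % local window centered at (i,j)
        Gr = exp(-(P - I(i,j)).^2 / (2*sigma_r^2));   % range kernel
        Wt = Gs .* Gr;
        J(i,j) = sum(Wt(:) .* P(:)) / sum(Wt(:));     % normalized weighted average
    end
end
```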

VENTM13008 Incremental Learning of 3D-DCT Compact Representations for Robust Visual Tracking

Abstract: Visual tracking usually requires an object appearance model that is robust to changing illumination, pose, and other factors encountered in video. Many recent trackers utilize appearance samples in previous frames to form the bases upon which the object appearance model is built. This approach has the following limitations: 1) the bases are data-driven, so they can be easily corrupted, and 2) it is difficult to robustly update the bases in challenging situations. In this paper, we construct an appearance model using the 3D discrete cosine transform (3D-DCT). The 3D-DCT is based on a set of cosine basis functions which are determined by the dimensions of the 3D signal and are thus independent of the input video data. In addition, the 3D-DCT can generate a compact energy spectrum whose high-frequency coefficients are sparse if the appearance samples are similar. By discarding these high-frequency coefficients, we simultaneously obtain a compact 3D-DCT-based object representation and a signal reconstruction-based similarity measure (reflecting the information loss from signal reconstruction). To efficiently update the object representation, we propose an incremental 3D-DCT algorithm which decomposes the 3D-DCT into successive operations of the 2D discrete cosine transform (2D-DCT) and the 1D discrete cosine transform (1D-DCT) on the input video data. As a result, the incremental 3D-DCT algorithm only needs to compute the 2D-DCT for newly added frames as well as the 1D-DCT along the third dimension, which significantly reduces the computational complexity. Based on this incremental 3D-DCT algorithm, we design a discriminative criterion to evaluate the likelihood of a test sample belonging to the foreground object. We then embed the discriminative criterion into a particle filtering framework for object state inference over time. Experimental results demonstrate the effectiveness and robustness of the proposed tracker.

Published in: Pattern Analysis and Machine Intelligence, IEEE Transactions on (Volume: 35, Issue: 4)


Date of Publication: April 2013
Index Terms—Visual tracking, appearance model, compact representation, discrete cosine transform (DCT), incremental learning, template matching.

VENTM13009 Visual Saliency Based on Scale-Space Analysis in the Frequency Domain

Abstract: We address the issue of visual saliency from three perspectives. First, we consider saliency detection as a frequency domain analysis problem. Second, we achieve this by employing the concept of non-saliency. Third, we simultaneously consider the detection of salient regions of different sizes. The paper proposes a new bottom-up paradigm for detecting visual saliency, characterized by a scale-space analysis of the amplitude spectrum of natural images. We show that the convolution of the image amplitude spectrum with a low-pass Gaussian kernel of an appropriate scale is equivalent to an image saliency detector. The saliency map is obtained by reconstructing the 2D signal using the original phase and the amplitude spectrum, filtered at a scale selected by minimizing saliency map entropy. A Hypercomplex Fourier Transform performs the analysis in the frequency domain. Using available databases, we demonstrate experimentally that the proposed model can predict human fixation data. We also introduce a new image database and use it to show that the saliency detector can highlight both small and large salient regions, as well as inhibit repeated distractors in cluttered images. In addition, we show that it is able to predict salient regions on which people focus their attention.

Published in: Pattern Analysis and Machine Intelligence, IEEE Transactions on (Volume: 35, Issue: 4)
Date of Publication: April 2013
Index Terms—Visual attention, saliency, Hypercomplex Fourier Transform, eye-tracking, scale-space analysis.
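
Illustrative MATLAB sketch (heavily simplified): the single-scale, grayscale version of the core operation described above — smooth the amplitude spectrum with a Gaussian, reconstruct with the original phase, and take the blurred squared magnitude as a saliency map. The hypercomplex (quaternion) transform, entropy-based scale selection, and color channels are omitted; the test image is a placeholder.

```matlab
% Single-scale frequency-domain saliency sketch (grayscale, one fixed scale).
I = rand(128);                                 % placeholder grayscale image

F   = fft2(I);
A   = abs(F);                                  % amplitude spectrum
Phi = angle(F);                                % phase spectrum

h = exp(-((-8:8).^2) / (2*2^2));  h = h / sum(h);   % 1-D Gaussian kernel
As = conv2(conv2(A, h, 'same'), h', 'same');   % separable smoothing of the amplitude spectrum

S = abs(ifft2(As .* exp(1i*Phi))).^2;          % reconstruct with the original phase
S = conv2(conv2(S, h, 'same'), h', 'same');    % post-smooth the saliency map
S = (S - min(S(:))) / (max(S(:)) - min(S(:))); % normalize to [0,1]
```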

VENTM13010 Demosaicking of Noisy Bayer-Sampled Color Images With Least-Squares Luma-Chroma Demultiplexing and Noise Level Estimation

Abstract: This paper adapts the least-squares luma-chroma demultiplexing (LSLCD) demosaicking method to noisy Bayer color filter array (CFA) images. A model is presented for the noise in white-balanced, gamma-corrected CFA images. A method to estimate the noise level in each of the red, green, and blue color channels is then developed. Based on the estimated noise parameters, one of a finite set of configurations adapted to a particular level of noise is selected to demosaic the noisy data. The noise-adaptive demosaicking scheme is called LSLCD with noise estimation (LSLCD-NE). Experimental results demonstrate state-of-the-art performance over a wide range of noise levels, with low computational complexity. Many results with several algorithms, noise levels, and images are presented on our companion web site, along with software to allow reproduction of our results.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 1)
Date of Publication: Jan. 2013
Index Terms—Color filter array, Bayer sampling, demosaicking, noise estimation, noise reduction, noise model.


VENTM13011 Fuzzy C-Means Clustering With Local Information and Kernel Metric for Image Segmentation

Abstract: In this paper, we present an improved fuzzy C-means (FCM) algorithm for image segmentation by introducing a tradeoff weighted fuzzy factor and a kernel metric. The tradeoff weighted fuzzy factor depends on the spatial distance of all neighboring pixels and their gray-level difference simultaneously. By using this factor, the new algorithm can accurately estimate the damping extent of neighboring pixels. In order to further enhance its robustness to noise and outliers, we introduce a kernel distance measure into its objective function. The new algorithm adaptively determines the kernel parameter by using a fast bandwidth selection rule based on the distance variance of all data points in the collection. Furthermore, the tradeoff weighted fuzzy factor and the kernel distance measure are both parameter-free. Experimental results on synthetic and real images show that the new algorithm is effective and efficient, and is relatively independent of the type of noise.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 2)
Date of Publication: Feb. 2013
Index Terms—Fuzzy clustering, gray-level constraint, image segmentation, kernel metric, spatial constraint.
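
Illustrative MATLAB sketch (baseline only): plain fuzzy C-means on 1-D intensities, i.e., the objective that the paper extends with the tradeoff weighted fuzzy factor (spatial term) and the kernel distance. Data, cluster count, and fuzzifier are placeholders.

```matlab
% Plain fuzzy C-means on scalar intensities (no spatial factor, no kernel metric).
x = [randn(1,500)*0.05 + 0.2, randn(1,500)*0.05 + 0.7];  % two intensity clusters
c = 2;            % number of clusters
m = 2;            % fuzzifier
v = [0.1, 0.9];   % initial cluster centers

for it = 1:100
    D = abs(x - v') + eps;                    % c x N distances |x_j - v_i|
    U = 1 ./ D.^(2/(m-1));                    % unnormalized memberships
    U = U ./ sum(U, 1);                       % normalize so each column sums to 1
    v = ((U.^m * x') ./ sum(U.^m, 2))';       % update centers (membership-weighted means)
end
disp(v);   % estimated cluster centers, close to 0.2 and 0.7
```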

VENTM13012 Reinitialization-Free Level Set Evolution via Reaction Diffusion

Abstract: This paper presents a novel reaction-diffusion (RD) method for implicit active contours that is completely free of the costly reinitialization procedure in level set evolution (LSE). A diffusion term is introduced into LSE, resulting in an RD-LSE equation, from which a piecewise constant solution can be derived. In order to obtain a stable numerical solution from the RD-based LSE, we propose a two-step splitting method to iteratively solve the RD-LSE equation, where we first iterate the LSE equation and then solve the diffusion equation. The second step regularizes the level set function obtained in the first step to ensure stability, and thus the complex and costly reinitialization procedure is completely eliminated from LSE. By successfully applying diffusion to LSE, the RD-LSE model is stable by means of the simple finite difference method, which is very easy to implement. The proposed RD method can be generalized to solve the LSE for both the variational level set method and the partial differential equation-based level set method. The RD-LSE method shows very good performance on boundary anti-leakage. The extensive and promising experimental results on synthetic and real images validate the effectiveness of the proposed RD-LSE approach.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 1)
Date of Publication: Jan. 2013
Index Terms—Active contours, image segmentation, level set, partial differential equation (PDE), reaction-diffusion, variational method.

VENTM13013 Online Object Tracking With Sparse Prototypes


Abstract: Online object tracking is a challenging problem, as it entails learning an effective model to account for appearance changes caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracking algorithm with sparse prototypes, which exploits both classic principal component analysis (PCA) algorithms and recent sparse representation schemes for learning effective appearance models. We introduce l1 regularization into the PCA reconstruction and develop a novel algorithm to represent an object by sparse prototypes that account explicitly for data and noise. For tracking, objects are represented by the sparse prototypes learned online with update. In order to reduce tracking drift, we present a method that takes occlusion and motion blur into account rather than simply including image observations for model update. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 1)
Date of Publication: Jan. 2013
Index Terms—Appearance model, l1 minimization, object tracking, principal component analysis (PCA), sparse prototypes.

VENTM13014 Reversible Data Hiding in Encrypted Images by Reserving Room Before Encryption

Abstract: Recently, more and more attention has been paid to reversible data hiding (RDH) in encrypted images, since it maintains the excellent property that the original cover can be losslessly recovered after the embedded data are extracted, while protecting the image content's confidentiality. All previous methods embed data by reversibly vacating room from the encrypted images, which may be subject to some errors on data extraction and/or image restoration. In this paper, we propose a novel method that reserves room before encryption with a traditional RDH algorithm, which makes it easy for the data hider to reversibly embed data in the encrypted image. The proposed method can achieve real reversibility, that is, data extraction and image recovery are free of any error. Experiments show that this novel method can embed payloads more than 10 times as large as those of previous methods for the same image quality, such as at PSNR = 40 dB.

Published in: Information Forensics and Security, IEEE Transactions on (Volume: 8, Issue: 3)
Date of Publication: March 2013
Index Terms—Reversible data hiding, image encryption, privacy protection, histogram shift.

VENTM13015 Fast and Accurate Matrix Completion via Truncated Nuclear Norm Regularization

Abstract: Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated


in practice. In this paper, we propose to achieve a better approximation to the rank of a matrix by the truncated nuclear norm, which is given by the nuclear norm minus the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the truncated nuclear norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-APGL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results for the proposed algorithms in comparison with state-of-the-art matrix completion algorithms on both synthetic and real visual datasets.

Published in: Pattern Analysis and Machine Intelligence, IEEE Transactions on (Volume: 35, Issue: 9)
Date of Publication: Sept. 2013
Index Terms—Matrix completion, nuclear norm minimization, alternating direction method of multipliers, accelerated proximal gradient method.
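
Illustrative MATLAB sketch (baseline only, not TNNR): a SoftImpute-style iteration for matrix completion that alternates between filling in the observed entries and soft-thresholding all singular values, i.e., the standard nuclear-norm approach that truncated nuclear norm regularization improves on. Sizes, rank, and threshold are placeholders.

```matlab
% SoftImpute-style nuclear-norm matrix completion (the TNNR solvers are not reproduced).
n = 100;  r = 5;
M = randn(n, r) * randn(r, n);            % low-rank ground truth (placeholder)
Omega = rand(n) < 0.4;                    % observed entries (40%)
tau = 5;                                  % soft-threshold on singular values

X = zeros(n);
for it = 1:200
    X(Omega) = M(Omega);                  % enforce consistency with the observed entries
    [U, S, V] = svd(X, 'econ');
    S = max(S - tau, 0);                  % soft-threshold all singular values
    X = U * S * V';                       % low-rank re-estimate
end
fprintf('relative error: %.3f\n', norm(X - M, 'fro') / norm(M, 'fro'));
```

TNNR differs by leaving the largest few singular values unpenalized, which is what makes the rank approximation tighter.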

VENTM13016 A Saliency Detection Model Using Low-Level Features Based on Wavelet Transform

Abstract: Researchers have been taking advantage of visual attention in various image processing applications, such as image retargeting, video coding, etc. Recently, many saliency detection algorithms have been proposed that extract features in spatial or transform domains. In this paper, a novel saliency detection model is introduced that utilizes low-level features obtained from the wavelet transform domain. First, the wavelet transform is employed to create multi-scale feature maps which can represent different features from edge to texture. Then, we propose a computational model for the saliency map from these features. The proposed model aims to modulate local contrast at a location with its global saliency computed based on the likelihood of the features, and it considers local center-surround differences and global contrast in the final saliency map. Experimental evaluation shows promising results from the proposed model, which outperforms the relevant state-of-the-art saliency detection models.

Published in: Multimedia, IEEE Transactions on (Volume: 15, Issue: 1)
Date of Publication: Jan. 2013
Index Terms—Feature map, saliency detection, saliency map, visual attention, wavelet transform.

VENTM13017 Robust Point Matching Revisited: A Concave Optimization Approach

Abstract—The well-known robust point matching (RPM) method uses deterministic annealing for optimization, and it has two problems. First, it cannot guarantee the global optimality of the solution and tends to align the centers of two point sets. Second, deformation needs to be regularized to avoid the generation of undesirable results. To address these problems, in this paper we show that the energy function of RPM can be reduced to a concave function with very few non-rigid terms after eliminating the transformation variables and applying a linear transformation;


we then propose to use a concave optimization technique to minimize the resulting energy function. The proposed method scales well with problem size, achieves the globally optimal solution, and does not need regularization for simple transformations such as the similarity transform. Experiments on synthetic and real data validate the advantages of our method in comparison with state-of-the-art methods.

VENTM13018 Phase Noise in MIMO Systems: Bayesian Cramér-Rao Bounds and Soft-Input Estimation

Abstract: This paper addresses the problem of estimating time-varying phase noise caused by imperfect oscillators in multiple-input multiple-output (MIMO) systems. The estimation problem is parameterized in detail and, based on an equivalent signal model, its dimensionality is reduced to minimize the estimation overhead. New exact and closed-form expressions for the Bayesian Cramér-Rao lower bounds (BCRLBs) and soft-input maximum a posteriori (MAP) estimators for online (i.e., filtering) and offline (i.e., smoothing) estimation of phase noise over the length of a frame are derived. Simulations demonstrate that the proposed MAP estimators' mean-square error (MSE) performance is very close to the derived BCRLBs at moderate-to-high signal-to-noise ratios. To reduce the overhead and complexity associated with tracking the phase noise processes over the length of a frame, a novel soft-input extended Kalman filter (EKF) and extended Kalman smoother (EKS) that use soft statistics of the transmitted symbols given the current observations are proposed. Numerical results indicate that by employing the proposed phase tracking approach, the bit-error rate performance of a MIMO system affected by phase noise can be significantly improved. In addition, simulation results indicate that the proposed phase noise estimation scheme allows for the application of higher order modulations and larger numbers of antennas in MIMO systems that employ imperfect oscillators.

Published in: Signal Processing, IEEE Transactions on (Volume: 61, Issue: 10)
Date of Publication: May 15, 2013
Index Terms—Multi-input multi-output (MIMO), Wiener phase noise, Bayesian Cramér-Rao lower bound (BCRLB), maximum a posteriori (MAP), soft-decision extended Kalman filter (EKF), extended Kalman smoother (EKS).

VENTM13019 Multiscale Gossip for Efficient Decentralized Averaging in Wireless Packet Networks

Abstract: This paper describes and analyzes a hierarchical algorithm called Multiscale Gossip for solving the distributed average consensus problem in wireless sensor networks. The algorithm proceeds by recursively partitioning a given network. Initially, nodes at the finest scale gossip to compute local averages. Then, using multi-hop communication and geographic routing to communicate between nodes that are not directly connected,


these local averages are progressively fused up the hierarchy until the global average is computed. We show that the proposed hierarchical scheme with k = Θ(log log n) levels of hierarchy is competitive with state-of-the-art randomized gossip algorithms in terms of message complexity, achieving ε-accuracy with high probability after O(n log log n log(1/ε)) single-hop messages. Key to our analysis is the way in which the network is recursively partitioned. We find that the above scaling law is achieved when subnetworks at scale j contain O(n^((2/3)^j)) nodes; then the message complexity at any individual scale is O(n log(1/ε)). Another important consequence of the hierarchical construction is that the longest distance over which messages are exchanged is O(n^(1/3)) hops (at the highest scale), and most messages (at lower scales) travel shorter distances. In networks that use link-level acknowledgements, this results in less congestion and resource usage by reducing message retransmissions. Simulations illustrate that the proposed scheme is more efficient than state-of-the-art randomized gossip algorithms based on averaging along paths.

Published in: Signal Processing, IEEE Transactions on (Volume: 61, Issue: 9)
Date of Publication: May 1, 2013
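
Illustrative MATLAB sketch (primitive only): single-scale randomized pairwise gossip on a ring, the basic averaging step that Multiscale Gossip organizes hierarchically. The recursive partitioning, geographic routing, and multi-hop exchanges are not shown; network size and round count are placeholders.

```matlab
% Randomized pairwise gossip on a ring of n nodes.
n = 50;
x = rand(n, 1);                       % initial node values
target = mean(x);                     % the average every node should converge to

for t = 1:5000
    i = randi(n);                     % pick a random node
    j = mod(i, n) + 1;                % its right neighbor on the ring
    avg = (x(i) + x(j)) / 2;          % the pair averages their values
    x(i) = avg;  x(j) = avg;
end
fprintf('max deviation from the true average: %.2e\n', max(abs(x - target)));
```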

VENTM13020 Compressed Sensing of EEG for Wireless Telemonitoring with Low Energy Consumption and Inexpensive Hardware

Abstract: Telemonitoring of electroencephalogram (EEG) signals through wireless body-area networks is an evolving direction in personalized medicine. Among the various constraints in designing such a system, three important ones are energy consumption, data compression, and device cost. Conventional data compression methodologies, although effective in data compression, consume significant energy and cannot reduce device cost. Compressed sensing (CS), as an emerging data compression methodology, is promising in catering to these constraints. However, EEG is non-sparse in the time domain and also non-sparse in transformed domains (such as the wavelet domain). Therefore, it is extremely difficult for current CS algorithms to recover EEG with a quality that satisfies the requirements of clinical diagnosis and engineering applications. Recently, block sparse Bayesian learning (BSBL) was proposed as a new method for the CS problem. This study introduces the technique to the telemonitoring of EEG. Experimental results show that its recovery quality is better than that of state-of-the-art CS algorithms and is sufficient for practical use. These results suggest that BSBL is very promising for telemonitoring of EEG and other non-sparse physiological signals.

Published in: Biomedical Engineering, IEEE Transactions on (Volume: 60, Issue: 1)
Date of Publication: Jan. 2013
Index Terms—Telemonitoring, Healthcare, Wireless Body-Area Network (WBAN), Compressed Sensing (CS), Block Sparse Bayesian Learning (BSBL), electroencephalogram (EEG)


VENTM13021 Compressed Sensing for Energy-Efficient Wireless Telemonitoring of Non-Invasive Fetal ECG via Block Sparse Bayesian Learning

Abstract: Fetal ECG (FECG) tele monitoring is an important branch in telemedicine. Thedesign of a tele monitoring system via a wireless body area network withlow energy consumption for ambulatory use is highly desirable. As an emergingtechnique, compressed sensing (CS) shows great promise in compressing/reconstructing datawith low energy consumption. However, due to some specific characteristics of raw FECGrecordings such as non sparsity and strong noise contamination, current CS algorithmsgenerally fail in this application. This paper proposes to use the block sparse Bayesianlearning framework to compress/reconstruct non sparse raw FECG recordings. Experimentalresults show that the framework can reconstruct the raw recordings with high quality.Especially, the reconstruction does not destroy the interdependence relation among themultichannel recordings. This ensures that the independent component analysisdecomposition of the reconstructed recordings has high fidelity. Furthermore, the frameworkallows the use of a sparse binary sensing matrix with much fewer nonzero entriesto compress recordings. Particularly, each column of the matrix can contain only two nonzeroentries. This shows that the framework, compared to other algorithms such as current CSalgorithms and wavelet algorithms, can greatly reduce code execution in CPU in the datacompression stage.

Published in: Biomedical Engineering, IEEE Transactions on (Volume:60 , Issue: 2 )Date of Publication: Feb. 2013Index Terms—Fetal ECG (FECG), Tele monitoring, Telemedicine, Healthcare, Block SparseBayesian Learning (BSBL), Compressed Sensing (CS), Independent Component Analysis (ICA)
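
Illustrative sketch (Python; not the paper's code): the compression stage described above only needs a sparse binary sensing matrix with a fixed, small number of ones per column, so the measurement y = Phi x reduces to additions. The recovery stage (BSBL) is not shown; sizes and names are illustrative.

import numpy as np

def sparse_binary_sensing_matrix(m, n, nonzeros_per_column=2, seed=0):
    # Binary matrix with exactly `nonzeros_per_column` ones in each column.
    rng = np.random.default_rng(seed)
    phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=nonzeros_per_column, replace=False)
        phi[rows, j] = 1.0
    return phi

n, m = 512, 128                     # original and compressed lengths (illustrative)
x = np.random.randn(n)              # stand-in for one channel of raw FECG samples
phi = sparse_binary_sensing_matrix(m, n)
y = phi @ x                         # compressed measurements sent over the WBAN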

IEEE 2012 MATLAB PROJECTS ACADEMIC YEAR 2014-2015 FOR M.Tech/ B.E/B.Tech

VENTM12001 Image Forgery Localization via Fine-Grained Analysis of CFA Artifacts

Abstract: In this paper, a forensic tool able to discriminate between original and forged regions in an image captured by a digital camera is presented. We make the assumption that the image is acquired using a Color Filter Array and that tampering removes the artifacts due to the demosaicking algorithm. The proposed method is based on a new feature measuring the presence of demosaicking artifacts at a local level, and on a new statistical model that allows the tampering probability of each 2 × 2 image block to be derived without requiring a priori knowledge of the position of the forged region. Experimental results on different cameras equipped with different demosaicking algorithms demonstrate both the validity of the theoretical model and the effectiveness of our scheme.


Published in: Information Forensics and Security, IEEE Transactions on (Volume: 7, Issue: 5). Date of Publication: Oct. 2012
Index Terms—Image forensics, CFA artifacts, digital camera demosaicing, tampering probability map, forgery localization.
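
Illustrative sketch (Python; a simplified stand-in, not the paper's exact feature or statistical model): one way to expose demosaicking residue is to predict each pixel from its four neighbours (bilinear), square the prediction error, and pool it over 2 × 2 blocks; tampering tends to disturb the periodic pattern of this residual.

import numpy as np

def block_residual_feature(green, block=2):
    # Bilinear prediction error of the green channel, pooled over 2x2 blocks.
    g = green.astype(float)
    pred = np.zeros_like(g)
    pred[1:-1, 1:-1] = 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:])
    resid = (g - pred) ** 2
    h = (resid.shape[0] // block) * block
    w = (resid.shape[1] // block) * block
    resid = resid[:h, :w]
    return resid.reshape(h // block, block, w // block, block).mean(axis=(1, 3))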

VENTM12002 Bottom-Up Saliency Detection Model Based on Human Visual Sensitivity and Amplitude Spectrum

Abstract: With the wide application of saliency information in visual signal processing, many saliency detection methods have been proposed. However, some key characteristics of the human visual system (HVS) are still neglected in these saliency detection models. In this paper, we propose a new saliency detection model based on human visual sensitivity and the amplitude spectrum of the quaternion Fourier transform (QFT). We use the amplitude spectrum of the QFT to represent the color, intensity, and orientation distributions of image patches. The saliency value for each image patch is calculated not only from the differences between the QFT amplitude spectrum of this patch and other patches in the whole image, but also from the visual impact of these differences as determined by human visual sensitivity. Experimental results show that the proposed saliency detection model outperforms state-of-the-art detection models. In addition, we apply the proposed model to image retargeting and achieve better performance than conventional algorithms.

Published in: Multimedia, IEEE Transactions on (Volume: 14, Issue: 1). Date of Publication: Feb. 2012
Index Terms—Amplitude spectrum, Fourier transform, human visual sensitivity, saliency detection, visual attention.

VENTM12003 Monogenic Binary Coding: An Efficient Local Feature Extraction Approach to Face Recognition

Abstract: Local-feature-based face recognition (FR) methods, such as Gabor features encoded by local binary patterns, can achieve state-of-the-art FR results on large-scale face databases such as FERET and FRGC. However, the time and space complexity of the Gabor transformation are too high for many practical FR applications. In this paper, we propose a new and efficient local feature extraction scheme, namely monogenic binary coding (MBC), for face representation and recognition. Monogenic signal representation decomposes an original signal into three complementary components: amplitude, orientation, and phase. We encode the monogenic variation in each local region and the monogenic feature at each pixel, and then calculate the statistical features (e.g., histograms) of the extracted local features. The local statistical features extracted from the complementary monogenic components (i.e., amplitude, orientation, and phase) are then fused for effective FR. It is shown that the proposed MBC scheme has significantly lower time and space complexity than the Gabor-transformation-based local feature methods. Extensive FR experiments on four large-scale databases demonstrate the effectiveness of MBC, whose performance is competitive with, and even better than, state-of-the-art local-feature-based FR methods.


Published in: Information Forensics and Security, IEEE Transactions on (Volume: 7, Issue: 6); Biometrics Compendium, IEEE. Date of Publication: Dec. 2012
Index Terms—Face recognition, Gabor filtering, LBP, monogenic binary coding, monogenic signal analysis.

VENTM12004 A Joint Time-Invariant Filtering Approach to the Linear Gaussian Relay Problem

Abstract: In this paper, the linear Gaussian relay problem is considered. Under the linear time-invariant (LTI) model, the rate maximization problem in the linear Gaussian relay channel is formulated in the frequency domain based on the Toeplitz distribution theorem. Under the further assumption of realizable input spectra, the rate maximization problem is converted to the problem of joint source and relay filter design with two power constraints, one at the source and the other at the relay, and a practical solution to this problem is proposed based on the (adaptive) projected (sub)gradient method. Numerical results show that the proposed method yields a considerable gain over the instantaneous amplify-and-forward (AF) scheme in inter-symbol interference (ISI) channels. Also, the optimality of the AF scheme within the class of one-tap relay filters is established for flat-fading channels.

Published in: Signal Processing, IEEE Transactions on (Volume: 60, Issue: 8). Date of Publication: Aug. 2012
Index Terms—Filter design, linear Gaussian relay, linear time-invariant model, projected subgradient method, Toeplitz distribution theorem.

VENTM12005 Monotonic Regression: A New Way for Correlating Subjective and Objective Ratings in Image Quality Research

Abstract: To assess the performance of image quality metrics (IQMs), regressions such as logistic regression and polynomial regression are used to correlate objective ratings with subjective scores. However, these regressions exhibit defects in optimality. In this correspondence, monotonic regression (MR) is shown to be an effective correlation method for the performance assessment of IQMs. Both theoretical analysis and experimental results show that MR performs better than the other regressions. We believe that MR can be an effective tool for performance assessment in IQM research.

Published in: Image Processing, IEEE Transactions on (Volume: 21, Issue: 4). Date of Publication: April 2012
Index Terms—Image quality assessment, image quality metric (IQM), metric performance, monotonic regression (MR).
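
Illustrative sketch (Python; the paper's exact MR formulation may differ): monotonic (isotonic) regression fits the best monotonically non-decreasing mapping from objective metric outputs to subjective scores, after which standard criteria such as RMSE can be compared across metrics. Data values below are hypothetical.

import numpy as np
from sklearn.isotonic import IsotonicRegression

objective = np.array([0.62, 0.55, 0.71, 0.40, 0.83, 0.49])   # hypothetical IQM outputs
subjective = np.array([3.1, 2.8, 3.5, 2.0, 4.2, 2.3])        # hypothetical MOS values

mr = IsotonicRegression(increasing=True, out_of_bounds="clip")
fitted = mr.fit_transform(objective, subjective)              # monotone mapping of IQM scores

rmse = np.sqrt(np.mean((fitted - subjective) ** 2))           # criterion after the mapping
print(rmse)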


VENTM12006 An efficient leaf recognition algorithm for plant classification using support vector machine

Abstract: Recognition of plants has become an active area of research, as most plant species are at risk of extinction. This paper uses an efficient machine learning approach for the classification task. The proposed approach consists of three phases: preprocessing, feature extraction, and classification. The preprocessing phase involves typical image processing steps such as conversion to gray scale and boundary enhancement. The feature extraction phase derives the common DMFs from five fundamental features. The main contribution of this approach is the Support Vector Machine (SVM) classification for efficient leaf recognition: 12 extracted leaf features, orthogonalized into 5 principal variables, are given as the input vector to the SVM. Tested with the Flavia dataset and a real dataset and compared with a k-NN approach, the proposed approach produces very high accuracy and requires much less execution time.

Published in: Pattern Recognition, Informatics and Medical Engineering (PRIME), 2012 International Conference on. Date of Conference: 21-23 March 2012
Keywords—Digital Morphological Features (DMFs); Leaf Recognition; Support Vector Machine
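
Illustrative sketch (Python; feature extraction and data are placeholders, not the paper's): the described pipeline reduces 12 morphological features to 5 orthogonal principal variables and feeds them to an SVM.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: rows are leaves, columns are 12 extracted morphological features.
X = np.random.rand(200, 12)
y = np.random.randint(0, 5, size=200)   # 5 hypothetical plant species

# 12 features -> 5 orthogonal principal variables -> SVM classifier.
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:3]))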

VENTM12007 Image Signature: Highlighting Sparse Salient Regions

Abstract—We introduce a simple image descriptor referred to as the image signature. We show, within the theoretical framework of sparse signal mixing, that this quantity spatially approximates the foreground of an image. We experimentally investigate whether this approximate foreground overlaps with visually conspicuous image locations by developing a saliency algorithm based on the image signature. This saliency algorithm predicts human fixation points best among competitors on the Bruce and Tsotsos [1] benchmark data set and does so in much shorter running time. In a related experiment, we demonstrate with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise methods, or GIST [2] descriptor methods.

Published in: Pattern Analysis and Machine Intelligence, IEEE Transactions on (Volume: 34, Issue: 1). Date of Publication: Jan. 2012
Index Terms—Saliency, visual attention, change blindness, sign function, sparse signal analysis.
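
Illustrative sketch (Python; single-channel simplification, not the authors' code): the image signature keeps only the sign of the DCT coefficients; reconstructing from the sign, squaring, and smoothing gives a saliency map. The paper applies the idea per color channel at a reduced resolution.

import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def image_signature_saliency(gray, sigma=5):
    # Sign of the 2-D DCT, inverse transform, square, smooth.
    signature = np.sign(dctn(gray.astype(float), norm="ortho"))
    recon = idctn(signature, norm="ortho")
    return gaussian_filter(recon ** 2, sigma)

gray = np.random.rand(64, 64)   # stand-in for a downsampled grayscale image
saliency = image_signature_saliency(gray)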

VENTM12008 An Efficient Algorithm for Level Set Method Preserving Distance Function

Abstract: The level set method is a popular technique for tracking moving interfaces in several disciplines, including computer vision and fluid dynamics. However, despite its high flexibility, the original level set method is limited by two important numerical issues. First, the level set method does not implicitly preserve the level set function as a distance function, which is necessary to accurately estimate geometric features such as the curvature or the contour normal. Second,
the level set algorithm is slow because the time step is limited by the standard Courant-Friedrichs-Lewy (CFL) condition, which is also essential to the numerical stability of the iterative scheme. Recent advances with graph cut methods and continuous convex relaxation methods provide powerful alternatives to the level set method for image processing problems because they are fast, accurate, and guaranteed to find the global minimizer independently of the initialization. These recent techniques use binary functions to represent the contour rather than the distance functions usually considered for the level set method. However, the binary function cannot provide distance information, which can be essential for some applications, such as the surface reconstruction problem from scattered points and the cortex segmentation problem in medical imaging. In this paper, we propose a fast algorithm to preserve distance functions in level set methods. Our algorithm is inspired by recent efficient l1 optimization techniques, which provide an efficient and easy-to-implement algorithm. It is interesting to note that our algorithm is not limited by the CFL condition and it naturally preserves the level set function as a distance function during the evolution, which avoids the classical re-distancing problem in level set methods. We apply the proposed algorithm to image segmentation, where our method proves to be 5-6 times faster than standard distance-preserving level set techniques. We also present two applications where preserving a distance function is essential. Nonetheless, our method remains generic and can be applied to any level set method that requires distance information.

Published in: Image Processing, IEEE Transactions on (Volume: 21, Issue: 12). Date of Publication: Dec. 2012
Index Terms—Image segmentation, level set, numerical scheme, signed distance function, splitting, surface reconstruction

VENTM12009 Structure Extraction from Texture via Relative Total Variation

Abstract: It is ubiquitous that meaningful structures are formed by, or appear over, textured surfaces. Extracting them under the complication of texture patterns, which can be regular, near-regular, or irregular, is very challenging but of great practical importance. We propose new inherent variation and relative total variation measures, which capture the essential difference between these two types of visual forms, and develop an efficient optimization system to extract main structures. The new variation measures are validated on millions of sample patches. Our approach finds a number of new applications for manipulating, rendering, and reusing the immense number of “structure with texture” images and drawings that were traditionally difficult to edit properly.
Keywords: texture, structure, smoothing, total variation, relative total variation, inherent variation, prior, regularized optimization. 2012

VENTM12010 Quick Detection of Brain Tumors and Edemas: A Bounding Box Method Using Symmetry

Abstract: A significant medical informatics task is indexing patient databases according to the size, location, and other characteristics of brain tumors and edemas, possibly based on magnetic resonance (MR)
imagery. This requires segmenting tumors and edemas within images from different MR modalities. To date, automated brain tumor or edema segmentation from MR modalities remains a challenging as well as computationally intensive task. In this paper, we propose a novel automated, fast, and approximate segmentation technique. The input is a patient study consisting of a set of MR slices, and the output is a corresponding set of slices in which the tumors are circumscribed by axis-parallel bounding boxes. The proposed approach is based on an unsupervised change detection method that searches for the most dissimilar region (axis-parallel bounding box) between the left and the right halves of the brain in an axial-view MR slice. This change detection process uses a novel score function based on the Bhattacharyya coefficient computed from gray-level intensity histograms. We prove that this score function admits a very fast (linear in image height and width) search to locate the bounding box. The average Dice coefficients for localizing brain tumors and edemas, over ten patient studies, are 0.57 and 0.52, respectively, which significantly exceed the scores for two other competitive region-based bounding box techniques.

Index Terms—MR image segmentation, Bhattacharyya coefficient, brain tumor, edema.
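
Illustrative sketch (Python; the fast bounding-box search of the paper is not reproduced): the underlying asymmetry score compares the gray-level histogram of the left half of a slice with that of the mirrored right half via the Bhattacharyya coefficient.

import numpy as np

def bhattacharyya_coefficient(p, q):
    # BC between two normalized histograms; 1 = identical, smaller = more dissimilar.
    return np.sum(np.sqrt(p * q))

def halves_dissimilarity(slice2d, bins=64):
    # Compare the left half with the mirrored right half of an axial slice.
    h, w = slice2d.shape
    left = slice2d[:, : w // 2]
    right = np.fliplr(slice2d[:, w - w // 2 :])
    rng = (slice2d.min(), slice2d.max())
    p, _ = np.histogram(left, bins=bins, range=rng)
    q, _ = np.histogram(right, bins=bins, range=rng)
    p, q = p / p.sum(), q / q.sum()
    return 1.0 - bhattacharyya_coefficient(p, q)   # large value suggests asymmetry (tumor/edema)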

VENTM12011 Efficient Misalignment-Robust Representation for Real-Time Face Recognition

Abstract: Sparse representation techniques for robust face recognition have been widely studied in the past several years. Recently, face recognition with simultaneous misalignment, occlusion, and other variations has achieved interesting results via robust alignment by sparse representation (RASR). In RASR, the best alignment of a testing sample is sought subject by subject in the database. However, such an exhaustive search strategy can make the time complexity of RASR prohibitive in large-scale face databases. In this paper, we propose a novel scheme, namely misalignment-robust representation (MRR), which represents the misaligned testing sample in the transformed face space spanned by all subjects. MRR seeks the best alignment via a two-step optimization with a coarse-to-fine search strategy, which needs only two deformation-recovery operations. Extensive experiments on representative face databases show that MRR has almost the same accuracy as RASR in various face recognition and verification tasks but runs tens to hundreds of times faster than RASR. The running time of MRR is less than 1 second on the large-scale Multi-PIE face database, demonstrating its great potential for real-time face recognition.

VENTM12012 Multi-User Diversity vs. Accurate Channel State Information in MIMO Downlink Channels

Abstract: In a multiple transmit antenna, single antenna per receiver downlink channel with limited channel state feedback, we consider the following question: given a constraint on the total system-wide feedback load, is it preferable to get low-rate/coarse channel feedback from a large number of receivers or high-rate/high-quality feedback from a smaller number of receivers? Acquiring feedback from many receivers allows multi-user diversity to be exploited, while high-rate feedback allows for very precise
selection of beamforming directions. We show that there is a strong preference for obtaining high-quality feedback, and that obtaining near-perfect channel information from as many receivers as possible provides a significantly larger sum rate than collecting a few feedback bits from a large number of users. In terms of system design, this corresponds to a preference for acquiring high-quality feedback from a few users on each time-frequency resource block, as opposed to coarse feedback from many users on each block.

Published in: Wireless Communications, IEEE Transactions on (Volume: 11, Issue: 9). Date of Publication: September 2012
Index Terms—MIMO downlink channels, MU-MIMO communication, multi-user diversity

VENTM12013 Joint Estimation of Channel and Oscillator Phase Noise in MIMO Systems

Abstract: Oscillator phase noise limits the performance of high-speed communication systems, since it results in time-varying channels and rotation of the signal constellation from symbol to symbol. In this paper, joint estimation of channel gains and Wiener phase noise in multi-input multi-output (MIMO) systems is analyzed. The signal model for the estimation problem is outlined in detail, and new expressions for the Cramér-Rao lower bounds (CRLBs) for the multi-parameter estimation problem are derived. A data-aided least-squares (LS) estimator for jointly obtaining the channel gains and phase noise parameters is derived. Next, a decision-directed weighted least-squares (WLS) estimator is proposed, where pilots and estimated data symbols are employed to track the time-varying phase noise parameters over a frame. In order to reduce the overhead and delay associated with the estimation process, a new decision-directed extended Kalman filter (EKF) is proposed for tracking the MIMO phase noise throughout a frame. Numerical results show that the performances of the proposed LS, WLS, and EKF estimators are close to the CRLB. Finally, simulation results demonstrate that by employing the proposed channel and time-varying phase noise estimators, the bit-error rate performance of a MIMO system can be significantly improved.

Published in: Signal Processing, IEEE Transactions on (Volume: 60, Issue: 9). Date of Publication: Sept. 2012
Index Terms—Channel estimation, Cramér-Rao lower bound (CRLB), extended Kalman filter (EKF), multi-input multi-output (MIMO), weighted least squares (WLS), Wiener phase noise.

IEEE 2011 MATLAB PROJECTS ACADEMIC YEAR 2014-2015 FOR M.Tech/ B.E/B.Tech

VENTM11001 An Augmented Lagrangian Method for Total Variation Video Restoration

Abstract: This paper presents a fast algorithm for restoring video sequences. The proposed algorithm, as opposed to existing methods, does not consider video restoration as a sequence of image restoration problems. Rather, it treats a video sequence as a space-time volume and poses a space-time total variation regularization to enhance the smoothness of the solution. The optimization problem is solved by transforming the original unconstrained minimization problem into an equivalent constrained minimization problem. An augmented Lagrangian method is used to handle the constraints, and an
alternating direction method is used to iteratively find solutions to the subproblems. The proposed algorithm has a wide range of applications, including video deblurring and denoising, video disparity refinement, and hot-air turbulence effect reduction.

Published in: Image Processing, IEEE Transactions on (Volume: 20, Issue: 11). Date of Publication: Nov. 2011
Index Terms—Alternating direction method (ADM), augmented Lagrangian, hot-air turbulence, total variation (TV), video deblurring, video disparity, video restoration

VENTM11002 On Optimal Power Control for Delay-Constrained Communication Over Fading Channels

Abstract: In this paper, the problem of optimal power control for delay-constrained communication over fading channels is studied. The objective is to find a power control law that optimizes the link-layer performance, specifically, that minimizes the delay-bound violation probability (or, equivalently, the packet drop probability), subject to constraints on average power, arrival rate, and delay bound. The transmission buffer size is assumed to be finite; hence, when the buffer is full, packets are dropped. The fading channel under study has a continuous state, e.g., Rayleigh fading. Since directly solving the power control problem (which optimizes the link-layer performance) is particularly challenging, the problem is decomposed into three subproblems, which are solved iteratively; the resulting scheme, called joint queue-length-aware (JQLA) power control, produces a locally optimal solution to the three subproblems. It is proved that the solution that simultaneously solves the three subproblems is also an optimal solution to the optimal power control problem. Simulation results show that the JQLA scheme achieves superior performance over time-domain water-filling and truncated channel inversion power control.

Published in: Information Theory, IEEE Transactions on (Volume: 57, Issue: 6). Date of Publication: June 2011
Index Terms—Delay-constrained communication, power control, queuing analysis, delay-bound violation probability, packet drop probability.

VENTM11003 A Level Set Method for Image Segmentation in the Presence of Intensity Inhomogeneities With Application to MRI

Abstract: Intensity inhomogeneity often occurs in real-world images and presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest; they therefore often fail to provide accurate segmentation results due to intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of
the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster, and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.

Published in: Image Processing, IEEE Transactions on (Volume: 20, Issue: 7). Date of Publication: July 2011
Index Terms—Bias correction, MRI, image segmentation, intensity inhomogeneity, level set

VENTM11004 Hybrid DE algorithm with adaptive crossover operator for solving real-world numerical optimization problems

Abstract—In this paper, the results of a hybrid differential evolution algorithm on the CEC 2011 Competition on testing evolutionary algorithms on real-world optimization problems are presented. The proposal uses a local search routine to improve convergence and an adaptive crossover operator. According to the obtained results, the algorithm is able to find solutions competitive with reported results.

Index Terms—Differential Evolution algorithm, parameter selection, CEC competition.
Published in: Evolutionary Computation (CEC), 2011 IEEE Congress on, June 2011
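
Illustrative sketch (Python; a generic DE/rand/1/bin with a simple self-adapting crossover rate, not the authors' hybrid, and without the local-search phase they add):

import numpy as np

def de_adaptive_cr(f, bounds, pop_size=30, gens=200, F=0.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cr = rng.uniform(0.1, 0.9, size=pop_size)            # per-individual crossover rates
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = np.delete(np.arange(pop_size), i)
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            if rng.random() < 0.1:                        # occasionally re-sample CR_i (adaptation)
                cr[i] = rng.uniform(0.1, 0.9)
            mutant = np.clip(a + F * (b - c), lo, hi)     # DE/rand/1 mutation
            mask = rng.random(dim) < cr[i]                # binomial crossover
            mask[rng.integers(dim)] = True                # at least one gene from the mutant
            trial = np.where(mask, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

best_x, best_f = de_adaptive_cr(lambda x: np.sum(x ** 2), bounds=[(-5, 5)] * 10)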

VENTM11005 An Improved Algorithm for Blind Reverberation Time Estimation

Abstract—An improved algorithm for the estimation of the reverberation time (RT) from reverberant speech signals is presented. This blind estimation of the RT is based on a simple statistical model for the sound decay, such that the RT can be estimated by means of a maximum-likelihood (ML) estimator. The proposed algorithm has a significantly lower computational complexity than previous ML-based algorithms for RT estimation. This is achieved by a downsampling operation and a simple pre-selection of possible sound decays. The new algorithm is more suitable for tracking time-varying RTs than related approaches. In addition, it can also estimate the RT in the presence of (moderate) background noise. The proposed algorithm can be employed to measure the RT of rooms from sound recordings without
using a dedicated measurement setup. Another possible application is its use within speech dereverberation systems for hands-free devices or digital hearing aids.

Index Terms—reverberation time, blind estimation, low complexity, speech dereverberation

IEEE 2010 MATLAB PROJECTS ACADEMIC YEAR 2014-2015 FOR M.Tech/ B.E/B.Tech

VENTM10001 Distance Regularized Level Set Evolution and Its Application to Image Segmentation

Abstract—Level set methods have been widely used in image processing and computer vision. In conventional level set formulations, the level set function typically develops irregularities during its evolution, which may cause numerical errors and eventually destroy the stability of the evolution. Therefore, a numerical remedy, called reinitialization, is typically applied to periodically replace the degraded level set function with a signed distance function. However, the practice of reinitialization not only raises serious problems as to when and how it should be performed, but also affects numerical accuracy in an undesirable way. This paper proposes a new variational level set formulation in which the regularity of the level set function is intrinsically maintained during the level set evolution. The level set evolution is derived as the gradient flow that minimizes an energy functional with a distance regularization term and an external energy that drives the motion of the zero level set toward desired locations. The distance regularization term is defined with a potential function such that the derived level set evolution has a unique forward-and-backward (FAB) diffusion effect, which is able to maintain a desired shape of the level set function, particularly a signed distance profile near the zero level set. This yields a new type of level set evolution called distance regularized level set evolution (DRLSE). The distance regularization effect eliminates the need for reinitialization and thereby avoids its induced numerical errors. In contrast to complicated implementations of conventional level set formulations, a simpler and more efficient finite difference scheme can be used to implement the DRLSE formulation. DRLSE also allows the use of more general and efficient initialization of the level set function. In its numerical implementation, relatively large time steps can be used in the finite difference scheme to reduce the number of iterations, while ensuring sufficient numerical accuracy. To demonstrate the effectiveness of the DRLSE formulation, we apply it to an edge-based active contour model for image segmentation, and provide a simple narrowband implementation to greatly reduce the computational cost.

Published in: IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 12, DECEMBER 2010
Index Terms—Forward and backward diffusion, image segmentation, level set method, narrowband, reinitialization.

VENTM10002 Demonstration of Real-Time Spectrum Sensing for Cognitive Radio

Abstract: The requirement for real-time processing poses challenges for implementing spectrum sensing algorithms. The trade-off between the complexity and the effectiveness of spectrum sensing algorithms should be taken into consideration. In this paper, a fast Fourier
transform based spectrum sensing algorithm, whose decision variable is independent of the noise level, is introduced. A small form factor software defined radio development platform is employed to implement a spectrum sensing receiver with the proposed algorithm. To the best of our knowledge, this is the first time that real-time spectrum sensing on a hardware platform with controllable primary-user devices has been demonstrated.

Published in: Communications Letters, IEEE (Volume: 14, Issue: 10). Date of Publication: October 2010
Index Terms: Cognitive radio, demonstration, real-time, spectrum sensing
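
Illustrative sketch (Python; not necessarily the paper's exact statistic): one FFT-based decision variable that does not depend on the absolute noise floor is the ratio of the strongest FFT-bin power to the average bin power, since scaling the noise cancels in the ratio.

import numpy as np

def fft_sense(samples, nfft=1024, threshold=12.0):
    # Ratio of peak bin power to mean bin power of the received samples.
    spectrum = np.abs(np.fft.fft(samples, nfft)) ** 2
    stat = spectrum.max() / spectrum.mean()
    return stat > threshold, stat

rng = np.random.default_rng(1)
n = 4096
noise = rng.normal(size=n) + 1j * rng.normal(size=n)
tone = 0.5 * np.exp(2j * np.pi * 0.21 * np.arange(n))     # weak narrowband primary user
print(fft_sense(noise))          # statistic typically stays small (channel declared free)
print(fft_sense(noise + tone))   # statistic becomes large (primary user detected)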

VENTM10003 Retinal Vessel Extraction by Matched Filter with First-Order Derivative of Gaussian

Abstract: Accurate extraction of retinal blood vessels is an important task in computer-aided diagnosis of retinopathy. The matched filter (MF) is a simple yet effective method for vessel extraction. However, an MF responds not only to vessels but also to non-vessel edges, which leads to frequent false vessel detection. In this paper, we propose a novel extension of the MF approach, namely the MF-FDOG, to detect retinal blood vessels. The proposed MF-FDOG is composed of the original MF, which is a zero-mean Gaussian function, and the first-order derivative of Gaussian (FDOG). The vessels are detected by thresholding the retinal image's response to the MF, while the threshold is adjusted by the image's response to the FDOG. The proposed MF-FDOG method is very simple; however, it significantly reduces the false detections produced by the original MF and detects many fine vessels that are missed by the MF. It achieves competitive vessel detection results compared with state-of-the-art schemes but with much lower complexity. In addition, it performs well at extracting vessels from pathological retinal images.

Keywords: retinal image segmentation; vessel detection; matched filter; line detection. 2010
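
Illustrative sketch (Python; a loose 1-D stand-in, not the paper's oriented 2-D kernels or exact threshold rule): the two profiles behind MF-FDOG are a zero-mean Gaussian matched filter and the first-order derivative of Gaussian, and the MF threshold is raised where the local FDOG response is strong (edge-like rather than vessel-like).

import numpy as np
from scipy.ndimage import convolve1d, uniform_filter1d

def mf_fdog_profiles(sigma=1.5, half_width=6):
    x = np.arange(-half_width, half_width + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    mf = -(g - g.mean())                  # zero-mean (inverted) Gaussian matched filter
    fdog = -x / sigma ** 2 * g            # first-order derivative of Gaussian
    return mf, fdog

def vessel_mask_1d(profile, c=2.0):
    mf, fdog = mf_fdog_profiles()
    r_mf = convolve1d(profile, mf, mode="nearest")
    r_fd = uniform_filter1d(np.abs(convolve1d(profile, fdog, mode="nearest")), size=9)
    # Higher local FDOG response -> higher threshold on the MF response.
    threshold = c * r_mf.std() * (1.0 + r_fd / (r_fd.max() + 1e-12))
    return r_mf > threshold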

VENTM10004 Accurate Computation of the MGF of the Lognormal Distribution and its Application to Sum of Lognormals

Abstract: Sums of lognormal random variables (RVs) are of wide interest in wireless communications and other areas of science and engineering. Since the distribution of lognormal sums is not lognormal and does not have a closed-form analytical expression, many approximations and bounds have been developed. This paper develops two computational methods for the moment generating function (MGF) or the characteristic function (CHF) of a single lognormal RV. The first method uses classical complex integration techniques based on steepest-descent integration. The saddle point of the integrand is explicitly expressed by the Lambert function. The steepest-descent (optimal) contour and two closely related closed-form
contours are derived. A simple integration rule (e.g., the midpoint rule) along any of these contours computes the MGF/CHF with high accuracy. The second approach uses a variation on the trapezoidal rule due to Ooura and Mori. Importantly, the cumulative distribution function of lognormal sums is derived as an alternating series, and convergence acceleration via the Epsilon algorithm is used to reduce, in some cases, the computational load by a factor of 10^6. Overall, accuracy levels of 13 to 15 significant digits are readily achievable.

Published in: Communications, IEEE Transactions on (Volume: 58, Issue: 5). Date of Publication: May 2010
Index Terms—Sum of lognormals, moment-generating function, characteristic function.

IEEE 2009 AND EARLIER MATLAB PROJECTS ACADEMIC YEAR 2014-2015 FOR M.Tech/ B.E/B.Tech

VENTMXX001 Canny Edge Detection Enhancement by Scale Multiplication

Abstract: The technique of scale multiplication is analyzed in the framework of Canny edge detection. A scale multiplication function is defined as the product of the responses of the detection filter at two scales. Edge maps are constructed as the local maxima by thresholding the scale multiplication results. The detection and localization criteria of the scale multiplication are derived. At a small loss in the detection criterion, the localization criterion can be much improved by scale multiplication. The product of the two criteria for scale multiplication is greater than that for a single scale, which leads to better edge detection performance. Experimental results are presented.

Published in: Pattern Analysis and Machine Intelligence, IEEE Transactions on (Volume: 27, Issue: 9). Date of Publication: Sept. 2005
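
Illustrative sketch (Python; the non-maximum suppression and hysteresis steps of full Canny are omitted): the core idea is to multiply the gradient-magnitude responses at two scales pointwise before thresholding.

import numpy as np
from scipy.ndimage import gaussian_filter

def scale_multiplied_edges(image, s1=1.0, s2=2.0, k=2.0):
    img = image.astype(float)
    def grad_mag(sigma):
        gx = gaussian_filter(img, sigma, order=(0, 1))   # d/dx of the Gaussian-smoothed image
        gy = gaussian_filter(img, sigma, order=(1, 0))   # d/dy
        return np.hypot(gx, gy)
    product = grad_mag(s1) * grad_mag(s2)                # scale multiplication
    return product > k * product.mean()                  # simple threshold on the product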

VENTMXX002 Performance analysis of channel estimation and adaptive equalization in slow fading channel

Abstract: In our project, we first build a wireless communication simulator including Gray coding, modulation, different channel models (AWGN, flat fading, and frequency-selective fading channels), channel estimation, adaptive equalization, and demodulation. Next, we test the effect of the different channel models on data and images at the receiver with constellation and BER (bit error rate) plots under QPSK modulation. For the image data source, we also compare the received image quality to the original image in the different channels. Finally, we give detailed results and analyses of the performance improvement achieved with channel estimation and adaptive equalization in a slow Rayleigh fading channel. For the frequency-selective fading channel, we use linear equalization with both LMS (least mean squares) and RLS (recursive least squares) algorithms to compare the different improvements. We observe that in the AWGN channel the image is degraded by random noise; in the flat fading channel the image is degraded by random noise and block noise; and in the frequency-selective fading channel the image is degraded by random noise, block noise, and ISI.


Keywords: Slow fading, flat fading, frequency selective fading, channel estimation, LMS, RLS. 2007
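
Illustrative sketch (Python; a generic training-based LMS linear equalizer, not the project's full simulator, and the RLS variant is not shown):

import numpy as np

def lms_equalizer(received, training, num_taps=11, mu=0.01):
    # LMS update for a linear FIR equalizer: w <- w + mu * x * conj(e),
    # with output y = w^H x and error e = d - y against known training symbols.
    w = np.zeros(num_taps, dtype=complex)
    buf = np.zeros(num_taps, dtype=complex)
    errors = []
    for n, d in enumerate(training):
        buf = np.roll(buf, 1)
        buf[0] = received[n]
        y = np.dot(w.conj(), buf)
        e = d - y
        w = w + mu * np.conj(e) * buf
        errors.append(abs(e) ** 2)
    return w, np.array(errors)

# After training, the returned taps w are applied to the remaining received samples.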

VENTMXX003 ROBUST OBJECT TRACKING USING JOINT COLOR-TEXTURE HISTOGRAM

Abstract: A novel object tracking algorithm is presented in this paper that uses a joint color-texture histogram to represent the target and then applies it in the mean shift framework. In addition to the conventional color histogram features, the texture features of the object are extracted using the local binary pattern (LBP) technique to represent the object. The major uniform LBP patterns are exploited to form a mask for joint color-texture feature selection. Compared with traditional color-histogram-based algorithms that use the whole target region for tracking, the proposed algorithm effectively extracts the edge and corner features in the target region, which characterize the target better and represent it more robustly. The experimental results validate that the proposed method greatly improves tracking accuracy and efficiency, with fewer mean shift iterations than standard mean shift tracking. It can robustly track the target in complex scenes, such as those with similar target and background appearance, on which traditional color-based schemes may fail.

Keywords: Object tracking; mean shift; local binary pattern; color histogram. 2000
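
Illustrative sketch (Python; only the LBP building block, not the mean shift tracker): the basic 8-neighbour LBP code and a test for "uniform" patterns (at most two 0/1 transitions around the circle), which the method uses to mask the joint color-texture histogram.

import numpy as np

def lbp_codes(gray):
    # Compare each pixel with its 8 neighbours and pack the comparison bits.
    g = gray.astype(float)
    c = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy : g.shape[0] - 1 + dy, 1 + dx : g.shape[1] - 1 + dx]
        code |= (neighbour >= c).astype(int) << bit
    return code

def is_uniform(code):
    # Uniform pattern: at most two 0/1 transitions in the circular bit string.
    bits = (code[..., None] >> np.arange(8)) & 1
    wrapped = np.concatenate([bits, bits[..., :1]], axis=-1)
    transitions = np.abs(np.diff(wrapped, axis=-1)).sum(-1)
    return transitions <= 2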

VENTMXX004 Efficient Encoding of Low-Density Parity-Check Codes

Abstract—Low-density parity-check (LDPC) codes can be considered serious competitors to turbo codes in terms of performance and complexity, and they are based on a similar philosophy: constrained random code ensembles and iterative decoding algorithms. In this paper, we consider the encoding problem for LDPC codes and, more generally, for codes specified by sparse parity-check matrices. We show how to exploit the sparseness of the parity-check matrix to obtain efficient encoders. For the (3, 6)-regular LDPC code, for example, the complexity of encoding is essentially quadratic in the block length. However, we show that the associated coefficient can be made quite small, so that encoding codes even of length 100 000 is still quite practical. More importantly, we show that “optimized” codes actually admit linear-time encoding.

Published in: Information Theory, IEEE Transactions on (Volume: 47, Issue: 2). Date of Publication: Feb 2001
Index Terms—Binary erasure channel, decoding, encoding, parity check, random graphs, sparse matrices, turbo codes.

VENTMXX005 ML Estimation of Time and Frequency Offset in OFDM Systems

Abstract: We present the joint maximum likelihood (ML) symbol-time and carrier-frequency offset estimator for orthogonal frequency-division multiplexing (OFDM) systems. Redundant information contained within the cyclic prefix enables this estimation without additional pilots.
Simulations show that the frequency estimator may be used in a tracking mode and the time estimator in an acquisition mode.

Published in: Signal Processing, IEEE Transactions on (Volume: 45, Issue: 7). Date of Publication: Jul 1997

Index Terms: OFDM systems, acquisition mode, carrier-frequency offset estimator, cyclic prefix, maximum likelihood, orthogonal frequency-division multiplexing, redundant information, symbol-time estimator, time offset, tracking mode
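
Illustrative sketch (Python; a cyclic-prefix correlation metric in the spirit of this estimator, with illustrative parameter names): inside the cyclic prefix, samples that are nfft apart are identical up to the frequency-offset phase, so the peak of the metric gives the symbol start and the phase of the correlation gives the fractional carrier-frequency offset.

import numpy as np

def cp_ml_offset(r, nfft, cp_len, rho=0.9):
    best_m, best_val, best_gamma = 0, -np.inf, 0j
    for m in range(len(r) - nfft - cp_len):
        seg1 = r[m : m + cp_len]
        seg2 = r[m + nfft : m + nfft + cp_len]
        gamma = np.vdot(seg2, seg1)                      # sum of r[k] * conj(r[k + nfft])
        energy = 0.5 * (np.sum(np.abs(seg1) ** 2) + np.sum(np.abs(seg2) ** 2))
        val = np.abs(gamma) - rho * energy               # CP correlation metric
        if val > best_val:
            best_m, best_val, best_gamma = m, val, gamma
    cfo = -np.angle(best_gamma) / (2 * np.pi)            # fractional frequency offset estimate
    return best_m, cfo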


VENTMXX007 PARAFAC-Based Blind Estimation Of Possibly Underdetermined Convolutive MIMO Systems

Abstract: In this paper, we consider the problem of blind identification of a convolutive multiple-input multiple-output (MIMO) system with No outputs and Ni inputs. While many methods have been proposed to blindly identify convolutive MIMO systems with No ≥ Ni (overdetermined), very scarce results exist for the underdetermined case, all of which refer to systems that either have some special structure or special values. In this paper, we show that, as long as , independent of whether the system is overdetermined or underdetermined, we can always find the appropriate order of statistics that guarantees identifiability of the system response within trivial ambiguities. We also propose an algorithm to reach the solution, which consists of a parallel factorization (PARAFAC) of a tensor containing higher-order statistics of the system outputs, followed by an iterative scheme. For a certain order of statistics , we provide a description of the class of identifiable MIMO systems. We also show that this class can be expanded by applying the PARAFAC decomposition to a pair of tensors instead of
one tensor. The proposed approach constitutes a novel scheme for the estimation of underdetermined systems and improves over existing approaches for overdetermined systems.

Published in: Signal Processing, IEEE Transactions on (Volume: 56, Issue: 1). Date of Publication: Jan. 2008
Keywords: Blind multiple-input-multiple-output (MIMO), MIMO identification, PARAFAC, higher order statistics, underdetermined MIMO

VENTMXX008 Minimization of Region-Scalable Fitting Energy for Image Segmentation

Abstract—Intensity inhomogeneities often occur in real-world images and may cause considerable difficulties in image segmentation. In order to overcome the difficulties caused by intensity inhomogeneities, we propose a region-based active contour model that draws upon intensity information in local regions at a controllable scale. A data fitting energy is defined in terms of a contour and two fitting functions that locally approximate the image intensities on the two sides of the contour. This energy is then incorporated into a variational level set formulation with a level set regularization term, from which a curve evolution equation is derived for energy minimization. Due to a kernel function in the data fitting term, intensity information in local regions is extracted to guide the motion of the contour, which thereby enables our model to cope with intensity inhomogeneity. In addition, the regularity of the level set function is intrinsically preserved by the level set regularization term to ensure accurate computation and avoid expensive reinitialization of the evolving level set function. Experimental results for synthetic and real images show desirable performance of our method.
Index Terms—Image segmentation, intensity inhomogeneity, level set method, region-scalable fitting energy, variational method.
Published in: IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 10, OCTOBER 2008

VENTMXX009 Fast and accurate sequential floating forward feature selection with the Bayes classifier applied to speech emotion recognition

Abstract: This paper addresses subset feature selection performed by the sequential floating forward selection (SFFS) algorithm. The criterion employed in SFFS is the correct classification rate of the Bayes classifier, assuming that the features obey the multivariate Gaussian distribution. A theoretical analysis that models the number of correctly classified utterances as a hypergeometric random variable enables the derivation of an accurate estimate of the variance of the correct classification rate during cross-validation. By employing such a variance estimate, we propose a fast SFFS variant. Experimental findings on the Danish Emotional Speech (DES) and Speech Under Simulated and Actual Stress (SUSAS) databases demonstrate that the SFFS computational time is reduced by 50% and that the correct classification rate for classifying speech into emotional states for the selected subset of features varies less than the correct classification rate found by the standard SFFS. Although the proposed SFFS variant is tested in the framework of speech emotion recognition, the theoretical results are valid for any classifier in the context of any wrapper algorithm.
Key words: Bayes classifier, cross-validation, variance of the correct classification rate of the Bayes classifier, feature selection, wrappers. 2008


VENTMXX010 Sum Power Iterative Water-Filling for Multi-Antenna Gaussian Broadcast Channels

Abstract—In this correspondence, we consider the problem of maximizing the sum rate of a multiple-antenna Gaussian broadcast channel (BC). It was recently found that dirty-paper coding is capacity achieving for this channel. In order to achieve capacity, the optimal transmission policy (i.e., the optimal transmit covariance structure) given the channel conditions and power constraint must be found. However, obtaining the optimal transmission policy when employing dirty-paper coding is a computationally complex nonconvex problem. We use duality to transform this problem into a well-structured convex multiple-access channel (MAC) problem. We exploit the structure of this problem and derive simple and fast iterative algorithms that provide the optimum transmission policies for the MAC, which can easily be mapped to the optimal BC policies.
Index Terms—Broadcast channel, dirty-paper coding, duality, multiple-access channel (MAC), multiple-input multiple-output (MIMO) systems.
Published in: IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 4, APRIL 2005
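
Illustrative sketch (Python; only the classic single-link water-filling step that the sum-power iterative algorithm builds on, not the full MAC covariance iteration or the MAC-to-BC mapping): power p_i = max(mu - 1/g_i, 0) is allocated across parallel channels with noise-normalized gains g_i, with the water level mu chosen so that the powers sum to the total budget.

import numpy as np

def water_filling(gains, total_power):
    inv = 1.0 / np.asarray(gains, dtype=float)   # inverse gains, i.e. the "floor" heights
    inv_sorted = np.sort(inv)
    k = len(inv_sorted)
    while k > 0:
        mu = (total_power + inv_sorted[:k].sum()) / k
        if mu > inv_sorted[k - 1]:               # all k strongest channels get positive power
            break
        k -= 1
    return np.maximum(mu - inv, 0.0)

print(water_filling([2.0, 1.0, 0.1], total_power=1.0))   # e.g. [0.75, 0.25, 0.0]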

VENTMXX011 Symmetric Capacity of MIMO Downlink Channels

Abstract: This paper studies the symmetric capacity of the MIMO downlink channel, which is defined to be the maximum rate that can be allocated to every receiver in the system. The symmetric capacity represents absolute fairness and is an important metric for slowly fading channels in which users have symmetric rate demands. An efficient and provably convergent algorithm for computing the symmetric capacity is proposed, and it is shown that a simple modification of the algorithm can be used to compute the minimum power required to meet given downlink rate demands. In addition, the difference between the symmetric capacity and the sum capacity, termed the fairness penalty, is studied. Exact analytical results for the fairness penalty at high SNR are provided for the two-user downlink channel, and numerical results are given for channels with more users.

Published in: Information Theory, 2006 IEEE International Symposium on, July 2006
Index Terms: MIMO systems, channel capacity, fading channels, radio links