Research Article

A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control
Lin Li,1 Austin J. Brockmeier,2 John S. Choi,3 Joseph T. Francis,4 Justin C. Sanchez,5 and José C. Príncipe2
1 Philips Research North America, Briarcliff Manor, NY 10510, USA
2 Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
3 Joint Program in Biomedical Engineering, NYU Polytechnic School of Engineering and SUNY Downstate, Brooklyn, NY 11203, USA
4 Department of Physiology and Pharmacology, State University of New York Downstate Medical Center; Joint Program in Biomedical Engineering, NYU Polytechnic School of Engineering and SUNY Downstate; Robert F. Furchgott Center for Neural & Behavioral Science, Brooklyn, NY 11203, USA
5 Department of Biomedical Engineering, Department of Neuroscience, Miami Project to Cure Paralysis, University of Miami, Coral Gables, FL 33146, USA
Correspondence should be addressed to Lin Li; lin-li@philips.com

Received 22 August 2013; Revised 28 January 2014; Accepted 11 February 2014; Published 14 April 2014

Academic Editor: Zhe (Sage) Chen

Copyright © 2014 Lin Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings, including spike trains and local field potentials (LFPs), brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity in data type (spike timing versus continuous-amplitude signals) and spatiotemporal scale complicates the integration of multiscale neural activity into one model. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problems of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general-purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation.
1. Introduction
Brain machine interfaces (BMIs) provide new means to communicate with the brain by directly accessing, interpreting, and even controlling neural states. They have attracted attention as a promising technology to aid the disabled (i.e., spinal cord injury, movement disability, stroke, hearing loss, and blindness) [1–6]. When designing neural prosthetics and brain machine interfaces (BMIs), the fundamental steps involve quantifying the information contained in neural activity, modeling the neural system, decoding the intention of movement or stimulation, and controlling the spatiotemporal neural activity pattern to emulate natural stimulation. Furthermore, the complexity and distributed dynamic nature of the neural system pose challenges for the modeling tasks.
Hindawi Publishing Corporation, Computational Intelligence and Neuroscience, Volume 2014, Article ID 870160, 16 pages. http://dx.doi.org/10.1155/2014/870160

The development of recording technology enables access to brain activity from multiple functional levels, including the activity of individual neurons (spike trains), local field potentials (LFPs), electrocorticogram (ECoG), and electroencephalogram (EEG), collectively forming a multiscale characterization of brain state. Simultaneous recording of multiple types of signals could facilitate enhanced neural system modeling. Although there are underlying relationships among these brain activities, it is unknown how to leverage the heterogeneous set of signals to improve the identification of the neural-response-stimulus mappings. The challenge is in defining a framework that can incorporate these heterogeneous signal formats coming from multiple spatiotemporal scales. In our work, we mainly address integrating spike trains and LFPs for multiscale neural decoding.
Spike trains and LFPs encode complementary information about stimuli and behaviors [7, 8]. In most recordings, spike trains are obtained by detecting transient events on a signal that is conditioned using a high-pass filter with the cutoff frequency set at about 300–500 Hz, while LFPs are obtained by using a low-pass filter with a 300 Hz cutoff frequency [9]. Spike trains represent single- or multiunit neural activity with a fine temporal resolution. However, their stochastic properties induce considerable variability, especially when the stimulation amplitude is small; that is, the same stimuli rarely elicit the same firing patterns in repeated trials. In addition, a functional unit of the brain contains thousands of neurons. Only the activity of a small subset of neurons can be recorded, and of these, only a subset may modulate with respect to the stimuli or condition of interest.
In contrast, LFPs reflect the average synaptic input to a region near the electrode [10], which limits specificity but provides robustness for characterizing the modulation induced by stimuli. Furthermore, LFPs naturally provide a population-level measure of neural activity. Therefore, an appropriate aggregation of LFPs and spike trains enables enhanced accuracy and robustness of neural decoding models. For example, the decoder can coordinate LFPs or spike patterns to tag particularly salient events or extract different stimulus features characterized by multisource signals. However, heterogeneity between LFPs and spike trains complicates their integration into the same model. The information in a spike train is coded in a set of ordered spike timings [11, 12], while an LFP is a continuous-amplitude time series. Moreover, the time scale of LFPs is significantly longer than that of spike trains. Whereas recent work has compared the decoding accuracy of LFPs and spikes [13], only a small number of simple models have been developed to relate both activities [14]. However, the complete relationship between LFPs and spike trains is still a subject of controversy [15–17], which hinders principled modeling approaches.
To address these modeling issues, this paper proposes a signal processing framework based on tensor-product kernels to enable decoding and even controlling multiscale neural activities. The tensor-product kernel uses multiple heterogeneous signals and implicitly defines a kernel space constructed by the tensor product of individual kernels designed for each signal type [18]. The tensor-product kernel uses the joint features of spike trains and LFPs. This enables kernel-based machine learning methodologies to leverage multiscale neural activity to uncover the mapping between the neural system states and the corresponding stimuli.
The kernel least mean square (KLMS) algorithm is used to estimate the dynamic nonlinear mapping from the two types of neural responses to the stimuli. The KLMS algorithm exploits the fact that linear signal processing in a reproducing kernel Hilbert space (RKHS) corresponds to nonlinear processing in the input space, and it is used in the adaptive inverse control scheme [19] designed for controlling neural systems. Utilizing the tensor-product kernel, we naturally extend this scheme to multiscale neural activity. Since the nonlinear control is achieved via linear processing in the RKHS, it bypasses the local minimum issues normally encountered in nonlinear control.
The effectiveness of the proposed tensor-product-kernel framework is validated in a somatosensory stimulation study. Somatosensory feedback remains underdeveloped in BMIs, yet it is important for motor and sensory integration during movement execution, such as proprioceptive and tactile feedback about limb state during interaction with external objects [20, 21]. A number of early experiments have shown that spatiotemporally patterned microstimulation delivered to somatosensory cortex can be used to guide the direction of reaching movements [22–24]. In order to effectively apply artificial sensory feedback in BMIs, it is essential to find out how to use microstimulation to replicate the target spatiotemporal patterns in the somatosensory cortex, where neural decoding and control are the critical technologies to achieve this goal. In this paper, our framework is applied to leverage multiscale neural activity to decode both natural sensory stimulation and microstimulation. Its decoding accuracy is compared with decoders that use a single type of neural activity (LFPs or spike trains).
In the neural system control scenario, this tensor-product-kernel methodology can also improve controller performance. Controlling neural activity via stimulation has raised the prospect of generating specific neural activity patterns in downstream areas, even mimicking natural neural responses, which is central both to our basic understanding of neural information processing and to engineering "neural prosthetic" devices that can interact with the brain directly [25]. From a control theory perspective, the neural circuit is treated as the "plant," where the applied microstimulation is the control signal and the plant output is the elicited neural response, represented by spike trains and LFPs. Most conventional control schemes cannot be directly applied to spike trains because there is no algebraic structure in the space of spike trains. Therefore, most existing neural control approaches have been applied to binned spike trains or LFPs [25–31]. Here we utilize the kernel-based adaptive inverse controller for spike trains proposed in our previous work [19] as an input-output (system identification) based control scheme. This methodology can be directly extended to the tensor-product kernel to leverage the availability of multiscale neural signals (e.g., spike trains and LFPs), and it improves the robustness and accuracy of the stimulation optimization by exploiting the complementary information of the heterogeneous neural signals recorded from multiple sources.
The adaptive inverse control framework controls patterned electrical microstimulation in order to drive neural responses to mimic the spatiotemporal neural activity patterns induced by tactile stimulation. This framework creates new opportunities to improve the ability to control neural states to emulate natural stimuli by leveraging the complementary information from multiscale neural activities. This better interprets the neural system's internal states and thus enhances the robustness and accuracy of the optimal microstimulation pattern estimation.
The rest of the paper is organized as follows. Section 2 introduces kernels for spike trains and LFPs and the tensor-product kernel that combines them. The kernel-based decoding model and the adaptive inverse control scheme, which exploits kernel-based neural decoding technology to enable control in the RKHS, are introduced in Sections 3 and 4, respectively. Section 5 discusses the somatosensory stimulation emulation scenario and illustrates the test results of applying the tensor-product kernel to leverage multiscale neural activity for decoding and control tasks. Section 6 concludes the paper.
2. Tensor-Product Kernel for Multiscale Heterogeneous Neural Activity
The mathematics of many signal processing and pattern recognition algorithms is based on evaluating the similarity of pairs of exemplars. For vectors or functions, the inner product defined on Hilbert spaces is a linear operator and a measure of similarity. However, not all data types exist in Hilbert spaces. Kernel functions are bivariate, symmetric functions that implicitly embed samples in a Hilbert space. Consequently, if a kernel can be defined on a data type, then algorithms defined in terms of inner products can be used. This has enabled various kernel algorithms [18, 32–34].
To begin, we define the general framework for the various kernel functions used here, keeping in mind that the input corresponds to assorted neural data types. Let the domain of a single neural response dimension, that is, a single LFP channel or one spiking unit, be denoted by $\mathcal{X}$, and consider a kernel $\kappa: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. If $\kappa$ is positive definite, then there is an implicit mapping $\phi: \mathcal{X} \to \mathcal{H}$ that maps any two sample points, say $x, x' \in \mathcal{X}$, to corresponding elements in the Hilbert space, $\phi(x), \phi(x') \in \mathcal{H}$, such that $\kappa(x, x') = \langle \phi(x), \phi(x') \rangle$ is the inner product of these elements in the Hilbert space. As an inner product, the kernel evaluation $\kappa(x, x')$ quantifies the similarity between $x$ and $x'$.

A useful property is that this inner product induces a distance metric:
$$d(x, x') = \left\| \phi(x) - \phi(x') \right\|_{\mathcal{H}} = \sqrt{\left\langle \phi(x) - \phi(x'),\, \phi(x) - \phi(x') \right\rangle} = \sqrt{\kappa(x, x) + \kappa(x', x') - 2\kappa(x, x')}. \quad (1)$$

For normalized and shift-invariant kernels, where $\kappa(x, x) = 1$ for all $x$, the distance is a decreasing function of the kernel evaluation: $d(x, x') = \sqrt{2 - 2\kappa(x, x')}$.
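To make the induced distance concrete, the following minimal NumPy sketch (our own illustration, not code from the paper; a Gaussian kernel on vectors stands in for the neural-data kernels introduced later) evaluates (1) and checks the simplification for a normalized kernel:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Normalized, shift-invariant kernel: kappa(x, x) = 1 for all x.
    return np.exp(-np.sum((x - y) ** 2) / sigma ** 2)

def kernel_distance(x, y, kappa):
    # Eq. (1): d(x, y) = sqrt(kappa(x,x) + kappa(y,y) - 2*kappa(x,y)).
    return np.sqrt(kappa(x, x) + kappa(y, y) - 2.0 * kappa(x, y))

x = np.array([0.0, 1.0])
y = np.array([0.5, 0.5])
d = kernel_distance(x, y, gaussian_kernel)
# For a normalized kernel this reduces to sqrt(2 - 2*kappa(x, y)).
print(np.isclose(d, np.sqrt(2.0 - 2.0 * gaussian_kernel(x, y))))  # True
```

The same two lines apply unchanged to any positive-definite kernel, which is what allows the spike train and LFP kernels below to be plugged into distance-based algorithms.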
To utilize two or more dimensions of the neural response, a kernel that operates on the joint space is required. There are two basic approaches to constructing multidimensional kernels from kernels defined on the individual variables: direct-sum and tensor-product kernels. In terms of kernel evaluations, they consist of taking either the sum or the product of the individual kernel evaluations. In both cases, the resulting kernel is positive definite as long as the individual kernels are positive definite [18, 35].
Let $\mathcal{X}_i$ denote the neural response domain of the $i$th dimension, and consider a positive-definite kernel $\kappa_i: \mathcal{X}_i \times \mathcal{X}_i \to \mathbb{R}$ and corresponding mapping $\phi_i: \mathcal{X}_i \to \mathcal{H}_i$ for this dimension. Again, the similarity between a pair of samples $x$ and $x'$ on the $i$th dimension is $\kappa_i(x_{(i)}, x'_{(i)}) = \langle \phi_i(x_{(i)}), \phi_i(x'_{(i)}) \rangle$.
For the sum kernel, the joint similarity over a set of dimensions $\mathcal{I}$ is
$$\kappa_\Sigma(\mathbf{x}, \mathbf{x}') = \sum_{i \in \mathcal{I}} \kappa_i\big(x_{(i)}, x'_{(i)}\big). \quad (2)$$
This measure of similarity is an average similarity across all dimensions. When the sum is over a large number of dimensions, the contributions of individual dimensions are diluted. This is useful for multiunit spike trains or multichannel LFPs, where the individual dimensions are highly variable and, if used individually, would result in poor decoding performance on a single-trial basis.
For the tensor-product kernel, the joint similarity over two dimensions $i$ and $j$ is computed by taking the product of the kernel evaluations: $\kappa_{[i,j]}([x_{(i)}, x_{(j)}], [x'_{(i)}, x'_{(j)}]) = \kappa_i(x_{(i)}, x'_{(i)}) \cdot \kappa_j(x_{(j)}, x'_{(j)})$. The new kernel $\kappa_{[i,j]}$ corresponds to a mapping function that is the tensor product of the individual mapping functions, $\phi_{[i,j]} = \phi_i \otimes \phi_j$, where $\phi_{[i,j]}(x_{(i,j)}) \in \mathcal{H}_{[i,j]}$, the tensor-product Hilbert space. The product can be taken over a set of dimensions $\mathcal{I}$, and the result is a positive-definite kernel over the joint space: $\kappa_\Pi(\mathbf{x}, \mathbf{x}') = \prod_{i \in \mathcal{I}} \kappa_i(x_{(i)}, x'_{(i)})$.

The tensor-product kernel corresponds to a stricter measure of similarity than the sum kernel. Due to the product, if for one dimension $\kappa_i(x_{(i)}, x'_{(i)}) \approx 0$, then $\kappa_\Pi(\mathbf{x}, \mathbf{x}') \approx 0$. The tensor-product kernel requires joint similarity; that is, for samples to be considered similar in the joint space, they must be close in all the individual spaces. If even one dimension is dissimilar, the product will appear dissimilar. If some of the dimensions are highly variable, they will have a deleterious effect on the joint similarity measure. On the other hand, the tensor product is a more precise measure of similarity, which will be used later to combine multiscale neural activity.
More generally, an explicit weight can be used to adjust the influence of the individual dimensions on the joint kernel. Any convex combination of kernels is positive definite, and learning the weights of such a combination is known as multiple kernel learning [36–38]. In certain cases, namely, when the constituent kernels are infinitely divisible [39], a weighted product kernel can also be applied [40]. However, the optimization of these weightings is not explored in the current work.
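The contrast between the direct-sum kernel (2) and the tensor-product kernel can be sketched as follows (a hypothetical Python illustration, with Gaussian kernels standing in for the per-dimension kernels):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / sigma ** 2)

def sum_kernel(x, y, kernels):
    # Eq. (2): direct-sum kernel; similarity is averaged across dimensions,
    # so a single dissimilar dimension is diluted.
    return sum(k(xi, yi) for k, xi, yi in zip(kernels, x, y))

def product_kernel(x, y, kernels):
    # Tensor-product kernel: a near-zero evaluation in any one dimension
    # drives the joint evaluation toward zero (joint similarity).
    out = 1.0
    for k, xi, yi in zip(kernels, x, y):
        out *= k(xi, yi)
    return out

kernels = [gaussian_kernel, gaussian_kernel]
x = [np.array([0.0]), np.array([0.0])]
y = [np.array([0.1]), np.array([5.0])]   # dissimilar only in the second dimension
print(sum_kernel(x, y, kernels))      # still sizable: first dimension dominates
print(product_kernel(x, y, kernels))  # approximately zero
```

The printed values illustrate the "stricter similarity" point: the sum kernel stays close to 1 while the product kernel collapses to nearly 0.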
Figure 1: Schematic representation of the construction of the RKHS defined by the tensor-product kernel, $\kappa(\mathbf{s}_i, \mathbf{s}_j, \mathbf{x}_m, \mathbf{x}_n) = \kappa_s(\mathbf{s}_i, \mathbf{s}_j)\,\kappa_x(\mathbf{x}_m, \mathbf{x}_n) = \langle\phi(\mathbf{s}_i), \phi(\mathbf{s}_j)\rangle \langle\varphi(\mathbf{x}_m), \varphi(\mathbf{x}_n)\rangle = \langle \phi(\mathbf{s}_i) \otimes \varphi(\mathbf{x}_m),\ \phi(\mathbf{s}_j) \otimes \varphi(\mathbf{x}_n) \rangle$, from the individual spike and LFP kernel spaces, along with the injective mappings from the original data. Specifically, $\mathbf{s}_i$ denotes a window of multiunit spike trains, $\mathbf{x}_n$ denotes a window of multichannel LFPs, $\kappa_s(\cdot,\cdot)$ denotes the spike train kernel with implicit mapping function $\phi(\cdot)$, and $\kappa_x(\cdot,\cdot)$ denotes the LFP kernel with implicit mapping function $\varphi(\cdot)$.
In general, a joint kernel space constructed via either direct sum or tensor product allows the homogeneous processing of heterogeneous signal types, all within the framework of RKHSs. We use direct-sum kernels to combine the different dimensions of multiunit spike trains or multichannel LFPs. For the spike trains, using the sum kernel across the different units enables an "average" population similarity over the space of spike trains, where averages cannot otherwise be computed. Then a tensor-product kernel combines the two kernels, one for the multiunit spike trains and one for the multichannel LFPs; see Figure 1 for an illustration. The kernels for spike trains and LFPs can be selected and specified individually according to their specific properties.
Note that composite kernels are very different from those commonly used in kernel-based machine learning, for example, for the support vector machine. In fact, here a pair of windowed spike trains and windowed LFPs is mapped to a feature function in the joint RKHS. Different spike train and LFP pairs are mapped to different locations in this RKHS, as shown in Figure 1. Due to their versatility, Schoenberg kernels, defined for both the spike timing space and LFPs, are employed in this paper and discussed below.
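The construction of Figure 1 can be sketched in code as below. This is a simplified stand-in, not the paper's exact estimators: `schoenberg_like` operates on already-sampled intensity functions and LFP windows (one row per unit or channel), whereas Sections 2.1 and 2.2 define the kernels on continuous functions:

```python
import numpy as np

def schoenberg_like(u, v, sigma=1.0):
    # Gaussian-like kernel on sampled functions (stand-in for the
    # Schoenberg kernels of Sections 2.1 and 2.2).
    return np.exp(-np.sum((u - v) ** 2) / sigma ** 2)

def multiunit_spike_kernel(S_i, S_j, sigma=1.0):
    # Direct sum over units (rows) of the single-unit kernel, applied
    # here to smoothed intensity estimates.
    return sum(schoenberg_like(a, b, sigma) for a, b in zip(S_i, S_j))

def multichannel_lfp_kernel(X_i, X_j, sigma=1.0):
    # Direct sum over LFP channels (rows).
    return sum(schoenberg_like(a, b, sigma) for a, b in zip(X_i, X_j))

def tensor_product_kernel(S_i, X_i, S_j, X_j):
    # Joint similarity of the (spike, LFP) pair: product of the two
    # direct-sum kernels, as in Figure 1.
    return multiunit_spike_kernel(S_i, S_j) * multichannel_lfp_kernel(X_i, X_j)

S_i = np.ones((3, 50)); S_j = np.ones((3, 50))     # 3 units, identical intensities
X_i = np.zeros((2, 100)); X_j = np.zeros((2, 100)) # 2 channels, identical LFPs
print(tensor_product_kernel(S_i, X_i, S_j, X_j))   # 3 * 2 = 6.0 for identical pairs
```

For identical pairs each constituent kernel evaluates to its number of dimensions, so the product is their product; any strong dissimilarity in either the spike or the LFP component drives the joint evaluation toward zero.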
2.1. Kernel for Spike Trains

Unlike conventional amplitude data, there is no natural algebraic structure in the space of spike trains. The binning process, which easily transforms point processes into discrete-amplitude time series, is widely used in spike train analysis and allows the application of conventional amplitude-based kernels to spike trains [41], at the expense of losing the temporal resolution of the neural responses. This means that any temporal information in spikes within and between bins is disregarded, which is especially alarming when spike timing precision can be in the millisecond range. Although the bin size can be set small enough to preserve fine time resolution, this will sparsify the signal representation, increase the artifact variability, and cause high dimensionality in the model, which requires voluminous data for proper training.
According to the literature, it is appropriate to consider a spike train to be a realization of a point process, which describes the temporal distribution of the spikes. Generally speaking, a point process $p_i$ can be completely characterized by its conditional intensity function $\lambda(t \mid H_t^i)$, where $t \in \tau = [0, T]$ denotes the time coordinate and $H_t^i$ is the history of the process up to $t$. A recent research area [42, 43] is to define an injective mapping from spike trains to an RKHS based on a kernel between the conditional intensity functions of two point processes [42]. Among the cross-intensity (CI) kernels, the Schoenberg kernel is defined as
$$\kappa\big(\lambda(t \mid H_t^i), \lambda(t \mid H_t^j)\big) = \exp\!\left(-\frac{\big\|\lambda(t \mid H_t^i) - \lambda(t \mid H_t^j)\big\|^2}{\sigma^2}\right) = \exp\!\left(-\frac{\int_\tau \big(\lambda(t \mid H_t^i) - \lambda(t \mid H_t^j)\big)^2 \, dt}{\sigma^2}\right), \quad (3)$$
where $\sigma$ is the kernel size. The Schoenberg kernel is selected in this work because of its modeling accuracy and robustness to free parameter settings [44]. The Schoenberg kernel is a Gaussian-like kernel defined on intensity functions that is strictly positive definite and sensitive to the nonlinear coupling of two intensity functions [42]. Different spike trains will then be mapped to different locations in the RKHS. Compared to kernels designed on binned spike trains (e.g., the spikernel [41]), the main advantage of the Schoenberg kernel is that the precision of the spike event locations is better preserved, and the limitations of sparseness and high dimensionality for model building are also avoided, resulting in enhanced robustness and reduced computational complexity, especially when the application requires fine time resolution [44].
In order to be applicable, the methodology must lead to a simple estimation of the quantities of interest (e.g., the kernel) from experimental data. A practical choice used in our work estimates the conditional intensity function using a kernel smoothing approach [42, 45], which allows estimating the intensity function from a single realization and nonparametrically and injectively maps a windowed spike train into a continuous function. The estimated intensity function is obtained by simply convolving the spike train $s(t)$ with the smoothing kernel $g(t)$, yielding
$$\hat{\lambda}(t) = \sum_{m=1}^{M} g(t - t_m), \quad t_m \in \mathcal{T},\ m = 1, \ldots, M, \quad (4)$$
where the smoothing function $g(t)$ integrates to 1. Here $\hat{\lambda}(t)$ can be interpreted as an estimate of the intensity function. The rectangular and exponential functions [42, 46] are both popular smoothing kernels, which guarantee injective mappings from the spike train to the estimated intensity function. In order to decrease the kernel computation complexity, the rectangular function $g(t) = (1/\mathcal{T})(U(t) - U(t - \mathcal{T}))$ (with $\mathcal{T} \gg$ the interspike interval) is used in our work, where $U(t)$ is the Heaviside step function. This rectangular function approximates the cumulative density function of spike counts in the window $T$ and compromises the locality of the spike trains; that is, the mapping places more emphasis on the early spikes than on the later ones. However, our experiments show that this compromise causes only a minimal impact on the kernel-based regression performance.
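A minimal sketch of this intensity estimation step (our own hypothetical code; the grid resolution and smoothing width are arbitrary choices, not values prescribed by the paper):

```python
import numpy as np

def rectangular_intensity(spike_times, T, n_points=200, width=0.2):
    # Estimate lambda(t) on [0, T] by summing a causal rectangular
    # smoothing function of area 1 at each spike time, as in eq. (4).
    t = np.linspace(0.0, T, n_points)
    lam = np.zeros_like(t)
    for tm in spike_times:
        # g(t - tm) = 1/width on [tm, tm + width), 0 elsewhere.
        lam += ((t >= tm) & (t < tm + width)) / width
    return t, lam

t, lam = rectangular_intensity([0.1, 0.15, 0.6], T=1.0)
# Each unit-area rectangle contributes ~1 to the integral,
# so the estimate integrates to roughly the spike count.
print(lam.sum() * (t[1] - t[0]))  # approximately 3, up to discretization error
```

Because the mapping from spike times to $\hat{\lambda}(t)$ is injective, two windows with different spike patterns yield different intensity estimates, which is what the Schoenberg kernel (3) then compares.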
Let $s_i^n(t)$ denote the spike train for the $i$th sample of the $n$th spiking unit. The multiunit spike kernel is taken as the unweighted sum over the kernels on the individual units:
$$\kappa_s\big(\mathbf{s}_i(t), \mathbf{s}_j(t)\big) = \sum_n \kappa_s\big(s_i^n(t), s_j^n(t)\big). \quad (5)$$
2.2. Kernels for LFPs

In contrast with spike trains, LFPs exhibit less spatial and temporal selectivity [15]. In the time domain, LFP features can be obtained by sliding a window along the signal, which describes the temporal LFP structure. The length of the window is selected based on the duration of the neural responses to certain stimuli; the extent of this duration can be assessed by the autocorrelation function, as we will discuss in Section 5.3.1. In the frequency domain, the spectral power and phase in different frequency bands are also known to be informative features for decoding, but here we concentrate only on time-domain decompositions. In the time domain, we can simply treat single-channel LFPs as a time series and apply the standard Schoenberg kernel to the sequence of signal samples in time. The Schoenberg kernel, defined on continuous spaces, maps the correlation time structure of the LFP $x(t)$ into a function in the RKHS:
$$\kappa_x\big(x_i(t), x_j(t)\big) = \exp\!\left(-\frac{\big\| x_i(t) - x_j(t) \big\|^2}{\sigma_x^2}\right) = \exp\!\left(-\frac{\int_{\mathcal{T}_x} \big(x_i(t) - x_j(t)\big)^2 \, dt}{\sigma_x^2}\right) = \exp\!\left(-\frac{\int_{\mathcal{T}_x} \big(x_i(t)x_i(t) + x_j(t)x_j(t) - 2x_i(t)x_j(t)\big) \, dt}{\sigma_x^2}\right), \quad (6)$$
where $\mathcal{T}_x = [0, T_x]$.
Let $x_i^n(t)$ denote the LFP waveform for the $i$th sample of the $n$th channel. Similar to the multiunit spike kernel, the multichannel LFP kernel is defined by the direct-sum kernel:
$$\kappa_x\big(\mathbf{x}_i(t), \mathbf{x}_j(t)\big) = \sum_n \kappa_x\big(x_i^n(t), x_j^n(t)\big). \quad (7)$$
2.3. Discrete Time Sampling

Assuming a sampling rate with period $\tau$, let $\mathbf{x}_i = [x_i^1, x_{i+1}^1, \ldots, x_{i-1+T/\tau}^1, x_i^2, \ldots, x_{i-1+T/\tau}^N]$ denote the $i$th multichannel LFP vector, obtained by sliding the $T$-length window with step $\tau$. Let $\mathbf{s}_i = \{t_m - (i-1)\tau : t_m \in [(i-1)\tau, (i-1)\tau + T],\ m = 1, \ldots, M\}$ denote the corresponding $i$th window of the multiunit spike timing sequence. The time scale of the analysis for LFPs and spikes, both in terms of the window length and the sampling rate, is very important and needs to be defined by the characteristics of each signal. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually; that is, the window length $T$ and sampling rate for spike trains and LFPs can differ. A suitable time scale can be estimated through the autocorrelation coefficients of the signal, as will be explained below.
3. Adaptive Neural Decoding Model
For neural decoding applications, a regression model with multiscale neural activity as the input is built to reconstruct a stimulus. The appeal of kernel-based filters is the use of the linear structure of the RKHS to implement well-established linear adaptive algorithms and to obtain a nonlinear filter in the input space, which leads to universal approximation capability without the problem of local minima. There are several candidate kernel-based regression methods [32], such as support vector regression (SVR) [33], kernel recursive least squares (KRLS), and kernel least mean square (KLMS) [34]. The KLMS algorithm is preferred here because it is an online methodology of low computational complexity.
Algorithm 1: Quantized kernel least mean square (QKLMS).

Input: $\{(x_n, d_n)\}$, $n = 1, 2, \ldots, N$.
Initialization: initialize the weight vector $\Omega(1)$, the codebook (set of centers) $\mathcal{C}(0) = \emptyset$, and the coefficient vector $a(0) = [\,]$.
Computation: for $n = 1, 2, \ldots, N$:
(1) compute the output
$$y_n = \langle \Omega(n), \phi(x_n) \rangle = \sum_{j=1}^{\operatorname{size}(\mathcal{C}(n-1))} a_j(n-1)\, \kappa\big(x_n, \mathcal{C}_j(n-1)\big);$$
(2) compute the error $e_n = d_n - y_n$;
(3) compute the minimum distance in the RKHS between $x_n$ and each center $\mathcal{C}_j(n-1)$:
$$d(x_n) = \min_j \sqrt{2 - 2\kappa\big(x_n, \mathcal{C}_j(n-1)\big)};$$
(4) if $d(x_n) \le \varepsilon$, keep the codebook unchanged, $\mathcal{C}(n) = \mathcal{C}(n-1)$, and update the coefficient of the center closest to $x_n$: $a_k(n) = a_k(n-1) + \eta e_n$, where $k = \arg\min_j \sqrt{2 - 2\kappa(x_n, \mathcal{C}_j(n-1))}$;
(5) otherwise, store the new center: $\mathcal{C}(n) = \{\mathcal{C}(n-1), x_n\}$, $a(n) = [a(n-1), \eta e_n]$;
(6) $\Omega(n+1) = \sum_{j=1}^{\operatorname{size}(\mathcal{C}(n))} a_j(n)\, \phi(\mathcal{C}_j(n))$.
The quantized kernel least mean square (Q-KLMS) algorithm is selected in our work to decrease the filter growth. Algorithm 1 shows the pseudocode for the Q-KLMS algorithm with a simple online vector quantization (VQ) method, where the quantization is performed based on the distance between the new input and each existing center. In this work, this distance between a center and the input is defined by their distance in the RKHS, which for a shift-invariant normalized kernel ($\kappa(x, x) = 1$ for all $x$) is $\|\phi(x_n) - \phi(x_i)\|^2 = 2 - 2\kappa(x_n, x_i)$. If the smallest distance is less than a prespecified quantization size $\varepsilon$, the new coefficient $\eta e_n$ adjusts the weight of the closest center; otherwise, a new center is added. Compared to other techniques [47–50] that have been proposed to curb the growth of the networks, the simple online VQ method is not optimal but is very efficient. Since in our work the algorithm must be applied several times to the same data for convergence, after the first iteration over the data we choose $\varepsilon = 0$, which merges the repeated centers and enjoys the same performance as KLMS.
We use the Q-KLMS framework with the multiunit spike kernel, the multichannel LFP kernel, and the tensor-product kernel on the joint samples of LFPs and spike trains. This is quite unlike previous work in adaptive filtering, which almost exclusively uses the Gaussian kernel with real-valued time series.
4. Adaptive Inverse Control of the Spatiotemporal Patterns of Neural Activity
As the name indicates, the basic idea of adaptive inverse control is to learn an inverse model of the plant as the controller in Figure 2(a), such that the cascade of the controller and the plant will perform like a unitary transfer function, that is, a perfect wire with some delay. The target plant output is used as the controller's command input. The controller parameters are updated to minimize the dissimilarity between the target output and the plant's output during the control process, which enables the controller to track the plant variation and cancel system noise. The filtered-ε LMS adaptive inverse control diagram [51] shown in Figure 2(a) represents the filtered-ε approach to find C(z). If the ideal inverse controller Ĉ(z) were the actual inverse controller, the mean square of the overall system error ε_k would be minimized. The objective is to make C(z) as close as possible to the ideal Ĉ(z). The difference between the outputs of Ĉ(z) and C(z), both driven by the command input, is therefore an error signal ε′. Since the target stimulation is unknown, instead of ε′ a filtered error ε̂, obtained by filtering the overall system error ε_k through the inverse plant model P^{-1}(z), is used for adaptation in place of ε′.

If the plant has a long response time, a modeling delay is advantageous to capture the early stages of the response, which is determined by the sliding window length that is used to obtain the inverse controller input. There is no performance penalty from the delay Δ as long as the input to C(z) undergoes the same delay. The parameters of the inverse model P^{-1}(z) are initially modeled offline and updated during the whole system operation, which allows P^{-1}(z) to incrementally identify the inverse system and thus make ε̂ approach ε′. Moreover, the adaptation enables P^{-1}(z) to track changes in the plant. Thus, minimizing the filtered error obtained from P^{-1}(z) makes the controller follow the system variation.
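The filtered-ε loop can be illustrated with a linear toy system. This sketch uses a hypothetical 2-tap FIR plant and scalar LMS updates in place of the kernel machinery introduced later, plus a small dither on the plant drive for excitation (all parameters are our own choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
L, mu = 2, 0.05
wc = np.zeros(L)        # adaptive controller C(z)
wp_inv = np.zeros(L)    # adaptive inverse plant model P^-1(z)
cmd_hist = np.zeros(L); u_hist = np.zeros(L)
y_hist = np.zeros(L); eps_hist = np.zeros(L)
errs = []

for t in range(20000):
    c = rng.standard_normal()                        # command input (target plant output)
    cmd_hist = np.r_[c, cmd_hist[:-1]]
    u = wc @ cmd_hist + 0.1 * rng.standard_normal()  # plant drive + dither for excitation
    u_hist = np.r_[u, u_hist[:-1]]
    y = 0.8 * u_hist[0] + 0.2 * u_hist[1]            # "unknown" FIR plant P(z)
    y_hist = np.r_[y, y_hist[:-1]]

    # adapt P^-1 online: predict the plant drive from the plant output
    e_inv = u - wp_inv @ y_hist
    wp_inv += mu * e_inv * y_hist

    # overall system error, then the filtered error (eps passed through P^-1)
    eps = c - y
    eps_hist = np.r_[eps, eps_hist[:-1]]
    eps_f = wp_inv @ eps_hist
    wc += mu * eps_f * cmd_hist                      # filtered-eps LMS controller update
    errs.append(eps ** 2)

print(np.mean(errs[-1000:]))   # small residual: C(z)P(z) is close to a wire
```

The filtered error estimates the drive-domain error Ĉ(z)·c − C(z)·c, so the controller update reduces to standard LMS even though the plant sits between the controller and the measured error.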
4.1. Relation to Neural Decoding. In this control scheme, there are only two models, C(z) and P^{-1}(z), adjusted during the control process, which share the same input (neural activity) and output types (continuous stimulation waveforms). Therefore, both models perform like neural decoders and can be implemented using the Q-KLMS method we introduced in the previous section. Since all the mathematical models
Computational Intelligence and Neuroscience 7
[Figure 2: Adaptive inverse control diagram. (a) The filtered-ε LMS adaptive inverse control diagram: the command input drives the ideal inverse Ĉ(z) and the actual controller C(z); the plant P(z) is subject to disturbance and noise; the delay Δ aligns the signals; the error ε′, the overall system error ε_k, and the filtered error ε̂ obtained through P^{-1}(z) are indicated. (b) The corresponding adaptive inverse control diagram in the RKHS, with weights W_C and W_{P^{-1}}, the mapped signals φ(x), φ(x_Δ), and φ(z), and the overall error ε_k = φ(x_Δ) − φ(z).]
in this control scheme are kernel-based models, the whole control scheme can be mapped into an RKHS, which holds several advantages: (1) no restriction is imposed on the signal type; as long as a kernel can map the plant activity to an RKHS, the plant can be controlled with this scheme; (2) both the plant inverse P^{-1}(z) and the controller C(z) have a linear structure in the RKHS, which avoids the danger of converging to local minima.
Specifically, the controller C(z) and the plant inverse P^{-1}(z) are separately modeled with the tensor-product kernel that we described in Section 2, and the model coefficients are updated with Q-KLMS. This structure is shown in Figure 2(b). The model coefficients W_C and W_{P^{-1}} represent the weight matrices of C(z) and P^{-1}(z) obtained by Q-KLMS, respectively. As this is a multiple-input multiple-output model, W_C and W_{P^{-1}} are the concatenation of the filter weights for each stimulation channel. The variables x, y, and z denote the concatenation of the windowed target spike trains and LFPs as the command input of the controller, the estimated stimulation, and the plant output, respectively; x_Δ is the delayed target signal, which is aligned with the plant output z; φ(·) represents the mapping function from the input space to the RKHS associated with the tensor-product kernel.
The overall system error is defined as ε_k = φ(x_Δ) − φ(z), which means that the controller's parameter adaptation seeks to minimize the distance in the RKHS between the target spike train/LFP response and the plant output, as filtered through the plant inverse P^{-1}(z). In this way, since the inverse model P^{-1}(z) has a linear structure in the RKHS, the filtered error for stimulation channel j ∈ {1, …, M} is

$$\hat{\epsilon}(j) = \mathbf{W}_{P^{-1},j}\,\phi(\mathbf{x}_{\Delta}) - \mathbf{W}_{P^{-1},j}\,\phi(\mathbf{z}). \qquad (8)$$
The controller model C(z) has a single input x, corresponding to the concatenation of the spike trains and LFPs, and has an M-channel output y, corresponding to the microstimulation. Q-KLMS is used to model C(z) with N input samples. The target spike trains are repeated among different trials, which means that the repeated samples will
[Figure 3: Neural elements in tactile stimulation experiments. To the left is the rat's hand with representative cutaneous receptive fields. When the tactor touches a particular "receptive field" on the hand, the VPL thalamus receives this information and relays it to S1 cortex. To emulate "natural touch" with microstimulation, the optimized spatiotemporal microstimulus patterns are injected into the same receptive field in the VPL thalamus through a microarray, so that the target neural activity pattern can be replicated in somatosensory regions (S1) to convey the natural touch sensation to the animal.]
be merged onto the same kernel center on the first pass through the data by the quantization, and thus the network size of the inverse controller is fixed (N centers). Only the coefficient matrix A = [a_{mn}] is updated with the filtered error ε̂ during the whole control operation. The output of C(z) can be calculated by
$$
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_M \end{bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1N} \\
a_{21} & a_{22} & \cdots & a_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
a_{M1} & a_{M2} & \cdots & a_{MN}
\end{bmatrix}
\begin{bmatrix}
\phi(\mathbf{c}_{c1})' \\ \phi(\mathbf{c}_{c2})' \\ \vdots \\ \phi(\mathbf{c}_{cN})'
\end{bmatrix}
\phi(\mathbf{x})
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1N} \\
a_{21} & a_{22} & \cdots & a_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
a_{M1} & a_{M2} & \cdots & a_{MN}
\end{bmatrix}
\begin{bmatrix}
\kappa(\mathbf{c}_{c1}, \mathbf{x}) \\ \kappa(\mathbf{c}_{c2}, \mathbf{x}) \\ \vdots \\ \kappa(\mathbf{c}_{cN}, \mathbf{x})
\end{bmatrix},
\qquad (9)
$$
where c_{cn} is the nth center and a_{mn} is the coefficient assigned to the nth kernel center for the mth channel of the output.
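In code, (9) is simply the coefficient matrix applied to the vector of kernel evaluations at the stored centers. A sketch, with a Gaussian kernel standing in for the tensor-product kernel and all dimensions chosen for illustration:

```python
import numpy as np

def controller_output(A, centers, x, kernel):
    """Eq. (9): y = A @ [kappa(c_1, x), ..., kappa(c_N, x)]^T, where A is the
    M x N coefficient matrix and c_n are the stored kernel centers."""
    k = np.array([kernel(c, x) for c in centers])   # N kernel evaluations
    return A @ k                                    # M-channel stimulation output

# usage with a Gaussian kernel
gauss = lambda a, b: np.exp(-np.sum((a - b) ** 2) / 2.0)
rng = np.random.default_rng(0)
M, N, D = 3, 5, 4                                   # output channels, centers, input dim
A = rng.standard_normal((M, N))
centers = list(rng.standard_normal((N, D)))
y = controller_output(A, centers, rng.standard_normal(D), gauss)
print(y.shape)
```

Because the centers are fixed after the first pass, each control step costs only N kernel evaluations and one matrix-vector product.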
5. Sensory Stimulation Experiment
5.1. Experimental Motivation and Framework. We applied these methods to the problem of converting touch information to electrical stimulation in neural prostheses. Somatosensory information originating in the peripheral nervous system ascends through the ventral posterior lateral (VPL) nucleus of the thalamus on its way to the primary somatosensory cortex (S1). Since most cutaneous and proprioceptive information is relayed through this nucleus, we expect that a suitably designed electrode array could be used to selectively stimulate a local group of VPL neurons so as to convey similar information to cortex. Electrophysiological experiments [52] suggest that the rostral portion of the rat VPL nucleus carries a large amount of proprioceptive information, while the medial and caudal portions code mostly for cutaneous stimuli. Since the body maps for both the VPL thalamus and S1 are known and fairly consistent, it is possible to implant electrode arrays in somatotopically overlapping areas of both regions.
We applied the proposed control method to generate multichannel electrical stimulation in the VPL so as to evoke a naturalistic neural trajectory in S1. Figure 3 shows a schematic depiction of our experiment, which was conducted in rats. After implanting arrays in both the VPL and S1, the responses to natural stimulation delivered by a motorized tactor were recorded in S1. Then we applied randomly patterned microstimulation in the VPL while recording the responses in S1. Using these responses, we then trained our controller to output the microstimulation patterns that would most accurately reproduce the neural responses to natural touch in S1. To solve this control problem, we first investigated how reliably the two types of stimulation, natural touch and electrical microstimulation, can be decoded.
5.2. Data Collection. All animal procedures were approved by the SUNY Downstate Medical Center IACUC and conformed to National Institutes of Health guidelines. A single female Long-Evans rat (Hilltop, Scottsdale, PA) was implanted with two microarrays while under anesthesia.
[Figure 4: Bipolar microstimulation patterns applied in the sensory stimulation experiment: 24 pattern classes across 16 stimulation channels, with stimulation amplitudes between −30 μA and 30 μA.]
After induction using isoflurane, urethane was used to maintain anesthetic depth. The array in the VPL was a 2 × 8 grid of 70% platinum / 30% iridium 75 μm diameter microelectrodes (MicroProbes Inc.) with 500 μm between the rows and 250 μm interelectrode spacing within the rows. The microelectrodes had a 25:1 taper on the distal 5 mm, with a tip diameter of 3 μm. The approximate geometric surface area of the conducting tips was 1250 μm². The shank lengths were custom designed to fit the contour of the rat VPL [52]. Both rows were identical, and the shaft lengths for each row from medial to lateral were (8, 8, 8, 8, 8, 7.8, 7.6, 7.4) mm. The long axis of the VPL array was oriented along the rat's mediolateral axis.
The cortical electrode array (Blackrock Microsystems) was a 32-channel Utah array. The electrodes are arranged in a 6 × 6 grid excluding the 4 corners, and each electrode is 1.5 mm long. A single craniotomy that exposed the cortical insertion sites for both arrays was made, and after several probing insertions with a single microelectrode (FHC) in an area 1 mm surrounding the stereotaxic coordinates for the digit region of S1 (4.0 mm lateral and 0.5 mm anterior to bregma) [53, 54], the Utah array was inserted using a pneumatic piston. The electrodes cover somatosensory areas of the S1 cortex and the VPL nucleus of the thalamus [52]. Neural recordings were made using a multichannel acquisition system (Tucker-Davis).
Spike and field potential data were collected while the rat was maintained under anesthesia. The electrode voltages were preamplified with a gain of 1000, filtered with cutoffs at 0.7 Hz and 8.8 kHz, and digitized at 25 kHz. LFPs are further filtered from 1 to 300 Hz using a 3rd-order Butterworth filter. Spike sorting is achieved using k-means clustering of the first 2 principal components of the detected waveforms.
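The sorting step can be sketched as PCA followed by k-means. This is an illustrative reimplementation on synthetic waveforms (the two templates and the noise level are our own choices, not the recorded data):

```python
import numpy as np

def sort_spikes(waveforms, iters=50):
    """Sort detected spike waveforms into 2 units: project onto the first
    two principal components, then run k-means (k = 2) on the scores."""
    X = waveforms - waveforms.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)       # PCA via SVD
    pcs = X @ Vt[:2].T                                     # (n_spikes, 2) PC scores
    # initialize the two means at the extremes of PC1
    centers = pcs[[np.argmin(pcs[:, 0]), np.argmax(pcs[:, 0])]].copy()
    for _ in range(iters):
        d = ((pcs[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(d, axis=1)                      # assign to nearest mean
        for j in range(2):
            if np.any(labels == j):
                centers[j] = pcs[labels == j].mean(axis=0)
    return labels

# usage on synthetic waveforms: two template shapes plus noise
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 30)
templates = np.vstack([np.sin(2 * np.pi * t), -np.exp(-10 * (t - 0.3) ** 2)])
truth = rng.integers(0, 2, 200)
waves = templates[truth] + 0.1 * rng.standard_normal((200, 30))
labels = sort_spikes(waves)
```

With well-separated waveform shapes the cluster labels recover the generating templates up to a label permutation.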
The experiment involves delivering microstimulation to the VPL and tactile stimulation to the rat's fingers in separate sessions. Microstimulation is administered on adjacent pairs
[Figure 5: Rat neural response elicited by tactile stimulation. The upper plot shows the normalized derivative of the tactile force; the remaining two plots show the corresponding multichannel LFPs and spike trains evoked by the tactile stimulation.]
(bipolar configurations) of the thalamic array. The stimulation waveforms are single symmetric biphasic rectangular current pulses; each rectangular pulse is 200 μs long and has an amplitude of either 10 μA, 20 μA, or 30 μA. Interstimulus intervals are exponentially distributed, with a mean interval of 100 ms. Stimulus isolation used a custom-built switching headstage. The bipolar microstimulation pulses are delivered in the thalamus. There are 24 patterns of microstimulation: 8 different sites and 3 different amplitude levels for each site, as shown in Figure 4. Each pattern is delivered 125 times.
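A schedule with these statistics can be drawn as follows. This is a sketch with our own function and variable names, not the lab's stimulation software:

```python
import numpy as np

rng = np.random.default_rng(0)
sites = np.arange(8)                          # 8 bipolar sites
amps = np.array([10.0, 20.0, 30.0])           # amplitudes in microamperes

def random_schedule(duration_s=10.0, mean_interval_s=0.1):
    """Draw stimulus times with exponentially distributed interstimulus
    intervals and a random (site, amplitude) pattern for each pulse."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(mean_interval_s)  # mean interval 100 ms
        if t >= duration_s:
            break
        times.append(t)
    times = np.array(times)
    pattern = np.stack([rng.choice(sites, len(times)),
                        rng.choice(amps, len(times))], axis=1)
    return times, pattern

times, pattern = random_schedule()
print(len(times))   # roughly duration / mean interval
```

Exponential intervals make the pulse train a Poisson process, so the stimulation has no preferred rhythm that the decoder could exploit.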
The experimental procedure also involves delivering 30–40 short (100 ms) tactile touches to the rat's fingers (repeated for digit pads 1–4) using a hand-held probe. The rat remained anesthetized for the recording duration. The applied force is measured using a lever attached to the probe that pressed against a thin-film resistive force sensor (Trossen Robotics) when the probe tip contacted the rat's body. The resistive changes were converted to voltage using a bridge circuit and were filtered and digitized in the same way as described above. The digitized waveforms were filtered with a passband between 1 and 60 Hz using a 3rd-order Butterworth filter. The first derivative of this signal is used as the desired stimulation signal, which is shown in Figure 5.
5.3. Decoding Results. We now present the decoding results for the tactile stimulus waveform and microstimulation using Q-KLMS operating on the tensor-product kernel. The performance using the multiscale neural activity, both spike trains and LFPs, is compared with decoders using a single type of neural activity. This illustrates the effectiveness of the tensor-product-kernel-based framework in exploiting the complementary information in multiscale neural activity.
[Figure 6: Sample autocorrelation of the spike trains (top) and LFPs (bottom) as a function of lag (ms), used for window size estimation.]
5.3.1. Time Scale Estimation. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually, based on their own properties. In order to find reasonable time scales, we estimate the autocorrelation coefficients of LFPs and spike trains, which indicate the response duration induced by the stimulation. For this purpose, spike trains are binned with a bin size of 1 ms. The LFPs are also resampled at a sampling rate of 1000 Hz. The autocorrelation coefficients of each signal, averaged over channels, are calculated by
$$\rho_h = \frac{\sum_{t=h+1}^{T} \left(y_t - \bar{y}\right)\left(y_{t-h} - \bar{y}\right)}{\sum_{t=1}^{T} \left(y_t - \bar{y}\right)^2}. \qquad (10)$$
The 95% confidence bounds of the hypothesis that the autocorrelation coefficient is effectively zero are approximately estimated by ±2 SE_ρ, where

$$\mathrm{SE}_\rho = \sqrt{\frac{1 + 2\sum_{i=1}^{h-1} \rho_i^2}{N}}. \qquad (11)$$
The average confidence bounds for LFPs and spike trains are [−0.032, 0.032] and [−0.031, 0.031], respectively. The autocorrelation coefficients of LFPs fall into the confidence interval after 20 ms, while the autocorrelation coefficients of spike trains die out after 9 ms, as shown in Figure 6. Therefore, the decoder inputs are obtained by sliding a window of size T_s = 9 ms for spike trains and T_x = 20 ms for LFPs. In addition, the time discretization for the stimuli is 5 ms.

The learning rates for each decoder are determined by the best cross-validation results after scanning the parameters. The kernel sizes σ_s and σ_x are determined by the average distance in the RKHS between each pair of training samples. The normalized mean square error (NMSE) between the estimated stimulus (y) and the desired stimulus (d) is utilized as the accuracy criterion.
[Figure 7: Qualitative comparison of the decoding performance on the first tactile stimulation trial among the LFP decoder, the spike decoder, and the spike and LFP decoder; the target and decoded normalized derivative of tactile force are shown over time.]
Table 1: Comparison among neural decoders.

    Input                LFP and spike    LFP            Spike
    NMSE (mean ± STD)    0.48 ± 0.05      0.55 ± 0.03    0.63 ± 0.11
5.3.2. Results for Decoding Tactile Stimulation. NMSEs of tactile stimulation are obtained across 8 trial data sets. For each trial, we use 20 s of data to train the decoders and compute an independent test error on the remaining 25 s of data. The results are shown in Table 1, where we can observe that the LFP and spike decoder significantly outperformed both the LFP decoder and the spike decoder, with P value < 0.05.
In order to illustrate the details of the decoding performance, a portion of the test results of the first trial is shown in Figure 7. It is observed that the output of the spike decoder fluctuates and misses some pulses (e.g., around 0.65 s) due to the sparsity and variability of spike trains. In contrast, the
[Figure 8: Comparison of the microstimulation reconstruction performance among the spike decoder, the LFP decoder, and the spike and LFP decoder, in terms of NMSE for each of the 8 microstimulation channels.]
output estimated by the LFP decoder is smooth and more robust than that of the spike decoder, but the decoded signal undershoots the maximum force deflections. The LFP and spike decoder performed better than the LFP decoder by reincorporating the precise pulse timing information from the spike trains.
5.3.3. Results for Decoding Electrical Microstimulation. We also implemented a decoder to reconstruct the microstimulation pattern. First, we mapped the 8 different stimulation configurations to 8 channels. We disregarded the shape of each stimulus, since the time scale of the stimulus width is only 200 μs. The desired stimulation pattern of each channel is represented by a sparse time series of the stimulation amplitude.
NMSEs are obtained with ten subsequence decoding results. We used 120 s of data to train the decoders and compute an independent test error on the remaining 20 s of data. The spike and LFP decoder also outperformed both the LFP decoder and the spike decoder. The comparison of results is shown in Figure 8, which indicates that the spike and LFP decoder is able to obtain the best performance across the stimulation channels, especially for channels 2, 4, 6, and 7. It is observed that stimulation on channels 4, 5, 6, 7, and 8 cannot be decoded by the LFP decoder at all, since the fine timing information is averaged out in the LFPs. For the spike train decoder, the stimulation channels are not well discriminated. However, the combination of spike trains and LFPs enriches the stimulation information, which contributes to better discrimination of stimulation patterns among channels and also enables the model to capture the precise stimulation timing.
5.4. Open Loop Adaptive Inverse Control Results. The challenge of implementing a somatosensory prosthesis is to precisely control the neural response in order to mimic the neural response induced by natural stimulation. As discussed, the kernel-based adaptive inverse control scheme with the tensor-product kernel is applied to address this problem. The adaptive inverse control model is based on a decoder that maps the neural activity in S1 to the microstimulation delivered in the VPL. We proceed to show how the adaptive inverse control model can emulate the neural response to "natural touch" using optimized microstimulation.
In the same recording, open loop adaptive inverse control via optimized thalamic (VPL) microstimulation is implemented. First, the inverse controller C(z) is trained with 300 s of data generated by recording the response to randomly patterned thalamic microstimulation. Then the first 60 s of the neural response recorded during tactile stimulation at each touch site is used as the target pattern and control input. When this entire neural response sequence is fed offline to the controller, it generates a corresponding sequence of multichannel microstimulation amplitudes.
However, the generated microstimulation sequence needs further processing to meet the restrictions of bipolar microstimulation before it is applied to the VPL. The restrictions and processing are the following.
(1) The minimal interval between two stimuli, 10 ms, is suggested by the experimental setting. The mean shift algorithm [55] is used to locate the local maxima of a subsequence of stimuli (10 ms) for each single channel. The maximum amplitude and the corresponding timing are used to set the amplitude and time of the stimuli.

(2) At any given time point, only a single pulse across all channels can be stimulated. Therefore, at each time point only the maximum value across channels is selected for stimulation. The values at the other channels are set to zero.

(3) The maximum/minimum stimulation amplitude is set in the range [8 μA–30 μA], which has been suggested as the effective and safe amplitude range in previous experiments.
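Restrictions (2) and (3) can be applied with simple array operations. A sketch on a hypothetical (time × channel) amplitude array; restriction (1), the mean-shift spacing step, is omitted here, and the treatment of sub-threshold requests (clipped up into the safe range when a pulse was requested) is our own reading:

```python
import numpy as np

def enforce_constraints(stim, min_amp=8.0, max_amp=30.0):
    """Apply restrictions (2) and (3): keep only the strongest channel at
    each time point and clamp its amplitude into [min_amp, max_amp]."""
    out = np.zeros_like(stim)
    ch = np.argmax(stim, axis=1)              # (2) single pulse across channels
    rows = np.arange(stim.shape[0])
    amp = np.clip(stim[rows, ch], min_amp, max_amp)   # (3) safe amplitude range
    amp[stim[rows, ch] <= 0] = 0.0            # no pulse if nothing was requested
    out[rows, ch] = amp
    return out

# usage on a hypothetical controller output (3 time points, 3 channels)
raw = np.array([[3.0, 12.0, 40.0],
                [0.0,  0.0,  0.0],
                [9.0,  5.0,  1.0]])
print(enforce_constraints(raw))
```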
After this processing, the generated multichannel microstimulation sequence (60 s in duration) is ready to be applied to the microstimulator immediately following computation.
The neural response to the microstimulation is recorded and compared with the target natural response. Ideally, these two neural response sequences should be time-locked and very similar. In particular, the portions of the controlled response in windows corresponding to a natural touch should match. As this corresponds to a virtual touch delivered by the optimized microstimulation, we define the term virtual touch to refer to the sequence of microstimulation patterns (the output of the controller) corresponding to a particular target natural touch.
Portions of the neural response for both natural and virtual touches are shown in Figure 9. The responses are aligned to the same time scale even though they were not recorded concurrently. It is clear that the multiunit neural responses recorded during the controlled microstimulation share similar spatiotemporal patterns with those in the target set. Each virtual touch is achieved by a sequence of microstimulation pulses that evokes synchronized bursting across the neurons. In addition, microstimulation pulses are delivered in between touch times to mimic population spiking that is not associated with the touch timing.
[Figure 9: Neural responses to natural and virtual touches for touch on digit 1 (d1), along with the microstimulation corresponding to the virtual touches. Each of the four subfigures corresponds to a different segment of the continuous recording. In each subfigure, the timing of the touches and the spatiotemporal patterns of spike trains and LFPs are shown in the top two panels; the bottom panel shows the spatiotemporal pattern of microstimulation, where different colors represent different microstimulation channels. The neural responses are time-locked but not concurrently recorded, as the entire natural touch response is given as input to the controller, which generates the optimized microstimulation patterns. When the optimized microstimulation is applied in the VPL, it generates S1 neural responses that qualitatively match the natural, that is, the target, response.]
[Figure 10: Correlation coefficients between the controlled neural system output and the corresponding target neural response evoked by actual touch, for spike trains (left) and local field potentials (right), as a function of time lag. The boxplots of correlation coefficients represent the results of 6 test trials; each trial corresponds to a particular touch site (digits d1, d2, d4, p3, p1, and mp).]
To evaluate performance, we concentrate on the following two aspects of virtual touches.

(i) Touch Timing. Whether the neural response to a virtual touch is capable of capturing the timing information of the actual target touch.

(ii) Touch Site. Whether the actual target touch site information can be discriminately represented by the neural activity controlled by the microstimulation.
For touch timing, we estimate the correlation coefficients (CC) over time between virtual touch responses and the corresponding target natural touch responses. To simplify the spike train correlation estimation, we bin the data using 5 ms bins. The correlation coefficients of both spike trains and LFPs are calculated. Figure 10 shows the boxplot of the correlation coefficients over 6 test trials, each of which corresponds to a particular natural touch site, a forepaw digit or pad (d1, d2, d4, p3, p1, or mp). It is observed that the maximum correlation coefficient is at lag zero for each trial, meaning that the virtual touch response is correctly time-locked. For each touch site, we estimate the similarity between the natural touch response and the virtual touch response in the following two cases.
(i) Matched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to the same touch site.

(ii) Unmatched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to different touch sites.
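The time-locking check described above (binning and the correlation coefficient at a range of lags, with the peak expected at lag zero) can be sketched as follows; the function names and the synthetic sequences are ours:

```python
import numpy as np

def lagged_cc(target, response, max_lag):
    """Correlation coefficient between two binned sequences at each lag;
    a time-locked response peaks at lag zero."""
    lags = range(-max_lag, max_lag + 1)
    out = []
    for lag in lags:
        if lag >= 0:
            a, b = target[lag:], response[:len(response) - lag]
        else:
            a, b = target[:lag], response[-lag:]
        out.append(np.corrcoef(a, b)[0, 1])
    return np.array(list(lags)), np.array(out)

# usage: a noisy copy of a binned spike-count sequence is maximally
# correlated with the original at lag zero
rng = np.random.default_rng(0)
target = rng.poisson(2.0, 1000).astype(float)
response = target + rng.standard_normal(1000)
lags, cc = lagged_cc(target, response, 20)
```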
We extract all the neural responses in the 300 ms window after touch onset and calculate the correlation coefficients between natural touch responses and virtual touch responses across each pair of trials. The one-tailed Kolmogorov-Smirnov (KS) test is implemented to test the alternative
Table 2: Average and standard deviation of the correlation coefficient (CC) between natural touch spike train responses and virtual touch spike train responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

    Touch site    CC (matched virtual)    CC (unmatched virtual)    P value
    d1            0.42 ± 0.06             0.35 ± 0.06               0.00
    d2            0.40 ± 0.05             0.37 ± 0.06               0.01
    d4            0.40 ± 0.05             0.37 ± 0.05               0.02
    p3            0.38 ± 0.05             0.37 ± 0.06               0.11
    p2            0.40 ± 0.07             0.36 ± 0.05               0.00
    mp            0.41 ± 0.07             0.37 ± 0.06               0.00
hypothesis that the distribution of the correlation coefficients for the matched virtual case is higher than the distribution for the unmatched virtual case (the null hypothesis is that the distributions are the same). The correlation coefficients and P values of the KS test for spike trains and LFPs are shown in Tables 2 and 3. The similarity between natural touch responses and virtual touch responses in the unmatched virtual case is found to be significantly lower than in the matched virtual case for most touch sites (P value < 0.05), except for touch site p3. Without psychophysical testing, it is unclear how effective the microstimulation is in producing true sensory sensations. Nonetheless, these are promising results showing the effectiveness of a controller utilizing the multiscale neural decoding methodology.
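The one-sided two-sample KS test can be sketched with its asymptotic p-value approximation. The synthetic samples below only mimic the d1 row of Table 2; they are not the recorded data, and the asymptotic formula is a standard approximation rather than the exact test used in the paper:

```python
import numpy as np

def ks_one_sided(matched, unmatched):
    """One-sided two-sample KS statistic and asymptotic p-value for the
    alternative that `matched` is stochastically larger than `unmatched`
    (its empirical CDF lies below), using p ~ exp(-2 D^2 n m / (n + m))."""
    matched, unmatched = np.sort(matched), np.sort(unmatched)
    grid = np.union1d(matched, unmatched)
    F = np.searchsorted(matched, grid, side="right") / len(matched)
    G = np.searchsorted(unmatched, grid, side="right") / len(unmatched)
    D = np.max(G - F)                 # how far the unmatched CDF exceeds the matched one
    n, m = len(matched), len(unmatched)
    p = np.exp(-2 * D ** 2 * n * m / (n + m))
    return D, p

# usage on synthetic CC samples loosely mimicking the d1 row of Table 2
rng = np.random.default_rng(0)
matched = rng.normal(0.42, 0.06, 200)
unmatched = rng.normal(0.35, 0.06, 200)
D, p = ks_one_sided(matched, unmatched)
```

A small p rejects the null that the two CC distributions are the same in favor of the matched CCs being larger.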
6. Conclusions
This work proposes a novel tensor-product-kernel-based machine learning framework, which provides a way to
Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

    Touch site    CC (matched virtual)    CC (unmatched virtual)    P value
    d1            0.42 ± 0.20             0.28 ± 0.23               0.00
    d2            0.46 ± 0.13             0.28 ± 0.22               0.00
    d4            0.41 ± 0.19             0.26 ± 0.21               0.00
    p3            0.38 ± 0.18             0.29 ± 0.22               0.07
    p2            0.33 ± 0.19             0.26 ± 0.23               0.20
    mp            0.34 ± 0.17             0.25 ± 0.21               0.00
decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs contain complementary information that can enhance neural data decoding. However, a systematic approach to combine these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined in both the spike train space and the LFP space, seems to be a very productive approach to achieve our goals. We have basically used two types of combination kernels to achieve the multiscale combination: sum kernels to "average" across different spike channels as well as across LFP channels, which combine evidence for the neural event in each modality, and product kernels across the spike and LFP modalities to emphasize events that are represented in both multiscale modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step of a long process to optimize the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires that the neural event be present at both scales to be useful for decoding, which improves specificity. If we look carefully at Figure 6, we can understand the effect of decoding with the product kernel. Notice that the correlation times of spikes and LFPs are very different (LFPs have a much longer correlation time). Moreover, the composite kernel definition can be naturally configured to different brain areas and even neuronal types with distinctive firing patterns. Each pattern will lead to different correlation profiles, which will immediately tune the properties of the kernels across brain areas and neural populations. If only LFPs are used, we can expect that the response time of the decoder will be very long and miss some events. The product kernel in fact limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel. This explains exactly the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information from spikes and LFPs into the same model and enhance neural decoding robustness and accuracy.
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: how to emulate "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes and LFPs alone (results not shown). This result can be expected because the inverse controller is basically a decoder. However, we have to realize that not all tasks of interest reduce to neural decoding, and we do not even know if neural control can be further improved by a different kernel design. This is where further research is necessary to optimize the joint kernels. For instance, we can weight both the channel information and the multiscale information to maximize the task performance using metric learning [40].
Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical way to leverage heterogeneous neural activity in decoding and control scenarios, and it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J. K. Chapin, K. A. Moxon, R. S. Markowitz, and M. A. L. Nicolelis, "Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex," Nature Neuroscience, vol. 2, no. 7, pp. 664–670, 1999.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098–1101, 2008.
[3] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, "Direct cortical control of 3D neuroprosthetic devices," Science, vol. 296, no. 5574, pp. 1829–1832, 2002.
[4] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141–142, 2002.
[5] J. M. Carmena, M. A. Lebedev, R. E. Crist et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, pp. 193–208, 2003.
[6] J. DiGiovanna, B. Mahmoudi, J. Fortes, J. C. Principe, and J. C. Sanchez, "Coadaptive brain-machine interface via reinforcement learning," IEEE Transactions on Biomedical Engineering, vol. 56, no. 1, pp. 54–64, 2009.
Computational Intelligence and Neuroscience 15
[7] A. Belitski, S. Panzeri, C. Magri, N. K. Logothetis, and C. Kayser, "Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 533–545, 2010.
[8] J. R. Huxter, T. J. Senior, K. Allen, and J. Csicsvari, "Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus," Nature Neuroscience, vol. 11, no. 5, pp. 587–594, 2008.
[9] M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis, "Inferring spike trains from local field potentials," Journal of Neurophysiology, vol. 99, no. 3, pp. 1461–1476, 2008.
[10] G. Buzsaki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926–1929, 2004.
[11] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, Springer, 1991.
[12] R. E. Kass, V. Ventura, and E. N. Brown, "Statistical issues in the analysis of neuronal data," Journal of Neurophysiology, vol. 94, no. 1, pp. 8–25, 2005.
[13] R. D. Flint, E. W. Lindberg, L. R. Jordan, L. E. Miller, and M. W. Slutzky, "Accurate decoding of reaching movements from field potentials in the absence of spikes," Journal of Neural Engineering, vol. 9, no. 4, Article ID 046006, 2012.
[14] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee, "Local field potentials indicate network state and account for neuronal response variability," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 567–579, 2010.
[15] J. Liu and W. T. Newsome, "Local field potential in cortical area MT: stimulus tuning and behavioral correlations," Journal of Neuroscience, vol. 26, no. 30, pp. 7779–7790, 2006.
[16] P. Berens, G. Keliris, A. Ecker, N. Logothetis, and A. Tolias, "Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex," Frontiers in Neuroscience, vol. 2, pp. 199–207, 2008.
[17] D. Xing, C.-I. Yeh, and R. M. Shapley, "Spatial spread of the local field potential and its laminar variation in visual cortex," Journal of Neuroscience, vol. 29, no. 37, pp. 11540–11549, 2009.
[18] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, Adaptive Computation and Machine Learning series, MIT Press, 2002.
[19] L. Li, I. M. Park, A. Brockmeier et al., "Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 532–543, 2013.
[20] S. Monaco, G. Kroliczak, D. J. Quinlan et al., "Contribution of visual and proprioceptive information to the precision of reaching movements," Experimental Brain Research, vol. 202, no. 1, pp. 15–32, 2010.
[21] R. S. Johansson and G. Westling, "Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects," Experimental Brain Research, vol. 56, no. 3, pp. 550–564, 1984.
[22] J. E. O'Doherty, M. A. Lebedev, P. J. Ifft et al., "Active tactile exploration using a brain-machine-brain interface," Nature, vol. 479, no. 7372, pp. 228–231, 2011.
[23] J. E. O'Doherty, M. A. Lebedev, T. L. Hanson, N. A. Fitzsimmons, and M. A. Nicolelis, "A brain-machine interface instructed by direct intracortical microstimulation," Frontiers in Integrative Neuroscience, vol. 3, 2009.
[24] N. A. Fitzsimmons, W. Drake, T. L. Hanson, M. A. Lebedev, and M. A. L. Nicolelis, "Primate reaching cued by multichannel spatiotemporal cortical microstimulation," The Journal of Neuroscience, vol. 27, no. 21, pp. 5593–5602, 2007.
[25] X.-J. Feng, B. Greenwald, H. Rabitz, E. Shea-Brown, and R. Kosut, "Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model," Journal of Neural Engineering, vol. 4, no. 2, pp. L14–L21, 2007.
[26] J. Liu, H. K. Khalil, and K. G. Oweiss, "Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states," Journal of Neural Engineering, vol. 8, no. 4, Article ID 045002, 2011.
[27] A. J. Brockmeier, J. S. Choi, M. M. Distasio, J. T. Francis, and J. C. Príncipe, "Optimizing microstimulation using a reinforcement learning framework," in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 1069–1072, September 2011.
[28] A. Brockmeier, J. Choi, M. Emigh, L. Li, J. Francis, and J. Principe, "Subspace matching thalamic microstimulation to tactile evoked potentials in rat somatosensory cortex," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2957–2960, 2012.
[29] Y. Ahmadian, A. M. Packer, R. Yuste, and L. Paninski, "Designing optimal stimuli to control neuronal spike timing," Journal of Neurophysiology, vol. 106, no. 2, pp. 1038–1053, 2011.
[30] J. Moehlis, E. Shea-Brown, and H. Rabitz, "Optimal inputs for phase models of spiking neurons," Journal of Computational and Nonlinear Dynamics, vol. 1, no. 4, pp. 358–367, 2006.
[31] D. Brugger, S. Butovas, M. Bogdan, and C. Schwarz, "Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials," IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1483–1491, 2011.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[33] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[34] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering, John Wiley & Sons, 2010.
[35] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
[36] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," The Journal of Machine Learning Research, vol. 5, pp. 27–72, 2004.
[37] C. Cortes, M. Mohri, and A. Rostamizadeh, "Algorithms for learning kernels based on centered alignment," Journal of Machine Learning Research, vol. 13, pp. 795–828, 2012.
[38] M. Yamada, W. Jitkrittum, L. Sigal, E. P. Xing, and M. Sugiyama, "High-dimensional feature selection by feature-wise kernelized lasso," Neural Computation, vol. 26, no. 1, pp. 185–207, 2013.
[39] R. A. Horn, "On infinitely divisible matrices, kernels, and functions," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 8, no. 3, pp. 219–230, 1967.
[40] A. J. Brockmeier, J. S. Choi, E. G. Kriminger, J. T. Francis, and J. C. Principe, "Neural decoding with kernel-based metric learning," Neural Computation, vol. 26, no. 6, 2014.
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, "Spikernels: embedding spiking neurons in inner product spaces," in Advances in Neural Information Processing Systems, vol. 15, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Principe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223–2250, 2012.
[44] L. Li, Kernel Based Machine Learning Framework for Neural Decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453–466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213–225, 1991.
[48] L. Csato and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641–668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275–2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950–1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291–2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268–280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with application in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975.
characterization of brain state. Simultaneous recording of multiple types of signals could facilitate enhanced neural system modeling. Although there are underlying relationships among these brain activities, it is unknown how to leverage the heterogeneous set of signals to improve the identification of the neural-response-stimulus mappings. The challenge is in defining a framework that can incorporate these heterogeneous signal formats coming from multiple spatiotemporal scales. In our work, we mainly address integrating spike trains and LFPs for multiscale neural decoding.
Spike trains and LFPs encode complementary information about stimuli and behaviors [7, 8]. In most recordings, spike trains are obtained by detecting transient events on a signal that is conditioned with a high-pass filter whose cutoff frequency is set at about 300–500 Hz, while LFPs are obtained with a low-pass filter with a 300 Hz cutoff frequency [9]. Spike trains represent single- or multiunit neural activity with a fine temporal resolution. However, their stochastic properties induce considerable variability, especially when the stimulation amplitude is small; that is, the same stimuli rarely elicit the same firing patterns in repeated trials. In addition, a functional unit of the brain contains thousands of neurons. Only the activity of a small subset of neurons can be recorded, and of these, only a subset may modulate with respect to the stimuli or condition of interest.
In contrast, LFPs reflect the average synaptic input to the region near the electrode [10], which limits specificity but provides robustness for characterizing the modulation induced by stimuli. Furthermore, LFPs naturally provide a population-level measure of neural activity. Therefore, an appropriate aggregation of LFPs and spike trains enables enhanced accuracy and robustness of neural decoding models. For example, the decoder can coordinate LFP or spike patterns to tag particularly salient events or extract different stimulus features characterized by the multisource signals. However, the heterogeneity between LFPs and spike trains complicates their integration into the same model. The information in a spike train is coded in a set of ordered spike timings [11, 12], while an LFP is a continuous amplitude time series. Moreover, the time scale of LFPs is significantly longer than that of spike trains. Whereas recent work has compared the decoding accuracy of LFPs and spikes [13], only a small number of simple models have been developed to relate both activities [14], and the complete relationship between LFPs and spike trains is still a subject of controversy [15–17], which hinders principled modeling approaches.
To address these modeling issues, this paper proposes a signal processing framework based on tensor-product kernels to enable decoding and even controlling multiscale neural activity. The tensor-product kernel uses multiple heterogeneous signals and implicitly defines a kernel space constructed by the tensor product of the individual kernels designed for each signal type [18]. Because the tensor-product kernel operates on the joint features of spike trains and LFPs, kernel-based machine learning methodologies can leverage multiscale neural activity to uncover the mapping between the neural system states and the corresponding stimuli.
The kernel least mean square (KLMS) algorithm is used to estimate the dynamic nonlinear mapping from the two types of neural responses to the stimuli. The KLMS algorithm exploits the fact that linear signal processing in a reproducing kernel Hilbert space (RKHS) corresponds to nonlinear processing in the input space, and it is used in the adaptive inverse control scheme [19] designed for controlling neural systems. Utilizing the tensor-product kernel, we naturally extend this scheme to multiscale neural activity. Since the nonlinear control is achieved via linear processing in the RKHS, it bypasses the local minima normally encountered in nonlinear control.
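To make the idea concrete, the KLMS update with a tensor-product kernel can be sketched in a few lines. This is a minimal illustrative sketch, not the decoder used in the experiments; the Gaussian base kernels, feature vectors, and step size are assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (Schoenberg-type) kernel on vectorized features."""
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / sigma ** 2)

def tensor_product_kernel(spike_a, lfp_a, spike_b, lfp_b):
    # Joint similarity = product of the per-signal kernel evaluations.
    return gaussian_kernel(spike_a, spike_b) * gaussian_kernel(lfp_a, lfp_b)

class KLMS:
    """Kernel least mean square: LMS performed linearly in the RKHS."""
    def __init__(self, kernel, step_size=0.5):
        self.kernel = kernel
        self.eta = step_size
        self.centers = []   # stored (spike features, LFP features) pairs
        self.weights = []   # error-scaled expansion coefficients

    def predict(self, x):
        return sum(w * self.kernel(*c, *x)
                   for w, c in zip(self.weights, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)       # prediction error on the new sample
        self.centers.append(x)        # grow the kernel expansion by one center
        self.weights.append(self.eta * e)
        return e

# Toy usage: repeatedly present one (spike, LFP) feature pair with target 1.0;
# the prediction error should shrink as the expansion adapts.
rng = np.random.default_rng(0)
x = (rng.normal(size=4), rng.normal(size=8))
klms = KLMS(tensor_product_kernel, step_size=0.5)
errors = [abs(klms.update(x, 1.0)) for _ in range(5)]
```

With a normalized kernel (self-similarity 1), each update on the same sample halves the residual error, which is the expected LMS behavior transplanted into the RKHS.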
The effectiveness of the proposed tensor-product-kernel framework is validated in a somatosensory stimulation study. Somatosensory feedback, which remains underdeveloped in BMIs, is important for motor and sensory integration during movement execution, such as proprioceptive and tactile feedback about limb state during interaction with external objects [20, 21]. A number of early experiments have shown that spatiotemporally patterned microstimulation delivered to somatosensory cortex can be used to guide the direction of reaching movements [22–24]. In order to effectively apply artificial sensory feedback in BMIs, it is essential to find out how to use microstimulation to replicate the target spatiotemporal patterns in the somatosensory cortex, where neural decoding and control are the critical technologies to achieve this goal. In this paper, our framework is applied to leverage multiscale neural activity to decode both natural sensory stimulation and microstimulation. Its decoding accuracy is compared with decoders that use a single type of neural activity (LFPs or spike trains).
In the neural system control scenario, this tensor-product-kernel methodology can also improve controller performance. Controlling neural activity via stimulation has raised the prospect of generating specific neural activity patterns in downstream areas, even mimicking natural neural responses, which is central both for our basic understanding of neural information processing and for engineering "neural prosthetic" devices that can interact with the brain directly [25]. From a control theory perspective, the neural circuit is treated as the "plant," where the applied microstimulation is the control signal and the plant output is the elicited neural response represented by spike trains and LFPs. Most conventional control schemes cannot be directly applied to spike trains because there is no algebraic structure in the space of spike trains; therefore, most existing neural control approaches have been applied to binned spike trains or LFPs [25–31]. Here we utilize the kernel-based adaptive inverse controller for spike trains proposed in our previous work [19] as an input-output (system identification) based control scheme. This methodology directly extends to the tensor-product kernel to leverage the availability of multiscale neural signals (e.g., spike trains and LFPs), and it improves the robustness and accuracy of the stimulation optimization by exploiting the complementary information of the heterogeneous neural signals recorded from multiple sources.
The adaptive inverse control framework controls patterned electrical microstimulation in order to drive neural responses to mimic the spatiotemporal neural activity patterns induced by tactile stimulation. This framework creates new opportunities to improve the ability to control neural states to emulate natural stimuli by leveraging the complementary information from multiscale neural activity. This better interprets the internal states of the neural system and thus enhances the robustness and accuracy of the optimal microstimulation pattern estimation.
The rest of the paper is organized as follows. Section 2 introduces kernels for spike trains and LFPs and the tensor-product kernel that combines them. The kernel-based decoding model and the adaptive inverse control scheme that exploits kernel-based neural decoding to enable control in the RKHS are introduced in Sections 3 and 4, respectively. Section 5 discusses the somatosensory stimulation emulation scenario and illustrates the test results of applying the tensor-product kernel to leverage multiscale neural activity for decoding and control tasks. Section 6 concludes the paper.
2. Tensor-Product Kernel for Multiscale Heterogeneous Neural Activity
The mathematics of many signal processing and pattern recognition algorithms is based on evaluating the similarity of pairs of exemplars. For vectors or functions, the inner product defined on Hilbert spaces is a linear operator and a measure of similarity. However, not all data types exist in Hilbert spaces. Kernel functions are bivariate symmetric functions that implicitly embed samples in a Hilbert space. Consequently, if a kernel can be defined on a data type, then algorithms defined in terms of inner products can be used. This has enabled various kernel algorithms [18, 32–34].
To begin, we define the general framework for the various kernel functions used here, keeping in mind that the input corresponds to assorted neural data types. Let the domain of a single neural response dimension, that is, a single LFP channel or one spiking unit, be denoted by $\mathcal{X}$, and consider a kernel $\kappa : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. If $\kappa$ is positive definite, then there is an implicit mapping $\phi : \mathcal{X} \to \mathcal{H}$ that maps any two sample points, say $x, x' \in \mathcal{X}$, to corresponding elements in the Hilbert space, $\phi(x), \phi(x') \in \mathcal{H}$, such that $\kappa(x, x') = \langle \phi(x), \phi(x') \rangle$ is the inner product of these elements in the Hilbert space. As an inner product, the kernel evaluation $\kappa(x, x')$ quantifies the similarity between $x$ and $x'$.

A useful property is that this inner product induces a distance metric:

$$d(x, x') = \|\phi(x) - \phi(x')\|_{\mathcal{H}} = \sqrt{\langle \phi(x) - \phi(x'),\, \phi(x) - \phi(x') \rangle} = \sqrt{\kappa(x, x) + \kappa(x', x') - 2\kappa(x, x')}. \quad (1)$$

For normalized, shift-invariant kernels, where $\kappa(x, x) = 1$ for all $x$, the distance is inversely related to the kernel evaluation: $d(x, x') = \sqrt{2 - 2\kappa(x, x')}$.
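The induced distance can be computed directly from kernel evaluations without ever forming the implicit mapping. A minimal sketch, assuming a normalized Gaussian kernel (an illustrative choice):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Normalized, shift-invariant kernel: k(x, x) = 1 for all x."""
    return np.exp(-np.sum((x - y) ** 2) / sigma ** 2)

def kernel_distance(x, y, kernel=gaussian_kernel):
    """Equation (1): d(x, y) = sqrt(k(x,x) + k(y,y) - 2 k(x,y))."""
    return np.sqrt(kernel(x, x) + kernel(y, y) - 2.0 * kernel(x, y))

x = np.array([0.0, 0.0])
y = np.array([1.0, 0.0])
d = kernel_distance(x, y)
# For a normalized kernel this reduces to sqrt(2 - 2 k(x, y)).
assert np.isclose(d, np.sqrt(2.0 - 2.0 * gaussian_kernel(x, y)))
```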
To utilize two or more dimensions of the neural response, a kernel that operates on the joint space is required. There are two basic approaches to construct multidimensional kernels from kernels defined on the individual variables: direct-sum and tensor-product kernels. In terms of kernel evaluations, they consist of taking either the sum or the product of the individual kernel evaluations. In both cases, the resulting kernel is positive definite as long as the individual kernels are positive definite [18, 35].

Let $\mathcal{X}_i$ denote the neural response domain of the $i$th dimension, and consider a positive-definite kernel $\kappa_i : \mathcal{X}_i \times \mathcal{X}_i \to \mathbb{R}$ with corresponding mapping $\phi_i : \mathcal{X}_i \to \mathcal{H}_i$ for this dimension. Again, the similarity between a pair of samples $x$ and $x'$ on the $i$th dimension is $\kappa_i(x_{(i)}, x'_{(i)}) = \langle \phi_i(x_{(i)}), \phi_i(x'_{(i)}) \rangle$.

For the sum kernel, the joint similarity over a set of dimensions $\mathcal{I}$ is

$$\kappa_\Sigma(\mathbf{x}, \mathbf{x}') = \sum_{i \in \mathcal{I}} \kappa_i(x_{(i)}, x'_{(i)}). \quad (2)$$

This measure of similarity is an average similarity across all dimensions. When the sum is over a large number of dimensions, the contributions of individual dimensions are diluted. This is useful for multiunit spike trains or multichannel LFPs, where the individual dimensions are highly variable and, if used individually, would result in poor decoding performance on a single-trial basis.
For the tensor-product kernel, the joint similarity over two dimensions $i$ and $j$ is computed by taking the product of the kernel evaluations: $\kappa_{[i,j]}([x_{(i)}, x_{(j)}], [x'_{(i)}, x'_{(j)}]) = \kappa_i(x_{(i)}, x'_{(i)}) \cdot \kappa_j(x_{(j)}, x'_{(j)})$. The new kernel $\kappa_{[i,j]}$ corresponds to a mapping function that is the tensor product of the individual mapping functions, $\phi_{[i,j]} = \phi_i \otimes \phi_j$, where $\phi_{[i,j]}(x_{(i,j)}) \in \mathcal{H}_{[i,j]}$, the tensor-product Hilbert space. The product can be taken over a set of dimensions $\mathcal{I}$, and the result is a positive-definite kernel over the joint space: $\kappa_\Pi(\mathbf{x}, \mathbf{x}') = \prod_{i \in \mathcal{I}} \kappa_i(x_{(i)}, x'_{(i)})$.

The tensor-product kernel corresponds to a stricter measure of similarity than the sum kernel. Due to the product, if $\kappa_i(x_{(i)}, x'_{(i)}) \approx 0$ for even one dimension, then $\kappa_\Pi(\mathbf{x}, \mathbf{x}') \approx 0$. The tensor-product kernel requires joint similarity; that is, for samples to be considered similar in the joint space, they must be close in all the individual spaces. If even one dimension is dissimilar, the product will appear dissimilar, so highly variable dimensions have a deleterious effect on the joint similarity measure. On the other hand, the tensor product is a more precise measure of similarity, which is used later to combine multiscale neural activity.
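The contrast between the two compositions can be checked numerically. In the sketch below (Gaussian base kernel and toy feature vectors are assumptions for illustration), the two samples agree on one dimension but differ strongly on the other: the mismatch is diluted by the sum but vetoes the product.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / sigma ** 2)

def sum_kernel(x, y):
    """Direct-sum composition, equation (2): average-like similarity."""
    return sum(gaussian_kernel(xi, yi) for xi, yi in zip(x, y))

def product_kernel(x, y):
    """Tensor-product composition: joint similarity across all dimensions."""
    return np.prod([gaussian_kernel(xi, yi) for xi, yi in zip(x, y)])

# Two samples: identical on dimension 1, very different on dimension 2.
x = [np.zeros(3), np.zeros(3)]
y = [np.zeros(3), 10.0 * np.ones(3)]

print(sum_kernel(x, y))      # close to 1: the matching dimension dominates
print(product_kernel(x, y))  # close to 0: the mismatched dimension vetoes
```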
More generally, an explicit weight can be used to adjust the influence of the individual dimensions on the joint kernel. Any convex combination of kernels is positive definite, and learning the weights of this combination is known as multiple kernel learning [36–38]. For certain constituent kernels, namely those that are infinitely divisible [39], a weighted product kernel can also be applied [40]. However, the optimization of these weightings is not explored in the current work.
Figure 1: Schematic representation of the construction of the RKHS defined by the tensor-product kernel from the individual spike and LFP kernel spaces, along with the mapping from the original data. Specifically, $\mathbf{s}_i$ denotes a window of multiunit spike trains, $\mathbf{x}_n$ denotes a window of multichannel LFPs, $\kappa_s(\cdot,\cdot)$ denotes the spike train kernel with implicit mapping function $\phi(\cdot)$, and $\kappa_x(\cdot,\cdot)$ denotes the LFP kernel with implicit mapping function $\varphi(\cdot)$.
In general, a joint kernel space constructed via either direct sum or tensor product allows the homogeneous processing of heterogeneous signal types, all within the framework of RKHSs. We use direct-sum kernels to combine the different dimensions of multiunit spike trains or multichannel LFPs. For the spike trains, using the sum kernel across the different units yields an "average" population similarity over the space of spike trains, where averages cannot be computed directly. Then a tensor-product kernel combines the two kernels, one for the multiunit spike trains and one for the multichannel LFPs; see Figure 1 for an illustration. The kernels for spike trains and LFPs can be selected and specified individually according to their specific properties.
In conclusion, composite kernels are very different from those commonly used in kernel-based machine learning, for example, for the support vector machine. In fact, here a pair of windowed spike trains and windowed LFPs is mapped to a feature function in the joint RKHS, and different spike train and LFP pairs are mapped to different locations in this RKHS, as shown in Figure 1. Due to their versatility, Schoenberg kernels, defined for both the spike timing space and LFPs, are employed in this paper and discussed below.
2.1. Kernel for Spike Trains. Unlike conventional amplitude data, there is no natural algebraic structure in the space of spike trains. The binning process, which easily transforms point processes into discrete amplitude time series, is widely used in spike train analysis and allows the application of conventional amplitude-based kernels to spike trains [41], at the expense of losing the temporal resolution of the neural responses. This means that any temporal information in spikes within and between bins is disregarded, which is especially alarming when spike timing precision can be in the millisecond range. Although the bin size can be set small enough to preserve fine time resolution, this sparsifies the signal representation, increases the artifact variability, and causes high dimensionality in the model, which requires voluminous data for proper training.
According to the literature, it is appropriate to consider a spike train to be a realization of a point process, which describes the temporal distribution of the spikes. Generally speaking, a point process $p_i$ can be completely characterized by its conditional intensity function $\lambda(t \mid H^i_t)$, where $t \in \tau = [0, T]$ denotes the time coordinate and $H^i_t$ is the history of the process up to $t$. A recent line of research [42, 43] defines an injective mapping from spike trains to an RKHS based on a kernel between the conditional intensity functions of two point processes [42]. Among the cross-intensity (CI) kernels, the Schoenberg kernel is defined as

$$\kappa\left(\lambda(t \mid H^i_t), \lambda(t \mid H^j_t)\right) = \exp\left(-\frac{\left\|\lambda(t \mid H^i_t) - \lambda(t \mid H^j_t)\right\|^2}{\sigma^2}\right) = \exp\left(-\frac{\int_\tau \left(\lambda(t \mid H^i_t) - \lambda(t \mid H^j_t)\right)^2 \mathrm{d}t}{\sigma^2}\right), \quad (3)$$

where $\sigma$ is the kernel size. The Schoenberg kernel is selected in this work because of its modeling accuracy and robustness to free-parameter settings [44]. It is a Gaussian-like kernel defined on intensity functions that is strictly positive definite and sensitive to the nonlinear coupling of two intensity functions [42]. Different spike trains will thus be mapped to different locations in the RKHS. Compared to kernels designed on binned spike trains (e.g., the spikernel [41]), the main advantage of the Schoenberg kernel is that the precision of the spike event locations is better preserved, and the sparseness and high-dimensionality limitations for model building are also avoided, resulting in enhanced robustness and reduced computational complexity, especially when the application requires fine time resolution [44].
In order to be applicable, the methodology must lead to a simple estimation of the quantities of interest (e.g., the kernel) from experimental data. A practical choice used in our work estimates the conditional intensity function with a kernel smoothing approach [42, 45], which allows estimating the intensity function from a single realization and nonparametrically and injectively maps a windowed spike train into a continuous function. The estimated intensity function is obtained by simply convolving the spike train $s(t)$ with the smoothing kernel $g(t)$, yielding

$$\hat{\lambda}(t) = \sum_{m=1}^{M} g(t - t_m), \quad t_m \in \mathcal{T},\ m = 1, \ldots, M, \quad (4)$$

where the smoothing function $g(t)$ integrates to 1. Here $\hat{\lambda}(t)$ can be interpreted as an estimate of the intensity function. The rectangular and exponential functions [42, 46] are both popular smoothing kernels, which guarantee injective mappings from the spike train to the estimated intensity function. In order to decrease the kernel computation complexity, the rectangular function $g(t) = (1/\mathcal{T})(U(t) - U(t - \mathcal{T}))$ (with $\mathcal{T} \gg$ the interspike interval) is used in our work, where $U(t)$ is the Heaviside step function. This rectangular function approximates the cumulative density function of the spike counts in the window and compromises the locality of the spike trains; that is, the mapping places more emphasis on the early spikes than on the later ones. However, our experiments show that this compromise has only a minimal impact on the kernel-based regression performance.
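With the rectangular smoothing function, equation (4) reduces to summing shifted box functions, one per spike. A small sketch (the spike times, window length, and time grid are illustrative assumptions):

```python
import numpy as np

def rectangular_intensity(spike_times, t_grid, width):
    """Equation (4) with g(t) = (1/width) on [0, width): sum of shifted boxes."""
    lam = np.zeros_like(t_grid)
    for t_m in spike_times:
        # Each spike contributes a box of height 1/width starting at t_m,
        # so each contribution integrates to 1.
        lam += ((t_grid >= t_m) & (t_grid < t_m + width)) / width
    return lam

t_grid = np.linspace(0.0, 1.0, 1001)         # 1 s window, 1 ms resolution
spike_times = np.array([0.10, 0.12, 0.50])   # seconds
lam = rectangular_intensity(spike_times, t_grid, width=0.2)

# The estimate integrates (approximately) to the spike count.
print(np.trapz(lam, t_grid))  # ~3.0
```

Because each box integrates to 1, the estimated intensity integrates to the number of spikes, consistent with its interpretation as an intensity (rate) function.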
Let s_i^n(t) denote the spike train for the i-th sample of the n-th spiking unit. The multiunit spike kernel is taken as the unweighted sum over the kernels on the individual units:

$$\kappa_s(\mathbf{s}_i(t), \mathbf{s}_j(t)) = \sum_{n} \kappa_s(s_i^n(t), s_j^n(t)). \quad (5)$$
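The Schoenberg kernel on estimated intensity functions, together with its multiunit sum in (5), can be sketched as follows, assuming the intensity functions have already been sampled on a common time grid so that a Riemann sum stands in for the integral (all names and parameter values are illustrative):

```python
import numpy as np

def schoenberg_kernel(lam_i, lam_j, sigma=1.0, dt=0.001):
    """Gaussian-like Schoenberg kernel on estimated intensity functions:
    kappa(lam_i, lam_j) = exp(-||lam_i - lam_j||^2 / sigma^2),
    with the squared L2 distance approximated by a Riemann sum."""
    d2 = np.sum((lam_i - lam_j) ** 2) * dt
    return np.exp(-d2 / sigma ** 2)

def multiunit_spike_kernel(lams_i, lams_j, sigma=1.0, dt=0.001):
    """Unweighted sum of single-unit Schoenberg kernels over units (Eq. 5)."""
    return sum(schoenberg_kernel(a, b, sigma, dt)
               for a, b in zip(lams_i, lams_j))
```

For identical inputs each single-unit kernel evaluates to 1, so the multiunit kernel equals the number of units, consistent with a direct-sum RKHS construction.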
2.2. Kernels for LFPs. In contrast with spike trains, LFPs exhibit less spatial and temporal selectivity [15]. In the time domain, LFP features can be obtained by sliding a window on the signal, which describes the temporal LFP structure. The length of the window is selected based on the duration of neural responses to certain stimuli; the extent of this duration can be assessed by the autocorrelation function, as we will discuss in Section 5.3.1. In the frequency domain, the spectral power and phase in different frequency bands are also known to be informative features for decoding, but here we concentrate only on time-domain decompositions. In the time domain, we can simply treat single-channel LFPs as a time series and apply the standard Schoenberg kernel to the sequence of signal samples in time. The Schoenberg kernel, defined in continuous spaces, maps the correlation time structure of the LFP x(t) into a function in the RKHS:
$$\kappa_x(x_i(t), x_j(t)) = \exp\left(-\frac{\left\|x_i(t) - x_j(t)\right\|^2}{\sigma_x^2}\right) = \exp\left(-\frac{\int_{\mathcal{T}_x} \left(x_i(t) - x_j(t)\right)^2 dt}{\sigma_x^2}\right) = \exp\left(-\left(\int_{\mathcal{T}_x} x_i(t)x_i(t) + x_j(t)x_j(t) - 2x_i(t)x_j(t)\, dt\right) \times \left(\sigma_x^2\right)^{-1}\right), \quad (6)$$

where 𝒯_x = [0, T_x].
Let x_i^n(t) denote the LFP waveform for the i-th sample of the n-th channel. Similar to the multiunit spike kernel, the multichannel LFP kernel is defined by the direct-sum kernel:

$$\kappa_x(\mathbf{x}_i(t), \mathbf{x}_j(t)) = \sum_{n} \kappa_x(x_i^n(t), x_j^n(t)). \quad (7)$$
2.3. Discrete-Time Sampling. Assuming a sampling rate with period τ, let x_i = [x_i^1, x_{i+1}^1, …, x_{i−1+T/τ}^1, x_i^2, …, x_{i−1+T/τ}^N] denote the i-th multichannel LFP vector, obtained by sliding the T-length window with step τ. Let s_i = {t_m − (i − 1)τ : t_m ∈ [(i − 1)τ, (i − 1)τ + T], m = 1, …, M} denote the corresponding i-th window of the multiunit spike timing sequence. The time scale of the analysis for LFPs and spikes, both in terms of the window length and the sampling rate, is very important and needs to be defined by the characteristics of each signal. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually; that is, the window length T and sampling rate for spike trains and LFPs can differ. A suitable time scale can be estimated through the autocorrelation coefficients of the signal, as will be explained below.
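Putting the pieces together, the tensor-product kernel on a joint (spike, LFP) sample is the product of the multiunit spike kernel (5) and the multichannel LFP kernel (7), which is again a positive-definite kernel on the joint space. A hedged sketch, assuming each unit intensity and each LFP channel window is already a sampled array (names and kernel sizes are illustrative):

```python
import numpy as np

def gaussian_l2_kernel(a, b, sigma):
    """Schoenberg-type kernel on a sampled waveform or intensity:
    exp(-||a - b||^2 / sigma^2) with a discretized squared L2 norm."""
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / sigma ** 2)

def tensor_product_kernel(spike_i, spike_j, lfp_i, lfp_j,
                          sigma_s=1.0, sigma_x=1.0):
    """Tensor-product kernel on a joint (spike, LFP) sample: the product of
    the multiunit spike kernel (sum over units) and the multichannel LFP
    kernel (sum over channels)."""
    ks = sum(gaussian_l2_kernel(a, b, sigma_s) for a, b in zip(spike_i, spike_j))
    kx = sum(gaussian_l2_kernel(a, b, sigma_x) for a, b in zip(lfp_i, lfp_j))
    return ks * kx
```

The product form lets each modality keep its own window length and kernel size while still yielding a single scalar similarity for the joint sample, which is what the kernel adaptive filter below consumes.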
3. Adaptive Neural Decoding Model
For neural decoding applications, a regression model with multiscale neural activity as the input is built to reconstruct a stimulus. The appeal of kernel-based filters is the use of the linear structure of the RKHS to implement well-established linear adaptive algorithms and to obtain a nonlinear filter in the input space, which leads to universal approximation capability without the problem of local minima. There are several candidate kernel-based regression methods [32], such as support vector regression (SVR) [33], kernel recursive least squares (KRLS), and kernel least mean square (KLMS) [34]. The KLMS algorithm is preferred here because it is an online methodology of low computational complexity.
Input: {(x_n, d_n)}, n = 1, 2, …, N
Initialization: initialize the weight vector Ω(1), the codebook (set of centers) C(0) = ∅, and the coefficient vector a(0) = [].
Computation:
For n = 1, 2, …, N:
(1) compute the output
  y_n = ⟨Ω(n), φ(x_n)⟩ = ∑_{j=1}^{size(C(n−1))} a_j(n − 1) κ(x_n, C_j(n − 1));
(2) compute the error e_n = d_n − y_n;
(3) compute the minimum distance in the RKHS between x_n and each x_i ∈ C(n − 1):
  d(x_n) = min_j (2 − 2κ(x_n, C_j(n − 1)));
(4) if d(x_n) ≤ ε, keep the codebook unchanged, C(n) = C(n − 1), and update the coefficient of the center closest to x_n:
  a_k(n) = a_k(n − 1) + η e(n), where k = argmin_j √(2 − 2κ(x_n, C_j(n − 1)));
(5) otherwise, store the new center: C(n) = {C(n − 1), x_n}, a(n) = [a(n − 1), η e_n];
(6) Ω(n + 1) = ∑_{j=1}^{size(C(n))} a_j(n) φ(C_j(n)).
end

Algorithm 1: Quantized kernel least mean square (QKLMS) algorithm.
The quantized kernel least mean square (Q-KLMS) is selected in our work to limit the filter growth. Algorithm 1 shows the pseudocode for the Q-KLMS algorithm with a simple online vector quantization (VQ) method, where the quantization is performed based on the distance between the new input and each existing center. In this work, the distance between a center and the input is defined by their distance in the RKHS, which for a shift-invariant kernel normalized so that κ(x, x) = 1 for all x is ‖φ(x_n) − φ(x_i)‖² = 2 − 2κ(x_n, x_i). If the smallest distance is less than a prespecified quantization size ε, the new coefficient η e_n adjusts the weight of the closest center; otherwise, a new center is added. Compared to other techniques [47–50] that have been proposed to curb the growth of the networks, the simple online VQ method is not optimal but is very efficient. Since in our work the algorithm must be applied several times to the same data for convergence, after the first iteration over the data we choose ε = 0, which merges the repeated centers and enjoys the same performance as KLMS.
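Algorithm 1 can be condensed into a short class; this is a minimal illustration (not the authors' code), assuming a normalized kernel so that the RKHS distance is 2 − 2κ:

```python
import numpy as np

class QKLMS:
    """Quantized KLMS (Algorithm 1) for a generic normalized kernel.
    kernel(a, b) must satisfy kernel(x, x) = 1, so the squared RKHS
    distance between mapped samples is 2 - 2*kernel(a, b)."""

    def __init__(self, kernel, eta=0.1, eps=0.0):
        self.kernel, self.eta, self.eps = kernel, eta, eps
        self.centers, self.coeffs = [], []

    def predict(self, x):
        # y_n = sum_j a_j * kappa(x, C_j)
        return sum(a * self.kernel(x, c)
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)
        if self.centers:
            dists = [2 - 2 * self.kernel(x, c) for c in self.centers]
            k = int(np.argmin(dists))
            if dists[k] <= self.eps:
                self.coeffs[k] += self.eta * e   # merge into closest center
                return e
        self.centers.append(x)                   # otherwise add a new center
        self.coeffs.append(self.eta * e)
        return e
```

With eps = 0, a repeated input has RKHS distance exactly zero to its stored copy, so repeated samples are merged rather than duplicated, matching the ε = 0 choice described above.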
We use the Q-KLMS framework with the multiunitspike kernels the multichannel LFP kernels and the tensor-product kernel using the joint samples of LFPs and spiketrains This is quite unlike previous work in adaptive filteringthat almost exclusively uses the Gaussian kernel with real-valued time series
4. Adaptive Inverse Control of the Spatiotemporal Patterns of Neural Activity
As the name indicates, the basic idea of adaptive inverse control is to learn an inverse model of the plant as the controller in Figure 2(a), such that the cascade of the controller and the plant performs like a unitary transfer function, that is, a perfect wire with some delay. The target plant output is used as the controller's command input. The controller parameters are updated to minimize the dissimilarity between the target output and the plant's output during the control process, which enables the controller to track plant variation and cancel system noise. The filtered-ϵ LMS adaptive inverse control diagram [51] shown in Figure 2(a) represents the filtered-ϵ approach to finding Ĉ(z). If the actual inverse controller Ĉ(z) were the ideal inverse controller C(z), the mean square of the overall system error ϵ_k would be minimized. The objective is therefore to make Ĉ(z) as close as possible to the ideal C(z). The difference between the outputs of C(z) and Ĉ(z), both driven by the command input, is an error signal ϵ′. Since the target stimulation is unknown, instead of ϵ′, a filtered error ϵ, obtained by filtering the overall system error ϵ_k through the inverse plant model P̂⁻¹(z), is used for adaptation in place of ϵ′.

If the plant has a long response time, a modeling delay is advantageous to capture the early stages of the response, which is determined by the sliding-window length used to obtain the inverse controller input. There is no performance penalty from the delay Δ as long as the input to Ĉ(z) undergoes the same delay. The parameters of the inverse model P̂⁻¹(z) are initially modeled offline and updated during the whole system operation, which allows P̂⁻¹(z) to incrementally identify the inverse system and thus make ϵ approach ϵ′. Moreover, the adaptation enables P̂⁻¹(z) to track changes in the plant. Thus, minimizing the filtered error obtained from P̂⁻¹(z) makes the controller follow the system variation.
4.1. Relation to Neural Decoding. In this control scheme, there are only two models, Ĉ(z) and P̂⁻¹(z), adjusted during the control process, which share the same input (neural activity) and output types (continuous stimulation waveforms). Therefore, both models behave like neural decoders and can be implemented using the Q-KLMS method introduced in the previous section. Since all the mathematical models
Figure 2: Adaptive inverse control diagram. (a) A filtered-ϵ LMS adaptive inverse control diagram. (b) An adaptive inverse control diagram in RKHS.
in this control scheme are kernel-based models, the whole control scheme can be mapped into an RKHS, which holds several advantages: (1) no restriction is imposed on the signal type; as long as a kernel can map the plant activity to an RKHS, the plant can be controlled with this scheme; (2) both the plant inverse P̂⁻¹(z) and the controller Ĉ(z) have a linear structure in the RKHS, which avoids the danger of converging to local minima.

Specifically, the controller Ĉ(z) and the plant inverse P̂⁻¹(z) are separately modeled with the tensor-product kernel that we described in Section 2, and the model coefficients are updated with Q-KLMS. This structure is shown in Figure 2(b). The coefficients W_Ĉ and W_P̂⁻¹ represent the weight matrices of Ĉ(z) and P̂⁻¹(z) obtained by Q-KLMS, respectively. As this is a multiple-input multiple-output model, W_Ĉ and W_P̂⁻¹ are the concatenation of the filter weights for each stimulation channel.

The variables x, y, and z denote, respectively, the concatenation of the windowed target spike trains and LFPs serving as the command input of the controller, the estimated stimulation, and the plant output; x_Δ is the delayed target signal, which is aligned with the plant output z; φ(·) represents the mapping from the input space to the RKHS associated with the tensor-product kernel.
The overall system error is defined as ϵ_k = φ(x_Δ) − φ(z), which means that the controller's parameter adaptation seeks to minimize the distance in the RKHS between the target spike trains/LFPs and the plant output. In this way, since the inverse model P̂⁻¹(z) has a linear structure in the RKHS, the filtered error for stimulation channel j ∈ {1, …, M} is

$$\epsilon(j) = \mathbf{W}_{\hat{P}^{-1},\,j}\, \phi(\mathbf{x}_\Delta) - \mathbf{W}_{\hat{P}^{-1},\,j}\, \phi(\mathbf{z}). \quad (8)$$

The controller model Ĉ(z) has a single input x, corresponding to the concatenation of the spike trains and LFPs, and has an M-channel output y, corresponding to the microstimulation. Q-KLMS is used to model Ĉ(z) with N input samples. The target spike trains are repeated among different trials, which means that the repeated samples will
Figure 3: Neural elements in tactile stimulation experiments. To the left is the rat's hand with representative cutaneous receptive fields. When the tactor touches a particular "receptive field" on the hand, the VPL thalamus receives this information and relays it to S1 cortex. To emulate "natural touch" with microstimulation, the optimized spatiotemporal microstimulus patterns are injected into the same receptive field in the VPL thalamus through a microarray, so that the target neural activity pattern can be replicated in somatosensory regions (S1) to convey the natural touch sensation to the animal.
be merged onto the same kernel center on the first pass through the data by the quantization, and thus the network size of the inverse controller is fixed (N centers). Only the coefficient matrix a is updated with the filtered error ϵ during the whole control operation. The output of Ĉ(z) can be calculated by

$$\begin{bmatrix} y^1 \\ y^2 \\ \vdots \\ y^M \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & & & \vdots \\ a_{M1} & a_{M2} & \cdots & a_{MN} \end{bmatrix} \begin{bmatrix} \phi(\mathbf{c}_{c1})' \\ \phi(\mathbf{c}_{c2})' \\ \vdots \\ \phi(\mathbf{c}_{cN})' \end{bmatrix} \phi(\mathbf{x}) = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & & & \vdots \\ a_{M1} & a_{M2} & \cdots & a_{MN} \end{bmatrix} \begin{bmatrix} \kappa(\mathbf{c}_{c1}, \mathbf{x}) \\ \kappa(\mathbf{c}_{c2}, \mathbf{x}) \\ \vdots \\ \kappa(\mathbf{c}_{cN}, \mathbf{x}) \end{bmatrix}, \quad (9)$$

where c_{cn} is the n-th center and a_{mn} is the coefficient assigned to the n-th kernel center for the m-th channel of the output.
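Equation (9) is a coefficient matrix applied to the vector of kernel evaluations at the stored centers; a minimal sketch (names are illustrative):

```python
import numpy as np

def controller_output(A, kernel, centers, x):
    """Multi-output controller evaluation (Eq. 9): y = A @ k, where A is
    the M x N coefficient matrix and k_n = kappa(c_n, x) is the kernel
    evaluation of the input against the n-th stored center."""
    k = np.array([kernel(c, x) for c in centers])
    return A @ k
```

All M stimulation channels share the same N kernel evaluations, so the per-sample cost is one kernel pass plus a matrix-vector product.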
5. Sensory Stimulation Experiment

5.1. Experimental Motivation and Framework. We applied these methods to the problem of converting touch information to electrical stimulation in neural prostheses. Somatosensory information originating in the peripheral nervous system ascends through the ventral posterior lateral (VPL) nucleus of the thalamus on its way to the primary somatosensory cortex (S1). Since most cutaneous and proprioceptive information is relayed through this nucleus, we expect that a suitably designed electrode array could be used to selectively stimulate a local group of VPL neurons so as to convey similar information to cortex. Electrophysiological experiments [52] suggest that the rostral portion of the rat VPL nucleus carries a large amount of proprioceptive information, while the medial and caudal portions code mostly for cutaneous stimuli. Since the body maps for both the VPL thalamus and S1 are known and fairly consistent, it is possible to implant electrode arrays in somatotopically overlapping areas of both regions.

We applied the proposed control method to generate multichannel electrical stimulation in the VPL so as to evoke a naturalistic neural trajectory in S1. Figure 3 shows a schematic depiction of our experiment, which was conducted in rats. After implanting arrays in both VPL and S1, the responses to natural stimulation delivered by a motorized tactor were recorded in S1. Then we applied randomly patterned microstimulation in the VPL while recording the responses in S1. Using these responses, we then trained our controller to output the microstimulation patterns that would most accurately reproduce the neural responses to natural touch in S1. To solve this control problem, we first investigated how reliably the two types of stimulation, natural touch and electrical microstimulation, can be decoded.

5.2. Data Collection. All animal procedures were approved by the SUNY Downstate Medical Center IACUC and conformed to National Institutes of Health guidelines. A single female Long-Evans rat (Hilltop, Scottsdale, PA) was implanted with two microarrays while under anesthesia.
Figure 4: Bipolar microstimulation patterns applied in the sensory stimulation experiment (24 pattern classes across 16 stimulation channels; stimulation amplitudes in μA).
After induction using isoflurane, urethane was used to maintain anesthetic depth. The array in the VPL was a 2 × 8 grid of 70% platinum/30% iridium 75 μm diameter microelectrodes (MicroProbes Inc.) with 500 μm between the rows and 250 μm interelectrode spacing within the rows. The microelectrodes had a 25:1 taper on the distal 5 mm, with a tip diameter of 3 μm. The approximate geometric surface area of the conducting tips was 1250 μm². The shank lengths were custom designed to fit the contour of the rat VPL [52]. Both rows were identical, and the shaft lengths for each row, from medial to lateral, were (8, 8, 8, 8, 8, 7.8, 7.6, 7.4) mm. The long axis of the VPL array was oriented along the rat's mediolateral axis.

The cortical electrode array (Blackrock Microsystems) was a 32-channel Utah array. The electrodes are arranged in a 6 × 6 grid excluding the 4 corners, and each electrode is 1.5 mm long. A single craniotomy that exposed the cortical insertion sites for both arrays was made, and after several probing insertions with a single microelectrode (FHC) in an area 1 mm around the stereotaxic coordinates for the digit region of S1 (4.0 mm lateral and 0.5 mm anterior to bregma) [53, 54], the Utah array was inserted using a pneumatic piston. The electrodes cover somatosensory areas of the S1 cortex and the VPL nucleus of the thalamus [52]. Neural recordings were made using a multichannel acquisition system (Tucker-Davis).

Spike and field potential data were collected while the rat was maintained under anesthesia. The electrode voltages were preamplified with a gain of 1000, filtered with cutoffs at 0.7 Hz and 8.8 kHz, and digitized at 25 kHz. LFPs were further filtered from 1 to 300 Hz using a 3rd-order Butterworth filter. Spike sorting was achieved using k-means clustering of the first 2 principal components of the detected waveforms.

The experiment involves delivering microstimulation to the VPL and tactile stimulation to the rat's fingers in separate sessions. Microstimulation is administered on adjacent pairs
Figure 5: Rat neural response elicited by tactile stimulation. The upper plot shows the normalized derivative of the tactile force. The remaining two plots show the corresponding LFPs and spike trains evoked by the tactile stimulation.
(bipolar configurations) of the thalamic array. The stimulation waveforms are single symmetric biphasic rectangular current pulses; each rectangular pulse is 200 μs long and has an amplitude of 10 μA, 20 μA, or 30 μA. Interstimulus intervals are exponentially distributed with a mean interval of 100 ms. Stimulus isolation used a custom-built switching headstage. The bipolar microstimulation pulses are delivered in the thalamus. There are 24 patterns of microstimulation: 8 different sites and 3 different amplitude levels for each site, as shown in Figure 4. Each pattern is delivered 125 times.

The experimental procedure also involves delivering 30–40 short (100 ms) tactile touches to the rat's fingers (repeated for digit pads 1–4) using a hand-held probe. The rat remained anesthetized for the recording duration. The applied force is measured using a lever attached to the probe that pressed against a thin-film resistive force sensor (Trossen Robotics) when the probe tip contacted the rat's body. The resistive changes were converted to voltage using a bridge circuit and were filtered and digitized in the same way as described above. The digitized waveforms were filtered with a passband between 1 and 60 Hz using a 3rd-order Butterworth filter. The first derivative of this signal is used as the desired stimulation signal, as shown in Figure 5.

5.3. Decoding Results. We now present the decoding results for the tactile stimulus waveform and the microstimulation using Q-KLMS operating on the tensor-product kernel. The performance using the multiscale neural activity, both spike trains and LFPs, is compared with that of decoders using a single type of neural activity. This illustrates the effectiveness of the tensor-product-kernel-based framework in exploiting the complementary information in multiscale neural activity.
Figure 6: Autocorrelation of LFPs and spike trains for window size estimation.
5.3.1. Time Scale Estimation. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually, based on their own properties. In order to find reasonable time scales, we estimate the autocorrelation coefficients of LFPs and spike trains, which indicate the response duration induced by the stimulation. For this purpose, spike trains are binned with a bin size of 1 ms. The LFPs are also resampled at a sampling rate of 1000 Hz. The autocorrelation coefficients of each signal, averaged over channels, are calculated by

$$\rho_h = \frac{\sum_{t=h+1}^{T} (y_t - \bar{y})(y_{t-h} - \bar{y})}{\sum_{t=1}^{T} (y_t - \bar{y})^2}. \quad (10)$$

The 95% confidence bounds for the hypothesis that the autocorrelation coefficient is effectively zero are approximately estimated by ±2 SE_ρ, where

$$\mathrm{SE}_\rho = \sqrt{\frac{1 + 2\sum_{i=1}^{h-1} \rho_i^2}{N}}. \quad (11)$$

The average confidence bounds for LFPs and spike trains are [−0.032, 0.032] and [−0.031, 0.031], respectively. The autocorrelation coefficients of the LFPs fall into the confidence interval after 20 ms, while the autocorrelation coefficients of the spike trains die out after 9 ms, as shown in Figure 6. Therefore, the decoder inputs are obtained by sliding a window with a size of T_s = 9 ms for spike trains and T_x = 20 ms for LFPs. In addition, the time discretization for the stimuli is 5 ms.

The learning rates for each decoder are determined by the best cross-validation results after scanning the parameters. The kernel sizes σ_s and σ_x are determined by the average distance in the RKHS between pairs of training samples. The normalized mean square error (NMSE) between the estimated stimulus (y) and the desired stimulus (d) is used as the accuracy criterion.
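Equations (10) and (11) and the window-size rule can be sketched as follows (a simplified single-channel illustration; the paper averages the coefficients over channels, which is omitted here):

```python
import numpy as np

def autocorr(y, max_lag):
    """Sample autocorrelation coefficients rho_h for h = 1..max_lag (Eq. 10)."""
    y = np.asarray(y, dtype=float)
    yc = y - y.mean()
    denom = np.sum(yc ** 2)
    return np.array([np.sum(yc[h:] * yc[:len(y) - h]) / denom
                     for h in range(1, max_lag + 1)])

def response_window(y, max_lag):
    """Smallest lag whose autocorrelation falls inside the approximate 95%
    bounds +/- 2*SE_rho (Eq. 11); used to pick the analysis window length."""
    rho = autocorr(y, max_lag)
    n = len(y)
    for h, r in enumerate(rho, start=1):
        se = np.sqrt((1 + 2 * np.sum(rho[:h - 1] ** 2)) / n)
        if abs(r) < 2 * se:
            return h
    return max_lag
```

A strongly periodic signal never falls inside the bounds within `max_lag`, whereas a rapidly decorrelating signal yields a short window, mirroring the 20 ms versus 9 ms choice made for LFPs and spike trains.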
Figure 7: Qualitative comparison of decoding performance on the first tactile stimulation trial among the LFP decoder, the spike decoder, and the spike-and-LFP decoder (target versus decoder outputs; normalized derivative of tactile force over time).
Table 1: Comparison among neural decoders.

Input:            | LFP and spike | LFP         | Spike
NMSE (mean/STD):  | 0.48/0.05     | 0.55/0.03   | 0.63/0.11
5.3.2. Results for Decoding Tactile Stimulation. NMSEs of tactile stimulation are obtained across 8 trial data sets. For each trial, we use 20 s of data to train the decoders and compute an independent test error on the remaining 25 s of data. The results are shown in Table 1, where we observe that the LFP-and-spike decoder significantly outperformed both the LFP decoder and the spike decoder, with P value < 0.05.

In order to illustrate the details of the decoding performance, a portion of the test results of the first trial is shown in Figure 7. It is observed that the output of the spike decoder fluctuates and misses some pulses (e.g., around 0.65 s) due to the sparsity and variability of spike trains. In contrast, the
Figure 8: Performance comparison of the microstimulation reconstruction among the spike decoder, the LFP decoder, and the spike-and-LFP decoder in terms of NMSE for each microstimulation channel.
output estimated by the LFP decoder is smooth and more robust than that of the spike decoder, but the decoded signal undershoots the maximum force deflections. The LFP-and-spike decoder performed better than the LFP decoder by reincorporating the precise pulse timing information from the spike trains.
5.3.3. Results for Decoding Electrical Microstimulation. We also implemented a decoder to reconstruct the microstimulation pattern. First, we mapped the 8 different stimulation configurations to 8 channels. We disregarded the shape of each stimulus, since the time scale of the stimulus width is only 200 μs. The desired stimulation pattern of each channel is represented by a sparse time series of the stimulation amplitudes.

NMSEs are obtained over ten decoding subsequences. We used 120 s of data to train the decoders and computed an independent test error on the remaining 20 s of data. The spike-and-LFP decoder again outperformed both the LFP decoder and the spike decoder. The comparison is shown in Figure 8, which indicates that the spike-and-LFP decoder obtains the best performance across the stimulation channels, especially for channels 2, 4, 6, and 7. Stimulation on channels 4, 5, 6, 7, and 8 cannot be decoded by the LFP decoder at all, since the fine time information is averaged out in the LFPs. For the spike-train decoder, the stimulation channels are not well discriminated. However, the combination of spike trains and LFPs enriches the stimulation information, which contributes to better discrimination of stimulation patterns among channels and also enables the model to capture the precise stimulation timing.
5.4. Open-Loop Adaptive Inverse Control Results. The challenge in implementing a somatosensory prosthesis is to precisely control the neural response in order to mimic the neural response induced by natural stimulation. As discussed, the kernel-based adaptive inverse control scheme with the tensor-product kernel is applied to address this problem. The adaptive inverse control model is based on a decoder which maps the neural activity in S1 to the microstimulation delivered in the VPL. We proceed to show how the adaptive inverse control model can emulate the neural response to "natural touch" using optimized microstimulation.

In the same recording, open-loop adaptive inverse control via optimized thalamic (VPL) microstimulation is implemented. First, the inverse controller Ĉ(z) is trained with 300 s of data generated by recording the responses to randomly patterned thalamic microstimulation. Then the first 60 s of the neural response recorded during tactile stimulation at each touch site is used as the target pattern and control input. When this entire neural response sequence is fed offline to the controller, it generates a corresponding sequence of multichannel microstimulation amplitudes.
However, the generated microstimulation sequence needs further processing to meet the restrictions of bipolar microstimulation before it is applied to the VPL. The restrictions and processing are the following:

(1) A minimal interval of 10 ms between two stimuli is suggested by the experimental setting. The mean shift algorithm [55] is used to locate the local maxima of each 10 ms subsequence of stimuli for each single channel. The maximum amplitude and corresponding timing are used to set the amplitude and time of the stimuli.

(2) At any given time point, only a single pulse across all channels can be stimulated. Therefore, at each time point, only the maximum value across channels is selected for stimulation; the values on the other channels are set to zero.

(3) The maximum/minimum stimulation amplitude is set in the range [8 μA–30 μA], which has been suggested as the effective and safe amplitude range in previous experiments.
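The three restrictions can be sketched as a single post-processing pass over the controller output. This is a simplified illustration: a greedy local-maximum scan stands in for the mean shift step, and the function name and the 5 ms time step are assumptions, not from the paper:

```python
import numpy as np

def constrain_stim(stim, dt_ms=5.0, min_gap_ms=10.0, amp_lo=8.0, amp_hi=30.0):
    """Post-process a (channels x time) array of nonnegative amplitudes into
    valid bipolar microstimulation: one pulse across channels per time step,
    a minimum inter-pulse interval, and amplitudes clipped to the safe range."""
    stim = np.asarray(stim, dtype=float).copy()
    # (2) keep only the strongest channel at each time step
    keep = stim == stim.max(axis=0, keepdims=True)
    stim[~keep] = 0.0
    out = np.zeros_like(stim)
    gap = int(round(min_gap_ms / dt_ms))
    last = -gap
    for t in range(stim.shape[1]):
        ch = int(stim[:, t].argmax())
        amp = stim[ch, t]
        if amp > 0 and t - last >= gap:                # (1) minimum interval
            out[ch, t] = np.clip(amp, amp_lo, amp_hi)  # (3) safe range
            last = t
    return out
```
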
After this processing, the generated multichannel microstimulation sequence (60 s in duration) is ready to be applied to the microstimulator immediately following computation.

The neural response to the microstimulation is recorded and compared with the target natural response. Ideally, these two neural response sequences should be time-locked and very similar. In particular, the portions of the controlled response in windows corresponding to a natural touch should match. As this corresponds to a virtual touch delivered by the optimized microstimulation, we define the term virtual touch to refer to the sequence of microstimulation patterns (the output of the controller) corresponding to a particular target natural touch.

Portions of the neural response for both natural and virtual touches are shown in Figure 9. The responses are aligned to the same time scale even though they were not recorded concurrently. It is clear that the multiunit neural responses recorded during the controlled microstimulation share spatiotemporal patterns similar to those in the target set. Each virtual touch is achieved by a sequence of microstimulation pulses that evokes synchronized bursting across the neurons. In addition, microstimulation pulses are delivered between touch times to mimic population spiking that is not associated with the touch timing.
Figure 9: Neural responses to natural and virtual touches for touch on digit 1 (d1), along with the microstimulation corresponding to the virtual touches. Each of the four subfigures corresponds to a different segment of the continuous recording. In each subfigure, the timing of the touches and the spatiotemporal patterns of spike trains and LFPs are shown in the top two panels; the bottom panel shows the spatiotemporal pattern of microstimulation, where different colors represent different microstimulation channels. The neural responses are time-locked but not concurrently recorded, as the entire natural touch response is given as input to the controller, which generates the optimized microstimulation patterns. When the optimized microstimulation is applied in the VPL, it generates S1 neural responses that qualitatively match the natural, that is, the target, response.
Figure 10: Correlation coefficients between the controlled neural system output and the corresponding target neural response evoked by actual touch, for spike trains and local field potentials. Boxplots of the correlation coefficients represent the results of 6 test trials. Each trial corresponds to a particular touch site (digits d1, d2, d4, p3, p1, and mp).
To evaluate performance, we concentrate on the following two aspects of virtual touches.

(i) Touch Timing. Whether the neural response to virtual touch captures the timing information of the actual target touch.

(ii) Touch Site. Whether the actual target touch site information can be discriminately represented by the neural activity controlled by the microstimulation.
For touch timing, we estimate the correlation coefficients (CC) over time between virtual touch responses and the corresponding target natural touch responses. To simplify the spike train correlation estimation, we bin the data using 5 ms bins. The correlation coefficients of both spike trains and LFPs are calculated. Figure 10 shows the boxplot of the correlation coefficients over 6 test trials, each of which corresponds to a particular natural touch site, a forepaw digit or pad (d1, d2, d4, p3, p1, or mp). The maximum correlation coefficient occurs at lag zero for each trial, meaning that the virtual touch response is correctly time-locked. For each touch site, we estimate the similarity between the natural touch response and the virtual touch response in the following two cases.
(i) Matched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to the same touch site.

(ii) Unmatched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to different touch sites.
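The lag-correlation analysis described above can be illustrated with a minimal numpy sketch: simulated spike times are binned into 5 ms bins and the Pearson correlation is scanned over lags. The data are synthetic and the function names are our own, not from the paper.

```python
import numpy as np

def bin_spikes(spike_times, t_end, bin_size=0.005):
    """Bin spike times (in seconds) into counts using 5 ms bins."""
    n_bins = int(np.ceil(t_end / bin_size))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, t_end))
    return counts.astype(float)

def lagged_cc(x, y, max_lag):
    """Pearson correlation coefficient of two series at integer bin lags."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.empty(len(lags))
    for k, lag in enumerate(lags):
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        cc[k] = np.corrcoef(a, b)[0, 1]
    return lags, cc

# A response compared with itself is time-locked: the CC peaks at lag zero.
rng = np.random.default_rng(0)
spike_times = rng.uniform(0.0, 1.0, size=80)   # simulated spike times (s)
counts = bin_spikes(spike_times, t_end=1.0)    # 200 bins of 5 ms
lags, cc = lagged_cc(counts, counts, max_lag=20)
```

A peak at a nonzero lag would instead indicate a systematic latency between the virtual and natural responses.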
We extract all the neural responses in the 300 ms window after touch onset and calculate the correlation coefficients between natural touch responses and virtual touch responses across each pair of trials. The one-tailed Kolmogorov-Smirnov (KS) test is used to test the alternative hypothesis that the distribution of the correlation coefficients for the matched virtual case is higher than the distribution for the unmatched virtual case (the null hypothesis is that the distributions are the same). The correlation coefficients and P values of the KS test for spike trains and LFPs are shown in Tables 2 and 3. The similarity between natural touch responses and virtual touch responses in the unmatched virtual case is significantly lower than in the matched virtual case for most touch sites (P value < 0.05), except for touch site p3. Without psychophysical testing it is unclear how effective the microstimulation is in producing true sensory sensations. Nonetheless, these are promising results for the effectiveness of a controller utilizing the multiscale neural decoding methodology.

Table 2: Average and standard deviation of the correlation coefficient (CC) between natural touch spike train responses and virtual touch spike train responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site | CC (matched virtual) | CC (unmatched virtual) | P value
d1 | 0.42 ± 0.06 | 0.35 ± 0.06 | 0.00
d2 | 0.40 ± 0.05 | 0.37 ± 0.06 | 0.01
d4 | 0.40 ± 0.05 | 0.37 ± 0.05 | 0.02
p3 | 0.38 ± 0.05 | 0.37 ± 0.06 | 0.11
p2 | 0.40 ± 0.07 | 0.36 ± 0.05 | 0.00
mp | 0.41 ± 0.07 | 0.37 ± 0.06 | 0.00
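The matched-versus-unmatched comparison reported in Tables 2 and 3 can be sketched with a one-sided two-sample KS statistic. This is a minimal numpy version with synthetic CC samples (the real distributions come from the experiment), and it uses the standard Smirnov asymptotic p-value approximation rather than the exact test.

```python
import numpy as np

def ks_one_sided(sample_hi, sample_lo):
    """One-sided two-sample KS statistic and asymptotic p-value for the
    alternative that sample_hi is stochastically LARGER than sample_lo
    (i.e., its empirical CDF lies below that of sample_lo)."""
    grid = np.sort(np.concatenate([sample_hi, sample_lo]))
    F_hi = np.searchsorted(np.sort(sample_hi), grid, side='right') / len(sample_hi)
    F_lo = np.searchsorted(np.sort(sample_lo), grid, side='right') / len(sample_lo)
    d = np.max(F_lo - F_hi)                      # one-sided KS statistic
    n, m = len(sample_hi), len(sample_lo)
    p = np.exp(-2.0 * d ** 2 * n * m / (n + m))  # Smirnov asymptotic approximation
    return d, p

# Hypothetical CC samples: matched pairs drawn slightly higher on average.
rng = np.random.default_rng(1)
matched = rng.normal(0.42, 0.06, size=200)
unmatched = rng.normal(0.35, 0.06, size=200)
d, p = ks_one_sided(matched, unmatched)
```

A small p-value rejects the null that the two CC distributions are the same, in favor of the matched CCs being larger.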
Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site | CC (matched virtual) | CC (unmatched virtual) | P value
d1 | 0.42 ± 0.20 | 0.28 ± 0.23 | 0.00
d2 | 0.46 ± 0.13 | 0.28 ± 0.22 | 0.00
d4 | 0.41 ± 0.19 | 0.26 ± 0.21 | 0.00
p3 | 0.38 ± 0.18 | 0.29 ± 0.22 | 0.07
p2 | 0.33 ± 0.19 | 0.26 ± 0.23 | 0.20
mp | 0.34 ± 0.17 | 0.25 ± 0.21 | 0.00

6. Conclusions

This work proposes a novel tensor-product-kernel-based machine learning framework which provides a way to decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs contain complementary information that can enhance neural data decoding. However, a systematic approach to combine these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined on both the spike train space and the LFP space, appears to be a very productive approach to this goal. We have used two types of combination kernels to achieve the multiscale combination: sum kernels to "average" across different spike channels as well as across LFP channels, which combine evidence for the neural event in each modality, and product kernels across the spike and LFP modalities, which emphasize events that are represented in both modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step in a long process of optimizing the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires the neural event to be present at both scales to be useful for decoding, which improves specificity. If we look carefully at Figure 6, we can understand the effect of decoding with the product kernel: the correlation times of spikes and LFPs are very different (LFPs have a much longer correlation time). Moreover, the composite kernel definition can be naturally configured to different brain areas and even to neuronal types with distinctive firing patterns. Each pattern will lead to different correlation profiles, which will immediately tune the properties of the kernels across brain areas and neural populations. If only LFPs were used, we could expect the response time of the decoder to be very long and to miss some events. The product kernel in fact limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel, which explains the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information
from spikes and LFPs into the same model and enhance the neural decoding robustness and accuracy.
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: how to emulate "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes or LFPs alone (results not shown). This result can be expected because the inverse controller is basically a decoder. However, we have to realize that not all tasks of interest reduce to neural decoding, and we do not even know if neural control can be further improved by a different kernel design. This is where further research is necessary to optimize the joint kernels. For instance, we could weight both the channel information and the multiscale information to maximize task performance using metric learning [40].
Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical way to leverage heterogeneous neural activities in decoding and control scenarios, and it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J. K. Chapin, K. A. Moxon, R. S. Markowitz, and M. A. L. Nicolelis, "Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex," Nature Neuroscience, vol. 2, no. 7, pp. 664–670, 1999.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098–1101, 2008.
[3] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, "Direct cortical control of 3D neuroprosthetic devices," Science, vol. 296, no. 5574, pp. 1829–1832, 2002.
[4] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141–142, 2002.
[5] J. M. Carmena, M. A. Lebedev, R. E. Crist et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, pp. 193–208, 2003.
[6] J. DiGiovanna, B. Mahmoudi, J. Fortes, J. C. Príncipe, and J. C. Sanchez, "Coadaptive brain-machine interface via reinforcement learning," IEEE Transactions on Biomedical Engineering, vol. 56, no. 1, pp. 54–64, 2009.
[7] A. Belitski, S. Panzeri, C. Magri, N. K. Logothetis, and C. Kayser, "Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 533–545, 2010.
[8] J. R. Huxter, T. J. Senior, K. Allen, and J. Csicsvari, "Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus," Nature Neuroscience, vol. 11, no. 5, pp. 587–594, 2008.
[9] M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis, "Inferring spike trains from local field potentials," Journal of Neurophysiology, vol. 99, no. 3, pp. 1461–1476, 2008.
[10] G. Buzsaki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926–1929, 2004.
[11] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, Springer, 1991.
[12] R. E. Kass, V. Ventura, and E. N. Brown, "Statistical issues in the analysis of neuronal data," Journal of Neurophysiology, vol. 94, no. 1, pp. 8–25, 2005.
[13] R. D. Flint, E. W. Lindberg, L. R. Jordan, L. E. Miller, and M. W. Slutzky, "Accurate decoding of reaching movements from field potentials in the absence of spikes," Journal of Neural Engineering, vol. 9, no. 4, Article ID 046006, 2012.
[14] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee, "Local field potentials indicate network state and account for neuronal response variability," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 567–579, 2010.
[15] J. Liu and W. T. Newsome, "Local field potential in cortical area MT: stimulus tuning and behavioral correlations," Journal of Neuroscience, vol. 26, no. 30, pp. 7779–7790, 2006.
[16] P. Berens, G. Keliris, A. Ecker, N. Logothetis, and A. Tolias, "Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex," Frontiers in Neuroscience, vol. 2, pp. 199–207, 2008.
[17] D. Xing, C.-I. Yeh, and R. M. Shapley, "Spatial spread of the local field potential and its laminar variation in visual cortex," Journal of Neuroscience, vol. 29, no. 37, pp. 11540–11549, 2009.
[18] B. Scholkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, Adaptive Computation and Machine Learning, MIT Press, 2002.
[19] L. Li, I. M. Park, A. Brockmeier et al., "Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 532–543, 2013.
[20] S. Monaco, G. Kroliczak, D. J. Quinlan et al., "Contribution of visual and proprioceptive information to the precision of reaching movements," Experimental Brain Research, vol. 202, no. 1, pp. 15–32, 2010.
[21] R. S. Johansson and G. Westling, "Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects," Experimental Brain Research, vol. 56, no. 3, pp. 550–564, 1984.
[22] J. E. O'Doherty, M. A. Lebedev, P. J. Ifft et al., "Active tactile exploration using a brain-machine-brain interface," Nature, vol. 479, no. 7372, pp. 228–231, 2011.
[23] J. E. O'Doherty, M. A. Lebedev, T. L. Hanson, N. A. Fitzsimmons, and M. A. Nicolelis, "A brain-machine interface instructed by direct intracortical microstimulation," Frontiers in Integrative Neuroscience, vol. 3, 2009.
[24] N. A. Fitzsimmons, W. Drake, T. L. Hanson, M. A. Lebedev, and M. A. L. Nicolelis, "Primate reaching cued by multichannel spatiotemporal cortical microstimulation," The Journal of Neuroscience, vol. 27, no. 21, pp. 5593–5602, 2007.
[25] X.-J. Feng, B. Greenwald, H. Rabitz, E. Shea-Brown, and R. Kosut, "Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model," Journal of Neural Engineering, vol. 4, no. 2, pp. L14–L21, 2007.
[26] J. Liu, H. K. Khalil, and K. G. Oweiss, "Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states," Journal of Neural Engineering, vol. 8, no. 4, Article ID 045002, 2011.
[27] A. J. Brockmeier, J. S. Choi, M. M. Distasio, J. T. Francis, and J. C. Príncipe, "Optimizing microstimulation using a reinforcement learning framework," in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 1069–1072, September 2011.
[28] A. Brockmeier, J. Choi, M. Emigh, L. Li, J. Francis, and J. Príncipe, "Subspace matching thalamic microstimulation to tactile evoked potentials in rat somatosensory cortex," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2957–2960, 2012.
[29] Y. Ahmadian, A. M. Packer, R. Yuste, and L. Paninski, "Designing optimal stimuli to control neuronal spike timing," Journal of Neurophysiology, vol. 106, no. 2, pp. 1038–1053, 2011.
[30] J. Moehlis, E. Shea-Brown, and H. Rabitz, "Optimal inputs for phase models of spiking neurons," Journal of Computational and Nonlinear Dynamics, vol. 1, no. 4, pp. 358–367, 2006.
[31] D. Brugger, S. Butovas, M. Bogdan, and C. Schwarz, "Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials," IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1483–1491, 2011.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[33] A. J. Smola and B. Scholkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[34] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering, John Wiley & Sons, 2010.
[35] B. Scholkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
[36] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," The Journal of Machine Learning Research, vol. 5, pp. 27–72, 2004.
[37] C. Cortes, M. Mohri, and A. Rostamizadeh, "Algorithms for learning kernels based on centered alignment," Journal of Machine Learning Research, vol. 13, pp. 795–828, 2012.
[38] M. Yamada, W. Jitkrittum, L. Sigal, E. P. Xing, and M. Sugiyama, "High-dimensional feature selection by feature-wise kernelized lasso," Neural Computation, vol. 26, no. 1, pp. 185–207, 2013.
[39] R. A. Horn, "On infinitely divisible matrices, kernels, and functions," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 8, no. 3, pp. 219–230, 1967.
[40] A. J. Brockmeier, J. S. Choi, E. G. Kriminger, J. T. Francis, and J. C. Príncipe, "Neural decoding with kernel-based metric learning," Neural Computation, vol. 26, no. 6, 2014.
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, "Spikernels: embedding spiking neurons in inner product spaces," in Advances in Neural Information Processing Systems, vol. 15, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Príncipe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223–2250, 2012.
[44] L. Li, Kernel based machine learning framework for neural decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453–466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213–225, 1991.
[48] L. Csato and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641–668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275–2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950–1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291–2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268–280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with applications in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975.
responses to mimic the spatiotemporal neural activity patterns induced by tactile stimulation. This framework creates new opportunities to improve the ability to control neural states to emulate natural stimuli by leveraging the complementary information from multiscale neural activities. This better captures the internal state of the neural system and thus enhances the robustness and accuracy of the optimal microstimulation pattern estimation.
The rest of the paper is organized as follows. Section 2 introduces kernels for spike trains and LFPs and the tensor-product kernel that combines them. The kernel-based decoding model and the adaptive inverse control scheme that exploits kernel-based neural decoding to enable control in the RKHS are introduced in Sections 3 and 4, respectively. Section 5 discusses the somatosensory stimulation emulation scenario and presents the results of applying the tensor-product kernel to leverage multiscale neural activity for decoding and control tasks. Section 6 concludes the paper.
2. Tensor-Product Kernel for Multiscale Heterogeneous Neural Activity
The mathematics of many signal processing and pattern recognition algorithms is based on evaluating the similarity of pairs of exemplars. For vectors or functions, the inner product defined on Hilbert spaces is a linear operator and a measure of similarity. However, not all data types exist in Hilbert spaces. Kernel functions are bivariate, symmetric functions that implicitly embed samples in a Hilbert space. Consequently, if a kernel can be defined on a data type, then algorithms defined in terms of inner products can be used. This has enabled a variety of kernel algorithms [18, 32–34].
To begin, we define the general framework for the various kernel functions used here, keeping in mind that the input corresponds to assorted neural data types. Let the domain of a single neural response dimension, that is, a single LFP channel or one spiking unit, be denoted by $\mathcal{X}$, and consider a kernel $\kappa : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. If $\kappa$ is positive definite, then there is an implicit mapping $\phi : \mathcal{X} \to \mathcal{H}$ that maps any two sample points, say $x, x' \in \mathcal{X}$, to corresponding elements in the Hilbert space, $\phi(x), \phi(x') \in \mathcal{H}$, such that $\kappa(x, x') = \langle \phi(x), \phi(x') \rangle$ is the inner product of these elements in the Hilbert space. As an inner product, the kernel evaluation $\kappa(x, x')$ quantifies the similarity between $x$ and $x'$.

A useful property is that this inner product induces a distance metric:
$$d(x, x') = \left\| \phi(x) - \phi(x') \right\|_{\mathcal{H}} = \sqrt{\langle \phi(x) - \phi(x'), \phi(x) - \phi(x') \rangle} = \sqrt{\kappa(x, x) + \kappa(x', x') - 2\kappa(x, x')}. \quad (1)$$

For normalized and shift-invariant kernels, where $\kappa(x, x) = 1$ for all $x$, the distance is inversely related to the kernel evaluation: $d(x, x') = \sqrt{2 - 2\kappa(x, x')}$.
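A minimal numerical sketch of the induced distance of (1), using a normalized Gaussian kernel on vectors (the function names are illustrative, not from the paper):

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    """Normalized Gaussian kernel: k(x, x) = 1 for all x."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / sigma ** 2)

def kernel_distance(x, y, k):
    """Distance induced in the RKHS by a positive-definite kernel, eq. (1)."""
    return np.sqrt(k(x, x) + k(y, y) - 2.0 * k(x, y))

x, y = np.array([0.0, 1.0]), np.array([0.5, 0.2])
d = kernel_distance(x, y, gauss_kernel)
# For a normalized kernel this reduces to sqrt(2 - 2 k(x, y)).
d_check = np.sqrt(2.0 - 2.0 * gauss_kernel(x, y))
```

The identity `d == d_check` holds only because `gauss_kernel(x, x) == 1`; for an unnormalized kernel the full form of (1) is required.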
To utilize two or more dimensions of the neural response, a kernel that operates on the joint space is required. There are two basic approaches to construct multidimensional kernels from kernels defined on the individual variables: direct-sum and tensor-product kernels. In terms of kernel evaluations, they consist of taking either the sum or the product of the individual kernel evaluations. In both cases, the resulting kernel is positive definite as long as the individual kernels are positive definite [18, 35].
Let $\mathcal{X}_i$ denote the neural response domain of the $i$th dimension, and consider a positive-definite kernel $\kappa_i : \mathcal{X}_i \times \mathcal{X}_i \to \mathbb{R}$ and corresponding mapping $\phi_i : \mathcal{X}_i \to \mathcal{H}_i$ for this dimension. Again, the similarity between a pair of samples $x$ and $x'$ on the $i$th dimension is $\kappa_i(x_{(i)}, x'_{(i)}) = \langle \phi_i(x_{(i)}), \phi_i(x'_{(i)}) \rangle$.

For the sum kernel, the joint similarity over a set of dimensions $\mathcal{I}$ is
$$\kappa_\Sigma(\mathbf{x}, \mathbf{x}') = \sum_{i \in \mathcal{I}} \kappa_i(x_{(i)}, x'_{(i)}). \quad (2)$$

This measure of similarity is an average similarity across all dimensions. When the sum is over a large number of dimensions, the contributions of individual dimensions are diluted. This is useful for multiunit spike trains or multichannel LFPs when the individual dimensions are highly variable, since using them individually would result in poor decoding performance on a single-trial basis.
For the tensor-product kernel, the joint similarity over two dimensions $i$ and $j$ is computed by taking the product of the kernel evaluations: $\kappa_{[i,j]}([x_{(i)}, x_{(j)}], [x'_{(i)}, x'_{(j)}]) = \kappa_i(x_{(i)}, x'_{(i)}) \cdot \kappa_j(x_{(j)}, x'_{(j)})$. The new kernel $\kappa_{[i,j]}$ corresponds to a mapping function that is the tensor product of the individual mapping functions, $\phi_{[i,j]} = \phi_i \otimes \phi_j$, where $\phi_{[i,j]}(x_{(i,j)}) \in \mathcal{H}_{[i,j]}$, the tensor-product Hilbert space. The product can be taken over a set of dimensions $\mathcal{I}$, and the result is a positive-definite kernel over the joint space:
$$\kappa_\Pi(\mathbf{x}, \mathbf{x}') = \prod_{i \in \mathcal{I}} \kappa_i(x_{(i)}, x'_{(i)}).$$

The tensor-product kernel corresponds to a stricter measure of similarity than the sum kernel. Because of the product, if $\kappa_i(x_{(i)}, x'_{(i)}) \approx 0$ for even one dimension, then $\kappa_\Pi(\mathbf{x}, \mathbf{x}') \approx 0$. The tensor-product kernel requires joint similarity: for samples to be considered similar in the joint space, they must be close in all the individual spaces. If even one dimension is dissimilar, the product will appear dissimilar, so highly variable dimensions have a deleterious effect on the joint similarity measure. On the other hand, the tensor product is a more precise measure of similarity, which will be used later to combine multiscale neural activity.
More generally, an explicit weight can be used to adjust the influence of the individual dimensions on the joint kernel. Any convex combination of kernels is positive definite, and learning the weights of such a combination is known as multiple kernel learning [36–38]. When the constituent kernels are infinitely divisible [39], a weighted product kernel can also be applied [40]. However, the optimization of these weightings is not explored in the current work.
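The contrast between the sum and product constructions can be seen in a small numerical sketch, with Gaussian base kernels on scalar dimensions (names and data are our own): one dissimilar dimension is diluted by the sum kernel but collapses the product kernel toward zero.

```python
import numpy as np

def gauss(x, y, sigma=1.0):
    """Scalar Gaussian base kernel."""
    return np.exp(-(x - y) ** 2 / sigma ** 2)

def sum_kernel(x, y):
    """Direct-sum kernel over dimensions, eq. (2): averages evidence."""
    return sum(gauss(xi, yi) for xi, yi in zip(x, y))

def product_kernel(x, y):
    """Tensor-product kernel: requires joint similarity in all dimensions."""
    return float(np.prod([gauss(xi, yi) for xi, yi in zip(x, y)]))

x = np.array([0.0, 0.0, 0.0])
y = np.array([0.1, 0.1, 5.0])   # dissimilar in the last dimension only

ks = sum_kernel(x, y)       # close to 2: the bad dimension is diluted
kp = product_kernel(x, y)   # near zero: one dissimilar dimension dominates
```

This is exactly the trade-off discussed above: the sum kernel is robust to variable channels, while the product kernel is more specific.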
Figure 1: Schematic representation of the construction of the RKHS defined by the tensor-product kernel from the individual spike and LFP kernel spaces, along with the mapping from the original data. Specifically, $\mathbf{s}_i$ and $\mathbf{s}_j$ denote windows of multiunit spike trains, $\mathbf{x}_m$ and $\mathbf{x}_n$ denote windows of multichannel LFPs, $\kappa_s(\cdot,\cdot)$ denotes the spike train kernel with implicit injective mapping function $\phi(\cdot)$, and $\kappa_x(\cdot,\cdot)$ denotes the LFP kernel with implicit injective mapping function $\varphi(\cdot)$. The tensor-product kernel evaluates to
$$\kappa(\mathbf{s}_i, \mathbf{s}_j, \mathbf{x}_m, \mathbf{x}_n) = \kappa_s(\mathbf{s}_i, \mathbf{s}_j)\,\kappa_x(\mathbf{x}_m, \mathbf{x}_n) = \langle \phi(\mathbf{s}_i), \phi(\mathbf{s}_j) \rangle \langle \varphi(\mathbf{x}_m), \varphi(\mathbf{x}_n) \rangle = \langle \phi(\mathbf{s}_i) \otimes \varphi(\mathbf{x}_m), \phi(\mathbf{s}_j) \otimes \varphi(\mathbf{x}_n) \rangle.$$
In general, a joint kernel space constructed via either direct sum or tensor product allows the homogeneous processing of heterogeneous signal types, all within the framework of RKHSs. We use direct-sum kernels to combine the different dimensions of multiunit spike trains or multichannel LFPs. For the spike trains, using the sum kernel across the different units yields an "average" population similarity over the space of spike trains, where averages cannot be computed directly. Then a tensor-product kernel combines the two kernels, one for the multiunit spike trains and one for the multichannel LFPs; see Figure 1 for an illustration. The kernels for spike trains and LFPs can be selected and specified individually according to their specific properties.
In conclusion, composite kernels are very different from those commonly used in kernel-based machine learning, for example, for the support vector machine. Here, a pair of windowed spike trains and windowed LFPs is mapped to a feature function in the joint RKHS, and different spike train and LFP pairs are mapped to different locations in this RKHS, as shown in Figure 1. Because of their versatility, Schoenberg kernels, defined on both the spike timing space and the LFP space, are employed in this paper and discussed below.
2.1. Kernel for Spike Trains. Unlike conventional amplitude data, there is no natural algebraic structure in the space of spike trains. The binning process, which easily transforms point processes into discrete amplitude time series, is widely used in spike train analysis and allows the application of conventional amplitude-based kernels to spike trains [41], at the expense of losing the temporal resolution of the neural responses. This means that any temporal information in spikes within and between bins is disregarded, which is especially alarming when spike timing precision can be in the millisecond range. Although the bin size can be set small enough to preserve the fine time resolution, this will sparsify the signal representation, increase the artifact variability, and cause high dimensionality in the model, which requires voluminous data for proper training.
According to the literature, it is appropriate to consider a spike train to be a realization of a point process, which describes the temporal distribution of the spikes. Generally speaking, a point process $p_i$ can be completely characterized by its conditional intensity function $\lambda(t \mid H_t^i)$, where $t \in \tau = [0, T]$ denotes the time coordinate and $H_t^i$ is the history of the process up to $t$. A recent line of research [42, 43] defines an injective mapping from spike trains to an RKHS based on a kernel between the conditional intensity functions of two point processes [42]. Among the cross-intensity (CI) kernels, the Schoenberg kernel is defined as
$$\kappa\left(\lambda(t \mid H_t^i), \lambda(t \mid H_t^j)\right) = \exp\left(-\frac{\left\|\lambda(t \mid H_t^i) - \lambda(t \mid H_t^j)\right\|^2}{\sigma^2}\right) = \exp\left(-\frac{\int_\tau \left(\lambda(t \mid H_t^i) - \lambda(t \mid H_t^j)\right)^2 \, \mathrm{d}t}{\sigma^2}\right), \quad (3)$$
where $\sigma$ is the kernel size. The Schoenberg kernel is selected in this work because of its modeling accuracy and robustness to free parameter settings [44]. The Schoenberg kernel is a Gaussian-like kernel defined on intensity functions that is strictly positive definite and sensitive to the nonlinear coupling of two intensity functions [42]. Different spike trains are therefore mapped to different locations in the RKHS. Compared to kernels designed on binned spike trains (e.g., the spikernel [41]), the main advantage of the Schoenberg kernel is that the precision of the spike event locations is better preserved and the limitations of the sparseness and
high dimensionality for model building are also avoided, resulting in enhanced robustness and reduced computational complexity, especially when the application requires fine time resolution [44].
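A minimal sketch of the Schoenberg kernel of (3), assuming the two conditional intensity functions have already been estimated and sampled on a regular time grid (the helper name and test intensities are our own):

```python
import numpy as np

def schoenberg_kernel(lam_i, lam_j, dt, sigma=1.0):
    """Eq. (3): Gaussian-like kernel on the squared L2 distance between
    two intensity functions sampled on a grid with spacing dt."""
    sq_dist = np.sum((lam_i - lam_j) ** 2) * dt   # Riemann approximation of the integral
    return np.exp(-sq_dist / sigma ** 2)

dt = 0.001
t = np.arange(0.0, 1.0, dt)
lam_a = np.full_like(t, 10.0)                # constant 10 Hz intensity
lam_b = 10.0 + 5.0 * np.sin(2 * np.pi * t)   # rate-modulated intensity

k_ab = schoenberg_kernel(lam_a, lam_b, dt, sigma=5.0)
k_aa = schoenberg_kernel(lam_a, lam_a, dt, sigma=5.0)  # self-similarity is 1
```

The kernel equals one only for identical intensity functions and decays smoothly as the two rate profiles diverge, controlled by the kernel size σ.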
In order to be applicable the methodology must lead toa simple estimation of the quantities of interest (eg thekernel) from experimental data A practical choice used inour work estimates the conditional intensity function using akernel smoothing approach [42 45] which allows estimatingthe intensity function from a single realization and nonpara-metrically and injectively maps a windowed spike train intoa continuous function The estimated intensity function isobtained by simply convolving 119904(119905)with the smoothing kernel119892(119905) yielding
\[
\hat{\lambda}(t)=\sum_{m=1}^{M}g\left(t-t_{m}\right),\qquad t_{m}\in\mathcal{T},\; m=1,\ldots,M,
\tag{4}
\]
where the smoothing function g(t) integrates to 1. Here \(\hat{\lambda}(t)\) can be interpreted as an estimate of the intensity function. The rectangular and exponential functions [42, 46] are both popular smoothing kernels, which guarantee injective mappings from the spike train to the estimated intensity function. In order to decrease the kernel computation complexity, the rectangular function \(g(t)=(1/\mathcal{T})(U(t)-U(t-\mathcal{T}))\) (with \(\mathcal{T}\) much larger than the interspike interval) is used in our work, where U(t) is the Heaviside step function. This rectangular function approximates the cumulative density function of the spike counts in the window T and compromises the locality of the spike trains; that is, the mapping places more emphasis on the early spikes than on the later ones. However, our experiments show that this compromise has only a minimal impact on the kernel-based regression performance.
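To make the construction concrete, the kernel-smoothing estimate of Eq. (4) and the Schoenberg kernel of Eq. (3) can be sketched with NumPy. This is a minimal illustration, not the authors' code: the function names, the integration grid, and the default window width are our own choices, and the integral is approximated by a Riemann sum.

```python
import numpy as np

def intensity(spike_times, grid, width):
    """Estimate lambda(t) on a time grid (Eq. (4)): smooth each spike
    with a rectangular kernel g(t) = 1/width on [0, width)."""
    lam = np.zeros_like(grid)
    for tm in spike_times:
        lam += ((grid >= tm) & (grid < tm + width)) / width
    return lam

def schoenberg_spike_kernel(s_i, s_j, sigma, width=0.1, dt=1e-3, T=1.0):
    """Schoenberg kernel between two spike trains (Eq. (3)), using the
    estimated intensity functions on a common grid."""
    grid = np.arange(0.0, T, dt)
    d2 = np.sum((intensity(s_i, grid, width)
                 - intensity(s_j, grid, width)) ** 2) * dt
    return np.exp(-d2 / sigma ** 2)

def multiunit_spike_kernel(S_i, S_j, sigma, **kw):
    """Unweighted sum over per-unit kernels (Eq. (5)); S_i and S_j are
    lists of spike-time arrays, one array per unit."""
    return sum(schoenberg_spike_kernel(a, b, sigma, **kw)
               for a, b in zip(S_i, S_j))
```

Identical spike trains map to the same point in the RKHS (kernel value 1), while trains with disjoint intensity support give a value near 0, reflecting the sensitivity to intensity coupling noted above.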
Let \(s^{n}_{i}(t)\) denote the spike train for the ith sample of the nth spiking unit. The multiunit spike kernel is taken as the unweighted sum over the kernels on the individual units:
\[
\kappa_{s}\left(\mathbf{s}_{i}(t),\mathbf{s}_{j}(t)\right)=\sum_{n}\kappa_{s}\left(s^{n}_{i}(t),s^{n}_{j}(t)\right).
\tag{5}
\]
2.2. Kernels for LFPs. In contrast with spike trains, LFPs exhibit less spatial and temporal selectivity [15]. In the time domain, LFP features can be obtained by sliding a window over the signal, which captures the temporal LFP structure. The length of the window is selected based on the duration of the neural responses to certain stimuli; the extent of this duration can be assessed by the autocorrelation function, as we will discuss in Section 5.3.1. In the frequency domain, the spectral power and phase in different frequency bands are also known to be informative features for decoding, but here we concentrate only on time-domain decompositions. In the time domain, we can simply treat a single-channel LFP as a time series and apply the standard Schoenberg kernel to the sequence of signal samples in time. The Schoenberg kernel, defined on continuous spaces, maps the correlated time structure of the LFP x(t) into a function in the RKHS:
\[
\begin{aligned}
\kappa_{x}\left(x_{i}(t),x_{j}(t)\right)
&=\exp\!\left(-\frac{\left\|x_{i}(t)-x_{j}(t)\right\|^{2}}{\sigma_{x}^{2}}\right)
 =\exp\!\left(-\frac{\int_{\mathcal{T}_{x}}\left(x_{i}(t)-x_{j}(t)\right)^{2}\,\mathrm{d}t}{\sigma_{x}^{2}}\right)\\
&=\exp\!\left(-\left(\int_{\mathcal{T}_{x}}x_{i}(t)x_{i}(t)+x_{j}(t)x_{j}(t)-2x_{i}(t)x_{j}(t)\,\mathrm{d}t\right)\times\left(\sigma_{x}^{2}\right)^{-1}\right),
\end{aligned}
\tag{6}
\]
where \(\mathcal{T}_{x}=[0,T_{x}]\).
Let \(x^{n}_{i}(t)\) denote the LFP waveform for the ith sample of the nth channel. Similar to the multiunit spike kernel, the multichannel LFP kernel is defined as the direct-sum kernel:
\[
\kappa_{x}\left(\mathbf{x}_{i}(t),\mathbf{x}_{j}(t)\right)=\sum_{n}\kappa_{x}\left(x^{n}_{i}(t),x^{n}_{j}(t)\right).
\tag{7}
\]
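Eqs. (6) and (7) translate directly into code. The sketch below uses our own function names, with the integral again approximated by a Riemann sum over the sampled window:

```python
import numpy as np

def schoenberg_lfp_kernel(x_i, x_j, sigma_x, dt=1e-3):
    """Schoenberg kernel on two windowed LFP segments (Eq. (6)):
    Gaussian-like in the integrated squared difference."""
    d2 = np.sum((np.asarray(x_i, float) - np.asarray(x_j, float)) ** 2) * dt
    return np.exp(-d2 / sigma_x ** 2)

def multichannel_lfp_kernel(X_i, X_j, sigma_x, dt=1e-3):
    """Direct-sum kernel over channels (Eq. (7)); X_i and X_j are
    (channels x samples) arrays, one windowed segment per channel."""
    return sum(schoenberg_lfp_kernel(a, b, sigma_x, dt)
               for a, b in zip(X_i, X_j))
```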
2.3. Discrete Time Sampling. Assuming a sampling period τ, let \(\mathbf{x}_{i}=[x^{1}_{i},x^{1}_{i+1},\ldots,x^{1}_{i-1+T/\tau},x^{2}_{i},\ldots,x^{N}_{i-1+T/\tau}]\) denote the ith multichannel LFP vector, obtained by sliding the T-length window with step τ. Let \(\mathbf{s}_{i}=\{t_{m}-(i-1)\tau : t_{m}\in[(i-1)\tau,(i-1)\tau+T],\ m=1,\ldots,M\}\) denote the corresponding ith window of the multiunit spike timing sequence. The time scale of the analysis for LFPs and spikes, both in terms of the window length and the sampling rate, is very important and needs to be set according to the characteristics of each signal. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually; that is, the window length T and the sampling rate for spike trains and LFPs can differ. A suitable time scale can be estimated through the autocorrelation coefficients of the signal, as will be explained below.
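The windowing just described, and the tensor-product combination of the two kernels, might be sketched as follows. Window lengths and steps are in samples, and the helper names are ours, not the paper's:

```python
import numpy as np

def lfp_windows(lfp, T_x, tau):
    """Slide a window of T_x samples with step tau over a
    (channels x time) LFP array."""
    n = lfp.shape[1]
    return [lfp[:, i:i + T_x] for i in range(0, n - T_x + 1, tau)]

def spike_windows(units, T_s, tau, n_windows):
    """For window i, keep each unit's spike times falling in
    [i*tau, i*tau + T_s), re-referenced to the window start."""
    out = []
    for i in range(n_windows):
        t0 = i * tau
        out.append([u[(u >= t0) & (u < t0 + T_s)] - t0 for u in units])
    return out

def tensor_product_kernel(spike_pair, lfp_pair, kappa_s, kappa_x):
    """Tensor-product kernel on the joint (spike, LFP) sample:
    the product of the spike kernel and the LFP kernel."""
    return kappa_s(*spike_pair) * kappa_x(*lfp_pair)
```

Because the two factors are computed on separately windowed inputs, the spike and LFP time scales (9 ms and 20 ms in Section 5.3.1) need not agree.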
3. Adaptive Neural Decoding Model

For neural decoding applications, a regression model with multiscale neural activity as the input is built to reconstruct a stimulus. The appeal of kernel-based filters is the use of the linear structure of the RKHS to implement well-established linear adaptive algorithms and to obtain a nonlinear filter in the input space, which leads to universal approximation capability without the problem of local minima. There are several candidate kernel-based regression methods [32], such as support vector regression (SVR) [33], kernel recursive least squares (KRLS), and kernel least mean square (KLMS) [34]. The KLMS algorithm is preferred here because it is an online methodology of low computational complexity.
Input: \(\{x_{n},d_{n}\}\), \(n=1,2,\ldots,N\)
Initialization: initialize the weight vector \(\Omega(1)\), the codebook (set of centers) \(\mathcal{C}(0)=\{\}\), and the coefficient vector \(a(0)=[\,]\)
Computation:
For \(n=1,2,\ldots,N\):
(1) compute the output
\(y_{n}=\langle\Omega(n),\phi(x_{n})\rangle=\sum_{j=1}^{\mathrm{size}(\mathcal{C}(n-1))}a_{j}(n-1)\,\kappa(x_{n},\mathcal{C}_{j}(n-1))\)
(2) compute the error: \(e_{n}=d_{n}-y_{n}\)
(3) compute the minimum distance in the RKHS between \(x_{n}\) and each \(x_{i}\in\mathcal{C}(n-1)\):
\(d(x_{n})=\min_{j}\left(2-2\kappa(x_{n},\mathcal{C}_{j}(n-1))\right)\)
(4) if \(d(x_{n})\le\varepsilon\), keep the codebook unchanged, \(\mathcal{C}(n)=\mathcal{C}(n-1)\), and update the coefficient of the center closest to \(x_{n}\):
\(a_{k}(n)=a_{k}(n-1)+\eta e(n)\), where \(k=\arg\min_{j}\sqrt{2-2\kappa(x_{n},\mathcal{C}_{j}(n-1))}\)
(5) otherwise, store the new center: \(\mathcal{C}(n)=\{\mathcal{C}(n-1),x_{n}\}\), \(a(n)=[a(n-1),\eta e_{n}]\)
(6) \(\Omega(n+1)=\sum_{j=1}^{\mathrm{size}(\mathcal{C}(n))}a_{j}(n)\,\phi(\mathcal{C}_{j}(n))\)
end

Algorithm 1: Quantized kernel least mean square (QKLMS) algorithm.
The quantized kernel least mean square (Q-KLMS) algorithm is selected in our work to limit the filter growth. Algorithm 1 shows the pseudocode for Q-KLMS with a simple online vector quantization (VQ) method, where the quantization is performed based on the distance between the new input and each existing center. In this work, the distance between a center and the input is defined as their distance in the RKHS, which for a shift-invariant kernel normalized so that κ(x, x) = 1 for all x is \(\|\phi(x_{n})-\phi(x_{i})\|^{2}=2-2\kappa(x_{n},x_{i})\). If the smallest distance is less than a prespecified quantization size ε, the new coefficient \(\eta e_{n}\) adjusts the weight of the closest center; otherwise, a new center is added. Compared to other techniques [47–50] that have been proposed to curb the growth of the network, the simple online VQ method is not optimal but is very efficient. Since in our work the algorithm must be applied several times to the same data for convergence, after the first iteration over the data we choose ε = 0, which merges the repeated centers and enjoys the same performance as KLMS.
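Algorithm 1 fits in a few lines of Python. The sketch below is our own minimal implementation, assuming a normalized kernel (κ(x, x) = 1) supplied by the caller; class and parameter names are illustrative:

```python
import numpy as np

class QKLMS:
    """Quantized KLMS (Algorithm 1) for a normalized kernel kappa with
    kappa(x, x) = 1, so the squared RKHS distance is 2 - 2*kappa."""

    def __init__(self, kappa, eta=0.5, eps=0.1):
        self.kappa, self.eta, self.eps = kappa, eta, eps
        self.centers, self.coeffs = [], []      # codebook C and coefficients a

    def predict(self, x):
        # step (1): y_n = sum_j a_j * kappa(x_n, C_j)
        return sum(a * self.kappa(x, c)
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)                 # step (2): error
        if self.centers:
            # step (3): closest center in the RKHS (largest kernel value)
            k = max(range(len(self.centers)),
                    key=lambda j: self.kappa(x, self.centers[j]))
            if 2 - 2 * self.kappa(x, self.centers[k]) <= self.eps:
                self.coeffs[k] += self.eta * e  # step (4): merge into center k
                return e
        self.centers.append(x)                  # step (5): add a new center
        self.coeffs.append(self.eta * e)
        return e
```

With eps = 0, repeated inputs are merged exactly, which matches the setting used above when iterating several times over the same data.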
We use the Q-KLMS framework with the multiunit spike kernel, the multichannel LFP kernel, and the tensor-product kernel on the joint samples of LFPs and spike trains. This is quite unlike previous work in adaptive filtering, which almost exclusively uses the Gaussian kernel with real-valued time series.
4. Adaptive Inverse Control of the Spatiotemporal Patterns of Neural Activity
As the name indicates, the basic idea of adaptive inverse control is to learn an inverse model of the plant as the controller in Figure 2(a), such that the cascade of the controller and the plant performs like a unitary transfer function, that is, a perfect wire with some delay. The target plant output is used as the controller's command input. The controller parameters are updated to minimize the dissimilarity between the target output and the plant's output during the control process, which enables the controller to track plant variation and cancel system noise. The filtered-ε LMS adaptive inverse control diagram [51] shown in Figure 2(a) represents the filtered-ε approach to finding C(z). If the ideal inverse controller Ĉ(z) were the actual inverse controller, the mean square of the overall system error ε_k would be minimized. The objective is to make C(z) as close as possible to the ideal Ĉ(z). The difference between the outputs of C(z) and Ĉ(z), both driven by the command input, is therefore an error signal ε′. Since the target stimulation is unknown, instead of ε′ a filtered error ε, obtained by filtering the overall system error ε_k through the inverse plant model P̂⁻¹(z), is used for adaptation in place of ε′.

If the plant has a long response time, a modeling delay is advantageous to capture the early stages of the response, which is determined by the sliding window length used to obtain the inverse controller input. There is no performance penalty from the delay Δ as long as the input to C(z) undergoes the same delay. The parameters of the inverse model P̂⁻¹(z) are initially fit offline and updated during the whole system operation, which allows P̂⁻¹(z) to incrementally identify the inverse system and thus make ε approach ε′. Moreover, the adaptation enables P̂⁻¹(z) to track changes in the plant. Thus, minimizing the filtered error obtained from P̂⁻¹(z) makes the controller follow the system variation.
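The error flow of filtered-ε LMS can be seen on a deliberately tiny example: a hypothetical scalar plant with unknown gain p, a one-weight controller, and a one-weight inverse model, all adapted by LMS. The probe noise plays the role of the randomly patterned microstimulation used later for training; none of these numbers come from the paper, and in the paper both C(z) and P̂⁻¹(z) are Q-KLMS models in the tensor-product RKHS rather than scalar weights.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 2.0                  # unknown plant gain: y = p * u
w_c = 0.0                # controller weight C
w_inv = 0.0              # inverse plant model weight P^-1
eta = 0.05               # LMS step size

for _ in range(4000):
    r = rng.standard_normal()                  # command input = target output
    u = w_c * r + 0.1 * rng.standard_normal()  # controller output + probe noise
    y = p * u                                  # plant output
    w_inv += eta * (u - w_inv * y) * y         # adapt P^-1: map y back to u
    eps_k = r - y                              # overall system error
    eps_f = w_inv * eps_k                      # filtered error (eps_k through P^-1)
    w_c += eta * eps_f * r                     # filtered-eps LMS update of C

# at convergence the cascade C(z)P(z) approximates unity:
# both w_c and w_inv approach 1/p
```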
4.1. Relation to Neural Decoding. In this control scheme, there are only two models, C(z) and P̂⁻¹(z), adjusted during the control process, which share the same input (neural activity) and output types (continuous stimulation waveforms). Therefore, both models perform like neural decoders and can be implemented using the Q-KLMS method introduced in the previous section. Since all the mathematical models
Figure 2: Adaptive inverse control diagram. (a) A filtered-ε LMS adaptive inverse control diagram, with the command input driving the controller C(z) and the ideal inverse Ĉ(z), the plant P(z) subject to disturbance and noise, the modeling delay Δ, and the filtered error ε obtained through P̂⁻¹(z). (b) The corresponding adaptive inverse control diagram in the RKHS, with weights W_C and W_{P̂⁻¹} acting on φ(x), φ(x_Δ), and φ(z).
in this control scheme are kernel-based models, the whole control scheme can be mapped into an RKHS, which holds several advantages: (1) no restriction is imposed on the signal type; as long as a kernel can map the plant activity to an RKHS, the plant can be controlled with this scheme; (2) both the plant inverse P̂⁻¹(z) and the controller C(z) have a linear structure in the RKHS, which avoids the danger of converging to local minima.
Specifically, the controller C(z) and the plant inverse P̂⁻¹(z) are separately modeled with the tensor-product kernel described in Section 2, and the model coefficients are updated with Q-KLMS. This structure is shown in Figure 2(b). The coefficient matrices W_C and W_{P̂⁻¹} represent the weights of C(z) and P̂⁻¹(z) obtained by Q-KLMS, respectively. As this is a multiple-input multiple-output model, W_C and W_{P̂⁻¹} are the concatenations of the filter weights for each stimulation channel.

The variables x, y, and z denote the concatenation of the windowed target spike trains and LFPs serving as the command input of the controller, the estimated stimulation, and the plant output, respectively. x_Δ is the delayed target signal, which is aligned with the plant output z; φ(·) represents the mapping from the input space to the RKHS associated with the tensor-product kernel.
The overall system error is defined as \(\epsilon_{k}=\phi(\mathbf{x}_{\Delta})-\phi(\mathbf{z})\), which means that the controller's parameter adaptation seeks to minimize the distance in the RKHS between the target spike train/LFP and the plant output. Since the inverse model P̂⁻¹(z) has a linear structure in the RKHS, the filtered error for stimulation channel \(j\in\{1,\ldots,M\}\) is
\[
\epsilon(j)=W^{j}_{\hat{P}^{-1}}\,\phi\left(\mathbf{x}_{\Delta}\right)-W^{j}_{\hat{P}^{-1}}\,\phi\left(\mathbf{z}\right).
\tag{8}
\]
The controller model C(z) has a single input x, corresponding to the concatenation of the spike trains and LFPs, and an M-channel output y, corresponding to the microstimulation. Q-KLMS is used to model C(z) with N input samples. The target spike trains are repeated among different trials, which means that the repeated samples will
Figure 3: Neural elements in tactile stimulation experiments. To the left is the rat's hand with representative cutaneous receptive fields. When the tactor touches a particular "receptive field" on the hand, the VPL thalamus receives this information and relays it to S1 cortex. To emulate "natural touch" with microstimulation, the optimized spatiotemporal microstimulus patterns are injected into the same receptive field in the VPL thalamus through a microarray, so that the target neural activity pattern can be replicated in somatosensory regions (S1) to convey the natural touch sensation to the animal.
be merged into the same kernel center on the first pass through the data by the quantization, and thus the network size of the inverse controller is fixed (N centers). Only the coefficient matrix a is updated with the filtered error ε during the whole control operation. The output of C(z) can be calculated by
\[
\begin{bmatrix} y_{1}\\ y_{2}\\ \vdots\\ y_{M} \end{bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1N}\\
a_{21} & a_{22} & \cdots & a_{2N}\\
\vdots & \vdots & \ddots & \vdots\\
a_{M1} & a_{M2} & \cdots & a_{MN}
\end{bmatrix}
\begin{bmatrix}
\phi(\mathbf{c}_{1})'\\ \phi(\mathbf{c}_{2})'\\ \vdots\\ \phi(\mathbf{c}_{N})'
\end{bmatrix}
\phi(\mathbf{x})
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1N}\\
a_{21} & a_{22} & \cdots & a_{2N}\\
\vdots & \vdots & \ddots & \vdots\\
a_{M1} & a_{M2} & \cdots & a_{MN}
\end{bmatrix}
\begin{bmatrix}
\kappa(\mathbf{c}_{1},\mathbf{x})\\ \kappa(\mathbf{c}_{2},\mathbf{x})\\ \vdots\\ \kappa(\mathbf{c}_{N},\mathbf{x})
\end{bmatrix},
\tag{9}
\]
where \(\mathbf{c}_{n}\) is the nth center and \(a_{mn}\) is the coefficient assigned to the nth kernel center for the mth channel of the output.
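Eq. (9) is simply a coefficient matrix applied to a vector of kernel evaluations, which might be coded as follows (function name ours):

```python
import numpy as np

def controller_output(A, centers, x, kappa):
    """Multichannel controller output (Eq. (9)): y = A k(x), where A is
    the M x N coefficient matrix and k(x) stacks the kernel evaluations
    between the N stored centers and the input x."""
    k = np.array([kappa(c, x) for c in centers])  # N kernel evaluations
    return A @ k                                  # M stimulation channels
```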
5. Sensory Stimulation Experiment

5.1. Experimental Motivation and Framework. We applied these methods to the problem of converting touch information to electrical stimulation in neural prostheses. Somatosensory information originating in the peripheral nervous system ascends through the ventral posterior lateral (VPL) nucleus of the thalamus on its way to the primary somatosensory cortex (S1). Since most cutaneous and proprioceptive information is relayed through this nucleus, we expect that a suitably designed electrode array could be used to selectively stimulate a local group of VPL neurons so as to convey similar information to cortex. Electrophysiological experiments [52] suggest that the rostral portion of the rat VPL nucleus carries a large amount of proprioceptive information, while the medial and caudal portions code mostly for cutaneous stimuli. Since the body maps for both the VPL thalamus and S1 are known and fairly consistent, it is possible to implant electrode arrays in somatotopically overlapping areas of both regions.

We applied the proposed control method to generate multichannel electrical stimulation in VPL so as to evoke a naturalistic neural trajectory in S1. Figure 3 shows a schematic depiction of our experiment, which was conducted in rats. After implanting arrays in both VPL and S1, the responses to natural stimulation delivered by a motorized tactor were recorded in S1. Then we applied randomly patterned microstimulation in the VPL while recording the responses in S1. Using these responses, we then trained our controller to output the microstimulation patterns that would most accurately reproduce the neural responses to natural touch in S1. To solve this control problem, we first investigated how reliably the two types of stimulation, natural touch and electrical microstimulation, can be decoded.
5.2. Data Collection. All animal procedures were approved by the SUNY Downstate Medical Center IACUC and conformed to National Institutes of Health guidelines. A single female Long-Evans rat (Hilltop, Scottsdale, PA) was implanted with two microarrays while under anesthesia.
Figure 4: Bipolar microstimulation patterns applied in the sensory stimulation experiment. The panel shows the stimulation amplitude (μA) for each of the 24 stimulation patterns (classes) across the 16 stimulation channels.
After induction using isoflurane, urethane was used to maintain anesthetic depth. The array in VPL was a 2 × 8 grid of 70% platinum / 30% iridium 75 μm diameter microelectrodes (MicroProbes Inc.) with 500 μm between the rows and 250 μm interelectrode spacing within the rows. The microelectrodes had a 25:1 taper on the distal 5 mm, with a tip diameter of 3 μm. The approximate geometric surface area of the conducting tips was 1250 μm². The shank lengths were custom designed to fit the contour of the rat VPL [52]. Both rows were identical, and the shaft lengths for each row, from medial to lateral, were (8, 8, 8, 8, 8, 7.8, 7.6, 7.4) mm. The long axis of the VPL array was oriented along the rat's mediolateral axis.
The cortical electrode array (Blackrock Microsystems) was a 32-channel Utah array. The electrodes are arranged in a 6 × 6 grid excluding the 4 corners, and each electrode is 1.5 mm long. A single craniotomy exposing the cortical insertion sites for both arrays was made, and after several probing insertions with a single microelectrode (FHC) in an area 1 mm around the stereotaxic coordinates for the digit region of S1 (4.0 mm lateral and 0.5 mm anterior to bregma) [53, 54], the Utah array was inserted using a pneumatic piston. The electrodes cover somatosensory areas of the S1 cortex and the VPL nucleus of the thalamus [52]. Neural recordings were made using a multichannel acquisition system (Tucker-Davis).
Spike and field potential data were collected while the rat was maintained under anesthesia. The electrode voltages were preamplified with a gain of 1000, filtered with cutoffs at 0.7 Hz and 8.8 kHz, and digitized at 25 kHz. LFPs were further filtered from 1 to 300 Hz using a 3rd-order Butterworth filter. Spike sorting was achieved using k-means clustering of the first 2 principal components of the detected waveforms.
The experiment involves delivering microstimulation to VPL and tactile stimulation to the rat's fingers in separate sessions. Microstimulation is administered on adjacent pairs
Figure 5: Rat neural response elicited by tactile stimulation. The upper plot shows the normalized derivative of the tactile force. The remaining two plots show the corresponding LFPs (voltage per channel) and spike trains (per channel) evoked by the tactile stimulation.
(bipolar configurations) of the thalamic array. The stimulation waveforms are single symmetric biphasic rectangular current pulses; each rectangular pulse is 200 μs long and has an amplitude of 10 μA, 20 μA, or 30 μA. Interstimulus intervals are exponentially distributed with a mean interval of 100 ms. Stimulus isolation used a custom-built switching headstage. The bipolar microstimulation pulses are delivered in the thalamus. There are 24 patterns of microstimulation: 8 different sites and 3 different amplitude levels for each site, as shown in Figure 4. Each pattern is delivered 125 times.
The experimental procedure also involves delivering 30–40 short (100 ms) tactile touches to the rat's fingers (repeated for digit pads 1–4) using a hand-held probe. The rat remained anesthetized for the duration of the recording. The applied force is measured using a lever attached to the probe that pressed against a thin-film resistive force sensor (Trossen Robotics) when the probe tip contacted the rat's body. The resistive changes were converted to voltage using a bridge circuit and were filtered and digitized in the same way as described above. The digitized waveforms were filtered with a passband between 1 and 60 Hz using a 3rd-order Butterworth filter. The first derivative of this signal is used as the desired stimulation signal, which is shown in Figure 5.
5.3. Decoding Results. We now present the decoding results for the tactile stimulus waveform and the microstimulation, using Q-KLMS operating on the tensor-product kernel. The performance using the multiscale neural activity, both spike trains and LFPs, is compared with decoders using a single type of neural activity. This illustrates the effectiveness of the tensor-product-kernel-based framework in exploiting the complementary information in multiscale neural activity.
Figure 6: Autocorrelation of LFPs and spike trains for window size estimation. The sample autocorrelation is plotted against lag (ms) for the spike trains (top) and the LFPs (bottom).
5.3.1. Time Scale Estimation. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually, based on their own properties. In order to find reasonable time scales, we estimate the autocorrelation coefficients of the LFPs and spike trains, which indicate the response duration induced by the stimulation. For this purpose, spike trains are binned with a bin size of 1 ms, and the LFPs are resampled at 1000 Hz. The autocorrelation coefficients of each signal, averaged over channels, are calculated as
\[
\rho_{h}=\frac{\sum_{t=h+1}^{T}\left(y_{t}-\bar{y}\right)\left(y_{t-h}-\bar{y}\right)}{\sum_{t=1}^{T}\left(y_{t}-\bar{y}\right)^{2}}.
\tag{10}
\]
The 95% confidence bounds for the hypothesis that the autocorrelation coefficient is effectively zero are approximately \(\pm 2\,\mathrm{SE}_{\rho}\), where

\[
\mathrm{SE}_{\rho}=\sqrt{\frac{1+2\sum_{i=1}^{h-1}\rho_{i}^{2}}{N}}.
\tag{11}
\]
The average confidence bounds for the LFPs and spike trains are [−0.032, 0.032] and [−0.031, 0.031], respectively. The autocorrelation coefficients of the LFPs fall into the confidence interval after 20 ms, while the autocorrelation coefficients of the spike trains die out after 9 ms, as shown in Figure 6. Therefore, the decoder inputs are obtained by sliding a window with a size of T_s = 9 ms for spike trains and T_x = 20 ms for LFPs. In addition, the time discretization for the stimuli is 5 ms.

The learning rates for each decoder are determined by the best cross-validation results after scanning the parameters. The kernel sizes σ_s and σ_x are determined by the average distance in the RKHS between each pair of training samples. The normalized mean square error (NMSE) between the estimated stimulus (y) and the desired stimulus (d) is utilized as the accuracy criterion.
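The paper does not spell out the NMSE normalizer; one common choice (an assumption on our part) divides the MSE by the variance of the desired signal:

```python
import numpy as np

def nmse(y, d):
    """Normalized mean square error between estimate y and target d,
    normalizing by the target variance (our assumed convention)."""
    y, d = np.asarray(y, float), np.asarray(d, float)
    return np.mean((y - d) ** 2) / np.var(d)
```

Under this convention, predicting the target's mean gives NMSE = 1, so values such as those reported in Table 1 indicate errors well below the target's own variance.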
Figure 7: Qualitative comparison of the decoding performance on the first tactile stimulation trial among the LFP decoder, the spike decoder, and the spike and LFP decoder. The normalized derivative of the tactile force is plotted against time for the target and for each decoder's output.
Table 1: Comparison among neural decoders.

Property           LFP and spike   LFP          Spike
NMSE (mean/STD)    0.48/0.05       0.55/0.03    0.63/0.11
5.3.2. Results for Decoding Tactile Stimulation. NMSEs of tactile stimulation are obtained across 8 trial data sets. For each trial, we use 20 s of data to train the decoders and compute an independent test error on the remaining 25 s of data. The results are shown in Table 1, where we can observe that the LFP and spike decoder significantly outperformed both the LFP decoder and the spike decoder, with P value < 0.05.
In order to illustrate the details of the decoding performance, a portion of the test results of the first trial is shown in Figure 7. It can be observed that the output of the spike decoder fluctuates and misses some pulses (e.g., around 0.65 s) due to the sparsity and variability of spike trains. In contrast, the
Figure 8: Comparison of the microstimulation reconstruction performance among the spike decoder, the LFP decoder, and the spike and LFP decoder, in terms of the NMSE for each of the 8 microstimulation channels.
output estimated by the LFP decoder is smooth and more robust than that of the spike decoder, but the decoded signal undershoots the maximum force deflections. The LFP and spike decoder performed better than the LFP decoder by reincorporating the precise pulse timing information from the spike trains.
5.3.3. Results for Decoding Electrical Microstimulation. We also implemented a decoder to reconstruct the microstimulation pattern. First, we mapped the 8 different stimulation configurations to 8 channels. We disregarded the shape of each stimulus, since the time scale of the stimulus width is only 200 μs. The desired stimulation pattern of each channel is represented by a sparse time series of the stimulation amplitudes.
NMSEs are obtained over ten subsequence decoding results. We used 120 s of data to train the decoders and computed an independent test error on the remaining 20 s of data. The spike and LFP decoder again outperformed both the LFP decoder and the spike decoder. The comparison of results is shown in Figure 8, which indicates that the spike and LFP decoder obtains the best performance across the stimulation channels, especially for channels 2, 4, 6, and 7. It is observed that stimulation on channels 4, 5, 6, 7, and 8 cannot be decoded by the LFP decoder at all, since the fine time information is averaged out in the LFPs. For the spike train decoder, the stimulation channels are not well discriminated. However, the combination of spike trains and LFPs enriches the stimulation information, which contributes to better discrimination of the stimulation patterns among channels and also enables the model to capture the precise stimulation timing.
5.4. Open-Loop Adaptive Inverse Control Results. The challenge of implementing a somatosensory prosthesis is to precisely control the neural response in order to mimic the neural response induced by natural stimulation. As discussed, the kernel-based adaptive inverse control scheme with the tensor-product kernel is applied to address this problem. The adaptive inverse control model is based on a decoder that maps the neural activity in S1 to the microstimulation delivered in VPL. We proceed to show how the adaptive inverse control model can emulate the neural response to "natural touch" using optimized microstimulation.

In the same recording, open-loop adaptive inverse control via optimized thalamic (VPL) microstimulation is implemented. First, the inverse controller C(z) is trained with 300 s of the data generated by recording the responses to randomly patterned thalamic microstimulation. Then the first 60 s of the neural response recorded during tactile stimulation at each touch site is used as the target pattern and control input. When this entire neural response sequence is fed offline to the controller, it generates a corresponding sequence of multichannel microstimulation amplitudes.
However, the generated microstimulation sequence needs further processing to meet the restrictions of bipolar microstimulation before it is applied to VPL. The restrictions and processing are the following.

(1) A minimal interval of 10 ms between two stimuli is suggested by the experimental setting. The mean shift algorithm [55] is used to locate the local maxima of a subsequence of stimuli (10 ms) for each single channel. The maximum amplitude and the corresponding timing are used to set the amplitude and time of the stimuli.

(2) At any given time point, only a single pulse across all channels can be delivered. Therefore, at each time point, only the maximum value across channels is selected for stimulation; the values on the other channels are set to zero.

(3) The maximum/minimum stimulation amplitude is restricted to the range [8 μA–30 μA], which has been suggested as the effective and safe amplitude range in previous experiments.
The neural response to the microstimulation is recordedand compared with the target natural response Ideally thesetwo neural response sequences should be time-locked andbe very similar In particular the portions of the controlledresponse inwindows corresponding to a natural touch shouldmatch As this corresponds to a virtual touch delivered by theoptimizedmicrostimulation we define the term virtual touchto refer to the sequence of microstimulation patternsmdashtheoutput of the controllermdashcorresponding to a particular targetnatural touch
Portions of the neural response for both natural andvirtual touches are shown in Figure 9 The responses arealigned to the same time scale even though they were notrecorded concurrently It is clear that the multiunit neuralresponses recorded during the controlled microstimulationshare similar spatiotemporal patterns as those in the tar-get set Each virtual touch is achieved by a sequence ofmicrostimulation pulses that evokes synchronized burstingacross the neurons In addition microstimulation pulsesare delivered in between touch times to mimic populationspiking that is not associated with the touch timing
Figure 9: Neural responses to natural and virtual touches for touch on digit 1 (d1), along with the microstimulation corresponding to the virtual touches. Each of the four subfigures corresponds to a different segment of the continuous recording. In each subfigure, the timing of the touches and the spatiotemporal patterns of spike trains and LFPs are shown in the top two panels; the bottom panel shows the spatiotemporal pattern of microstimulation current (μA), where different colors represent different microstimulation channels. The neural responses are time-locked but not concurrently recorded, as the entire natural touch response is given as input to the controller, which generates the optimized microstimulation patterns. When the optimized microstimulation is applied in the VPL, it generates S1 neural responses that qualitatively match the natural, that is, the target, response.
Computational Intelligence and Neuroscience 13
[Figure 10: boxplots of correlation coefficient versus time lag (s) for spike trains (top) and local field potentials (bottom).]
Figure 10: Correlation coefficients between the controlled neural system output and the corresponding target neural response stimulated by actual touch. The boxplot of correlation coefficients represents the results of 6 test trials. Each trial corresponds to a particular touch site (digits d1, d2, d4, p3, p1, and mp).
To evaluate performance, we concentrate on the following two aspects of the virtual touches:
(i) Touch Timing. We study whether the neural response to virtual touch captures the timing information of the actual target touch.

(ii) Touch Site. We study whether the actual target touch site information is discriminately represented by the neural activity controlled by the microstimulation.
For touch timing, we estimate the correlation coefficients (CC) over time between the virtual touch responses and the corresponding target natural touch responses. To simplify the spike train correlation estimation, we bin the data using 5 ms bins. The correlation coefficients of both spike trains and LFPs are calculated. Figure 10 shows the boxplot of the correlation coefficients (CC) over 6 test trials, each of which corresponds to a particular natural touch site, a forepaw digit or pad (d1, d2, d4, p3, p1, or mp). It is observed that the maximum correlation coefficient is at lag zero for each trial, meaning that the virtual touch response is correctly time-locked. For each touch site, we estimate the similarity between the natural touch response and the virtual touch response in the following two cases:
(i) Matched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to the same touch site.

(ii) Unmatched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to different touch sites.
We extract all the neural responses in the 300 ms window after touch onset and calculate the correlation coefficients between natural touch responses and virtual touch responses across each pair of trials. The one-tailed Kolmogorov-Smirnov (KS) test is implemented to test the alternative
Table 2: Average and standard deviation of the correlation coefficient (CC) between natural touch spike train responses and virtual touch spike train responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site | CC (matched virtual) | CC (unmatched virtual) | P value
d1 | 0.42 ± 0.06 | 0.35 ± 0.06 | 0.00
d2 | 0.40 ± 0.05 | 0.37 ± 0.06 | 0.01
d4 | 0.40 ± 0.05 | 0.37 ± 0.05 | 0.02
p3 | 0.38 ± 0.05 | 0.37 ± 0.06 | 0.11
p2 | 0.40 ± 0.07 | 0.36 ± 0.05 | 0.00
mp | 0.41 ± 0.07 | 0.37 ± 0.06 | 0.00
hypothesis that the distribution of the correlation coefficients for the matched virtual case is higher than the distribution for the unmatched virtual case (the null hypothesis is that the distributions are the same). The correlation coefficients and P values of the KS test for spike trains and LFPs are shown in Tables 2 and 3. The similarity between natural touch responses and virtual touch responses in the unmatched virtual case is found to be significantly lower than in the matched virtual case for most touch sites (P value < 0.05), except for touch site p3 (and, for the LFPs, p2). Without psychophysical testing, it is unclear how effective the microstimulations are in producing true sensory sensations. Nonetheless, these are promising results that show the effectiveness of a controller utilizing the multiscale neural decoding methodology.
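The one-sided two-sample KS test described above can be computed directly from the empirical CDFs. The sketch below is illustrative only (hypothetical helper names, NumPy only) and uses the standard asymptotic one-sided p-value approximation rather than the authors' statistical software:

```python
import numpy as np

def ks_one_sided(matched, unmatched):
    """One-sided two-sample KS statistic for the alternative that the
    `matched` CC values are stochastically larger than the `unmatched`
    ones (i.e., the matched empirical CDF lies below the unmatched CDF)."""
    grid = np.concatenate([matched, unmatched])
    cdf_m = np.searchsorted(np.sort(matched), grid, side='right') / len(matched)
    cdf_u = np.searchsorted(np.sort(unmatched), grid, side='right') / len(unmatched)
    return np.max(cdf_u - cdf_m)

def ks_p_value(D, m, n):
    """Asymptotic one-sided p-value: exp(-2 D^2 m n / (m + n))."""
    return float(np.exp(-2.0 * D ** 2 * m * n / (m + n)))
```

With well-separated matched and unmatched CC samples, the statistic approaches 1 and the p-value is essentially zero; with overlapping samples, the statistic shrinks and the null is retained.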
6. Conclusions

This work proposes a novel tensor-product-kernel-based machine learning framework, which provides a way to
Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site | CC (matched virtual) | CC (unmatched virtual) | P value
d1 | 0.42 ± 0.20 | 0.28 ± 0.23 | 0.00
d2 | 0.46 ± 0.13 | 0.28 ± 0.22 | 0.00
d4 | 0.41 ± 0.19 | 0.26 ± 0.21 | 0.00
p3 | 0.38 ± 0.18 | 0.29 ± 0.22 | 0.07
p2 | 0.33 ± 0.19 | 0.26 ± 0.23 | 0.20
mp | 0.34 ± 0.17 | 0.25 ± 0.21 | 0.00
decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs contain complementary information that can enhance neural data decoding. However, a systematic approach to combine these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined in both the spike train space and the LFP space, seems to be a very productive approach to achieve our goals. We have basically used two types of combination kernels to achieve the multiscale combination: sum kernels to "average" across different spike channels as well as across LFP channels, which combine evidence for the neural event in each modality, and product kernels across the spike and LFP modalities to emphasize events that are represented in both multiscale modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step of a long process to optimize the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires that the neural event be present at both scales to be useful for decoding, which improves specificity. If we look carefully at Figure 6, we can understand the effect of decoding with the product kernel. Notice that the correlation times of spikes and LFPs are very different (LFPs have a much longer correlation time). Moreover, the composite kernel definition can be naturally configured to different brain areas, and even to neuronal types with distinctive firing patterns. Each pattern will lead to different correlation profiles, which will immediately tune the properties of the kernels across brain areas and neural populations. If only LFPs are used, we can expect that the response time of the decoder will be very long and miss some events. The product kernel in fact limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel. This explains exactly the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information from spikes and LFPs into the same model and enhance the robustness and accuracy of neural decoding.
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: how to emulate "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes or LFPs alone (results not shown). This result can be expected because the inverse controller is basically a decoder. However, we have to realize that not all tasks of interest reduce to neural decoding, and we do not even know if neural control can be further improved by a different kernel design. This is where further research is necessary to optimize the joint kernels. For instance, we can weight both the channel information and the multiscale information to maximize task performance using metric learning [40].
Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical way to leverage heterogeneous neural activities in decoding and control scenarios; it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J. K. Chapin, K. A. Moxon, R. S. Markowitz, and M. A. L. Nicolelis, "Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex," Nature Neuroscience, vol. 2, no. 7, pp. 664–670, 1999.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098–1101, 2008.
[3] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, "Direct cortical control of 3D neuroprosthetic devices," Science, vol. 296, no. 5574, pp. 1829–1832, 2002.
[4] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141–142, 2002.
[5] J. M. Carmena, M. A. Lebedev, R. E. Crist et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, pp. 193–208, 2003.
[6] J. DiGiovanna, B. Mahmoudi, J. Fortes, J. C. Príncipe, and J. C. Sanchez, "Coadaptive brain-machine interface via reinforcement learning," IEEE Transactions on Biomedical Engineering, vol. 56, no. 1, pp. 54–64, 2009.
[7] A. Belitski, S. Panzeri, C. Magri, N. K. Logothetis, and C. Kayser, "Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 533–545, 2010.
[8] J. R. Huxter, T. J. Senior, K. Allen, and J. Csicsvari, "Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus," Nature Neuroscience, vol. 11, no. 5, pp. 587–594, 2008.
[9] M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis, "Inferring spike trains from local field potentials," Journal of Neurophysiology, vol. 99, no. 3, pp. 1461–1476, 2008.
[10] G. Buzsaki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926–1929, 2004.
[11] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, Springer, 1991.
[12] R. E. Kass, V. Ventura, and E. N. Brown, "Statistical issues in the analysis of neuronal data," Journal of Neurophysiology, vol. 94, no. 1, pp. 8–25, 2005.
[13] R. D. Flint, E. W. Lindberg, L. R. Jordan, L. E. Miller, and M. W. Slutzky, "Accurate decoding of reaching movements from field potentials in the absence of spikes," Journal of Neural Engineering, vol. 9, no. 4, Article ID 046006, 2012.
[14] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee, "Local field potentials indicate network state and account for neuronal response variability," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 567–579, 2010.
[15] J. Liu and W. T. Newsome, "Local field potential in cortical area MT: stimulus tuning and behavioral correlations," Journal of Neuroscience, vol. 26, no. 30, pp. 7779–7790, 2006.
[16] P. Berens, G. Keliris, A. Ecker, N. Logothetis, and A. Tolias, "Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex," Frontiers in Neuroscience, vol. 2, Article ID 199207, 2008.
[17] D. Xing, C.-I. Yeh, and R. M. Shapley, "Spatial spread of the local field potential and its laminar variation in visual cortex," Journal of Neuroscience, vol. 29, no. 37, pp. 11540–11549, 2009.
[18] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, Ser. Adaptive Computation and Machine Learning, MIT Press, 2002.
[19] L. Li, I. M. Park, A. Brockmeier et al., "Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 532–543, 2013.
[20] S. Monaco, G. Kroliczak, D. J. Quinlan et al., "Contribution of visual and proprioceptive information to the precision of reaching movements," Experimental Brain Research, vol. 202, no. 1, pp. 15–32, 2010.
[21] R. S. Johansson and G. Westling, "Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects," Experimental Brain Research, vol. 56, no. 3, pp. 550–564, 1984.
[22] J. E. O'Doherty, M. A. Lebedev, P. J. Ifft et al., "Active tactile exploration using a brain-machine-brain interface," Nature, vol. 479, no. 7372, pp. 228–231, 2011.
[23] J. E. O'Doherty, M. A. Lebedev, T. L. Hanson, N. A. Fitzsimmons, and M. A. Nicolelis, "A brain-machine interface instructed by direct intracortical microstimulation," Frontiers in Integrative Neuroscience, vol. 3, 2009.
[24] N. A. Fitzsimmons, W. Drake, T. L. Hanson, M. A. Lebedev, and M. A. L. Nicolelis, "Primate reaching cued by multichannel spatiotemporal cortical microstimulation," The Journal of Neuroscience, vol. 27, no. 21, pp. 5593–5602, 2007.
[25] X.-J. Feng, B. Greenwald, H. Rabitz, E. Shea-Brown, and R. Kosut, "Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model," Journal of Neural Engineering, vol. 4, no. 2, pp. L14–L21, 2007.
[26] J. Liu, H. K. Khalil, and K. G. Oweiss, "Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states," Journal of Neural Engineering, vol. 8, no. 4, Article ID 045002, 2011.
[27] A. J. Brockmeier, J. S. Choi, M. M. Distasio, J. T. Francis, and J. C. Príncipe, "Optimizing microstimulation using a reinforcement learning framework," in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 1069–1072, September 2011.
[28] A. Brockmeier, J. Choi, M. Emigh, L. Li, J. Francis, and J. Príncipe, "Subspace matching thalamic microstimulation to tactile evoked potentials in rat somatosensory cortex," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2957–2960, 2012.
[29] Y. Ahmadian, A. M. Packer, R. Yuste, and L. Paninski, "Designing optimal stimuli to control neuronal spike timing," Journal of Neurophysiology, vol. 106, no. 2, pp. 1038–1053, 2011.
[30] J. Moehlis, E. Shea-Brown, and H. Rabitz, "Optimal inputs for phase models of spiking neurons," Journal of Computational and Nonlinear Dynamics, vol. 1, no. 4, pp. 358–367, 2006.
[31] D. Brugger, S. Butovas, M. Bogdan, and C. Schwarz, "Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials," IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1483–1491, 2011.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[33] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[34] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering, John Wiley & Sons, 2010.
[35] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
[36] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," The Journal of Machine Learning Research, vol. 5, pp. 27–72, 2004.
[37] C. Cortes, M. Mohri, and A. Rostamizadeh, "Algorithms for learning kernels based on centered alignment," Journal of Machine Learning Research, vol. 13, pp. 795–828, 2012.
[38] M. Yamada, W. Jitkrittum, L. Sigal, E. P. Xing, and M. Sugiyama, "High-dimensional feature selection by feature-wise kernelized lasso," Neural Computation, vol. 26, no. 1, pp. 185–207, 2013.
[39] R. A. Horn, "On infinitely divisible matrices, kernels, and functions," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 8, no. 3, pp. 219–230, 1967.
[40] A. J. Brockmeier, J. S. Choi, E. G. Kriminger, J. T. Francis, and J. C. Príncipe, "Neural decoding with kernel-based metric learning," Neural Computation, vol. 26, no. 6, 2014.
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, Spikernels: Embedding Spiking Neurons in Inner Product Spaces, vol. 15 of Advances in Neural Information Processing Systems, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Príncipe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223–2250, 2012.
[44] L. Li, Kernel Based Machine Learning Framework for Neural Decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453–466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213–225, 1991.
[48] L. Csato and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641–668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275–2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950–1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291–2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268–280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with applications in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975.
[Figure 1 shows spike trains sᵢ, sⱼ and LFP windows xₘ, xₙ being injectively mapped into their respective RKHSs, with κ_s(sᵢ, sⱼ) = ⟨ϕ(sᵢ), ϕ(sⱼ)⟩ and κ_x(xₘ, xₙ) = ⟨φ(xₘ), φ(xₙ)⟩, and the tensor-product kernel
κ(sᵢ, sⱼ, xₘ, xₙ) = κ_s(sᵢ, sⱼ) κ_x(xₘ, xₙ) = ⟨ϕ(sᵢ), ϕ(sⱼ)⟩⟨φ(xₘ), φ(xₙ)⟩ = ⟨ϕ(sᵢ) ⊗ φ(xₘ), ϕ(sⱼ) ⊗ φ(xₙ)⟩.]

Figure 1: Schematic representation of the construction of the RKHS defined by the tensor-product kernel from the individual spike and LFP kernel spaces, along with the mapping from the original data. Specifically, sᵢ denotes a window of multiunit spike trains, xₙ denotes a window of multichannel LFPs, κ_s(·, ·) denotes the spike train kernel with implicit mapping function ϕ(·), and κ_x(·, ·) denotes the LFP kernel with implicit mapping function φ(·).
In general, a joint kernel space constructed via either direct sum or tensor product allows the homogeneous processing of heterogeneous signal types, all within the framework of RKHSs. We use direct sum kernels to combine the different dimensions of multiunit spike trains or multichannel LFPs. For the spike trains, using the sum kernel across the different units enables an "average" population similarity over the space of spike trains, where averages cannot be computed. Then, a tensor-product kernel combines the two kernels, one for the multiunit spike trains and one for the multichannel LFPs; see Figure 1 for an illustration. The kernels for spike trains and LFPs can be selected and specified individually according to their specific properties.
In conclusion, composite kernels are very different from those commonly used in kernel-based machine learning, for example, for the support vector machine. In fact, here a pair of windowed spike trains and windowed LFPs is mapped into a feature function in the joint RKHS. Different spike train and LFP pairs are mapped to different locations in this RKHS, as shown in Figure 1. Due to their versatility, Schoenberg kernels, defined for both the spike timing space and LFPs, are employed in this paper and discussed below.
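The composite construction (sum kernels within each modality, a product kernel across modalities) can be sketched as follows. This is an illustrative sketch with hypothetical function names, and it assumes the spike trains have already been mapped to sampled intensity vectors (one row per unit, as discussed in Section 2.1), so that a single Gaussian-type kernel can serve both modalities:

```python
import numpy as np

def schoenberg_kernel(a, b, sigma):
    """Gaussian-type (Schoenberg) kernel on two sampled waveforms."""
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / sigma ** 2)

def sum_kernel(A, B, sigma):
    """Direct-sum kernel: sum the per-channel kernels (rows of A and B)."""
    return sum(schoenberg_kernel(a, b, sigma) for a, b in zip(A, B))

def tensor_product_kernel(S_i, S_j, X_i, X_j, sigma_s, sigma_x):
    """kappa = kappa_s * kappa_x: joint similarity of a (spike, LFP) pair,
    emphasizing events represented in both modalities."""
    return sum_kernel(S_i, S_j, sigma_s) * sum_kernel(X_i, X_j, sigma_x)
```

For identical input pairs, each per-channel kernel evaluates to 1, so the product equals the number of spike channels times the number of LFP channels; any mismatch in either modality shrinks the joint similarity multiplicatively.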
2.1. Kernel for Spike Trains. Unlike conventional amplitude data, there is no natural algebraic structure in the space of spike trains. The binning process, which easily transforms the point processes into discrete amplitude time series, is widely used in spike train analysis and allows the application of conventional amplitude-based kernels to spike trains [41], at the expense of losing the temporal resolution of the neural responses. This means that any temporal information in spikes within and between bins is disregarded, which is especially alarming when spike timing precision can be in the millisecond range. Although the bin size can be set small enough to preserve the fine time resolution, it will sparsify the signal representation, increase the artifact variability, and cause high dimensionality in the model, which requires voluminous data for proper training.
According to the literature, it is appropriate to consider a spike train to be a realization of a point process, which describes the temporal distribution of the spikes. Generally speaking, a point process pᵢ can be completely characterized by its conditional intensity function λ(t | Hᵢₜ), where t ∈ τ = [0, T] denotes the time coordinate and Hᵢₜ is the history of the process up to t. A recent research area [42, 43] is to define an injective mapping from spike trains to an RKHS based on the kernel between the conditional intensity functions of two point processes [42]. Among the cross-intensity (CI) kernels, the Schoenberg kernel is defined as
κ(λ(t | Hᵢₜ), λ(t | Hʲₜ)) = exp(−‖λ(t | Hᵢₜ) − λ(t | Hʲₜ)‖² / σ²)
                         = exp(−∫_τ (λ(t | Hᵢₜ) − λ(t | Hʲₜ))² dt / σ²),   (3)
where σ is the kernel size. The Schoenberg kernel is selected in this work because of its modeling accuracy and robustness to free parameter settings [44]. The Schoenberg kernel is a Gaussian-like kernel defined on intensity functions that is strictly positive definite and sensitive to the nonlinear coupling of two intensity functions [42]. Different spike trains will then be mapped to different locations in the RKHS. Compared to kernels designed on binned spike trains (e.g., the spikernel [41]), the main advantage of the Schoenberg kernel is that the precision in the spike event location is better preserved and the limitations of the sparseness and
high dimensionality for model building are also avoided, resulting in enhanced robustness and reduced computational complexity, especially when the application requires fine time resolution [44].
In order to be applicable, the methodology must lead to a simple estimation of the quantities of interest (e.g., the kernel) from experimental data. A practical choice used in our work estimates the conditional intensity function using a kernel smoothing approach [42, 45], which allows estimating the intensity function from a single realization, and nonparametrically and injectively maps a windowed spike train into a continuous function. The estimated intensity function is obtained by simply convolving s(t) with the smoothing kernel g(t), yielding
λ̂(t) = Σₘ₌₁ᴹ g(t − tₘ),  tₘ ∈ 𝒯, m = 1, …, M,   (4)
where the smoothing function g(t) integrates to 1. Here, λ̂(t) can be interpreted as an estimation of the intensity function. The rectangular and exponential functions [42, 46] are both popular smoothing kernels, which guarantee injective mappings from the spike train to the estimated intensity function. In order to decrease the kernel computation complexity, the rectangular function g(t) = (1/𝒯)(U(t) − U(t − 𝒯)) (with 𝒯 ≫ the interspike interval) is used in our work, where U(t) is a Heaviside function. This rectangular function approximates the cumulative density function of spike counts in the window T and compromises the locality of the spike trains; that is, the mapping places more emphasis on the early spikes than on the later ones. However, our experiments show that this compromise has only a minimal impact on the kernel-based regression performance.
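The intensity estimation of (4) with rectangular smoothing, and the Schoenberg kernel (3) evaluated on the estimates, can be sketched as follows (illustrative helper names; time is discretized with step dt, which is an assumption of this sketch, not part of the original formulation):

```python
import numpy as np

def estimated_intensity(spike_times, window, dt=0.001, width=0.1):
    """Eq. (4): lambda_hat(t) = sum_m g(t - t_m), with a rectangular
    smoothing function g of width `width` that integrates to 1."""
    t = np.arange(0.0, window, dt)
    lam = np.zeros_like(t)
    for tm in spike_times:
        lam += ((t >= tm) & (t < tm + width)) / width  # rectangular pulse
    return t, lam

def schoenberg_spike_kernel(times_i, times_j, window, sigma, dt=0.001):
    """Eq. (3) evaluated on the smoothed intensity estimates."""
    _, li = estimated_intensity(times_i, window, dt)
    _, lj = estimated_intensity(times_j, window, dt)
    d2 = np.sum((li - lj) ** 2) * dt  # Riemann sum approximates the integral
    return np.exp(-d2 / sigma ** 2)
```

Identical spike trains give a kernel value of exactly 1, and increasingly dissimilar timing patterns drive the value toward 0, which is the behavior the decoder relies on.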
Let sⁿᵢ(t) denote the spike train for the ith sample of the nth spiking unit. The multiunit spike kernel is taken as the unweighted sum over the kernels on the individual units:

κ_s(sᵢ(t), sⱼ(t)) = Σₙ κ_s(sⁿᵢ(t), sⁿⱼ(t)).   (5)
2.2. Kernels for LFPs. In contrast with spike trains, LFPs exhibit less spatial and temporal selectivity [15]. In the time domain, LFP features can be obtained by sliding a window on the signal, which describes the temporal LFP structure. The length of the window is selected based on the duration of neural responses to certain stimuli; the extent of the duration can be assessed by its autocorrelation function, as we will discuss in Section 5.3.1. In the frequency domain, the spectral power and phase in different frequency bands are also known to be informative features for decoding, but here we concentrate only on the time-domain decompositions. In the time domain, we can simply treat single-channel LFPs as a time series and apply the standard Schoenberg kernel to the sequence of signal samples in time. The Schoenberg kernel, defined in continuous spaces, maps the correlation time structure of the LFP x(t) into a function in the RKHS:
κ_x(xᵢ(t), xⱼ(t)) = exp(−‖xᵢ(t) − xⱼ(t)‖² / σ²ₓ)
                  = exp(−∫_{𝒯ₓ} (xᵢ(t) − xⱼ(t))² dt / σ²ₓ)
                  = exp(−(∫_{𝒯ₓ} xᵢ(t)xᵢ(t) + xⱼ(t)xⱼ(t) − 2xᵢ(t)xⱼ(t) dt) / σ²ₓ),   (6)

where 𝒯ₓ = [0, Tₓ].
Let xⁿᵢ(t) denote the LFP waveform for the ith sample of the nth channel. Similar to the multiunit spike kernel, the multichannel LFP kernel is defined by the direct sum kernel:

κ_x(xᵢ(t), xⱼ(t)) = Σₙ κ_x(xⁿᵢ(t), xⁿⱼ(t)).   (7)
2.3. Discrete Time Sampling. Assuming a sampling rate with period τ, let xᵢ = [x¹ᵢ, x¹ᵢ₊₁, …, x¹ᵢ₋₁₊T/τ, x²ᵢ, …, xᴺᵢ₋₁₊T/τ] denote the ith multichannel LFP vector obtained by sliding the T-length window with step τ. Let sᵢ = {tₘ − (i − 1)τ : tₘ ∈ [(i − 1)τ, (i − 1)τ + T], m = 1, …, M} denote the corresponding ith window of the multiunit spike timing sequence. The time scale of the analysis for LFPs and spikes, both in terms of the window length and the sampling rate, is very important and needs to be defined by the characteristics of each signal. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually; that is, the window length T and the sampling rate for spike trains and LFPs can be different. The suitable time scale can be estimated through the autocorrelation coefficients of the signal, as will be explained below.
3. Adaptive Neural Decoding Model

For neural decoding applications, a regression model with multiscale neural activities as the input is built to reconstruct a stimulus. The appeal of kernel-based filters is the usage of the linear structure of the RKHS to implement well-established linear adaptive algorithms and to obtain a nonlinear filter in the input space, which leads to universal approximation capability without the problem of local minima. There are several candidate kernel-based regression methods [32], such as support vector regression (SVR) [33], kernel recursive least squares (KRLS), and kernel least mean square (KLMS) [34]. The KLMS algorithm is preferred here because it is an online methodology of low computational complexity.
Input: {xₙ, dₙ}, n = 1, 2, …, N
Initialization: initialize the weight vector Ω(1), the codebook (set of centers) C(0) = ∅, and the coefficient vector a(0) = [].
Computation:
For n = 1, 2, …, N:
  (1) compute the output:
      yₙ = ⟨Ω(n), ϕ(xₙ)⟩ = Σⱼ₌₁^{size(C(n−1))} aⱼ(n − 1) κ(xₙ, Cⱼ(n − 1))
  (2) compute the error: eₙ = dₙ − yₙ
  (3) compute the minimum distance in the RKHS between xₙ and each xᵢ ∈ C(n − 1):
      d(xₙ) = minⱼ (2 − 2κ(xₙ, Cⱼ(n − 1)))
  (4) if d(xₙ) ≤ ε, keep the codebook unchanged, C(n) = C(n − 1), and update the coefficient of the center closest to xₙ:
      aₖ(n) = aₖ(n − 1) + η eₙ, where k = argminⱼ √(2 − 2κ(xₙ, Cⱼ(n − 1)))
  (5) otherwise, store the new center: C(n) = C(n − 1) ∪ {xₙ}, a(n) = [a(n − 1), η eₙ]
  (6) Ω(n + 1) = Σⱼ₌₁^{size(C(n))} aⱼ(n) ϕ(Cⱼ(n))
end

Algorithm 1: Quantized kernel least mean square (QKLMS) algorithm.
The quantized kernel least mean square (Q-KLMS) is selected in our work to decrease the filter growth. Algorithm 1 shows the pseudocode for the Q-KLMS algorithm with a simple online vector quantization (VQ) method, where the quantization is performed based on the distance between the new input and each existing center. In this work, this distance between a center and the input is defined by their distance in RKHS, which for a shift-invariant normalized kernel (κ(x, x) = 1 for all x) is ‖φ(x_n) − φ(x_i)‖² = 2 − 2κ(x_n, x_i). If the smallest distance is less than a prespecified quantization size ε, the new coefficient ηe_n adjusts the weight of the closest center; otherwise, a new center is added. Compared to other techniques [47–50] that have been proposed to curb the growth of the networks, the simple online VQ method is not optimal but is very efficient. Since in our work the algorithm must be applied several times to the same data for convergence, after the first iteration over the data we choose ε = 0, which merges the repeated centers and enjoys the same performance as KLMS.
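Algorithm 1 can be sketched directly in Python. This is a minimal illustration, not the paper's implementation: a Gaussian kernel stands in for the tensor-product kernel, and the step size η, quantization size ε, and kernel width σ are arbitrary.

```python
import numpy as np

def gauss_kernel(x, c, sigma=1.0):
    # Normalized shift-invariant kernel, so kappa(x, x) = 1.
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

def qklms(X, d, eta=0.5, eps=0.2, sigma=1.0):
    """Quantized KLMS (Algorithm 1): returns codebook C and coefficients a."""
    C, a = [X[0]], [eta * d[0]]              # first sample seeds the codebook
    for x_n, d_n in zip(X[1:], d[1:]):
        k = np.array([gauss_kernel(x_n, c, sigma) for c in C])
        y_n = np.dot(a, k)                   # step (1): filter output
        e_n = d_n - y_n                      # step (2): prediction error
        dist2 = 2.0 - 2.0 * k                # step (3): squared RKHS distances
        j = int(np.argmin(dist2))
        if dist2[j] <= eps:                  # step (4): merge onto closest center
            a[j] += eta * e_n
        else:                                # step (5): allocate a new center
            C.append(x_n)
            a.append(eta * e_n)
    return np.array(C), np.array(a)
```

With a smooth target, the codebook stays far smaller than the number of samples, which is the point of the quantization step.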
We use the Q-KLMS framework with the multiunit spike kernels, the multichannel LFP kernels, and the tensor-product kernel operating on the joint samples of LFPs and spike trains. This is quite unlike previous work in adaptive filtering, which almost exclusively uses the Gaussian kernel with real-valued time series.
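To make the construction concrete, here is a minimal sketch of a composite kernel: kernels averaged across spike channels and across LFP channels, multiplied across the two modalities. The Gaussian functions are illustrative stand-ins for the spike-train and LFP kernels the paper defines in Section 2.

```python
import numpy as np

def gaussian(u, v, sigma):
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

def sum_kernel(A, B, sigma):
    # Sum (here: mean) of a base kernel over corresponding channels.
    return np.mean([gaussian(a, b, sigma) for a, b in zip(A, B)])

def tensor_product_kernel(s1, x1, s2, x2, sigma_s=1.0, sigma_x=1.0):
    # Product across modalities: a pair of joint samples scores highly
    # only if BOTH the spike windows and the LFP windows are similar.
    return sum_kernel(s1, s2, sigma_s) * sum_kernel(x1, x2, sigma_x)
```

Because each factor is normalized, identical joint samples score 1, and dissimilarity in either modality pulls the product down.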
4 Adaptive Inverse Control of the Spatiotemporal Patterns of Neural Activity
As the name indicates, the basic idea of adaptive inverse control is to learn an inverse model of the plant to serve as the controller in Figure 2(a), such that the cascade of the controller and the plant performs like a unitary transfer function, that is, a perfect wire with some delay. The target plant output is used as the controller's command input. The controller parameters are updated to minimize the dissimilarity between the target output and the plant's output during the control process, which enables the controller to track plant variation and cancel system noise. The filtered-ε LMS adaptive inverse control diagram [51] shown in Figure 2(a) represents the filtered-ε approach to finding C(z). If the ideal inverse controller Ĉ(z) were the actual inverse controller, the mean square of the overall system error ε_k would be minimized. The objective is to make C(z) as close as possible to the ideal Ĉ(z). The difference between the outputs of C(z) and Ĉ(z), both driven by the command input, is therefore an error signal ε′. Since the target stimulation is unknown, instead of ε′, a filtered error ε, obtained by filtering the overall system error ε_k through the inverse plant model P^{−1}(z), is used for adaptation in place of ε′.

If the plant has a long response time, a modeling delay is advantageous to capture the early stages of the response, which is determined by the sliding window length used to obtain the inverse controller input. There is no performance penalty from the delay Δ as long as the input to Ĉ(z) undergoes the same delay. The parameters of the inverse model P^{−1}(z) are initially fit offline and updated during the whole system operation, which allows P^{−1}(z) to incrementally identify the inverse system and thus make ε approach ε′. Moreover, the adaptation enables P^{−1}(z) to track changes in the plant. Thus, minimizing the filtered error obtained from P^{−1}(z) makes the controller follow the system variation.
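The loop in Figure 2(a) can be illustrated with a toy scalar simulation. This is a sketch only: scalar LMS weights stand in for the paper's kernel-based C(z) and P^{−1}(z), the plant is an invented linear gain with noise, and the probe noise added to the stimulation is an assumption to keep the inverse-model identification excited.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(u):
    # Unknown plant: a fixed gain plus noise (a toy stand-in for the
    # neural response to microstimulation).
    return 0.7 * u + 0.05 * rng.standard_normal()

w_c = 0.0      # controller C(z) weight
w_inv = 0.0    # inverse plant model P^-1(z) weight
mu = 0.1       # LMS step size
for _ in range(5000):
    r = rng.standard_normal()                  # command input = target output
    u = w_c * r + 0.3 * rng.standard_normal()  # stimulation + probe noise
    z = plant(u)                               # plant output
    w_inv += mu * (u - w_inv * z) * z          # identify P^-1 from (z, u) pairs
    eps_f = w_inv * r - w_inv * z              # error filtered through P^-1
    w_c += mu * eps_f * r                      # filtered-epsilon LMS update of C(z)
```

After adaptation, w_c approaches the plant's inverse gain 1/0.7, so the cascade of controller and plant behaves like the unit transfer function the scheme targets.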
4.1. Relation to Neural Decoding. In this control scheme, there are only two models, C(z) and P^{−1}(z), adjusted during the control process, which share the same input (neural activity) and output types (continuous stimulation waveforms). Therefore, both models perform like neural decoders and can be implemented using the Q-KLMS method we introduced in the previous section. Since all the mathematical models
Figure 2: Adaptive inverse control diagram. (a) A filtered-ε LMS adaptive inverse control diagram: the command input drives the controller C(z) and, through a delay Δ, the ideal inverse controller Ĉ(z); the controller output drives the plant P(z), which is subject to plant disturbance and noise; the overall system error ε_k between the delayed command and the control system output is filtered through the inverse plant model P^{−1}(z) to produce the filtered error ε, which stands in for the unavailable error ε′. (b) The corresponding adaptive inverse control diagram in RKHS, where the weights W_C and W_{P^{−1}} operate on the mapped signals: y = W_C φ(x), ε_k = φ(x_Δ) − φ(z), and ε = W_{P^{−1}} φ(x_Δ) − W_{P^{−1}} φ(z).
in this control scheme are kernel-based models, the whole control scheme can be mapped into an RKHS, which holds several advantages: (1) no restriction is imposed on the signal type, so as long as a kernel can map the plant activity to an RKHS, the plant can be controlled with this scheme; (2) both the plant inverse P^{−1}(z) and the controller C(z) have a linear structure in RKHS, which avoids the danger of converging to local minima.
Specifically, the controller C(z) and the plant inverse P^{−1}(z) are separately modeled with the tensor-product kernel that we described in Section 2, and the model coefficients are updated with Q-KLMS. This structure is shown in Figure 2(b). The model coefficients W_C and W_{P^{−1}} represent the weight matrices of C(z) and P^{−1}(z) obtained by Q-KLMS, respectively. As this is a multiple-input multiple-output model, W_C and W_{P^{−1}} are the concatenation of the filter weights for each stimulation channel. The variables x, y, and z denote the concatenation of the windowed target spike trains and LFPs (the command input of the controller), the estimated stimulation, and the plant output, respectively. x_Δ is the delayed target signal, which is aligned with the plant output z. φ(·) represents the mapping function from the input space to the RKHS associated with the tensor-product kernel.

The overall system error is defined as ε_k = φ(x_Δ) − φ(z), which means that the controller's parameter adaptation seeks to minimize the distance in the RKHS between the target spike train/LFP response and the output of the plant. Since the inverse model P^{−1}(z) has a linear structure in RKHS, the filtered error for stimulation channel j ∈ {1, …, M} is
    ε(j) = W_{P^{−1}, j} φ(x_Δ) − W_{P^{−1}, j} φ(z).    (8)
The controller model C(z) has a single input x, corresponding to the concatenation of the spike trains and LFPs, and has an M-channel output y, corresponding to the microstimulation. Q-KLMS is used to model C(z) with N input samples. The target spike trains are repeated among different trials, which means that the repeated samples will
Figure 3: Neural elements in the tactile stimulation experiments. To the left is the rat's hand with representative cutaneous receptive fields. When the tactor touches a particular "receptive field" on the hand, VPL thalamus receives this information and relays it to S1 cortex. To emulate "natural touch" with microstimulation, the optimized spatiotemporal microstimulus patterns are injected into the same receptive field in VPL thalamus through a microarray, so that the target neural activity pattern can be replicated in somatosensory regions (S1) to convey the natural touch sensation to the animal.
be merged onto the same kernel center during the first pass through the data by the quantization, and thus the network size of the inverse controller is fixed (N centers). Only the coefficient matrix a is updated with the filtered error ε during the whole control operation. The output of C(z) can be calculated by
    [y_1]   [a_11 a_12 ··· a_1N] [φ(c_1)′]
    [y_2] = [a_21 a_22 ··· a_2N] [φ(c_2)′] φ(x)
    [ ⋮ ]   [  ⋮    ⋮   ⋱   ⋮ ] [   ⋮   ]
    [y_M]   [a_M1 a_M2 ··· a_MN] [φ(c_N)′]

            [a_11 a_12 ··· a_1N] [κ(c_1, x)]
          = [a_21 a_22 ··· a_2N] [κ(c_2, x)]
            [  ⋮    ⋮   ⋱   ⋮ ] [    ⋮    ]
            [a_M1 a_M2 ··· a_MN] [κ(c_N, x)]    (9)

where c_n is the nth center and a_mn is the coefficient assigned to the nth kernel center for the mth channel of the output.
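In code, (9) is a single matrix-vector product; the Gaussian kernel below is an illustrative stand-in for the tensor-product kernel κ.

```python
import numpy as np

def controller_output(A, centers, x, sigma=1.0):
    """Evaluate eq. (9): y = A k(x), with A the M x N coefficient matrix
    and k(x) the vector of kernel evaluations kappa(c_n, x)."""
    k = np.array([np.exp(-np.sum((c - x) ** 2) / (2 * sigma ** 2))
                  for c in centers])
    return A @ k  # one stimulation amplitude per output channel
```
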
5 Sensory Stimulation Experiment
5.1. Experimental Motivation and Framework. We applied these methods to the problem of converting touch information to electrical stimulation in neural prostheses. Somatosensory information originating in the peripheral nervous system ascends through the ventral posterior lateral (VPL) nucleus of the thalamus on its way to the primary somatosensory cortex (S1). Since most cutaneous and proprioceptive information is relayed through this nucleus, we expect that a suitably designed electrode array could be used to selectively stimulate a local group of VPL neurons so as to convey similar information to cortex. Electrophysiological experiments [52] suggest that the rostral portion of the rat VPL nucleus carries a large amount of proprioceptive information, while the medial and caudal portions code mostly for cutaneous stimuli. Since the body maps for both VPL thalamus and S1 are known and fairly consistent, it is possible to implant electrode arrays in somatotopically overlapping areas of both regions.

We applied the proposed control method to generate multichannel electrical stimulation in VPL so as to evoke a naturalistic neural trajectory in S1. Figure 3 shows a schematic depiction of our experiment, which was conducted in rats. After implanting arrays in both VPL and S1, the responses to natural stimulation delivered by a motorized tactor were recorded in S1. Then we applied randomly patterned microstimulation in the VPL while recording the responses in S1. Using these responses, we then trained our controller to output the microstimulation patterns that would most accurately reproduce the neural responses to natural touch in S1. To solve this control problem, we first investigated how reliably the two types of stimulation, natural touch and electrical microstimulation, can be decoded.
5.2. Data Collection. All animal procedures were approved by the SUNY Downstate Medical Center IACUC and conformed to National Institutes of Health guidelines. A single female Long-Evans rat (Hilltop, Scottsdale, PA) was implanted with two microarrays while under anesthesia.
Figure 4: Bipolar microstimulation patterns applied in the sensory stimulation experiment. Stimulation amplitude (μA, from −30 to 30) is shown across the 16 stimulation channels for each of the 24 stimulation patterns (classes).
After induction using isoflurane, urethane was used to maintain anesthetic depth. The array in VPL was a 2 × 8 grid of 70% platinum/30% iridium, 75 μm diameter microelectrodes (MicroProbes Inc.) with 500 μm between the rows and 250 μm interelectrode spacing within the rows. The microelectrodes had a 25:1 taper on the distal 5 mm, with a tip diameter of 3 μm. The approximate geometric surface area of the conducting tips was 1250 μm². The shank lengths were custom designed to fit the contour of the rat VPL [52]. Both rows were identical, and the shaft lengths for each row from medial to lateral were (8, 8, 8, 8, 8, 7.8, 7.6, 7.4) mm. The long axis of the VPL array was oriented along the rat's mediolateral axis.
The cortical electrode array (Blackrock Microsystems) was a 32-channel Utah array. The electrodes are arranged in a 6 × 6 grid excluding the 4 corners, and each electrode is 1.5 mm long. A single craniotomy that exposed the cortical insertion sites for both arrays was made, and after several probing insertions with a single microelectrode (FHC) in an area 1 mm surrounding the stereotaxic coordinates for the digit region of S1 (4.0 mm lateral and 0.5 mm anterior to bregma) [53, 54], the Utah array was inserted using a pneumatic piston. The electrodes cover somatosensory areas of the S1 cortex and the VPL nucleus of the thalamus [52]. Neural recordings were made using a multichannel acquisition system (Tucker Davis).
Spike and field potential data were collected while the rat was maintained under anesthesia. The electrode voltages were preamplified with a gain of 1000, filtered with cutoffs at 0.7 Hz and 8.8 kHz, and digitized at 25 kHz. LFPs are further filtered from 1 to 300 Hz using a 3rd-order Butterworth filter. Spike sorting is achieved using k-means clustering of the first 2 principal components of the detected waveforms.
The experiment involves delivering microstimulation to VPL and tactile stimulation to the rat's fingers in separate sections. Microstimulation is administered on adjacent pairs
Figure 5: Rat neural response elicited by tactile stimulation. The upper plot shows the normalized derivative of tactile force; the remaining two plots show the corresponding LFPs (voltage, mV, per channel) and spike trains evoked by the tactile stimulation over a 0.7 s window.
(bipolar configurations) of the thalamic array. The stimulation waveforms are single symmetric biphasic rectangular current pulses; each rectangular pulse is 200 μs long and has an amplitude of either 10 μA, 20 μA, or 30 μA. Interstimulus intervals are exponentially distributed with a mean interval of 100 ms. Stimulus isolation used a custom-built switching headstage. The bipolar microstimulation pulses are delivered in the thalamus. There are 24 patterns of microstimulation: 8 different sites and 3 different amplitude levels for each site, as shown in Figure 4. Each pattern is delivered 125 times.
The experimental procedure also involves delivering 30–40 short (100 ms) tactile touches to the rat's fingers (repeated for digit pads 1–4) using a hand-held probe. The rat remained anesthetized for the recording duration. The applied force is measured using a lever attached to the probe that pressed against a thin-film resistive force sensor (Trossen Robotics) when the probe tip contacted the rat's body. The resistive changes were converted to voltage using a bridge circuit and were filtered and digitized in the same way as described above. The digitized waveforms were filtered with a passband between 1 and 60 Hz using a 3rd-order Butterworth filter. The first derivative of this signal is used as the desired stimulation signal, which is shown in Figure 5.
5.3. Decoding Results. We now present the decoding results for the tactile stimulus waveform and the microstimulation using Q-KLMS operating on the tensor-product kernel. The performance using the multiscale neural activity, both spike trains and LFPs, is compared with decoders using a single type of neural activity. This illustrates the effectiveness of the tensor-product-kernel-based framework in exploiting the complementary information from multiscale neural activities.
Figure 6: Autocorrelation of LFPs and spike trains for window size estimation. Sample autocorrelation is plotted against lag (0–25 ms) for the spike trains (top) and the LFPs (bottom).
5.3.1. Time Scale Estimation. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually based on their own properties. In order to find reasonable time scales, we estimate the autocorrelation coefficients of LFPs and spike trains, which indicate the response duration induced by the stimulation. For this purpose, spike trains are binned with a bin size of 1 ms. The LFPs are also resampled with a sampling rate of 1000 Hz. The autocorrelation coefficients of each signal, averaged over channels, are calculated by
    ρ_h = [Σ_{t=h+1}^{T} (y_t − ȳ)(y_{t−h} − ȳ)] / [Σ_{t=1}^{T} (y_t − ȳ)²].    (10)
The 95% confidence bounds for the hypothesis that the autocorrelation coefficient is effectively zero are approximately estimated by ±2 SE_ρ, where

    SE_ρ = √[(1 + 2 Σ_{i=1}^{h−1} ρ_i²) / N].    (11)
The average confidence bounds for LFPs and spike trains are [−0.032, 0.032] and [−0.031, 0.031], respectively. The autocorrelation coefficients of the LFPs fall into the confidence interval after 20 ms, while the autocorrelation coefficients of the spike trains die out after 9 ms, as shown in Figure 6. Therefore, the decoder inputs are obtained by sliding windows with a size of T_s = 9 ms for spike trains and T_x = 20 ms for LFPs. In addition, the time discretization for the stimuli is 5 ms.

The learning rates for each decoder are determined by the best cross-validation results after scanning the parameters. The kernel sizes σ_s and σ_x are determined by the average distance in RKHS of each pair of training samples. The normalized mean square error (NMSE) between the estimated stimulus (y) and the desired stimulus (d) is utilized as an accuracy criterion.
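Equations (10)-(11) translate directly into code. The first-crossing stopping rule below is an illustrative reading of "falls into the confidence interval", not the paper's exact procedure.

```python
import numpy as np

def autocorr_coeffs(y, max_lag):
    # rho_h for h = 1..max_lag, as in eq. (10)
    y = y - y.mean()
    denom = np.sum(y ** 2)
    return np.array([np.sum(y[h:] * y[:len(y) - h]) / denom
                     for h in range(1, max_lag + 1)])

def response_time_scale(y, max_lag=30):
    """First lag h (in samples) at which rho_h falls inside the
    approximate 95% zero-autocorrelation bounds +/- 2*SE_rho, eq. (11)."""
    rho = autocorr_coeffs(y, max_lag)
    n = len(y)
    for h in range(1, max_lag + 1):
        se = np.sqrt((1.0 + 2.0 * np.sum(rho[:h - 1] ** 2)) / n)
        if abs(rho[h - 1]) <= 2.0 * se:
            return h
    return max_lag
```

Applied to the binned spike trains and resampled LFPs, this recovers window sizes analogous to the 9 ms and 20 ms figures above.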
Figure 7: Qualitative comparison of the decoding performance on the first tactile stimulation trial among the LFP decoder, the spike decoder, and the spike and LFP decoder. The target and decoded normalized derivative of tactile force are plotted over time.
Table 1: Comparison among neural decoders.

    Property            | LFP and spike | LFP         | Spike
    NMSE (mean ± STD)   | 0.48 ± 0.05   | 0.55 ± 0.03 | 0.63 ± 0.11
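The NMSE criterion behind Table 1 can be sketched as follows; normalizing the MSE by the variance of the desired signal is a common convention and is assumed here, since the exact normalization is not restated in this section.

```python
import numpy as np

def nmse(y, d):
    # Mean squared error normalized by the variance of the desired
    # signal d (the paper's exact normalization is assumed).
    y, d = np.asarray(y, float), np.asarray(d, float)
    return np.mean((y - d) ** 2) / np.var(d)
```
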
5.3.2. Results for Decoding Tactile Stimulation. NMSEs of tactile stimulation are obtained across 8 trial data sets. For each trial, we use 20 s of data to train the decoders and compute an independent test error on the remaining 25 s of data. The results are shown in Table 1, where we can observe that the LFP and spike decoder significantly outperformed both the LFP decoder and the spike decoder, with P value < 0.05.
In order to illustrate the details of the decoding performance, a portion of the test results of the first trial is shown in Figure 7. It is observed that the output of the spike decoder fluctuates and misses some pulses (e.g., around 0.65 s) due to the sparsity and variability of spike trains. In contrast, the
Figure 8: Comparison of the microstimulation reconstruction performance among the spike decoder, the LFP decoder, and the spike and LFP decoder, in terms of the normalized MSE for each of the 8 microstimulation channels.
output estimated by the LFP decoder is smooth and more robust than that of the spike decoder, but the decoded signal undershoots the maximum force deflections. The LFP and spike decoder performed better than the LFP decoder by reincorporating the precise pulse timing information from the spike trains.
5.3.3. Results for Decoding Electrical Microstimulation. We also implemented a decoder to reconstruct the microstimulation pattern. First, we mapped the 8 different stimulation configurations to 8 channels. We dismissed the shape of each stimulus, since the time scale of the stimulus width is only 200 μs. The desired stimulation pattern of each channel is represented by a sparse time series of the stimulation amplitudes.
NMSEs are obtained from ten subsequence decoding results. We used 120 s of data to train the decoders and computed an independent test error on the remaining 20 s of data. The spike and LFP decoder again outperformed both the LFP decoder and the spike decoder. The comparison of results is shown in Figure 8, which indicates that the spike and LFP decoder obtains the best performance across the stimulation channels, especially for channels 2, 4, 6, and 7. It is observed that stimulation on channels 4, 5, 6, 7, and 8 cannot be decoded by the LFP decoder at all, since the fine time information is averaged out in the LFPs. For the spike train decoder, the stimulation channels are not well discriminated. However, the combination of spike trains and LFPs enriched the stimulation information, which contributes to better discrimination of stimulation patterns among channels and also enables the model to capture the precise stimulation timing.
5.4. Open Loop Adaptive Inverse Control Results. The challenge of implementing a somatosensory prosthesis is to precisely control the neural response in order to mimic the neural response induced by natural stimulation. As discussed, the kernel-based adaptive inverse control diagram with the tensor-product kernel is applied to address this problem. The adaptive inverse control model is based on a decoder which maps the neural activity in S1 to the microstimulation delivered in VPL. We proceed to show how the adaptive inverse control model can emulate the neural response to "natural touch" using optimized microstimulation.
In the same recording, open loop adaptive inverse control via optimized thalamic (VPL) microstimulation is implemented. First, the inverse controller C(z) is trained with 300 s of the data generated by recording the response to randomly patterned thalamic microstimulation. Then the first 60 s of the neural response recorded during tactile stimulation at each touch site is used as the target pattern and control input. When this entire neural response sequence is fed offline to the controller, it generates a corresponding sequence of multichannel microstimulation amplitudes.
However, the generated microstimulation sequence needs further processing to meet the restrictions of bipolar microstimulation before it is applied to VPL. The restrictions and processing are the following.
(1) The minimal interval between two stimuli, 10 ms, is suggested by the experimental setting. The mean shift algorithm [55] is used to locate the local maxima of a subsequence of stimuli (10 ms) for each single channel. The maximum amplitude and corresponding timing are used to set the amplitude and time of the stimuli.

(2) At any given time point, only a single pulse across all channels can be stimulated. Therefore, at each time point, only the maximum value across channels is selected for stimulation. The values at the other channels are set to zero.

(3) The maximum/minimum stimulation amplitude is set in the range [8 μA–30 μA], which has been suggested as the effective and safe amplitude range in previous experiments.
After this processing, the generated multichannel microstimulation sequence (60 s in duration) is ready to be applied to the microstimulator immediately following computation.
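The three restrictions can be enforced as a simple post-processing pass over the controller's output. This sketch makes assumptions not fixed by the text: a 1 ms time grid, a greedy local-maximum selection standing in for the mean shift step, and clipping sub-threshold amplitudes up to the 8 μA floor.

```python
import numpy as np

def enforce_constraints(stim, min_interval=10, lo=8.0, hi=30.0):
    """stim: (T, M) array of requested amplitudes (uA) per 1 ms step.
    Returns a pulse train obeying: one pulse across channels per time
    point, >= min_interval ms between pulses, amplitudes in [lo, hi]."""
    T, M = stim.shape
    out = np.zeros_like(stim)
    last_pulse = -min_interval
    for t in range(T):
        ch = int(np.argmax(stim[t]))        # restriction (2): best channel only
        amp = stim[t, ch]
        if amp <= 0 or t - last_pulse < min_interval:
            continue                        # restriction (1): enforce spacing
        # keep the pulse only if it is the maximum of the next window
        # (a greedy stand-in for the mean shift local-maximum search)
        if amp >= stim[t:t + min_interval].max():
            out[t, ch] = np.clip(amp, lo, hi)  # restriction (3): safe range
            last_pulse = t
    return out
```
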
The neural response to the microstimulation is recorded and compared with the target natural response. Ideally, these two neural response sequences should be time-locked and very similar. In particular, the portions of the controlled response in windows corresponding to a natural touch should match. As this corresponds to a virtual touch delivered by the optimized microstimulation, we define the term virtual touch to refer to the sequence of microstimulation patterns (the output of the controller) corresponding to a particular target natural touch.
Portions of the neural response for both natural and virtual touches are shown in Figure 9. The responses are aligned to the same time scale even though they were not recorded concurrently. It is clear that the multiunit neural responses recorded during the controlled microstimulation share similar spatiotemporal patterns with those in the target set. Each virtual touch is achieved by a sequence of microstimulation pulses that evokes synchronized bursting across the neurons. In addition, microstimulation pulses are delivered in between touch times to mimic population spiking that is not associated with the touch timing.
Figure 9: Neural responses to natural and virtual touches for touch on digit 1 (d1), along with the microstimulation corresponding to the virtual touches. Each of the four subfigures corresponds to a different segment of the continuous recording. In each subfigure, the timing of the touches and the spatiotemporal patterns of spike trains and LFPs are shown in the top two panels; the bottom panel shows the spatiotemporal pattern of microstimulation, where different colors represent different microstimulation channels. The neural responses are time-locked but not concurrently recorded, as the entire natural touch response is given as input to the controller, which generates the optimized microstimulation patterns. When the optimized microstimulation is applied in the VPL, it generates S1 neural responses that qualitatively match the natural, that is, the target, response.
Figure 10: Correlation coefficients between the controlled neural system output and the corresponding target neural response evoked by actual touch, for spike trains (left) and local field potentials (right), as a function of lag. Boxplots of correlation coefficients represent the results of 6 test trials, each corresponding to a particular touch site (digits d1, d2, d4, p3, p1, and mp).
To evaluate performance, we concentrate on the following two aspects of virtual touches.

(i) Touch Timing. Whether the neural response to a virtual touch is capable of capturing the timing information of the actual target touch is studied.

(ii) Touch Site. Whether the actual target touch site information can be discriminately represented by the neural activity controlled by the microstimulation is studied.
For touch timing, we estimate the correlation coefficients (CC) over time between virtual touch responses and the corresponding target natural touch responses. To simplify the spike train correlation estimation, we bin the data using 5 ms bins. The correlation coefficients of both spike trains and LFPs are calculated. Figure 10 shows the boxplot of the correlation coefficients over 6 test trials, each of which corresponds to a particular natural touch site, a forepaw digit or pad (d1, d2, d4, p3, p1, or mp). It is observed that the maximum correlation coefficient is at lag zero for each trial, meaning that the virtual touch response is correctly time-locked. For each touch site, we estimate the similarity between the natural touch response and the virtual touch response in the following two cases.
(i) Matched Virtual Pairs consist of a virtual touch trial and a natural touch trial corresponding to the same touch site.

(ii) Unmatched Virtual Pairs consist of a virtual touch trial and a natural touch trial corresponding to different touch sites.
We extract all the neural responses in the 300 ms window after touch onset and calculate the correlation coefficients between natural touch responses and virtual touch responses across each pair of trials. The one-tailed Kolmogorov-Smirnov (KS) test is implemented to test the alternative
Table 2: Average and standard deviation of the correlation coefficient (CC) between natural touch spike train responses and virtual touch spike train responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

    Touch site | CC (matched virtual) | CC (unmatched virtual) | P value
    d1         | 0.42 ± 0.06          | 0.35 ± 0.06            | 0.00
    d2         | 0.40 ± 0.05          | 0.37 ± 0.06            | 0.01
    d4         | 0.40 ± 0.05          | 0.37 ± 0.05            | 0.02
    p3         | 0.38 ± 0.05          | 0.37 ± 0.06            | 0.11
    p2         | 0.40 ± 0.07          | 0.36 ± 0.05            | 0.00
    mp         | 0.41 ± 0.07          | 0.37 ± 0.06            | 0.00
hypothesis that the distribution of the correlation coefficients for the matched virtual case is higher than the distribution for the unmatched virtual case (the null hypothesis is that the distributions are the same). The correlation coefficients and P values of the KS test for spike trains and LFPs are shown in Tables 2 and 3. The similarity between natural touch responses and virtual touch responses in the unmatched virtual case is found to be significantly lower than in the matched virtual case for most touch sites (P value < 0.05), except for touch site p3. Without psychophysical testing, it is unclear how effective the microstimulations are in producing true sensory sensations. Nonetheless, these are promising results that show the effectiveness of a controller utilizing the multiscale neural decoding methodology.
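This evaluation can be sketched as below: Pearson correlation between binned response windows, and a one-sided KS statistic computed directly from empirical CDFs. The paper's exact test implementation and P-value computation are not reproduced here.

```python
import numpy as np

def corrcoef_windows(a, b):
    # correlation between two response windows (e.g., 5 ms binned counts)
    return np.corrcoef(a, b)[0, 1]

def ks_one_sided(greater, lesser):
    """KS statistic for the alternative that `greater` is stochastically
    larger: max over x of (CDF_lesser(x) - CDF_greater(x))."""
    xs = np.sort(np.concatenate([greater, lesser]))
    cdf_g = np.searchsorted(np.sort(greater), xs, side="right") / len(greater)
    cdf_l = np.searchsorted(np.sort(lesser), xs, side="right") / len(lesser)
    return float(np.max(cdf_l - cdf_g))
```

A large statistic for (matched CCs, unmatched CCs) supports the alternative that matched pairs are more similar, mirroring Tables 2 and 3.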
6 Conclusions
This work proposes a novel tensor-product-kernel-based machine learning framework, which provides a way to
Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

    Touch site | CC (matched virtual) | CC (unmatched virtual) | P value
    d1         | 0.42 ± 0.20          | 0.28 ± 0.23            | 0.00
    d2         | 0.46 ± 0.13          | 0.28 ± 0.22            | 0.00
    d4         | 0.41 ± 0.19          | 0.26 ± 0.21            | 0.00
    p3         | 0.38 ± 0.18          | 0.29 ± 0.22            | 0.07
    p2         | 0.33 ± 0.19          | 0.26 ± 0.23            | 0.20
    mp         | 0.34 ± 0.17          | 0.25 ± 0.21            | 0.00
decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs contain complementary information that can enhance neural data decoding. However, a systematic approach to combine these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined in both the spike train space and the LFP space, seems to be a very productive approach to achieve our goals. We have basically used two types of combination kernels to achieve the multiscale combination: sum kernels to "average" across different spike channels as well as across LFP channels, which combine evidence for the neural event in each modality, and product kernels across the spike and LFP modalities to emphasize events that are represented in both multiscale modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step of a long process to optimize the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires that the neural event be present at both scales to be useful for decoding, which improves specificity. If we look carefully at Figure 6, we can understand the effect of decoding with the product kernel: notice that the correlation times of spikes and LFPs are very different (LFPs have a much longer correlation time). Moreover, the composite kernel definition can be naturally configured to different brain areas and even neuronal types with distinctive firing patterns. Each pattern will lead to different correlation profiles, which will immediately tune the properties of the kernels across brain areas and neural populations. If only LFPs are used, we can expect that the response time of the decoder will be very long and miss some events. The product kernel in fact limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel, which explains exactly the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information
from spikes and LFPs into the same model and enhance theneural decoding robustness and accuracy
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: how to emulate "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes or LFPs alone (results not shown). This result can be expected because the inverse controller is basically a decoder. However, we have to recognize that not all tasks of interest reduce to neural decoding, and we do not yet know whether neural control can be further improved by a different kernel design. This is where further research is necessary to optimize the joint kernels. For instance, we can weight both the channel information and the multiscale information to maximize task performance using metric learning [40].
Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical way to leverage heterogeneous neural activity in decoding and control scenarios, and it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J. K. Chapin, K. A. Moxon, R. S. Markowitz, and M. A. L. Nicolelis, "Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex," Nature Neuroscience, vol. 2, no. 7, pp. 664–670, 1999.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098–1101, 2008.
[3] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, "Direct cortical control of 3D neuroprosthetic devices," Science, vol. 296, no. 5574, pp. 1829–1832, 2002.
[4] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141–142, 2002.
[5] J. M. Carmena, M. A. Lebedev, R. E. Crist et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, pp. 193–208, 2003.
[6] J. DiGiovanna, B. Mahmoudi, J. Fortes, J. C. Principe, and J. C. Sanchez, "Coadaptive brain-machine interface via reinforcement learning," IEEE Transactions on Biomedical Engineering, vol. 56, no. 1, pp. 54–64, 2009.
Computational Intelligence and Neuroscience 15
[7] A. Belitski, S. Panzeri, C. Magri, N. K. Logothetis, and C. Kayser, "Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 533–545, 2010.
[8] J. R. Huxter, T. J. Senior, K. Allen, and J. Csicsvari, "Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus," Nature Neuroscience, vol. 11, no. 5, pp. 587–594, 2008.
[9] M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis, "Inferring spike trains from local field potentials," Journal of Neurophysiology, vol. 99, no. 3, pp. 1461–1476, 2008.
[10] G. Buzsaki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926–1929, 2004.
[11] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, Springer, 1991.
[12] R. E. Kass, V. Ventura, and E. N. Brown, "Statistical issues in the analysis of neuronal data," Journal of Neurophysiology, vol. 94, no. 1, pp. 8–25, 2005.
[13] R. D. Flint, E. W. Lindberg, L. R. Jordan, L. E. Miller, and M. W. Slutzky, "Accurate decoding of reaching movements from field potentials in the absence of spikes," Journal of Neural Engineering, vol. 9, no. 4, Article ID 046006, 2012.
[14] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee, "Local field potentials indicate network state and account for neuronal response variability," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 567–579, 2010.
[15] J. Liu and W. T. Newsome, "Local field potential in cortical area MT: stimulus tuning and behavioral correlations," Journal of Neuroscience, vol. 26, no. 30, pp. 7779–7790, 2006.
[16] P. Berens, G. Keliris, A. Ecker, N. Logothetis, and A. Tolias, "Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex," Frontiers in Neuroscience, vol. 2, Article ID 199207, 2008.
[17] D. Xing, C.-I. Yeh, and R. M. Shapley, "Spatial spread of the local field potential and its laminar variation in visual cortex," Journal of Neuroscience, vol. 29, no. 37, pp. 11540–11549, 2009.
[18] B. Scholkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, Adaptive Computation and Machine Learning series, MIT Press, 2002.
[19] L. Li, I. M. Park, A. Brockmeier et al., "Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 532–543, 2013.
[20] S. Monaco, G. Kroliczak, D. J. Quinlan et al., "Contribution of visual and proprioceptive information to the precision of reaching movements," Experimental Brain Research, vol. 202, no. 1, pp. 15–32, 2010.
[21] R. S. Johansson and G. Westling, "Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects," Experimental Brain Research, vol. 56, no. 3, pp. 550–564, 1984.
[22] J. E. O'Doherty, M. A. Lebedev, P. J. Ifft et al., "Active tactile exploration using a brain-machine-brain interface," Nature, vol. 479, no. 7372, pp. 228–231, 2011.
[23] J. E. O'Doherty, M. A. Lebedev, T. L. Hanson, N. A. Fitzsimmons, and M. A. Nicolelis, "A brain-machine interface instructed by direct intracortical microstimulation," Frontiers in Integrative Neuroscience, vol. 3, 2009.
[24] N. A. Fitzsimmons, W. Drake, T. L. Hanson, M. A. Lebedev, and M. A. L. Nicolelis, "Primate reaching cued by multichannel spatiotemporal cortical microstimulation," The Journal of Neuroscience, vol. 27, no. 21, pp. 5593–5602, 2007.
[25] X.-J. Feng, B. Greenwald, H. Rabitz, E. Shea-Brown, and R. Kosut, "Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model," Journal of Neural Engineering, vol. 4, no. 2, pp. L14–L21, 2007.
[26] J. Liu, H. K. Khalil, and K. G. Oweiss, "Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states," Journal of Neural Engineering, vol. 8, no. 4, Article ID 045002, 2011.
[27] A. J. Brockmeier, J. S. Choi, M. M. Distasio, J. T. Francis, and J. C. Príncipe, "Optimizing microstimulation using a reinforcement learning framework," in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 1069–1072, September 2011.
[28] A. Brockmeier, J. Choi, M. Emigh, L. Li, J. Francis, and J. Principe, "Subspace matching thalamic microstimulation to tactile evoked potentials in rat somatosensory cortex," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2957–2960, 2012.
[29] Y. Ahmadian, A. M. Packer, R. Yuste, and L. Paninski, "Designing optimal stimuli to control neuronal spike timing," Journal of Neurophysiology, vol. 106, no. 2, pp. 1038–1053, 2011.
[30] J. Moehlis, E. Shea-Brown, and H. Rabitz, "Optimal inputs for phase models of spiking neurons," Journal of Computational and Nonlinear Dynamics, vol. 1, no. 4, pp. 358–367, 2006.
[31] D. Brugger, S. Butovas, M. Bogdan, and C. Schwarz, "Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials," IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1483–1491, 2011.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[33] A. J. Smola and B. Scholkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[34] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering, John Wiley & Sons, 2010.
[35] B. Scholkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
[36] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," The Journal of Machine Learning Research, vol. 5, pp. 27–72, 2004.
[37] C. Cortes, M. Mohri, and A. Rostamizadeh, "Algorithms for learning kernels based on centered alignment," Journal of Machine Learning Research, vol. 13, pp. 795–828, 2012.
[38] M. Yamada, W. Jitkrittum, L. Sigal, E. P. Xing, and M. Sugiyama, "High-dimensional feature selection by feature-wise kernelized lasso," Neural Computation, vol. 26, no. 1, pp. 185–207, 2013.
[39] R. A. Horn, "On infinitely divisible matrices, kernels, and functions," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 8, no. 3, pp. 219–230, 1967.
[40] A. J. Brockmeier, J. S. Choi, E. G. Kriminger, J. T. Francis, and J. C. Principe, "Neural decoding with kernel-based metric learning," Neural Computation, vol. 26, no. 6, 2014.
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, Spikernels: Embedding Spiking Neurons in Inner Product Spaces, vol. 15 of Advances in Neural Information Processing Systems, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Principe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223–2250, 2012.
[44] L. Li, Kernel Based Machine Learning Framework for Neural Decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453–466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213–225, 1991.
[48] L. Csato and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641–668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275–2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950–1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291–2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268–280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with applications in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975.
high dimensionality for model building are also avoided, resulting in enhanced robustness and reduced computational complexity, especially when the application requires fine time resolution [44].
In order to be applicable, the methodology must lead to a simple estimation of the quantities of interest (e.g., the kernel) from experimental data. A practical choice used in our work estimates the conditional intensity function using a kernel smoothing approach [42, 45], which allows the intensity function to be estimated from a single realization and nonparametrically and injectively maps a windowed spike train into a continuous function. The estimated intensity function is obtained by simply convolving the spike train s(t) with the smoothing kernel g(t), yielding

$$\hat{\lambda}(t) = \sum_{m=1}^{M} g(t - t_m), \qquad t_m \in \mathcal{T},\; m = 1, \ldots, M, \tag{4}$$
where the smoothing function g(t) integrates to 1. Here $\hat{\lambda}(t)$ can be interpreted as an estimate of the intensity function. The rectangular and exponential functions [42, 46] are both popular smoothing kernels that guarantee injective mappings from the spike train to the estimated intensity function. To decrease the kernel computation complexity, the rectangular function g(t) = (1/T)(U(t) − U(t − T)), with T much larger than the interspike interval, is used in our work, where U(t) is the Heaviside step function. This rectangular function approximates the cumulative density function of spike counts in the window T and compromises the locality of the spike trains; that is, the mapping places more emphasis on the early spikes than on the later ones. However, our experiments show that this compromise has only a minimal impact on kernel-based regression performance.
Let $s_i^n(t)$ denote the spike train for the $i$th sample of the $n$th spiking unit. The multiunit spike kernel is taken as the unweighted sum over the kernels on the individual units:

$$\kappa_s\left(\mathbf{s}_i(t), \mathbf{s}_j(t)\right) = \sum_{n} \kappa_s\left(s_i^n(t), s_j^n(t)\right). \tag{5}$$
2.2. Kernels for LFPs. In contrast with spike trains, LFPs exhibit less spatial and temporal selectivity [15]. In the time domain, LFP features can be obtained by sliding a window over the signal, which captures the temporal LFP structure. The length of the window is selected based on the duration of the neural responses to certain stimuli; the extent of this duration can be assessed via the autocorrelation function, as we will discuss in Section 5.3.1. In the frequency domain, the spectral power and phase in different frequency bands are also known to be informative features for decoding, but here we concentrate only on time-domain decompositions. In the time domain we can simply treat a single-channel LFP as a time series and apply the standard Schoenberg kernel to the sequence of signal samples in time. The Schoenberg kernel, defined on continuous spaces, maps the correlation time structure of the LFP $x(t)$ into a function in the RKHS:

$$\begin{aligned}
\kappa_x\left(x_i(t), x_j(t)\right) &= \exp\left(-\frac{\left\|x_i(t) - x_j(t)\right\|^2}{\sigma_x^2}\right) \\
&= \exp\left(-\frac{\int_{\mathcal{T}_x} \left(x_i(t) - x_j(t)\right)^2 \mathrm{d}t}{\sigma_x^2}\right) \\
&= \exp\left(-\frac{\int_{\mathcal{T}_x} x_i(t)x_i(t) + x_j(t)x_j(t) - 2x_i(t)x_j(t)\, \mathrm{d}t}{\sigma_x^2}\right),
\end{aligned} \tag{6}$$

where $\mathcal{T}_x = [0, T_x]$.
Let $x_i^n(t)$ denote the LFP waveform for the $i$th sample of the $n$th channel. Similar to the multiunit spike kernel, the multichannel LFP kernel is defined as the direct sum kernel:

$$\kappa_x\left(\mathbf{x}_i(t), \mathbf{x}_j(t)\right) = \sum_{n} \kappa_x\left(x_i^n(t), x_j^n(t)\right). \tag{7}$$
2.3. Discrete Time Sampling. Assuming a sampling rate with period $\tau$, let

$$\mathbf{x}_i = \left[x_i^1, x_{i+1}^1, \ldots, x_{i-1+T/\tau}^1, x_i^2, \ldots, x_{i-1+T/\tau}^N\right]$$

denote the $i$th multichannel LFP vector, obtained by sliding the $T$-length window with step $\tau$. Let

$$\mathbf{s}_i = \left\{t_m - (i-1)\tau : t_m \in \left[(i-1)\tau, (i-1)\tau + T\right],\; m = 1, \ldots, M\right\}$$

denote the corresponding $i$th window of the multiunit spike timing sequence. The time scale of the analysis, both in terms of window length and sampling rate, is very important and must be set according to the characteristics of each signal. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually; that is, the window length $T$ and the sampling rate for spike trains and LFPs can differ. A suitable time scale can be estimated from the autocorrelation coefficients of the signal, as will be explained below.
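A minimal sketch of this windowing, assuming the LFP is stored as a channel-by-sample array and spike times are in seconds (function names and toy values are ours; the LFP stride is in samples, the spike-window parameters in seconds, to emphasize that the two scales may differ):

```python
import numpy as np

def lfp_window(lfp, i, T, tau):
    """i-th multichannel LFP vector: concatenate a T-sample window
    (stride tau samples) from each channel; lfp has shape (N, total)."""
    start = (i - 1) * tau
    return lfp[:, start:start + T].ravel()

def spike_window(spike_times, i, T, tau):
    """i-th spike window: spike times re-referenced to the window start."""
    start = (i - 1) * tau
    return [t - start for t in spike_times if start <= t <= start + T]

lfp = np.arange(12.0).reshape(2, 6)      # 2 channels, 6 samples
print(lfp_window(lfp, i=2, T=3, tau=1))  # [1. 2. 3. 7. 8. 9.]
print(spike_window([0.05, 0.12, 0.31], i=2, T=0.2, tau=0.1))
# one spike falls inside [0.1, 0.3] and is shifted to the window start
```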
3. Adaptive Neural Decoding Model

For neural decoding applications, a regression model with multiscale neural activity as the input is built to reconstruct a stimulus. The appeal of kernel-based filters is that they use the linear structure of the RKHS to implement well-established linear adaptive algorithms and obtain a nonlinear filter in the input space, which provides universal approximation capability without the problem of local minima. There are several candidate kernel-based regression methods [32], such as support vector regression (SVR) [33], kernel recursive least squares (KRLS), and kernel least mean square (KLMS) [34]. The KLMS algorithm is preferred here because it is an online methodology of low computational complexity.
Input: {(x_n, d_n)}, n = 1, 2, …, N
Initialization: initialize the weight vector Ω(1), the codebook (set of centers) C(0) = ∅, and the coefficient vector a(0) = [ ]
Computation:
For n = 1, 2, …, N:
(1) compute the output:
  y_n = ⟨Ω(n), φ(x_n)⟩ = Σ_{j=1}^{size(C(n−1))} a_j(n−1) κ(x_n, C_j(n−1))
(2) compute the error: e_n = d_n − y_n
(3) compute the minimum distance in the RKHS between x_n and each x_i ∈ C(n−1):
  dis(x_n) = min_j (2 − 2κ(x_n, C_j(n−1)))
(4) if dis(x_n) ≤ ε, keep the codebook unchanged, C(n) = C(n−1), and update the coefficient of the center closest to x_n:
  a_k(n) = a_k(n−1) + η e_n, where k = argmin_j √(2 − 2κ(x_n, C_j(n−1)))
(5) otherwise, store the new center: C(n) = C(n−1) ∪ {x_n}, a(n) = [a(n−1), η e_n]
(6) Ω(n+1) = Σ_{j=1}^{size(C(n))} a_j(n) φ(C_j(n))
end

Algorithm 1: Quantized kernel least mean square (QKLMS) algorithm.
The quantized kernel least mean square (Q-KLMS) algorithm is selected in our work to limit the filter growth. Algorithm 1 shows the pseudocode for Q-KLMS with a simple online vector quantization (VQ) method, where quantization is performed based on the distance between the new input and each existing center. In this work the distance between a center and the input is defined by their distance in the RKHS, which for a shift-invariant normalized kernel (κ(x, x) = 1 for all x) is ‖φ(x_n) − φ(x_i)‖² = 2 − 2κ(x_n, x_i). If the smallest distance is less than a prespecified quantization size ε, the new coefficient ηe_n adjusts the weight of the closest center; otherwise, a new center is added. Compared to other techniques [47–50] that have been proposed to curb the growth of the network, the simple online VQ method is not optimal but is very efficient. Since in our work the algorithm must be applied several times to the same data for convergence, after the first iteration over the data we choose ε = 0, which merges the repeated centers and enjoys the same performance as KLMS.
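A compact Python sketch of Algorithm 1 for a normalized shift-invariant kernel (the class name, hyperparameters, and the toy sine-fitting task are ours, not the authors' implementation):

```python
import numpy as np

class QKLMS:
    """Quantized KLMS (Algorithm 1); for a normalized shift-invariant
    kernel the RKHS distance is ||phi(x) - phi(c)||^2 = 2 - 2*k(x, c)."""

    def __init__(self, kernel, eta=0.5, epsilon=0.1):
        self.kernel, self.eta, self.epsilon = kernel, eta, epsilon
        self.centers, self.coeffs = [], []      # codebook C and coefficients a

    def predict(self, x):
        return sum(a * self.kernel(x, c)
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)                 # steps (1)-(2): output and error
        if self.centers:
            k_vals = [self.kernel(x, c) for c in self.centers]
            j = int(np.argmax(k_vals))          # largest kernel = smallest RKHS distance
            if 2 - 2 * k_vals[j] <= self.epsilon:
                self.coeffs[j] += self.eta * e  # step (4): merge into closest center
                return e
        self.centers.append(x)                  # step (5): allocate a new center
        self.coeffs.append(self.eta * e)
        return e

# toy regression: fit sin(x) online with a Gaussian kernel
gauss = lambda x, y: np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2))
f = QKLMS(gauss, eta=0.5, epsilon=0.1)
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, 200)
for _ in range(5):                              # several passes, as in the paper
    for x in X:
        f.update([x], np.sin(x))
print(len(f.centers))                           # far fewer centers than 1000 updates
```

With ε = 0 after the first pass, as the authors do, repeated samples simply merge onto existing centers and the filter behaves like plain KLMS.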
We use the Q-KLMS framework with the multiunit spike kernel, the multichannel LFP kernel, and the tensor-product kernel on the joint samples of LFPs and spike trains. This is quite unlike previous work in adaptive filtering, which almost exclusively uses the Gaussian kernel with real-valued time series.
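The multiscale combination described above — sum kernels within each modality and a product across modalities — can be sketched as follows (helper names and toy data are ours; in the paper the spike kernel acts on the smoothed intensity functions of Section 2.1):

```python
import numpy as np

def sum_kernel(parts_i, parts_j, k):
    """Direct-sum kernel across units/channels, as in eqs. (5) and (7)."""
    return sum(k(a, b) for a, b in zip(parts_i, parts_j))

def tensor_product_kernel(sp_i, sp_j, lf_i, lf_j, k_spike, k_lfp):
    """Product across the two modalities: the joint similarity is large
    only when BOTH the spike and the LFP patterns are similar."""
    return sum_kernel(sp_i, sp_j, k_spike) * sum_kernel(lf_i, lf_j, k_lfp)

gauss = lambda a, b: np.exp(-np.sum((a - b) ** 2))
rng = np.random.default_rng(2)
spikes = rng.standard_normal((2, 50))   # 2 units (as smoothed intensities)
lfps = rng.standard_normal((3, 100))    # 3 LFP channels
k_joint = tensor_product_kernel(spikes, spikes, lfps, lfps, gauss, gauss)
print(k_joint)                          # identical samples: 2 * 3 = 6.0
```

The product structure is what lets the kernel "gate" one modality by the other: if either modality kernel is near zero, the joint kernel is near zero.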
4. Adaptive Inverse Control of the Spatiotemporal Patterns of Neural Activity
As the name indicates, the basic idea of adaptive inverse control is to learn an inverse model of the plant to serve as the controller, as in Figure 2(a), such that the cascade of the controller and the plant performs like a unit transfer function, that is, a perfect wire with some delay. The target plant output is used as the controller's command input. The controller parameters are updated to minimize the dissimilarity between the target output and the plant's output during the control process, which enables the controller to track plant variation and cancel system noise. The filtered-ε LMS adaptive inverse control diagram [51] shown in Figure 2(a) represents the filtered-ε approach to finding C(z). If the actual inverse controller C(z) were the ideal inverse controller, the mean square of the overall system error ε_k would be minimized. The objective is thus to make C(z) as close as possible to the ideal controller. The difference between the outputs of the actual and ideal controllers, both driven by the command input, is therefore an error signal ε′. Since the target stimulation is unknown, instead of ε′, a filtered error ε̂, obtained by passing the overall system error ε_k through the inverse plant model P̂⁻¹(z), is used for adaptation in place of ε′.
If the plant has a long response time, a modeling delay is advantageous to capture the early stages of the response, which is determined by the sliding window length used to obtain the inverse controller input. There is no performance penalty from the delay Δ as long as the input to C(z) undergoes the same delay. The parameters of the inverse model P̂⁻¹(z) are initially modeled offline and are updated during the whole system operation, which allows P̂⁻¹(z) to incrementally identify the inverse system and thus makes ε̂ approach ε′. Moreover, the adaptation enables P̂⁻¹(z) to track changes in the plant. Thus, minimizing the filtered error obtained from P̂⁻¹(z) makes the controller follow the system variation.
4.1. Relation to Neural Decoding. In this control scheme there are only two models, C(z) and P̂⁻¹(z), adjusted during the control process; they share the same input (neural activity) and output (continuous stimulation waveforms) types. Therefore, both models perform like neural decoders and can be implemented using the Q-KLMS method introduced in the previous section. Since all the mathematical models
Figure 2: Adaptive inverse control diagram. (a) A filtered-ε LMS adaptive inverse control diagram. (b) An adaptive inverse control diagram in RKHS.
in this control scheme are kernel-based models, the whole control scheme can be mapped into an RKHS, which has several advantages: (1) no restriction is imposed on the signal type; as long as a kernel can map the plant activity to an RKHS, the plant can be controlled with this scheme; (2) both the plant inverse P̂⁻¹(z) and the controller C(z) have a linear structure in the RKHS, which avoids the danger of converging to local minima.

Specifically, the controller C(z) and the plant inverse P̂⁻¹(z) are separately modeled with the tensor-product kernel described in Section 2, and the model coefficients are updated with Q-KLMS. This structure is shown in Figure 2(b). The model coefficients W_C and W_{P̂⁻¹} represent the weight matrices of C(z) and P̂⁻¹(z) obtained by Q-KLMS, respectively. As this is a multiple-input multiple-output model, W_C and W_{P̂⁻¹} are concatenations of the filter weights for each stimulation channel. The variables x, y, and z denote, respectively, the concatenation of the windowed target spike trains and LFPs (the command input of the controller), the estimated stimulation, and the plant output. x_Δ is the delayed target signal, aligned with the plant output z, and φ(·) represents the mapping from the input space to the RKHS associated with the tensor-product kernel.
The overall system error is defined as ε_k = φ(x_Δ) − φ(z), which means that the controller's parameter adaptation seeks to minimize the distance in the RKHS between the target spike trains/LFPs and the output of the plant. Since the inverse model P̂⁻¹(z) has a linear structure in the RKHS, the filtered error for stimulation channel j ∈ {1, …, M} is

$$\hat{\epsilon}(j) = \mathbf{W}_{j}^{\hat{P}^{-1}} \phi\left(\mathbf{x}_{\Delta}\right) - \mathbf{W}_{j}^{\hat{P}^{-1}} \phi\left(\mathbf{z}\right). \tag{8}$$
The controller model C(z) has a single input x, corresponding to the concatenation of the spike trains and LFPs, and an M-channel output y, corresponding to the microstimulation. Q-KLMS is used to model C(z) with N input samples. The target spike trains are repeated across trials, which means that the repeated samples will
Figure 3: Neural elements in tactile stimulation experiments. To the left is the rat's hand with representative cutaneous receptive fields. When the tactor touches a particular "receptive field" on the hand, the VPL thalamus receives this information and relays it to S1 cortex. To emulate "natural touch" with microstimulation, the optimized spatiotemporal microstimulation patterns are injected into the same receptive field in the VPL thalamus through a microarray, so that the target neural activity pattern can be replicated in the somatosensory region (S1) to convey the natural touch sensation to the animal.
be merged onto the same kernel centers on the first pass through the data by the quantization, and thus the network size of the inverse controller is fixed (N centers). Only the coefficient matrix a is updated with the filtered error ε̂ during the whole control operation. The output of C(z) can be calculated by
$$
\begin{bmatrix} y^{1} \\ y^{2} \\ \vdots \\ y^{M} \end{bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1N} \\
a_{21} & a_{22} & \cdots & a_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
a_{M1} & a_{M2} & \cdots & a_{MN}
\end{bmatrix}
\begin{bmatrix}
\phi(\mathbf{c}_{1})' \\ \phi(\mathbf{c}_{2})' \\ \vdots \\ \phi(\mathbf{c}_{N})'
\end{bmatrix}
\phi(\mathbf{x})
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1N} \\
a_{21} & a_{22} & \cdots & a_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
a_{M1} & a_{M2} & \cdots & a_{MN}
\end{bmatrix}
\begin{bmatrix}
\kappa(\mathbf{c}_{1}, \mathbf{x}) \\ \kappa(\mathbf{c}_{2}, \mathbf{x}) \\ \vdots \\ \kappa(\mathbf{c}_{N}, \mathbf{x})
\end{bmatrix},
\tag{9}
$$
where $\mathbf{c}_n$ is the $n$th center and $a_{mn}$ is the coefficient assigned to the $n$th kernel center for the $m$th channel of the output.
5. Sensory Stimulation Experiment
5.1. Experimental Motivation and Framework. We applied these methods to the problem of converting touch information to electrical stimulation in neural prostheses. Somatosensory information originating in the peripheral nervous system ascends through the ventral posterior lateral (VPL) nucleus of the thalamus on its way to the primary somatosensory cortex (S1). Since most cutaneous and proprioceptive information is relayed through this nucleus, we expect that a suitably designed electrode array could be used to selectively stimulate a local group of VPL neurons so as to convey similar information to cortex. Electrophysiological experiments [52] suggest that the rostral portion of the rat VPL nucleus carries a large amount of proprioceptive information, while the medial and caudal portions code mostly for cutaneous stimuli. Since the body maps for both the VPL thalamus and S1 are known and fairly consistent, it is possible to implant electrode arrays in somatotopically overlapping areas of both regions.
We applied the proposed control method to generate multichannel electrical stimulation in the VPL so as to evoke a naturalistic neural trajectory in S1. Figure 3 shows a schematic depiction of our experiment, which was conducted in rats. After implanting arrays in both VPL and S1, the responses to natural stimulation delivered by a motorized tactor were recorded in S1. Then we applied randomly patterned microstimulation in the VPL while recording the responses in S1. Using these responses, we trained our controller to output the microstimulation patterns that would most accurately reproduce the neural responses to natural touch in S1. To solve this control problem, we first investigated how reliably the two types of stimulation, natural touch and electrical microstimulation, can be decoded.
5.2. Data Collection. All animal procedures were approved by the SUNY Downstate Medical Center IACUC and conformed to National Institutes of Health guidelines. A single female Long-Evans rat (Hilltop, Scottsdale, PA) was implanted with two microarrays while under anesthesia.
Computational Intelligence and Neuroscience 9
Figure 4: Bipolar microstimulation patterns applied in the sensory stimulation experiment (stimulation amplitude in μA, −30 to 30, across the 16 stimulation channels, for the 24 stimulation pattern classes).
After induction using isoflurane, urethane was used to maintain anesthetic depth. The array in VPL was a 2 × 8 grid of 70% platinum-30% iridium, 75 μm diameter microelectrodes (MicroProbes Inc.) with 500 μm between the rows and 250 μm interelectrode spacing within the rows. The microelectrodes had a 25:1 taper on the distal 5 mm with a tip diameter of 3 μm. The approximate geometric surface area of the conducting tips was 1250 μm². The shank lengths were custom designed to fit the contour of the rat VPL [52]. Both rows were identical, and the shaft lengths for each row from medial to lateral were (8, 8, 8, 8, 8, 7.8, 7.6, 7.4) mm. The long axis of the VPL array was oriented along the rat's mediolateral axis.
The cortical electrode array (Blackrock Microsystems) was a 32-channel Utah array. The electrodes are arranged in a 6 × 6 grid excluding the 4 corners, and each electrode is 1.5 mm long. A single craniotomy that exposed the cortical insertion sites for both arrays was made, and after several probing insertions with a single microelectrode (FHC) in an area 1 mm surrounding the stereotaxic coordinates for the digit region of S1 (4.0 mm lateral and 0.5 mm anterior to bregma) [53, 54], the Utah array was inserted using a pneumatic piston. The electrodes cover somatosensory areas of the S1 cortex and the VPL nucleus of the thalamus [52]. Neural recordings were made using a multichannel acquisition system (Tucker-Davis).
Spike and field potential data were collected while the rat was maintained under anesthesia. The electrode voltages were preamplified with a gain of 1000, filtered with cutoffs at 0.7 Hz and 8.8 kHz, and digitized at 25 kHz. LFPs were further filtered from 1 to 300 Hz using a 3rd-order Butterworth filter. Spike sorting was achieved using k-means clustering of the first 2 principal components of the detected waveforms.
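As a rough, self-contained illustration of this sorting step (a sketch, not the authors' exact pipeline; `sort_spikes` is a hypothetical helper name), detected waveforms can be projected onto their first two principal components and clustered with a plain k-means:

```python
import numpy as np

def sort_spikes(waveforms, n_units=2, n_pcs=2, n_iter=50, seed=0):
    """Cluster detected spike waveforms (n_spikes x n_samples):
    project onto the first n_pcs principal components, then k-means."""
    rng = np.random.default_rng(seed)
    X = waveforms - waveforms.mean(axis=0)        # center each sample point
    # PCA via SVD of the centered waveform matrix
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_pcs].T                     # (n_spikes, n_pcs)
    # plain k-means on the PC scores
    centers = scores[rng.choice(len(scores), n_units, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(scores[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)                 # nearest center per spike
        for k in range(n_units):
            if np.any(labels == k):
                centers[k] = scores[labels == k].mean(axis=0)
    return labels, scores
```

In practice the number of units per channel and the initialization would be chosen by inspecting the PC-score clouds.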
The experiment involves delivering microstimulation to VPL and tactile stimulation to the rat's fingers in separate sessions. Microstimulation is administered on adjacent pairs
Figure 5: Rat neural response elicited by tactile stimulation. The upper plot shows the normalized derivative of tactile force; the remaining two plots show the corresponding LFPs (voltage in mV, per channel) and spike trains (per channel) evoked by the tactile stimulation.
(bipolar configurations) of the thalamic array. The stimulation waveforms are single symmetric biphasic rectangular current pulses; each rectangular pulse is 200 μs long and has an amplitude of either 10 μA, 20 μA, or 30 μA. Interstimulus intervals are exponentially distributed with a mean interval of 100 ms. Stimulus isolation used a custom-built switching headstage. The bipolar microstimulation pulses are delivered in the thalamus. There are 24 patterns of microstimulation: 8 different sites and 3 different amplitude levels for each site, as shown in Figure 4. Each pattern is delivered 125 times.
The experimental procedure also involves delivering 30-40 short (100 ms) tactile touches to the rat's fingers (repeated for digit pads 1-4) using a hand-held probe. The rat remained anesthetized for the duration of the recording. The applied force is measured using a lever attached to the probe that pressed against a thin-film resistive force sensor (Trossen Robotics) when the probe tip contacted the rat's body. The resistive changes were converted to voltage using a bridge circuit and were filtered and digitized in the same way as described above. The digitized waveforms were filtered with a passband between 1 and 60 Hz using a 3rd-order Butterworth filter. The first derivative of this signal is used as the desired stimulation signal, which is shown in Figure 5.
5.3. Decoding Results. We now present the decoding results for the tactile stimulus waveform and microstimulation using Q-KLMS operating on the tensor-product kernel. The performance using the multiscale neural activity, both spike trains and LFPs, is compared with that of decoders using a single type of neural activity. This illustrates the effectiveness of the tensor-product-kernel-based framework in exploiting the complementary information in multiscale neural activity.
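As a concrete illustration of how the two modalities enter a single kernel evaluation, the sketch below combines a sum kernel over channels within each modality with a product across modalities. It is a simplification of the paper's construction: Gaussian kernels on binned spike-count windows stand in for the spike train kernels defined earlier in the paper, and `sigma_s`, `sigma_x` are placeholder kernel sizes.

```python
import numpy as np

def gaussian_kernel(u, v, sigma):
    """Gaussian kernel between two single-channel windows."""
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

def sum_kernel(U, V, sigma):
    """Sum kernel across channels (rows): averages evidence over electrodes."""
    return sum(gaussian_kernel(U[c], V[c], sigma) for c in range(U.shape[0]))

def tensor_product_kernel(spike_u, spike_v, lfp_u, lfp_v, sigma_s, sigma_x):
    """Product of the spike kernel and the LFP kernel: large only when
    BOTH modalities judge the two samples similar."""
    return sum_kernel(spike_u, spike_v, sigma_s) * sum_kernel(lfp_u, lfp_v, sigma_x)
```

Because the product is large only when both the spike and the LFP windows of two samples agree, an event must be expressed at both scales to contribute strongly, which is the specificity argument made in the conclusions.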
Figure 6: Autocorrelation of LFPs and spike trains for window size estimation (sample autocorrelation versus lag, 0-25 ms, for the spike trains and the LFPs).
5.3.1. Time Scale Estimation. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually based on their own properties. In order to find reasonable time scales, we estimate the autocorrelation coefficients of LFPs and spike trains, which indicate the response duration induced by the stimulation. For this purpose, spike trains are binned with a bin size of 1 ms. The LFPs are also resampled at a sampling rate of 1000 Hz. The autocorrelation coefficients of each signal, averaged over channels, are calculated by
\[
\rho_h = \frac{\sum_{t=h+1}^{T}\left(y_t-\bar{y}\right)\left(y_{t-h}-\bar{y}\right)}{\sum_{t=1}^{T}\left(y_t-\bar{y}\right)^{2}}. \tag{10}
\]
The 95% confidence bounds for the hypothesis that the autocorrelation coefficient is effectively zero are approximately estimated by ±2 SE_ρ, where
\[
\mathrm{SE}_{\rho} = \sqrt{\frac{1+2\sum_{i=1}^{h-1}\rho_i^{2}}{N}}. \tag{11}
\]
The average confidence bounds for LFPs and spike trains are [−0.032, 0.032] and [−0.031, 0.031], respectively. The autocorrelation coefficients of LFPs fall into the confidence interval after 20 ms, while the autocorrelation coefficients of spike trains die out after 9 ms, as shown in Figure 6. Therefore, the decoder inputs are obtained by sliding a window with a size of T_s = 9 ms for spike trains and T_x = 20 ms for LFPs. In addition, the time discretization for the stimuli is 5 ms.

The learning rates for each decoder are determined by the best cross-validation results after scanning the parameters. The kernel sizes σ_s and σ_x are determined by the average distance in the RKHS of each pair of training samples. The normalized mean square error (NMSE) between the estimated stimulus (y) and the desired stimulus (d) is utilized as the accuracy criterion.
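Equations (10) and (11) translate directly into a few lines of numerical code. The following sketch (assuming NumPy; `window_size` is an illustrative helper name) estimates the autocorrelation of a binned signal and returns the first lag after which the coefficients stay inside the ±2 SE_ρ band, mirroring how the 9 ms and 20 ms windows were chosen.

```python
import numpy as np

def autocorr(y, max_lag):
    """Sample autocorrelation coefficients rho_h, h = 1..max_lag (Eq. (10))."""
    y = np.asarray(y, float)
    ybar = y.mean()
    denom = np.sum((y - ybar) ** 2)
    return np.array([np.sum((y[h:] - ybar) * (y[:len(y) - h] - ybar)) / denom
                     for h in range(1, max_lag + 1)])

def conf_bound(rho, N):
    """Approximate 95% bounds, 2*SE_rho (Eq. (11)), one value per lag h."""
    cum = np.cumsum(np.concatenate([[0.0], rho[:-1] ** 2]))  # sum rho_i^2, i<h
    return 2 * np.sqrt((1 + 2 * cum) / N)

def window_size(y, max_lag=50):
    """Smallest lag after which |rho_h| stays inside the 95% band."""
    rho = autocorr(y, max_lag)
    inside = np.abs(rho) < conf_bound(rho, len(y))
    for h in range(max_lag):
        if inside[h:].all():
            return h + 1          # lags are 1-indexed (in 1 ms bins here)
    return max_lag
```

Applied to the 1 ms binned spike trains and the 1 kHz LFPs, this procedure would yield the reported 9 ms and 20 ms windows.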
Figure 7: Qualitative comparison of decoding performance on the first tactile stimulation trial among the LFP decoder, the spike decoder, and the spike and LFP decoder (normalized derivative of tactile force versus time, plotted against the target).
Table 1: Comparison among neural decoders.

Input            NMSE (mean ± STD)
LFP and spike    0.48 ± 0.05
LFP              0.55 ± 0.03
Spike            0.63 ± 0.11
5.3.2. Results for Decoding Tactile Stimulation. NMSEs of tactile stimulation are obtained across 8 trial data sets. For each trial, we use 20 s of data to train the decoders and compute an independent test error on the remaining 25 s of data. The results are shown in Table 1, where we can observe that the LFP and spike decoder significantly outperformed both the LFP decoder and the spike decoder (P value < 0.05).
In order to illustrate the details of the decoding performance, a portion of the test results of the first trial is shown in Figure 7. It is observed that the output of the spike decoder fluctuates and misses some pulses (e.g., around 0.65 s) due to the sparsity and variability of spike trains. In contrast, the
Figure 8: Comparison of microstimulation reconstruction performance among the spike decoder, the LFP decoder, and the spike and LFP decoder, in terms of the normalized MSE for each microstimulation channel.
output estimated by the LFP decoder is smooth and more robust than that of the spike decoder, but the decoded signal undershoots the maximum force deflections. The LFP and spike decoder performed better than the LFP decoder by reincorporating the precise pulse timing information from the spike trains.
5.3.3. Results for Decoding Electrical Microstimulation. We also implemented a decoder to reconstruct the microstimulation pattern. First, we mapped the 8 different stimulation configurations to 8 channels. We disregarded the shape of each stimulus, since the time scale of the stimulus width is only 200 μs. The desired stimulation pattern of each channel is represented by a sparse time series of the stimulation amplitudes.
NMSEs are obtained over ten subsequence decoding results. We used 120 s of data to train the decoders and compute an independent test error on the remaining 20 s of data. The spike and LFP decoder again outperformed both the LFP decoder and the spike decoder. The comparison of results is shown in Figure 8, which indicates that the spike and LFP decoder obtains the best performance across the stimulation channels, especially for channels 2, 4, 6, and 7. It is observed that stimulation on channels 4, 5, 6, 7, and 8 cannot be decoded by the LFP decoder at all, since the fine time information is averaged out in LFPs. For the spike train decoder, the stimulation channels are not well discriminated. However, the combination of spike trains and LFPs enriches the stimulation information, which contributes to better discrimination of stimulation patterns among channels and also enables the model to capture the precise stimulation timing.
5.4. Open Loop Adaptive Inverse Control Results. The challenge of implementing a somatosensory prosthesis is to precisely control the neural response in order to mimic the neural response induced by natural stimulation. As discussed, the kernel-based adaptive inverse control diagram with the tensor-product kernel is applied to address this problem. The adaptive inverse control model is based on a decoder which maps the neural activity in S1 to the microstimulation delivered in VPL. We proceed to show how the adaptive inverse control model can emulate the neural response to "natural touch" using optimized microstimulation.
In the same recording, open loop adaptive inverse control via optimized thalamic (VPL) microstimulation is implemented. First, the inverse controller C(z) is trained with 300 s of the data generated by recording the response to randomly patterned thalamic microstimulation. Then the first 60 s of the neural response recorded during tactile stimulation at each touch site is used as the target pattern and control input. When this entire neural response sequence is fed offline to the controller, it generates a corresponding sequence of multichannel microstimulation amplitudes.
However, the generated microstimulation sequence needs further processing to meet the restrictions of bipolar microstimulation before it is applied to VPL. The restrictions and processing are the following.
(1) A minimal interval between two stimuli of 10 ms is suggested by the experimental setting. The mean shift algorithm [55] is used to locate the local maxima within each 10 ms subsequence of stimuli for each single channel. The maximum amplitude and its corresponding timing are used to set the amplitude and time of the stimuli.
(2) At any given time point, only a single pulse across all channels can be delivered. Therefore, at each time point, only the maximum value across channels is selected for stimulation; the values at the other channels are set to zero.
(3) The stimulation amplitude is limited to the range 8 μA-30 μA, which has been suggested as the effective and safe amplitude range in previous experiments.
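A minimal sketch of this post-processing, assuming NumPy and a controller output represented as a (channels × 1 ms time bins) amplitude array; a per-window maximum stands in for the mean-shift mode seeking of [55], and `constrain_stim` is an illustrative name:

```python
import numpy as np

def constrain_stim(raw, min_isi=10, amp_lo=8.0, amp_hi=30.0):
    """Convert raw controller output (nonnegative amplitudes in uA)
    into a deliverable bipolar stimulation sequence."""
    n_ch, T = raw.shape
    out = np.zeros_like(raw)
    # (1) per channel, keep one pulse per min_isi window at its local maximum
    for c in range(n_ch):
        for start in range(0, T, min_isi):
            seg = raw[c, start:start + min_isi]
            if seg.max() > 0:
                out[c, start + int(seg.argmax())] = seg.max()
    # (2) only one channel may fire at a time point: keep the largest
    winner = out.argmax(axis=0)
    mask = np.zeros_like(out, dtype=bool)
    mask[winner, np.arange(T)] = True
    out[~mask] = 0.0
    # (3) clip surviving pulses into the safe/effective range [8, 30] uA
    nz = out > 0
    out[nz] = np.clip(out[nz], amp_lo, amp_hi)
    return out
```

Note that partitioning time into fixed windows only approximates the 10 ms minimum-interval constraint; the mean shift procedure in the paper locates the actual modes of the pulse sequence.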
After this processing, the generated multichannel microstimulation sequence (60 s in duration) is ready to be applied to the microstimulator immediately following computation.
The neural response to the microstimulation is recorded and compared with the target natural response. Ideally, these two neural response sequences should be time-locked and very similar. In particular, the portions of the controlled response in windows corresponding to a natural touch should match. As this corresponds to a virtual touch delivered by the optimized microstimulation, we define the term virtual touch to refer to the sequence of microstimulation patterns (the output of the controller) corresponding to a particular target natural touch.
Portions of the neural response for both natural and virtual touches are shown in Figure 9. The responses are aligned to the same time scale even though they were not recorded concurrently. It is clear that the multiunit neural responses recorded during the controlled microstimulation share similar spatiotemporal patterns with those in the target set. Each virtual touch is achieved by a sequence of microstimulation pulses that evokes synchronized bursting across the neurons. In addition, microstimulation pulses are delivered in between touch times to mimic population spiking that is not associated with the touch timing.
Figure 9: Neural responses to natural and virtual touches for touch on digit 1 (d1), along with the microstimulation corresponding to the virtual touches. Each of the four subfigures corresponds to a different segment of the continuous recording. In each subfigure, the timing of the touches and the spatiotemporal patterns of spike trains and LFPs are shown in the top two panels; the bottom panel shows the spatiotemporal pattern of microstimulation, where different colors represent different microstimulation channels. The neural responses are time-locked but not concurrently recorded, as the entire natural touch response is given as input to the controller, which generates the optimized microstimulation patterns. When the optimized microstimulation is applied in the VPL, it generates S1 neural responses that qualitatively match the natural, that is, the target, response.
Figure 10: Correlation coefficients between the controlled neural system output and the corresponding target neural response evoked by actual touch, for spike trains and local field potentials, as a function of time lag (s). The boxplots of correlation coefficients represent the results of 6 test trials; each trial corresponds to a particular touch site (digits d1, d2, d4, p3, p1, and mp).
To evaluate performance, we concentrate on the following two aspects of the virtual touches.

(i) Touch Timing. Whether the neural response to a virtual touch is capable of capturing the timing information of the actual target touch.

(ii) Touch Site. Whether the actual target touch site information can be discriminately represented by the neural activity controlled by the microstimulation.
For touch timing, we estimate the correlation coefficients (CC) over time between the virtual touch responses and the corresponding target natural touch responses. To simplify the spike train correlation estimation, we bin the data using 5 ms bins. The correlation coefficients of both spike trains and LFPs are calculated. Figure 10 shows the boxplot of the correlation coefficients over 6 test trials, each of which corresponds to a particular natural touch site, a forepaw digit or pad (d1, d2, d4, p3, p1, or mp). It is observed that the maximum correlation coefficient is at lag zero for each trial, meaning that the virtual touch response is correctly time-locked. For each touch site, we estimate the similarity between the natural touch response and the virtual touch response in the following two cases.
(i) Matched Virtual Pairs consist of a virtual touch trial and anatural touch trial corresponding to the same touch site
(ii) Unmatched Virtual Pairs consist of a virtual touch trialand a natural touch trial corresponding to different touchsites
We extract all the neural responses in the 300 ms window after touch onset and calculate the correlation coefficients between natural touch responses and virtual touch responses across each pair of trials. The one-tailed Kolmogorov-Smirnov (KS) test is implemented to test the alternative
Table 2: Average and standard deviation of the correlation coefficient (CC) between natural touch spike train responses and virtual touch spike train responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site   Matched virtual   Unmatched virtual   P value
d1           0.42 ± 0.06       0.35 ± 0.06         0.00
d2           0.40 ± 0.05       0.37 ± 0.06         0.01
d4           0.40 ± 0.05       0.37 ± 0.05         0.02
p3           0.38 ± 0.05       0.37 ± 0.06         0.11
p2           0.40 ± 0.07       0.36 ± 0.05         0.00
mp           0.41 ± 0.07       0.37 ± 0.06         0.00
hypothesis that the distribution of the correlation coefficients for the matched virtual case is higher than the distribution for the unmatched virtual case (the null hypothesis is that the distributions are the same). The correlation coefficients and P values of the KS test for spike trains and LFPs are shown in Tables 2 and 3. The similarity between natural touch responses and virtual touch responses in the unmatched virtual case is found to be significantly lower than in the matched virtual case for most touch sites (P value < 0.05), except for touch site p3. Without psychophysical testing, it is unclear how effective the microstimulation is in producing true sensory sensations. Nonetheless, these are promising results demonstrating the effectiveness of a controller utilizing the multiscale neural decoding methodology.
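The touch-timing evaluation above can be sketched as follows, assuming NumPy; `bin_spikes` and `lagged_cc` are illustrative helper names, with 5 ms bins as in the text.

```python
import numpy as np

def bin_spikes(spike_times, t_end, bin_ms=5):
    """Bin spike times (ms) into counts using 5 ms bins."""
    edges = np.arange(0.0, t_end + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.astype(float)

def lagged_cc(x, y, max_lag_bins):
    """Pearson correlation between two binned responses at each lag (in bins).
    A peak at lag zero indicates a correctly time-locked virtual response."""
    lags = np.arange(-max_lag_bins, max_lag_bins + 1)
    cc = np.empty(len(lags))
    for i, L in enumerate(lags):
        if L >= 0:
            a, b = x[L:], y[:len(y) - L]
        else:
            a, b = x[:len(x) + L], y[-L:]
        cc[i] = np.corrcoef(a, b)[0, 1]
    return lags, cc
```

The matched versus unmatched comparison in Tables 2 and 3 would then collect the zero-lag coefficients per trial pair and compare the two empirical distributions with a one-sided two-sample KS test.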
6. Conclusions
This work proposes a novel tensor-product-kernel-based machine learning framework which provides a way to
Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site   Matched virtual   Unmatched virtual   P value
d1           0.42 ± 0.20       0.28 ± 0.23         0.00
d2           0.46 ± 0.13       0.28 ± 0.22         0.00
d4           0.41 ± 0.19       0.26 ± 0.21         0.00
p3           0.38 ± 0.18       0.29 ± 0.22         0.07
p2           0.33 ± 0.19       0.26 ± 0.23         0.20
mp           0.34 ± 0.17       0.25 ± 0.21         0.00
decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs contain complementary information that can enhance neural data decoding. However, a systematic approach to combine these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined in both the spike train space and the LFP space, seems to be a very productive approach to achieve our goals. We have basically used two types of combination kernels to achieve the multiscale combination: sum kernels to "average" across different spike channels as well as across LFP channels, which combine evidence for the neural event in each modality, and product kernels across the spike and LFP modalities to emphasize events that are represented in both multiscale modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step in a long process of optimizing the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires that the neural event be present at both scales to be useful for decoding, which improves specificity. If we look carefully at Figure 6, we can understand the effect of decoding with the product kernel. Notice that the correlation times of spikes and LFPs are very different (LFPs have a much longer correlation time). Moreover, the composite kernel definition can be naturally configured to different brain areas and even neuronal types with distinctive firing patterns. Each pattern will lead to different correlation profiles, which will immediately tune the properties of the kernels across brain areas and neural populations. If only LFPs are used, we can expect that the response time of the decoder will be very long and miss some events. The product kernel in fact limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel. This explains exactly the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information
from spikes and LFPs into the same model and enhance theneural decoding robustness and accuracy
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: how to emulate "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes or LFPs alone (results not shown). This result is to be expected because the inverse controller is basically a decoder. However, we have to realize that not all tasks of interest reduce to neural decoding, and we do not even know if neural control can be further improved by a different kernel design. This is where further research is necessary to optimize the joint kernels. For instance, we can weight both the channel information and the multiscale information to maximize task performance using metric learning [40].
Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical way to leverage heterogeneous neural activity in decoding and control scenarios, and it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J. K. Chapin, K. A. Moxon, R. S. Markowitz, and M. A. L. Nicolelis, "Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex," Nature Neuroscience, vol. 2, no. 7, pp. 664-670, 1999.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098-1101, 2008.
[3] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, "Direct cortical control of 3D neuroprosthetic devices," Science, vol. 296, no. 5574, pp. 1829-1832, 2002.
[4] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141-142, 2002.
[5] J. M. Carmena, M. A. Lebedev, R. E. Crist et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, pp. 193-208, 2003.
[6] J. DiGiovanna, B. Mahmoudi, J. Fortes, J. C. Principe, and J. C. Sanchez, "Coadaptive brain-machine interface via reinforcement learning," IEEE Transactions on Biomedical Engineering, vol. 56, no. 1, pp. 54-64, 2009.
[7] A. Belitski, S. Panzeri, C. Magri, N. K. Logothetis, and C. Kayser, "Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 533-545, 2010.
[8] J. R. Huxter, T. J. Senior, K. Allen, and J. Csicsvari, "Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus," Nature Neuroscience, vol. 11, no. 5, pp. 587-594, 2008.
[9] M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis, "Inferring spike trains from local field potentials," Journal of Neurophysiology, vol. 99, no. 3, pp. 1461-1476, 2008.
[10] G. Buzsáki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926-1929, 2004.
[11] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, Springer, 1991.
[12] R. E. Kass, V. Ventura, and E. N. Brown, "Statistical issues in the analysis of neuronal data," Journal of Neurophysiology, vol. 94, no. 1, pp. 8-25, 2005.
[13] R. D. Flint, E. W. Lindberg, L. R. Jordan, L. E. Miller, and M. W. Slutzky, "Accurate decoding of reaching movements from field potentials in the absence of spikes," Journal of Neural Engineering, vol. 9, no. 4, Article ID 046006, 2012.
[14] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee, "Local field potentials indicate network state and account for neuronal response variability," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 567-579, 2010.
[15] J. Liu and W. T. Newsome, "Local field potential in cortical area MT: stimulus tuning and behavioral correlations," Journal of Neuroscience, vol. 26, no. 30, pp. 7779-7790, 2006.
[16] P. Berens, G. Keliris, A. Ecker, N. Logothetis, and A. Tolias, "Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex," Frontiers in Neuroscience, vol. 2, Article ID 199207, 2008.
[17] D. Xing, C.-I. Yeh, and R. M. Shapley, "Spatial spread of the local field potential and its laminar variation in visual cortex," Journal of Neuroscience, vol. 29, no. 37, pp. 11540-11549, 2009.
[18] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, Adaptive Computation and Machine Learning Series, MIT Press, 2002.
[19] L. Li, I. M. Park, A. Brockmeier et al., "Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 532-543, 2013.
[20] S. Monaco, G. Kroliczak, D. J. Quinlan et al., "Contribution of visual and proprioceptive information to the precision of reaching movements," Experimental Brain Research, vol. 202, no. 1, pp. 15-32, 2010.
[21] R. S. Johansson and G. Westling, "Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects," Experimental Brain Research, vol. 56, no. 3, pp. 550-564, 1984.
[22] J. E. O'Doherty, M. A. Lebedev, P. J. Ifft et al., "Active tactile exploration using a brain-machine-brain interface," Nature, vol. 479, no. 7372, pp. 228-231, 2011.
[23] J. E. O'Doherty, M. A. Lebedev, T. L. Hanson, N. A. Fitzsimmons, and M. A. Nicolelis, "A brain-machine interface instructed by direct intracortical microstimulation," Frontiers in Integrative Neuroscience, vol. 3, 2009.
[24] N. A. Fitzsimmons, W. Drake, T. L. Hanson, M. A. Lebedev, and M. A. L. Nicolelis, "Primate reaching cued by multichannel spatiotemporal cortical microstimulation," The Journal of Neuroscience, vol. 27, no. 21, pp. 5593-5602, 2007.
[25] X.-J. Feng, B. Greenwald, H. Rabitz, E. Shea-Brown, and R. Kosut, "Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model," Journal of Neural Engineering, vol. 4, no. 2, pp. L14-L21, 2007.
[26] J. Liu, H. K. Khalil, and K. G. Oweiss, "Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states," Journal of Neural Engineering, vol. 8, no. 4, Article ID 045002, 2011.
[27] A. J. Brockmeier, J. S. Choi, M. M. Distasio, J. T. Francis, and J. C. Príncipe, "Optimizing microstimulation using a reinforcement learning framework," in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 1069-1072, September 2011.
[28] A. Brockmeier, J. Choi, M. Emigh, L. Li, J. Francis, and J. Principe, "Subspace matching thalamic microstimulation to tactile evoked potentials in rat somatosensory cortex," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2957-2960, 2012.
[29] Y. Ahmadian, A. M. Packer, R. Yuste, and L. Paninski, "Designing optimal stimuli to control neuronal spike timing," Journal of Neurophysiology, vol. 106, no. 2, pp. 1038-1053, 2011.
[30] J. Moehlis, E. Shea-Brown, and H. Rabitz, "Optimal inputs for phase models of spiking neurons," Journal of Computational and Nonlinear Dynamics, vol. 1, no. 4, pp. 358-367, 2006.
[31] D. Brugger, S. Butovas, M. Bogdan, and C. Schwarz, "Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials," IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1483-1491, 2011.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[33] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199-222, 2004.
[34] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering, John Wiley & Sons, 2010.
[35] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
[36] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," The Journal of Machine Learning Research, vol. 5, pp. 27-72, 2004.
[37] C. Cortes, M. Mohri, and A. Rostamizadeh, "Algorithms for learning kernels based on centered alignment," Journal of Machine Learning Research, vol. 13, pp. 795-828, 2012.
[38] M. Yamada, W. Jitkrittum, L. Sigal, E. P. Xing, and M. Sugiyama, "High-dimensional feature selection by feature-wise kernelized lasso," Neural Computation, vol. 26, no. 1, pp. 185-207, 2013.
[39] R. A. Horn, "On infinitely divisible matrices, kernels, and functions," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 8, no. 3, pp. 219-230, 1967.
[40] A. J. Brockmeier, J. S. Choi, E. G. Kriminger, J. T. Francis, and J. C. Principe, "Neural decoding with kernel-based metric learning," Neural Computation, vol. 26, no. 6, 2014.
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, "Spikernels: embedding spiking neurons in inner product spaces," in Advances in Neural Information Processing Systems, vol. 15, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424-449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Principe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223-2250, 2012.
[44] L. Li, Kernel based machine learning framework for neural decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453-466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213-225, 1991.
[48] L. Csató and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641-668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275-2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950-1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291-2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268-280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with applications in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32-40, 1975.
Algorithm 1: Quantized kernel least mean square (QKLMS).

Input: $\{x_n, d_n\}$, $n = 1, 2, \ldots, N$.
Initialization: initialize the weight vector $\Omega(1)$, the codebook (set of centers) $\mathcal{C}(0) = \emptyset$, and the coefficient vector $a(0) = [\,]$.
Computation: for $n = 1, 2, \ldots, N$:
(1) compute the output
$y_n = \langle \Omega(n), \phi(x_n) \rangle = \sum_{j=1}^{\mathrm{size}(\mathcal{C}(n-1))} a_j(n-1)\,\kappa(x_n, \mathcal{C}_j(n-1))$;
(2) compute the error $e_n = d_n - y_n$;
(3) compute the minimum distance in RKHS between $x_n$ and each $x_i \in \mathcal{C}(n-1)$:
$d(x_n) = \min_j \left(2 - 2\kappa(x_n, \mathcal{C}_j(n-1))\right)$;
(4) if $d(x_n) \le \varepsilon$, keep the codebook unchanged, $\mathcal{C}(n) = \mathcal{C}(n-1)$, and update the coefficient of the center closest to $x_n$: $a_k(n) = a_k(n-1) + \eta e(n)$, where $k = \arg\min_j \sqrt{2 - 2\kappa(x_n, \mathcal{C}_j(n-1))}$;
(5) otherwise, store the new center: $\mathcal{C}(n) = \{\mathcal{C}(n-1), x_n\}$, $a(n) = [a(n-1), \eta e_n]$;
(6) $\Omega(n+1) = \sum_{j=1}^{\mathrm{size}(\mathcal{C}(n))} a_j(n)\,\phi(\mathcal{C}_j(n))$;
end
The quantized kernel least mean square (Q-KLMS) is selected in our work to decrease the filter growth. Algorithm 1 shows the pseudocode for the Q-KLMS algorithm with a simple online vector quantization (VQ) method, where the quantization is performed based on the distance between the new input and each existing center. In this work, the distance between a center and the input is defined as their distance in RKHS, which for a shift-invariant normalized kernel ($\kappa(x, x) = 1$ for all $x$) is $\|\phi(x_n) - \phi(x_i)\|_2^2 = 2 - 2\kappa(x_n, x_i)$. If the smallest distance is less than a prespecified quantization size $\varepsilon$, the new coefficient $\eta e_n$ adjusts the weight of the closest center; otherwise, a new center is added. Compared to other techniques [47–50] that have been proposed to curb the growth of the networks, the simple online VQ method is not optimal but is very efficient. Since in our work the algorithm must be applied several times to the same data for convergence, after the first iteration over the data we choose $\varepsilon = 0$, which merges the repeated centers and achieves the same performance as KLMS.
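The Q-KLMS recursion of Algorithm 1 can be sketched as follows, using a Gaussian kernel as a stand-in for the kernels used in the paper; the function names and parameter defaults here are illustrative, not the paper's.

```python
import numpy as np

def gauss_kernel(x, c, sigma=1.0):
    """Shift-invariant, normalized Gaussian kernel, so kappa(x, x) = 1."""
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))

def qklms(X, d, eta=0.1, eps=0.1, sigma=1.0):
    """Algorithm 1 sketch: returns the codebook C and coefficient vector a."""
    C = [np.asarray(X[0], dtype=float)]   # first sample seeds the codebook
    a = [eta * d[0]]
    for x, dn in zip(X[1:], d[1:]):
        k = np.array([gauss_kernel(x, c, sigma) for c in C])
        y = float(np.dot(a, k))                           # (1) filter output
        e = dn - y                                        # (2) prediction error
        dist = np.sqrt(np.maximum(0.0, 2.0 - 2.0 * k))    # (3) RKHS distances
        j = int(np.argmin(dist))
        if dist[j] <= eps:            # (4) quantize onto the nearest center
            a[j] += eta * e
        else:                         # (5) otherwise allocate a new center
            C.append(np.asarray(x, dtype=float))
            a.append(eta * e)
    return C, a

def predict(x, C, a, sigma=1.0):
    """Evaluate the learned filter Omega at a new input."""
    return float(np.dot(a, [gauss_kernel(x, c, sigma) for c in C]))
```

With `eps=0` every distinct sample becomes a center and the recursion reduces to plain KLMS, which matches the repeated-pass strategy described above.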
We use the Q-KLMS framework with the multiunit spike kernels, the multichannel LFP kernels, and the tensor-product kernel on the joint samples of LFPs and spike trains. This is quite unlike previous work in adaptive filtering, which almost exclusively uses the Gaussian kernel with real-valued time series.
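As a rough illustration of how a tensor-product kernel combines the two signal types, the sketch below multiplies a spike-train kernel (a Gaussian of the L2 distance between exponentially smoothed spike trains, a stand-in for the paper's multiunit spike kernel) with a Gaussian kernel on windowed LFP vectors; the grid length and smoothing constant are assumptions for the example, not the paper's settings.

```python
import numpy as np

T_GRID = np.arange(0.0, 0.1, 0.001)   # assumed 100 ms window, 1 ms steps

def lfp_kernel(x1, x2, sigma_x=1.0):
    """Gaussian kernel on windowed multichannel LFP samples."""
    diff = np.asarray(x1) - np.asarray(x2)
    return np.exp(-np.sum(diff ** 2) / (2.0 * sigma_x ** 2))

def _smooth(spike_times, tau=0.01):
    """Exponentially smoothed spike train evaluated on the common grid."""
    out = np.zeros_like(T_GRID)
    for ti in spike_times:
        out += np.where(T_GRID >= ti, np.exp(-(T_GRID - ti) / tau), 0.0)
    return out

def spike_kernel(s1, s2, sigma_s=1.0):
    """Gaussian-of-distance kernel between two spike trains (illustrative)."""
    diff = _smooth(s1) - _smooth(s2)
    d2 = np.sum(diff ** 2) * (T_GRID[1] - T_GRID[0])  # squared L2 distance
    return np.exp(-d2 / (2.0 * sigma_s ** 2))

def tensor_kernel(sample1, sample2, sigma_s=1.0, sigma_x=1.0):
    """Tensor-product kernel: the product of the spike-train kernel and the
    LFP kernel is itself a kernel on the joint (spike, LFP) sample."""
    (s1, x1), (s2, x2) = sample1, sample2
    return spike_kernel(s1, s2, sigma_s) * lfp_kernel(x1, x2, sigma_x)
```

A joint sample is then just a `(spike_times, lfp_window)` pair, and `tensor_kernel` can be dropped into the Q-KLMS recursion in place of a single-modality kernel.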
4. Adaptive Inverse Control of the Spatiotemporal Patterns of Neural Activity
As the name indicates, the basic idea of adaptive inverse control is to learn an inverse model of the plant as the controller in Figure 2(a), such that the cascade of the controller and the plant performs like a unitary transfer function, that is, a perfect wire with some delay. The target plant output is used as the controller's command input. The controller parameters are updated to minimize the dissimilarity between the target output and the plant's output during the control process, which enables the controller to track the plant variation and cancel system noise. The filtered-$\epsilon$ LMS adaptive inverse control diagram [51] shown in Figure 2(a) represents the filtered-$\epsilon$ approach to finding $C(z)$. If the ideal inverse controller $\hat{C}(z)$ were the actual inverse controller, the mean square of the overall system error $\epsilon_k$ would be minimized. The objective is to make $C(z)$ as close as possible to the ideal $\hat{C}(z)$. The difference between the outputs of $C(z)$ and $\hat{C}(z)$, both driven by the command input, is therefore an error signal $\epsilon'$. Since the target stimulation is unknown, a filtered error $\epsilon$, obtained by filtering the overall system error $\epsilon_k$ through the inverse plant model $\hat{P}^{-1}(z)$, is used for adaptation in place of $\epsilon'$.

If the plant has a long response time, a modeling delay is advantageous to capture the early stages of the response, which is determined by the sliding window length used to obtain the inverse controller input. There is no performance penalty from the delay $\Delta$ as long as the input to $C(z)$ undergoes the same delay. The parameters of the inverse model $\hat{P}^{-1}(z)$ are initially modeled offline and updated during the whole system operation, which allows $\hat{P}^{-1}(z)$ to incrementally identify the inverse system and thus make $\epsilon$ approach $\epsilon'$. Moreover, the adaptation enables $\hat{P}^{-1}(z)$ to track changes in the plant. Thus, minimizing the filtered error obtained from $\hat{P}^{-1}(z)$ makes the controller follow the system variation.
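The filtered-$\epsilon$ signal flow above can be sketched with a deliberately simplified scalar plant. The plant gain of 2, the learning rates, the noise level, and the use of plain LMS in place of the paper's kernel models are all assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def plant(u):
    """Unknown plant: a static gain of 2 plus small output noise (toy stand-in)."""
    return 2.0 * u + 0.05 * rng.standard_normal()

C, Pinv = 0.2, 0.1          # scalar controller C(z) and inverse model Pinv(z)
eta_c, eta_p = 0.05, 0.05   # illustrative learning rates
for _ in range(3000):
    d = rng.uniform(-1.0, 1.0)   # command input (the target plant output)
    u = C * d                    # controller output drives the plant
    z = plant(u)                 # control system output
    eps_k = d - z                # overall system error
    # adapt the inverse model on (plant output -> plant input) pairs
    Pinv += eta_p * (u - Pinv * z) * z
    # filtered error: overall error passed through the inverse plant model
    eps_f = Pinv * eps_k
    C += eta_c * eps_f * d       # filtered-eps LMS update of the controller
```

After adaptation both `C` and `Pinv` settle near 0.5, the inverse of the plant gain, so the controller-plant cascade approximates a unit gain, which is the behavior the diagram in Figure 2(a) describes.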
4.1. Relation to Neural Decoding. In this control scheme there are only two models, $C(z)$ and $\hat{P}^{-1}(z)$, adjusted during the control process, and they share the same input type (neural activity) and output type (continuous stimulation waveforms). Therefore both models perform like neural decoders and can be implemented using the Q-KLMS method we introduced in the previous section. Since all the mathematical models
Figure 2: Adaptive inverse control diagram. (a) A filtered-$\epsilon$ LMS adaptive inverse control diagram. (b) An adaptive inverse control diagram in RKHS.
in this control scheme are kernel-based models, the whole control scheme can be mapped into an RKHS, which holds several advantages: (1) no restriction is imposed on the signal type; as long as a kernel can map the plant activity to an RKHS, the plant can be controlled with this scheme; (2) both the plant inverse $\hat{P}^{-1}(z)$ and the controller $C(z)$ have a linear structure in the RKHS, which avoids the danger of converging to local minima.

Specifically, the controller $C(z)$ and the plant inverse $\hat{P}^{-1}(z)$ are separately modeled with the tensor-product kernel that we described in Section 2, and the model coefficients are updated with Q-KLMS. This structure is shown in Figure 2(b). The model coefficients $W_C$ and $W_{\hat{P}^{-1}}$ represent the weight matrices of $C(z)$ and $\hat{P}^{-1}(z)$ obtained by Q-KLMS, respectively. As this is a multiple-input multiple-output model, $W_C$ and $W_{\hat{P}^{-1}}$ are the concatenations of the filter weights for each stimulation channel.

The variables x, y, and z denote the concatenation of the windowed target spike trains and LFPs (the command input of the controller), the estimated stimulation, and the plant output, respectively. $x_\Delta$ is the delayed target signal, which is aligned with the plant output z. $\phi(\cdot)$ represents the mapping function from the input space to the RKHS associated with the tensor-product kernel.

The overall system error is defined as $\epsilon_k = \phi(x_\Delta) - \phi(z)$, which means that the controller's parameter adaptation seeks to minimize the distance in the RKHS between the target spike train/LFP and the output of the plant inverse $\hat{P}^{-1}(z)$. In this way, since the inverse model $\hat{P}^{-1}(z)$ has a linear structure in the RKHS, the filtered error for stimulation channel $j \in \{1, \ldots, M\}$ is

$$\epsilon(j) = W_j^{\hat{P}^{-1}} \phi(x_\Delta) - W_j^{\hat{P}^{-1}} \phi(z). \quad (8)$$

The controller model $C(z)$ has a single input x, corresponding to the concatenation of the spike trains and LFPs, and an $M$-channel output y, corresponding to the microstimulation. Q-KLMS is used to model $C(z)$ with $N$ input samples. The target spike trains are repeated among different trials, which means that the repeated samples will
Figure 3: Neural elements in tactile stimulation experiments. To the left is the rat's hand with representative cutaneous receptive fields. When the tactor touches a particular "receptive field" on the hand, VPL thalamus receives this information and relays it to S1 cortex. To emulate "natural touch" with microstimulation, the optimized spatiotemporal microstimulus patterns are injected into the same receptive field on VPL thalamus through a microarray, so that the target neural activity pattern can be replicated in somatosensory regions (S1) to convey the natural touch sensation to the animal.
be merged onto the same kernel center on the first pass through the data by the quantization, and thus the network size of the inverse controller is fixed ($N$ centers). Only the coefficient matrix a is updated with the filtered error $\epsilon$ during the whole control operation. The output of $C(z)$ can be calculated by
$$
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_M \end{bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1N} \\
a_{21} & a_{22} & \cdots & a_{2N} \\
\vdots & \vdots & & \vdots \\
a_{M1} & a_{M2} & \cdots & a_{MN}
\end{bmatrix}
\begin{bmatrix}
\phi(c_1)' \\ \phi(c_2)' \\ \vdots \\ \phi(c_N)'
\end{bmatrix}
\phi(x)
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1N} \\
a_{21} & a_{22} & \cdots & a_{2N} \\
\vdots & \vdots & & \vdots \\
a_{M1} & a_{M2} & \cdots & a_{MN}
\end{bmatrix}
\begin{bmatrix}
\kappa(c_1, x) \\ \kappa(c_2, x) \\ \vdots \\ \kappa(c_N, x)
\end{bmatrix},
\quad (9)
$$
where $c_n$ is the $n$th center and $a_{mn}$ is the coefficient assigned to the $n$th kernel center for the $m$th channel of the output.
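Equation (9) and the coefficient-only update described above can be sketched as follows. A generic Gaussian kernel replaces the tensor-product kernel, and the nearest-center selection in the update is our simplification of the Q-KLMS quantization step.

```python
import numpy as np

def gauss_kernel(x, c, sigma=1.0):
    """Placeholder kernel standing in for the tensor-product kernel."""
    diff = np.asarray(x) - np.asarray(c)
    return np.exp(-np.sum(diff ** 2) / (2.0 * sigma ** 2))

def controller_output(x, centers, A, sigma=1.0):
    """Equation (9): y = A k(x), with A the M x N coefficient matrix and
    k(x) the vector of kernel evaluations against the N fixed centers."""
    k = np.array([gauss_kernel(x, c, sigma) for c in centers])
    return A @ k              # one stimulation amplitude per output channel

def update_coefficients(A, x, eps_f, centers, eta=0.1, sigma=1.0):
    """Coefficient-only update with the filtered error eps_f (one component
    per output channel); the codebook stays fixed, as described in the text."""
    k = np.array([gauss_kernel(x, c, sigma) for c in centers])
    j = int(np.argmax(k))     # nearest center in RKHS (largest kernel value)
    A = A.copy()
    A[:, j] += eta * np.asarray(eps_f)
    return A
```

Here `A` plays the role of the coefficient matrix $[a_{mn}]$ in (9): each row maps the shared kernel vector to one microstimulation channel.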
5. Sensory Stimulation Experiment
5.1. Experimental Motivation and Framework. We applied these methods to the problem of converting touch information to electrical stimulation in neural prostheses. Somatosensory information originating in the peripheral nervous system ascends through the ventral posterior lateral (VPL) nucleus of the thalamus on its way to the primary somatosensory cortex (S1). Since most cutaneous and proprioceptive information is relayed through this nucleus, we expect that a suitably designed electrode array could be used to selectively stimulate a local group of VPL neurons so as to convey similar information to cortex. Electrophysiological experiments [52] suggest that the rostral portion of the rat VPL nucleus carries a large amount of proprioceptive information, while the medial and caudal portions code mostly for cutaneous stimuli. Since the body maps for both VPL thalamus and S1 are known and fairly consistent, it is possible to implant electrode arrays in somatotopically overlapping areas of both regions.

We applied the proposed control method to generate multichannel electrical stimulation in VPL so as to evoke a naturalistic neural trajectory in S1. Figure 3 shows a schematic depiction of our experiment, which was conducted in rats. After implanting arrays in both VPL and S1, the responses to natural stimulation delivered by a motorized tactor were recorded in S1. Then we applied randomly patterned microstimulation in the VPL while recording the responses in S1. Using these responses, we then trained our controller to output the microstimulation patterns that would most accurately reproduce the neural responses to natural touch in S1. To solve this control problem, we first investigated how reliably the two types of stimulation, natural touch and electrical microstimulation, can be decoded.
5.2. Data Collection. All animal procedures were approved by the SUNY Downstate Medical Center IACUC and conformed to National Institutes of Health guidelines. A single female Long-Evans rat (Hilltop, Scottsdale, PA) was implanted with two microarrays while under anesthesia.
Figure 4: Bipolar microstimulation patterns applied in the sensory stimulation experiment (stimulation amplitude in μA across the 16 stimulation channels for each of the 24 stimulation pattern classes).
After induction using isoflurane, urethane was used to maintain anesthetic depth. The array in VPL was a 2 × 8 grid of 70% platinum/30% iridium, 75 μm diameter microelectrodes (MicroProbes Inc.) with 500 μm between the rows and 250 μm interelectrode spacing within the rows. The microelectrodes had a 25:1 taper on the distal 5 mm with a tip diameter of 3 μm. The approximate geometric surface area of the conducting tips was 1250 μm². The shank lengths were custom designed to fit the contour of the rat VPL [52]. Both rows were identical, and the shaft lengths for each row from medial to lateral were (8, 8, 8, 8, 8, 7.8, 7.6, 7.4) mm. The long axis of the VPL array was oriented along the rat's mediolateral axis.

The cortical electrode array (Blackrock Microsystems) was a 32-channel Utah array. The electrodes are arranged in a 6 × 6 grid excluding the 4 corners, and each electrode is 1.5 mm long. A single craniotomy exposing the cortical insertion sites for both arrays was made, and after several probing insertions with a single microelectrode (FHC) in an area 1 mm surrounding the stereotaxic coordinates for the digit region of S1 (4.0 mm lateral and 0.5 mm anterior to bregma) [53, 54], the Utah array was inserted using a pneumatic piston. The electrodes cover somatosensory areas of the S1 cortex and the VPL nucleus of the thalamus [52]. Neural recordings were made using a multichannel acquisition system (Tucker Davis).

Spike and field potential data were collected while the rat was maintained under anesthesia. The electrode voltages were preamplified with a gain of 1000, filtered with cutoffs at 0.7 Hz and 8.8 kHz, and digitized at 25 kHz. LFPs are further filtered from 1 to 300 Hz using a 3rd-order Butterworth filter. Spike sorting is achieved using k-means clustering of the first 2 principal components of the detected waveforms.
The experiment involves delivering microstimulation to VPL and tactile stimulation to the rat's fingers in separate sections. Microstimulation is administered on adjacent pairs
Figure 5: Rat neural response elicited by tactile stimulation. The upper plot shows the normalized derivative of tactile force. The remaining two plots show the corresponding LFPs and spike trains elicited by tactile stimulation.
(bipolar configurations) of the thalamic array. The stimulation waveforms are single symmetric biphasic rectangular current pulses; each rectangular pulse is 200 μs long and has an amplitude of either 10 μA, 20 μA, or 30 μA. Interstimulus intervals are exponentially distributed with a mean interval of 100 ms. Stimulus isolation used a custom built switching headstage. The bipolar microstimulation pulses are delivered in the thalamus. There are 24 patterns of microstimulation: 8 different sites and 3 different amplitude levels for each site, as shown in Figure 4. Each pattern is delivered 125 times.
The experimental procedure also involves delivering 30–40 short (100 ms) tactile touches to the rat's fingers (repeated for digit pads 1–4) using a hand-held probe. The rat remained anesthetized for the recording duration. The applied force is measured using a lever attached to the probe that pressed against a thin-film resistive force sensor (Trossen Robotics) when the probe tip contacted the rat's body. The resistive changes were converted to voltage using a bridge circuit and were filtered and digitized in the same way as described above. The digitized waveforms were filtered with a passband between 1 and 60 Hz using a 3rd-order Butterworth filter. The first derivative of this signal is used as the desired stimulation signal, which is shown in Figure 5.
5.3. Decoding Results. We now present the decoding results for the tactile stimulus waveform and microstimulation using Q-KLMS operating on the tensor-product kernel. The performance using the multiscale neural activity, both spike trains and LFPs, is compared with decoders using a single type of neural activity. This illustrates the effectiveness of the tensor-product-kernel-based framework in exploiting the complementary information in multiscale neural activity.
Figure 6: Autocorrelation of LFPs and spike trains for window size estimation (sample autocorrelation versus lag in ms).
5.3.1. Time Scale Estimation. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually, based on their own properties. In order to find reasonable time scales, we estimate the autocorrelation coefficients of LFPs and spike trains, which indicate the response duration induced by the stimulation. For this purpose, spike trains are binned with a bin size of 1 ms, and the LFPs are resampled at 1000 Hz. The autocorrelation coefficients of each signal, averaged over channels, are calculated by
$$\rho_h = \frac{\sum_{t=h+1}^{T} (y_t - \bar{y})(y_{t-h} - \bar{y})}{\sum_{t=1}^{T} (y_t - \bar{y})^2}. \quad (10)$$

The 95% confidence bounds for the hypothesis that the autocorrelation coefficient is effectively zero are approximately estimated by $\pm 2\,\mathrm{SE}_\rho$, where

$$\mathrm{SE}_\rho = \sqrt{\frac{1 + 2\sum_{i=1}^{h-1} \rho_i^2}{N}}. \quad (11)$$
The average confidence bounds for LFPs and spike trains are [−0.032, 0.032] and [−0.031, 0.031], respectively. The autocorrelation coefficients of LFPs fall into the confidence interval after 20 ms, while the autocorrelation coefficients of spike trains die out after 9 ms, as shown in Figure 6. Therefore, the decoder inputs are obtained by sliding a window of size $T_s = 9$ ms for spike trains and $T_x = 20$ ms for LFPs. In addition, the time discretization for the stimuli is 5 ms.

The learning rates for each decoder are determined by the best cross-validation results after scanning the parameters. The kernel sizes $\sigma_s$ and $\sigma_x$ are determined by the average distance in RKHS between each pair of training samples. The normalized mean square error (NMSE) between the estimated stimulus (y) and the desired stimulus (d) is utilized as the accuracy criterion.
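The window-size procedure of (10) and (11), computing sample autocorrelations and finding the first lag that falls inside the $\pm 2\,\mathrm{SE}_\rho$ band, can be sketched as follows; the function names are ours.

```python
import numpy as np

def autocorr_coeffs(y, max_lag):
    """Sample autocorrelation coefficients, equation (10)."""
    y = np.asarray(y, dtype=float)
    yc = y - y.mean()
    denom = np.sum(yc ** 2)
    return np.array([np.sum(yc[h:] * yc[:len(y) - h]) / denom
                     for h in range(max_lag + 1)])

def window_from_autocorr(y, max_lag=50):
    """Pick the analysis window as the first lag whose autocorrelation falls
    inside the +/- 2*SE_rho confidence band of equation (11)."""
    rho = autocorr_coeffs(y, max_lag)
    N = len(y)
    for h in range(1, max_lag + 1):
        se = np.sqrt((1.0 + 2.0 * np.sum(rho[1:h] ** 2)) / N)
        if abs(rho[h]) < 2.0 * se:
            return h
    return max_lag
```

Applied to 1 ms binned spike trains and 1000 Hz resampled LFPs, the returned lag (in samples) corresponds directly to a window length in milliseconds, which is how the 9 ms and 20 ms figures above are read off Figure 6.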
Figure 7: Qualitative comparison of decoding performance on the first tactile stimulation trial among the LFP decoder, spike decoder, and spike and LFP decoder (target versus decoded normalized derivative of tactile force).
Table 1: Comparison among neural decoders.

    Input                LFP and spike    LFP            Spike
    NMSE (mean ± STD)    0.48 ± 0.05      0.55 ± 0.03    0.63 ± 0.11
5.3.2. Results for Decoding Tactile Stimulation. NMSEs of tactile stimulation are obtained across 8 trial data sets. For each trial we use 20 s of data to train the decoders and compute an independent test error on the remaining 25 s of data. The results are shown in Table 1, where we can observe that the LFP and spike decoder significantly outperformed both the LFP decoder and the spike decoder, with $P$ value < 0.05.
In order to illustrate the details of the decoding performance, a portion of the test results of the first trial is shown in Figure 7. It is observed that the output of the spike decoder fluctuates and misses some pulses (e.g., around 0.65 s) due to the sparsity and variability of spike trains. In contrast, the
Figure 8: Performance comparison of the microstimulation reconstruction among the spike decoder, LFP decoder, and spike and LFP decoder, in terms of NMSE for each microstimulation channel.
output estimated by the LFP decoder is smooth and more robust than that of the spike decoder, but the decoded signal undershoots the maximum force deflections. The LFP and spike decoder performed better than the LFP decoder by reincorporating the precise pulse timing information from the spike trains.
5.3.3. Results for Decoding Electrical Microstimulation. We also implemented a decoder to reconstruct the microstimulation pattern. First, we mapped the 8 different stimulation configurations to 8 channels. We disregarded the shape of each stimulus, since the time scale of the stimulus width is only 200 μs. The desired stimulation pattern of each channel is represented by a sparse time series of the stimulation amplitude.
NMSEs are obtained with ten subsequence decoding results. We used 120 s of data to train the decoders and compute an independent test error on the remaining 20 s of data. The spike and LFP decoder again outperformed both the LFP decoder and the spike decoder. The comparison of results is shown in Figure 8, which indicates that the spike and LFP decoder obtains the best performance across the stimulation channels, especially for channels 2, 4, 6, and 7. It is observed that stimulations on channels 4, 5, 6, 7, and 8 cannot be decoded by the LFP decoder at all, since the fine time information is averaged out in LFPs. For the spike train decoder, the stimulation channels are not well discriminated. However, the combination of spike trains and LFPs enriched the stimulation information, which contributes to better discrimination of stimulation patterns among channels and also enables the model to capture the precise stimulation timing.
5.4. Open Loop Adaptive Inverse Control Results. The challenge of implementing a somatosensory prosthesis is to precisely control the neural response in order to mimic the neural response induced by natural stimulation. As discussed, the kernel-based adaptive inverse control diagram with the tensor-product kernel is applied to address this problem. The adaptive inverse control model is based on a decoder which maps the neural activity in S1 to the microstimulation delivered in VPL. We proceed to show how the adaptive inverse control model can emulate the neural response to "natural touch" using optimized microstimulation.

In the same recording, open loop adaptive inverse control via optimized thalamic (VPL) microstimulation is implemented. First, the inverse controller $C(z)$ is trained with 300 s of data generated by recording the response to randomly patterned thalamic microstimulation. Then the first 60 s of the neural response recorded during tactile stimulation at each touch site is used as the target pattern and control input. When this entire neural response sequence is fed offline to the controller, it generates a corresponding sequence of multichannel microstimulation amplitudes.
However, the generated microstimulation sequence needs further processing to meet the restrictions of bipolar microstimulation before it is applied to VPL. The restrictions and processing are the following.
(1) The minimal interval between two stimuli, 10 ms, is suggested by the experimental setting. The mean shift algorithm [55] is used to locate the local maxima of a subsequence of stimuli (10 ms) for each single channel. The maximum amplitude and corresponding timing are used to set the amplitude and time of the stimuli.

(2) At any given time point, only a single pulse across all channels can be stimulated. Therefore, at each time point only the maximum value across channels is selected for stimulation; the values at the other channels are set to zero.

(3) The maximum/minimum stimulation amplitude is set in the range [8 μA–30 μA], which has been suggested as the effective and safe amplitude range in previous experiments.
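The three restrictions can be sketched as a post-processing pass over the raw controller output. A simple windowed local-maximum pick stands in for the mean shift step, and reading restriction (3) as amplitude clipping is our assumption.

```python
import numpy as np

def postprocess_stim(raw, dt_ms=5, min_isi_ms=10, amp_lo=8.0, amp_hi=30.0):
    """Apply the three bipolar-microstimulation restrictions to the raw
    controller output `raw` (time bins x channels amplitude matrix)."""
    T, M = raw.shape
    out = np.zeros_like(raw)
    win = max(1, min_isi_ms // dt_ms)     # restriction (1): one pulse per window
    for ch in range(M):
        t = 0
        while t < T:
            seg = raw[t:t + win, ch]
            k = int(np.argmax(seg))
            if seg[k] > 0:                # keep only the local maximum pulse
                out[t + k, ch] = seg[k]
            t += win
    for t in range(T):                    # restriction (2): one channel per time
        ch = int(np.argmax(out[t]))
        amp = out[t, ch]
        out[t] = 0.0
        if amp > 0:
            out[t, ch] = np.clip(amp, amp_lo, amp_hi)   # restriction (3)
    return out
```

The resulting sparse amplitude matrix has at most one pulse per 10 ms per channel, one active channel per time point, and all amplitudes within the safe range.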
After this processing, the generated multichannel microstimulation sequence (60 s in duration) is ready to be applied to the microstimulator immediately following computation.
The neural response to the microstimulation is recorded and compared with the target natural response. Ideally, these two neural response sequences should be time-locked and very similar. In particular, the portions of the controlled response in windows corresponding to a natural touch should match. As this corresponds to a virtual touch delivered by the optimized microstimulation, we define the term virtual touch to refer to the sequence of microstimulation patterns (the output of the controller) corresponding to a particular target natural touch.

Portions of the neural response for both natural and virtual touches are shown in Figure 9. The responses are aligned to the same time scale even though they were not recorded concurrently. It is clear that the multiunit neural responses recorded during the controlled microstimulation share similar spatiotemporal patterns with those in the target set. Each virtual touch is achieved by a sequence of microstimulation pulses that evokes synchronized bursting across the neurons. In addition, microstimulation pulses are delivered in between touch times to mimic population spiking that is not associated with the touch timing.
Figure 9: Neural responses to natural and virtual touches for touch on digit 1 (d1), along with the microstimulation corresponding to the virtual touches. Each of the four subfigures corresponds to a different segment of the continuous recording. In each subfigure, the timing of the touches and the spatiotemporal patterns of spike trains and LFPs are shown in the top two panels; the bottom panel shows the spatiotemporal pattern of microstimulation, where different colors represent different microstimulation channels. The neural responses are time-locked but not concurrently recorded, as the entire natural touch response is given as input to the controller, which generates the optimized microstimulation patterns. When the optimized microstimulation is applied in the VPL, it generates S1 neural responses that qualitatively match the natural (i.e., the target) response.
Figure 10: Correlation coefficients between the controlled neural system output and the corresponding target neural response stimulated by actual touch, shown for spike trains and local field potentials as a function of lag. Boxplots of the correlation coefficients represent the results of 6 test trials. Each trial corresponds to a particular touch site (digits d1, d2, d4, p3, p1, and mp).
To evaluate performance, we concentrate on the following two aspects of virtual touches.

(i) Touch Timing. Whether the neural response to virtual touch is capable of capturing the timing information of the actual target touch.

(ii) Touch Site. Whether the actual target touch site information can be discriminately represented by the neural activity controlled by the microstimulation.
For touch timing, we estimate the correlation coefficients (CC) over time between virtual touch responses and the corresponding target natural touch responses. To simplify the spike train correlation estimation, we bin the data using 5 ms bins. The correlation coefficients of both spike trains and LFPs are calculated. Figure 10 shows the boxplot of the correlation coefficients (CC) over 6 test trials, each of which corresponds to a particular natural touch site, a forepaw digit or pad (d1, d2, d4, p3, p1, or mp). It is observed that the maximum correlation coefficient is at lag zero for each trial, meaning that the virtual touch response is correctly time-locked. For each touch site, we estimate the similarity between the natural touch response and the virtual touch response in the following two cases.
(i) Matched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to the same touch site.

(ii) Unmatched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to different touch sites.
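The lag analysis described above (correlation coefficients between binned responses over a range of lags, with 5 ms bins) can be sketched as follows; the function and variable names are ours, not the authors':

```python
import numpy as np

def lagged_cc(target, output, bin_s=0.005, max_lag_s=0.15):
    """Correlation coefficient between two binned responses at a range of lags.

    `target` and `output` are spike-count (or LFP-sample) vectors binned at
    `bin_s` seconds; the lag range mirrors the axis of Figure 10.
    """
    max_lag = int(round(max_lag_s / bin_s))
    lags = np.arange(-max_lag, max_lag + 1)
    ccs = []
    for lag in lags:
        if lag < 0:
            a, b = target[:lag], output[-lag:]
        elif lag > 0:
            a, b = target[lag:], output[:-lag]
        else:
            a, b = target, output
        ccs.append(np.corrcoef(a, b)[0, 1])
    return lags * bin_s, np.array(ccs)
```

A correctly time-locked response shows its CC peak at lag zero, as reported for the virtual touches.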
We extract all the neural responses in the 300 ms window after touch onset and calculate the correlation coefficients between natural touch responses and virtual touch responses across each pair of trials. The one-tailed Kolmogorov-Smirnov (KS) test is implemented to test the alternative
Table 2: Average and standard deviation of the correlation coefficient (CC) between natural touch spike train responses and virtual touch spike train responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site | CC, matched virtual | CC, unmatched virtual | P value
d1 | 0.42 ± 0.06 | 0.35 ± 0.06 | 0.00
d2 | 0.40 ± 0.05 | 0.37 ± 0.06 | 0.01
d4 | 0.40 ± 0.05 | 0.37 ± 0.05 | 0.02
p3 | 0.38 ± 0.05 | 0.37 ± 0.06 | 0.11
p2 | 0.40 ± 0.07 | 0.36 ± 0.05 | 0.00
mp | 0.41 ± 0.07 | 0.37 ± 0.06 | 0.00
hypothesis that the distribution of the correlation coefficients for the matched virtual case is higher than the distribution for the unmatched virtual case (the null hypothesis is that the distributions are the same). The correlation coefficients and P values of the KS test for spike trains and LFPs are shown in Tables 2 and 3. The similarity between natural touch responses and virtual touch responses in the unmatched virtual case is found to be significantly lower than in the matched virtual case for most touch sites (P value < 0.05), except for touch site p3. Without psychophysical testing, it is unclear how effective the microstimulations are in producing true sensory sensations. Nonetheless, these are promising results showing the effectiveness of a controller utilizing the multiscale neural decoding methodology.
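A one-sided two-sample KS test of this kind can be sketched with the asymptotic one-sided p-value approximation; this is an illustrative reimplementation, not the authors' code (in practice `scipy.stats.ks_2samp` with its `alternative` argument serves the same purpose):

```python
import numpy as np

def ks_one_sided(matched, unmatched):
    """One-sided two-sample KS test (sketch).

    Alternative: matched CCs are stochastically larger than unmatched CCs,
    i.e. the empirical CDF of `matched` lies below that of `unmatched`.
    Returns the one-sided statistic D+ and an asymptotic p-value.
    """
    matched = np.sort(np.asarray(matched, float))
    unmatched = np.sort(np.asarray(unmatched, float))
    m, n = len(matched), len(unmatched)
    grid = np.concatenate([matched, unmatched])
    # Empirical CDFs evaluated on the pooled sample
    F_m = np.searchsorted(matched, grid, side="right") / m
    F_u = np.searchsorted(unmatched, grid, side="right") / n
    d_plus = np.max(F_u - F_m)               # unmatched CDF above matched CDF
    n_eff = m * n / (m + n)
    p = np.exp(-2.0 * n_eff * d_plus ** 2)   # one-sided asymptotic approximation
    return d_plus, min(1.0, p)
```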
6. Conclusions
This work proposes a novel tensor-product-kernel-based machine learning framework, which provides a way to
Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site | CC, matched virtual | CC, unmatched virtual | P value
d1 | 0.42 ± 0.20 | 0.28 ± 0.23 | 0.00
d2 | 0.46 ± 0.13 | 0.28 ± 0.22 | 0.00
d4 | 0.41 ± 0.19 | 0.26 ± 0.21 | 0.00
p3 | 0.38 ± 0.18 | 0.29 ± 0.22 | 0.07
p2 | 0.33 ± 0.19 | 0.26 ± 0.23 | 0.20
mp | 0.34 ± 0.17 | 0.25 ± 0.21 | 0.00
decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs contain complementary information that can enhance neural data decoding. However, a systematic approach to combine these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined on both the spike train space and the LFP space, seems to be a very productive approach to achieve our goals. We have used two types of combination kernels to achieve the multiscale combination: sum kernels to "average" across different spike channels as well as across LFP channels, which combine evidence for the neural event in each modality, and product kernels across the spike and LFP modalities, to emphasize events that are represented in both modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step of a long process to optimize the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires that the neural event be present at both scales to be useful for decoding, which improves specificity. If we look carefully at Figure 6, we can understand the effect of decoding with the product kernel. Notice that the correlation times of spikes and LFPs are very different (LFPs have a much longer correlation time). Moreover, the composite kernel definition can be naturally configured to different brain areas and even neuronal types with distinctive firing patterns. Each pattern will lead to different correlation profiles, which will immediately tune the properties of the kernels across brain areas and neural populations. If only LFPs are used, we can expect that the response time of the decoder will be very long and miss some events. The product kernel in fact limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel. This explains the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information from spikes and LFPs into the same model and enhance the neural decoding robustness and accuracy.
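The sum-then-product construction described above can be sketched as follows. The per-channel kernels here are illustrative placeholders (a Gaussian on windowed LFPs and a simple pairwise-interaction kernel on spike times standing in for a proper spike train kernel), not the paper's exact choices:

```python
import numpy as np

def lfp_kernel(x, y, sigma=1.0):
    # Gaussian kernel on a windowed LFP segment (illustrative choice)
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def spike_kernel(s, t, tau=0.01):
    # Placeholder spike train kernel on spike-time lists: sum of pairwise
    # Laplacian interactions (a stand-in for a proper spike train kernel)
    if len(s) == 0 or len(t) == 0:
        return 0.0
    d = np.abs(np.subtract.outer(np.asarray(s), np.asarray(t)))
    return float(np.exp(-d / tau).sum())

def tensor_product_kernel(spikes_a, lfps_a, spikes_b, lfps_b):
    """Sum kernels across channels within each modality, then multiply
    across modalities -- the composite kernel structure used for decoding."""
    k_spike = sum(spike_kernel(sa, sb) for sa, sb in zip(spikes_a, spikes_b))
    k_lfp = sum(lfp_kernel(xa, xb) for xa, xb in zip(lfps_a, lfps_b))
    return k_spike * k_lfp
```

The product structure is what enforces agreement across scales: the composite kernel is large only when both the spike and the LFP similarities are large.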
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: how to emulate "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes or LFPs alone (results not shown). This result can be expected because the inverse controller is basically a decoder. However, we have to realize that not all tasks of interest reduce to neural decoding, and we do not even know if neural control can be further improved by a different kernel design. This is where further research is necessary to optimize the joint kernels. For instance, we can weight both the channel information and the multiscale information to maximize the task performance using metric learning [40].
Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical way to leverage heterogeneous neural activities in decoding and control scenarios, and it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J. K. Chapin, K. A. Moxon, R. S. Markowitz, and M. A. L. Nicolelis, "Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex," Nature Neuroscience, vol. 2, no. 7, pp. 664-670, 1999.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098-1101, 2008.
[3] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, "Direct cortical control of 3D neuroprosthetic devices," Science, vol. 296, no. 5574, pp. 1829-1832, 2002.
[4] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141-142, 2002.
[5] J. M. Carmena, M. A. Lebedev, R. E. Crist et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, pp. 193-208, 2003.
[6] J. DiGiovanna, B. Mahmoudi, J. Fortes, J. C. Principe, and J. C. Sanchez, "Coadaptive brain-machine interface via reinforcement learning," IEEE Transactions on Biomedical Engineering, vol. 56, no. 1, pp. 54-64, 2009.
[7] A. Belitski, S. Panzeri, C. Magri, N. K. Logothetis, and C. Kayser, "Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 533-545, 2010.
[8] J. R. Huxter, T. J. Senior, K. Allen, and J. Csicsvari, "Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus," Nature Neuroscience, vol. 11, no. 5, pp. 587-594, 2008.
[9] M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis, "Inferring spike trains from local field potentials," Journal of Neurophysiology, vol. 99, no. 3, pp. 1461-1476, 2008.
[10] G. Buzsáki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926-1929, 2004.
[11] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, Springer, 1991.
[12] R. E. Kass, V. Ventura, and E. N. Brown, "Statistical issues in the analysis of neuronal data," Journal of Neurophysiology, vol. 94, no. 1, pp. 8-25, 2005.
[13] R. D. Flint, E. W. Lindberg, L. R. Jordan, L. E. Miller, and M. W. Slutzky, "Accurate decoding of reaching movements from field potentials in the absence of spikes," Journal of Neural Engineering, vol. 9, no. 4, Article ID 046006, 2012.
[14] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee, "Local field potentials indicate network state and account for neuronal response variability," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 567-579, 2010.
[15] J. Liu and W. T. Newsome, "Local field potential in cortical area MT: stimulus tuning and behavioral correlations," Journal of Neuroscience, vol. 26, no. 30, pp. 7779-7790, 2006.
[16] P. Berens, G. Keliris, A. Ecker, N. Logothetis, and A. Tolias, "Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex," Frontiers in Neuroscience, vol. 2, Article ID 199207, 2008.
[17] D. Xing, C.-I. Yeh, and R. M. Shapley, "Spatial spread of the local field potential and its laminar variation in visual cortex," Journal of Neuroscience, vol. 29, no. 37, pp. 11540-11549, 2009.
[18] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, Ser. Adaptive Computation and Machine Learning, MIT Press, 2002.
[19] L. Li, I. M. Park, A. Brockmeier et al., "Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 532-543, 2013.
[20] S. Monaco, G. Kroliczak, D. J. Quinlan et al., "Contribution of visual and proprioceptive information to the precision of reaching movements," Experimental Brain Research, vol. 202, no. 1, pp. 15-32, 2010.
[21] R. S. Johansson and G. Westling, "Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects," Experimental Brain Research, vol. 56, no. 3, pp. 550-564, 1984.
[22] J. E. O'Doherty, M. A. Lebedev, P. J. Ifft et al., "Active tactile exploration using a brain-machine-brain interface," Nature, vol. 479, no. 7372, pp. 228-231, 2011.
[23] J. E. O'Doherty, M. A. Lebedev, T. L. Hanson, N. A. Fitzsimmons, and M. A. Nicolelis, "A brain-machine interface instructed by direct intracortical microstimulation," Frontiers in Integrative Neuroscience, vol. 3, 2009.
[24] N. A. Fitzsimmons, W. Drake, T. L. Hanson, M. A. Lebedev, and M. A. L. Nicolelis, "Primate reaching cued by multichannel spatiotemporal cortical microstimulation," The Journal of Neuroscience, vol. 27, no. 21, pp. 5593-5602, 2007.
[25] X.-J. Feng, B. Greenwald, H. Rabitz, E. Shea-Brown, and R. Kosut, "Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model," Journal of Neural Engineering, vol. 4, no. 2, pp. L14-L21, 2007.
[26] J. Liu, H. K. Khalil, and K. G. Oweiss, "Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states," Journal of Neural Engineering, vol. 8, no. 4, Article ID 045002, 2011.
[27] A. J. Brockmeier, J. S. Choi, M. M. Distasio, J. T. Francis, and J. C. Príncipe, "Optimizing microstimulation using a reinforcement learning framework," in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 1069-1072, September 2011.
[28] A. Brockmeier, J. Choi, M. Emigh, L. Li, J. Francis, and J. Principe, "Subspace matching thalamic microstimulation to tactile evoked potentials in rat somatosensory cortex," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2957-2960, 2012.
[29] Y. Ahmadian, A. M. Packer, R. Yuste, and L. Paninski, "Designing optimal stimuli to control neuronal spike timing," Journal of Neurophysiology, vol. 106, no. 2, pp. 1038-1053, 2011.
[30] J. Moehlis, E. Shea-Brown, and H. Rabitz, "Optimal inputs for phase models of spiking neurons," Journal of Computational and Nonlinear Dynamics, vol. 1, no. 4, pp. 358-367, 2006.
[31] D. Brugger, S. Butovas, M. Bogdan, and C. Schwarz, "Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials," IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1483-1491, 2011.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[33] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199-222, 2004.
[34] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering, edited by S. Haykin, John Wiley & Sons, 2010.
[35] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
[36] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," The Journal of Machine Learning Research, vol. 5, pp. 27-72, 2004.
[37] C. Cortes, M. Mohri, and A. Rostamizadeh, "Algorithms for learning kernels based on centered alignment," Journal of Machine Learning Research, vol. 13, pp. 795-828, 2012.
[38] M. Yamada, W. Jitkrittum, L. Sigal, E. P. Xing, and M. Sugiyama, "High-dimensional feature selection by feature-wise kernelized lasso," Neural Computation, vol. 26, no. 1, pp. 185-207, 2013.
[39] R. A. Horn, "On infinitely divisible matrices, kernels, and functions," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 8, no. 3, pp. 219-230, 1967.
[40] A. J. Brockmeier, J. S. Choi, E. G. Kriminger, J. T. Francis, and J. C. Principe, "Neural decoding with kernel-based metric learning," Neural Computation, vol. 26, no. 6, 2014.
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, Spikernels: Embedding Spiking Neurons in Inner Product Spaces, vol. 15 of Advances in Neural Information Processing Systems, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424-449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Principe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223-2250, 2012.
[44] L. Li, Kernel based machine learning framework for neural decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453-466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213-225, 1991.
[48] L. Csató and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641-668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275-2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950-1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, edited by E. Cliff, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291-2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268-280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with application in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32-40, 1975.
[Figure 2 diagrams: (a) a filtered-ε LMS adaptive inverse control loop with command input, controller C(z), plant P(z) with disturbance and noise, ideal inverse P⁻¹(z), delay Δ, and filtered error ε; (b) the corresponding loop in RKHS, with mapped signals φ(x) and φ(z), weights W_C and W_{P⁻¹}, and error ε_k = φ(x_Δ) − φ(z)]
Figure 2: Adaptive inverse control diagrams.
in this control scheme are kernel-based models, the whole control scheme can be mapped into an RKHS, which holds the following advantages: (1) no restriction is imposed on the signal type; as long as a kernel can map the plant activity to an RKHS, the plant can be controlled with this scheme; (2) both the plant inverse P⁻¹(z) and the controller C(z) have a linear structure in the RKHS, which avoids the danger of converging to local minima.
Specifically, the controller C(z) and the plant inverse P⁻¹(z) are separately modeled with the tensor-product kernel that we described in Section 2, and the model coefficients are updated with Q-KLMS. This structure is shown in Figure 2(b). The model coefficients W_C and W_{P⁻¹} represent the weight matrices of C(z) and P⁻¹(z) obtained by Q-KLMS, respectively. As this is a multiple-input multiple-output model, W_C and W_{P⁻¹} are the concatenation of the filter weights for each stimulation channel.

The variables x, y, and z denote the concatenation of the windowed target spike trains and LFPs serving as the command input of the controller, the estimated stimulation, and the plant output, respectively. x_Δ is the delayed target signal, which is aligned with the plant output z. φ(·) represents the mapping from the input space to the RKHS associated with the tensor-product kernel.

The overall system error is defined as ε_k = φ(x_Δ) − φ(z), which means that the controller's parameter adaptation seeks to minimize the distance in the RKHS between the target spike train/LFP and the output of the plant inverse P⁻¹(z). In this way, since the inverse model P⁻¹(z) has a linear structure in the RKHS, the filtered error for stimulation channel j ∈ {1, ..., M} is

    ε(j) = W_{P⁻¹, j} φ(x_Δ) − W_{P⁻¹, j} φ(z).    (8)
The controller model C(z) has a single input x, corresponding to the concatenation of the spike trains and LFPs, and an M-channel output y, corresponding to the microstimulation. Q-KLMS is used to model C(z) with N input samples. The target spike trains are repeated among different trials, which means that the repeated samples will
Figure 3: Neural elements in tactile stimulation experiments. To the left is the rat's hand with representative cutaneous receptive fields. When the tactor touches a particular "receptive field" on the hand, the VPL thalamus receives this information and relays it to S1 cortex. To emulate "natural touch" with microstimulation, the optimized spatiotemporal microstimulus patterns are injected into the same receptive field in the VPL thalamus through a microarray, so that the target neural activity pattern can be replicated in somatosensory regions (S1) to convey the natural touch sensation to the animal.
be merged onto the same kernel centers of the first pass through the data by the quantization, and thus the network size of the inverse controller is fixed (N centers). Only the coefficient matrix a is updated with the filtered error ε during the whole control operation. The output of C(z) can be calculated by
    y_m = Σ_{n=1}^{N} a_{mn} κ(c_n, x),  m = 1, ..., M,    (9)

equivalently y = A [κ(c_1, x), ..., κ(c_N, x)]ᵀ, where A = [a_{mn}] is the M × N coefficient matrix, c_n is the nth center, and a_{mn} is the coefficient assigned to the nth kernel center for the mth channel of the output.
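A minimal sketch of evaluating (9); the Gaussian kernel and the centers/coefficients below are illustrative placeholders for the trained tensor-product-kernel controller, not its actual values:

```python
import numpy as np

def controller_output(A, centers, x, kappa):
    """Evaluate y = A [kappa(c_1, x), ..., kappa(c_N, x)]^T, i.e. eq. (9).

    A       : (M, N) coefficient matrix, one row per stimulation channel
    centers : list of N kernel centers c_n
    x       : current input (concatenated spike-train / LFP features)
    kappa   : kernel function kappa(c, x)
    """
    k = np.array([kappa(c, x) for c in centers])   # N kernel evaluations
    return A @ k                                    # M-channel output

# Illustrative Gaussian kernel standing in for the tensor-product kernel
gauss = lambda c, x, sigma=1.0: np.exp(-np.sum((c - x) ** 2) / (2 * sigma ** 2))
```

Because the model is linear in the RKHS, the whole controller reduces to one matrix-vector product per input window.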
5. Sensory Stimulation Experiment
5.1. Experimental Motivation and Framework. We applied these methods to the problem of converting touch information to electrical stimulation in neural prostheses. Somatosensory information originating in the peripheral nervous system ascends through the ventral posterior lateral (VPL) nucleus of the thalamus on its way to the primary somatosensory cortex (S1). Since most cutaneous and proprioceptive information is relayed through this nucleus, we expect that a suitably designed electrode array could be used to selectively stimulate a local group of VPL neurons so as to convey similar information to cortex. Electrophysiological experiments [52] suggest that the rostral portion of the rat VPL nucleus carries a large amount of proprioceptive information, while the medial and caudal portions code mostly for cutaneous stimuli. Since the body maps for both the VPL thalamus and S1 are known and fairly consistent, it is possible to implant electrode arrays in somatotopically overlapping areas of both regions.
We applied the proposed control method to generate multichannel electrical stimulation in the VPL so as to evoke a naturalistic neural trajectory in S1. Figure 3 shows a schematic depiction of our experiment, which was conducted in rats. After implanting arrays in both VPL and S1, the responses to natural stimulation delivered by a motorized tactor were recorded in S1. Then we applied randomly patterned microstimulation in the VPL while recording the responses in S1. Using these responses, we then trained our controller to output the microstimulation patterns that would most accurately reproduce the neural responses to natural touch in S1. To solve this control problem, we first investigated how reliably the two types of stimulation, natural touch and electrical microstimulation, can be decoded.
5.2. Data Collection. All animal procedures were approved by the SUNY Downstate Medical Center IACUC and conformed to National Institutes of Health guidelines. A single female Long-Evans rat (Hilltop, Scottsdale, PA) was implanted with two microarrays while under anesthesia.
Figure 4: Bipolar microstimulation patterns applied in the sensory stimulation experiment.
After induction using isoflurane, urethane was used to maintain anesthetic depth. The array in the VPL was a 2 × 8 grid of 70% platinum / 30% iridium, 75 μm diameter microelectrodes (MicroProbes Inc.), with 500 μm between the rows and 250 μm interelectrode spacing within the rows. The microelectrodes had a 25:1 taper on the distal 5 mm, with a tip diameter of 3 μm. The approximate geometric surface area of the conducting tips was 1250 μm². The shank lengths were custom designed to fit the contour of the rat VPL [52]. Both rows were identical, and the shaft lengths for each row from medial to lateral were (8, 8, 8, 8, 8, 7.8, 7.6, 7.4) mm. The long axis of the VPL array was oriented along the rat's mediolateral axis.
The cortical electrode array (Blackrock Microsystems) was a 32-channel Utah array. The electrodes are arranged in a 6 × 6 grid excluding the 4 corners, and each electrode is 1.5 mm long. A single craniotomy exposing the cortical insertion sites for both arrays was made, and after several probing insertions with a single microelectrode (FHC) in an area 1 mm surrounding the stereotaxic coordinates for the digit region of S1 (4.0 mm lateral and 0.5 mm anterior to bregma) [53, 54], the Utah array was inserted using a pneumatic piston. The electrodes cover somatosensory areas of the S1 cortex and the VPL nucleus of the thalamus [52]. Neural recordings were made using a multichannel acquisition system (Tucker-Davis).
Spike and field potential data were collected while the rat was maintained under anesthesia. The electrode voltages were preamplified with a gain of 1000, filtered with cutoffs at 0.7 Hz and 8.8 kHz, and digitized at 25 kHz. LFPs are further filtered from 1 to 300 Hz using a 3rd-order Butterworth filter. Spike sorting is achieved using k-means clustering of the first 2 principal components of the detected waveforms.
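The spike-sorting step (projection onto the first two principal components, then k-means) could be sketched as follows; this is a toy reimplementation of the described pipeline, not the authors' code:

```python
import numpy as np

def sort_spikes(waveforms, n_units=2, n_iter=50):
    """Toy spike sorter: project detected waveforms onto their first two
    principal components, then cluster with k-means.

    waveforms : (n_spikes, n_samples) array of detected spike snippets
    returns   : (labels, scores) -- cluster label and 2D PC score per spike
    """
    X = waveforms - waveforms.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    scores = X @ Vt[:2].T                              # first 2 PCs
    # Deterministic farthest-point initialization, then plain k-means
    centers = [scores[0]]
    for _ in range(n_units - 1):
        d = np.min([np.linalg.norm(scores - c, axis=1) for c in centers], axis=0)
        centers.append(scores[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        d = np.linalg.norm(scores[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_units):
            if np.any(labels == k):
                centers[k] = scores[labels == k].mean(axis=0)
    return labels, scores
```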
The experiment involves delivering microstimulation to the VPL and tactile stimulation to the rat's fingers in separate sections. Microstimulation is administered on adjacent pairs
Figure 5: Rat neural response elicited by tactile stimulation. The upper plot shows the normalized derivative of tactile force. The remaining two plots show the corresponding LFPs and spike trains evoked by tactile stimulation.
(bipolar configurations) of the thalamic array. The stimulation waveforms are single symmetric biphasic rectangular current pulses; each rectangular pulse is 200 μs long and has an amplitude of either 10 μA, 20 μA, or 30 μA. Interstimulus intervals are exponentially distributed with a mean interval of 100 ms. Stimulus isolation used a custom-built switching headstage. The bipolar microstimulation pulses are delivered in the thalamus. There are 24 patterns of microstimulation: 8 different sites and 3 different amplitude levels for each site, as shown in Figure 4. Each pattern is delivered 125 times.
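A sketch of drawing such a randomized schedule; only the exponential interval statistics and the 8-site × 3-amplitude grid come from the text, while the bookkeeping (names, pattern indexing) is ours:

```python
import numpy as np

def make_stim_schedule(n_events, mean_isi=0.100, n_sites=8,
                       amplitudes=(10.0, 20.0, 30.0), seed=0):
    """Draw stimulation times with exponentially distributed interstimulus
    intervals and assign each event one of the 8 x 3 = 24 bipolar patterns."""
    rng = np.random.default_rng(seed)
    times = np.cumsum(rng.exponential(mean_isi, size=n_events))  # event times (s)
    sites = rng.integers(0, n_sites, size=n_events)              # bipolar pair index
    amps = rng.choice(amplitudes, size=n_events)                 # pulse amplitude (uA)
    pattern_ids = sites * len(amplitudes) + np.searchsorted(amplitudes, amps)
    return times, sites, amps, pattern_ids
```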
The experimental procedure also involves delivering 30-40 short 100 ms tactile touches to the rat's fingers (repeated for digit pads 1-4) using a hand-held probe. The rat remained anesthetized for the recording duration. The applied force is measured using a lever attached to the probe that pressed against a thin-film resistive force sensor (Trossen Robotics) when the probe tip contacted the rat's body. The resistive changes were converted to voltage using a bridge circuit and were filtered and digitized in the same way as described above. The digitized waveforms were filtered with a passband between 1 and 60 Hz using a 3rd-order Butterworth filter. The first derivative of this signal is used as the desired stimulation signal, which is shown in Figure 5.
5.3. Decoding Results. We now present the decoding results for the tactile stimulus waveform and microstimulation using Q-KLMS operating on the tensor-product kernel. The performance using the multiscale neural activity, both spike trains and LFPs, is compared with decoders using a single type of neural activity. This illustrates the effectiveness of the tensor-product-kernel-based framework in exploiting the complementary information in multiscale neural activities.
Figure 6: Autocorrelation of LFPs and spike trains for window size estimation.
5.3.1. Time Scale Estimation. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually, based on their own properties. In order to find reasonable time scales, we estimate the autocorrelation coefficients of LFPs and spike trains, which indicate the response duration induced by the stimulation. For this purpose, spike trains are binned with a bin size of 1 ms. The LFPs are also resampled with a sampling rate of 1000 Hz. The autocorrelation coefficients of each signal, averaged over channels, are calculated by
$$\rho_h = \frac{\sum_{t=h+1}^{T}\left(y_t - \bar{y}\right)\left(y_{t-h} - \bar{y}\right)}{\sum_{t=1}^{T}\left(y_t - \bar{y}\right)^2}. \tag{10}$$
The 95% confidence bounds for the hypothesis that the autocorrelation coefficient is effectively zero are approximately estimated by $\pm 2\,\mathrm{SE}_{\rho}$, where
$$\mathrm{SE}_{\rho} = \sqrt{\frac{1 + 2\sum_{i=1}^{h-1}\rho_i^2}{N}}. \tag{11}$$
The average confidence bounds for LFPs and spike trains are [−0.032, 0.032] and [−0.031, 0.031], respectively. The autocorrelation coefficients of LFPs fall into the confidence interval after 20 ms, while the autocorrelation coefficients of spike trains die out after 9 ms, as shown in Figure 6. Therefore, the decoder inputs are obtained by sliding the window with a size of $T_s = 9$ ms for spike trains and $T_x = 20$ ms for LFPs. In addition, the time discretization for the stimuli is 5 ms.

The learning rates for each decoder are determined by the best cross-validation results after scanning the parameters. The kernel sizes $\sigma_s$ and $\sigma_x$ are determined by the average distance in RKHS of each pair of training samples. The normalized mean square error (NMSE) between the estimated stimulus (y) and the desired stimulus (d) is utilized as an accuracy criterion.
Figure 7: Qualitative comparison of decoding performance on the first tactile stimulation trial among the LFP decoder, spike decoder, and spike and LFP decoder. The panel plots the normalized derivative of tactile force versus time (s) for the target and each decoder output.
Table 1: Comparison among neural decoders.

Property             LFP and spike   LFP           Spike
NMSE (mean ± STD)    0.48 ± 0.05     0.55 ± 0.03   0.63 ± 0.11
5.3.2. Results for Decoding Tactile Stimulation. NMSEs of tactile stimulation are obtained across 8 trial data sets. For each trial, we use 20 s of data to train the decoders and compute an independent test error on the remaining 25 s of data. The results are shown in Table 1, where we can observe that the LFP and spike decoder significantly outperformed both the LFP decoder and the spike decoder, with P value < 0.05.
In order to illustrate the details of the decoding performance, a portion of the test results of the first trial is shown in Figure 7. It is observed that the output of the spike decoder fluctuates and misses some pulses (e.g., around 0.65 s) due to the sparsity and variability of spike trains. In contrast, the output estimated by the LFP decoder is smooth and more robust than the spike decoder, but the decoded signal undershoots the maximum force deflections. The LFP and spike decoder performed better than the LFP decoder by reincorporating the precise pulse timing information from spike trains.

Figure 8: Performance comparison of the microstimulation reconstruction among the spike decoder, LFP decoder, and spike and LFP decoder, in terms of NMSE for each microstimulation channel.
5.3.3. Results for Decoding Electrical Microstimulation. We also implemented a decoder to reconstruct the microstimulation pattern. First, we mapped the 8 different stimulation configurations to 8 channels. We dismissed the shape of each stimulus, since the time scale of the stimulus width is only 200 μs. The desired stimulation pattern of each channel is represented by a sparse time series of the stimulation amplitude.
NMSEs are obtained with ten subsequence decoding results. We used 120 s of data to train the decoders and compute an independent test error on the remaining 20 s of data. The spike and LFP decoder also outperformed both the LFP decoder and the spike decoder. The comparison of results is shown in Figure 8, which indicates that the spike and LFP decoder is able to obtain the best performance amongst the stimulation channels, especially for channels 2, 4, 6, and 7. It is observed that stimulations on channels 4, 5, 6, 7, and 8 cannot be decoded from the LFP decoder at all, since the fine time information is averaged out in LFPs. For the spike train decoder, the stimulation channels are not well discriminated. However, the combination of spike trains and LFPs enriched the stimulation information, which contributes to better discrimination of stimulation patterns among channels and also enables the model to capture the precise stimulation timing.
5.4. Open Loop Adaptive Inverse Control Results. The challenge of implementing a somatosensory prosthesis is to precisely control the neural response in order to mimic the neural response induced by natural stimulation. As discussed, the kernel-based adaptive inverse control diagram with the tensor-product kernel is applied to address this problem. The adaptive inverse control model is based on a decoder which maps the neural activity in S1 to the microstimulation delivered in VPL. We proceed to show how the adaptive inverse control model can emulate the neural response to "natural touch" using optimized microstimulation.
In the same recording, open loop adaptive inverse control via optimized thalamic (VPL) microstimulation is implemented. First, the inverse controller C(z) is trained with 300 s of the data generated by recording the response to randomly patterned thalamic microstimulation. Then the first 60 s of the neural response recorded during tactile stimulation at each touch site is used as the target pattern and control input. When this entire neural response sequence is fed offline to the controller, it generates a corresponding sequence of multichannel microstimulation amplitudes.
However, the generated microstimulation sequence needs further processing to meet the restrictions of bipolar microstimulation before it is applied to VPL. The restrictions and processing are the following.
(1) The minimal interval between two stimuli, 10 ms, is suggested by the experimental setting. The mean shift algorithm [55] is used to locate the local maxima of a subsequence of stimuli (10 ms) for each single channel. The maximum amplitude and corresponding timing are used to set the amplitude and time of stimuli.
(2) At any given time point, only a single pulse across all channels can be stimulated. Therefore, at each time point only the maximum value across channels is selected for stimulation. The values at other channels are set to zero.
(3) The maximum/minimum stimulation amplitude is set in the range [8 μA–30 μA], which has been suggested as the effective and safe amplitude range in previous experiments.
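The three constraints above can be sketched as a post-processing pass over the controller output. This is a simplified illustration: a windowed argmax stands in for the mean shift local-maxima search of step (1), and the 1 ms time step is an assumption.

```python
import numpy as np

def postprocess_stim(raw, dt_ms=1.0, min_interval_ms=10.0, amp_range=(8.0, 30.0)):
    """raw: (T, C) array of controller outputs in uA. Returns a sparse
    stimulation pattern satisfying the three restrictions."""
    T, C = raw.shape
    out = np.zeros_like(raw)
    win = int(min_interval_ms / dt_ms)
    # (1) keep only one local maximum per 10 ms window, per channel
    #     (windowed argmax in place of the mean shift search)
    for c in range(C):
        for start in range(0, T, win):
            seg = raw[start:start + win, c]
            j = int(np.argmax(seg))
            out[start + j, c] = seg[j]
    # (2) allow a single pulse across channels at each time point
    for t in range(T):
        j = int(np.argmax(out[t]))
        keep = out[t, j]
        out[t] = 0.0
        out[t, j] = keep
    # (3) clip surviving amplitudes to the safe range [8, 30] uA
    nz = out > 0
    out[nz] = np.clip(out[nz], *amp_range)
    return out
```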
After this processing, the generated multichannel microstimulation sequence (60 s in duration) is ready to be applied to the microstimulator immediately following computation.
The neural response to the microstimulation is recorded and compared with the target natural response. Ideally, these two neural response sequences should be time-locked and be very similar. In particular, the portions of the controlled response in windows corresponding to a natural touch should match. As this corresponds to a virtual touch delivered by the optimized microstimulation, we define the term virtual touch to refer to the sequence of microstimulation patterns (the output of the controller) corresponding to a particular target natural touch.
Portions of the neural response for both natural and virtual touches are shown in Figure 9. The responses are aligned to the same time scale even though they were not recorded concurrently. It is clear that the multiunit neural responses recorded during the controlled microstimulation share similar spatiotemporal patterns as those in the target set. Each virtual touch is achieved by a sequence of microstimulation pulses that evokes synchronized bursting across the neurons. In addition, microstimulation pulses are delivered in between touch times to mimic population spiking that is not associated with the touch timing.
Figure 9: Neural responses to natural and virtual touches for touch on digit 1 (d1), along with the microstimulation corresponding to the virtual touches. Each of the four subfigures corresponds to a different segment of the continuous recording. In each subfigure, the timing of the touches and the spatiotemporal patterns of spike trains and LFPs are shown in the top two panels; the bottom panel shows the spatiotemporal pattern of microstimulation, where different colors represent different microstimulation channels. The neural responses are time-locked but not concurrently recorded, as the entire natural touch response is given as input to the controller, which generates the optimized microstimulation patterns. When the optimized microstimulation is applied in the VPL, it generates S1 neural responses that qualitatively match the natural, that is, the target, response.
Figure 10: Correlation coefficients between the controlled neural system output and the corresponding target neural response stimulated by actual touch, for spike trains and local field potentials, as a function of time lag (s). Boxplots of correlation coefficients represent the results of 6 test trials; each trial corresponds to a particular touch site (digits d1, d2, d4, p3, p1, and mp).
To evaluate performance, we concentrate on the following two aspects of virtual touches.
(i) Touch Timing. Whether the neural response to virtual touch is capable of capturing the timing information of the actual target touch is studied.
(ii) Touch Site. Whether the actual target touch site information can be discriminately represented by neural activity controlled by the microstimulation is studied.
For touch timing, we estimate the correlation coefficients (CC) over time between virtual touch responses and the corresponding target natural touch responses. To simplify the spike train correlation estimation, we bin the data using 5 ms bins. The correlation coefficients of both spike trains and LFPs are calculated. Figure 10 shows the boxplot of the correlation coefficients (CC) over 6 test trials, each of which corresponds to a particular natural touch site, a forepaw digit or pad (d1, d2, d4, p3, p1, or mp). It is observed that the maximum correlation coefficient is at lag zero for each trial, meaning that the virtual touch response is correctly time-locked. For each touch site, we estimate the similarity between the natural touch response and virtual touch response in the following two cases.
(i) Matched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to the same touch site.
(ii) Unmatched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to different touch sites.
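The lag-zero check used for touch timing can be sketched as a lagged correlation on the binned responses; the lag range and sign convention here are illustrative.

```python
import numpy as np

def lagged_cc(a, b, max_lag):
    """Correlation coefficient between z-scored sequences a and b over a
    range of integer lags; a peak at lag 0 indicates time-locked responses."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            out[lag] = float(np.mean(a[lag:] * b[:n - lag]))
        else:
            out[lag] = float(np.mean(a[:lag] * b[-lag:]))
    return out
```

With 5 ms bins, applying this to each (virtual, natural) response pair and locating the peak lag reproduces the time-locking check summarized in Figure 10.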
We extract all the neural responses in the 300 ms window after touch onset and calculate the correlation coefficients between natural touch responses and virtual touch responses across each pair of trials. The one-tailed Kolmogorov-Smirnov (KS) test is implemented to test the alternative
Table 2: Average and standard deviation of the correlation coefficient (CC) between natural touch spike train responses and virtual touch spike train responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site   Matched virtual CC   Unmatched virtual CC   P value
d1           0.42 ± 0.06          0.35 ± 0.06            0.00
d2           0.40 ± 0.05          0.37 ± 0.06            0.01
d4           0.40 ± 0.05          0.37 ± 0.05            0.02
p3           0.38 ± 0.05          0.37 ± 0.06            0.11
p2           0.40 ± 0.07          0.36 ± 0.05            0.00
mp           0.41 ± 0.07          0.37 ± 0.06            0.00
hypothesis that the distribution of the correlation coefficients for the matched virtual case is higher than the distribution for the unmatched virtual case (the null hypothesis is that the distributions are the same). The correlation coefficients and P values of the KS test for spike trains and LFPs are shown in Tables 2 and 3. The similarity between natural touch responses and virtual touch responses in the unmatched virtual case is found to be significantly lower than in the matched virtual case for most touch sites (P value < 0.05), except for touch site p3. Without psychophysical testing, it is unclear how effective the microstimulations are in producing true sensory sensations. Nonetheless, these are promising results showing the effectiveness of a controller utilizing the multiscale neural decoding methodology.
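A numpy-only sketch of the one-sided two-sample comparison is given below. The one-sided KS statistic follows the standard definition, but the permutation p-value is an illustrative stand-in for the exact KS sampling distribution used in the paper.

```python
import numpy as np

def ks_one_sided(matched, unmatched):
    """D+ = sup_x [F_unmatched(x) - F_matched(x)]: large values support the
    alternative that matched CCs are stochastically larger."""
    grid = np.sort(np.concatenate([matched, unmatched]))
    F_m = np.searchsorted(np.sort(matched), grid, side="right") / len(matched)
    F_u = np.searchsorted(np.sort(unmatched), grid, side="right") / len(unmatched)
    return float(np.max(F_u - F_m))

def perm_pvalue(matched, unmatched, n_perm=2000, seed=0):
    """Permutation p-value for the one-sided KS statistic."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([matched, unmatched])
    n = len(matched)
    d_obs = ks_one_sided(matched, unmatched)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        count += ks_one_sided(pooled[:n], pooled[n:]) >= d_obs
    return (count + 1) / (n_perm + 1)
```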
6. Conclusions
This work proposes a novel tensor-product-kernel-based machine learning framework which provides a way to
Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site   Matched virtual CC   Unmatched virtual CC   P value
d1           0.42 ± 0.20          0.28 ± 0.23            0.00
d2           0.46 ± 0.13          0.28 ± 0.22            0.00
d4           0.41 ± 0.19          0.26 ± 0.21            0.00
p3           0.38 ± 0.18          0.29 ± 0.22            0.07
p2           0.33 ± 0.19          0.26 ± 0.23            0.20
mp           0.34 ± 0.17          0.25 ± 0.21            0.00
decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs contain complementary information that can enhance neural data decoding. However, a systematic approach to combine these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined in both the spike train space and the LFP space, seems to be a very productive approach to achieve our goals. We have basically used two types of combination kernels to achieve the multiscale combination: sum kernels to "average" across different spike channels as well as across LFP channels, which combine evidence for the neural event in each modality, and product kernels across the spike and LFP modalities to emphasize events that are represented in both multiscale modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step of a long process to optimize the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires that the neural event be present at both scales to be useful for decoding, which improves specificity. If we look carefully at Figure 6, we can understand the effect of decoding with the product kernel. Notice that the correlation times of spikes and LFPs are very different (LFPs have a much longer correlation time). Moreover, the composite kernel definition can be naturally configured to different brain areas and even neuronal types with distinctive firing patterns. Each pattern will lead to different correlation profiles, which will immediately tune the properties of the kernels across brain areas and neural populations. If only LFPs are used, we can expect that the response time of the decoder will be very long and miss some events. The product kernel in fact limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel. This explains exactly the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information from spikes and LFPs into the same model and enhance the neural decoding robustness and accuracy.
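The sum-and-product construction can be sketched at the Gram-matrix level. The Gaussian kernels on vectorized windows below are illustrative stand-ins for the paper's spike-train and LFP kernels; the construction itself (average across channels, elementwise product across modalities) is what matters.

```python
import numpy as np

def gaussian_gram(X, sigma):
    """Gram matrix of a Gaussian kernel on the row vectors of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def sum_kernel(grams):
    """Sum (here averaged) kernel across channels within one modality."""
    return sum(grams) / len(grams)

def tensor_product_kernel(K_spike, K_lfp):
    """Elementwise (Schur) product across modalities: a sample pair is
    similar only if it is similar in BOTH the spike and the LFP space."""
    return K_spike * K_lfp
```

Since the Schur product of two positive semidefinite Gram matrices is again positive semidefinite, the result is a valid kernel and can be dropped directly into the Q-KLMS decoder.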
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: how to emulate "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes and LFPs alone (results not shown). This result can be expected because the inverse controller is basically a decoder. However, we have to realize that not all the tasks of interest reduce to neural decoding, and we do not even know if neural control can be further improved by a different kernel design. This is where further research is necessary to optimize the joint kernels. For instance, we can weight both the channel information and the multiscale information to maximize the task performance using metric learning [40].
Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical framework to leverage heterogeneous neural activities in decoding and control scenarios, and it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J. K. Chapin, K. A. Moxon, R. S. Markowitz, and M. A. L. Nicolelis, "Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex," Nature Neuroscience, vol. 2, no. 7, pp. 664–670, 1999.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098–1101, 2008.
[3] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, "Direct cortical control of 3D neuroprosthetic devices," Science, vol. 296, no. 5574, pp. 1829–1832, 2002.
[4] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141–142, 2002.
[5] J. M. Carmena, M. A. Lebedev, R. E. Crist et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, pp. 193–208, 2003.
[6] J. DiGiovanna, B. Mahmoudi, J. Fortes, J. C. Principe, and J. C. Sanchez, "Coadaptive brain-machine interface via reinforcement learning," IEEE Transactions on Biomedical Engineering, vol. 56, no. 1, pp. 54–64, 2009.
[7] A. Belitski, S. Panzeri, C. Magri, N. K. Logothetis, and C. Kayser, "Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 533–545, 2010.
[8] J. R. Huxter, T. J. Senior, K. Allen, and J. Csicsvari, "Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus," Nature Neuroscience, vol. 11, no. 5, pp. 587–594, 2008.
[9] M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis, "Inferring spike trains from local field potentials," Journal of Neurophysiology, vol. 99, no. 3, pp. 1461–1476, 2008.
[10] G. Buzsáki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926–1929, 2004.
[11] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, Springer, 1991.
[12] R. E. Kass, V. Ventura, and E. N. Brown, "Statistical issues in the analysis of neuronal data," Journal of Neurophysiology, vol. 94, no. 1, pp. 8–25, 2005.
[13] R. D. Flint, E. W. Lindberg, L. R. Jordan, L. E. Miller, and M. W. Slutzky, "Accurate decoding of reaching movements from field potentials in the absence of spikes," Journal of Neural Engineering, vol. 9, no. 4, Article ID 046006, 2012.
[14] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee, "Local field potentials indicate network state and account for neuronal response variability," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 567–579, 2010.
[15] J. Liu and W. T. Newsome, "Local field potential in cortical area MT: stimulus tuning and behavioral correlations," Journal of Neuroscience, vol. 26, no. 30, pp. 7779–7790, 2006.
[16] P. Berens, G. Keliris, A. Ecker, N. Logothetis, and A. Tolias, "Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex," Frontiers in Neuroscience, vol. 2, Article ID 199207, 2008.
[17] D. Xing, C.-I. Yeh, and R. M. Shapley, "Spatial spread of the local field potential and its laminar variation in visual cortex," Journal of Neuroscience, vol. 29, no. 37, pp. 11540–11549, 2009.
[18] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, ser. Adaptive Computation and Machine Learning, MIT Press, 2002.
[19] L. Li, I. M. Park, A. Brockmeier et al., "Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 532–543, 2013.
[20] S. Monaco, G. Kroliczak, D. J. Quinlan et al., "Contribution of visual and proprioceptive information to the precision of reaching movements," Experimental Brain Research, vol. 202, no. 1, pp. 15–32, 2010.
[21] R. S. Johansson and G. Westling, "Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects," Experimental Brain Research, vol. 56, no. 3, pp. 550–564, 1984.
[22] J. E. O'Doherty, M. A. Lebedev, P. J. Ifft et al., "Active tactile exploration using a brain-machine-brain interface," Nature, vol. 479, no. 7372, pp. 228–231, 2011.
[23] J. E. O'Doherty, M. A. Lebedev, T. L. Hanson, N. A. Fitzsimmons, and M. A. Nicolelis, "A brain-machine interface instructed by direct intracortical microstimulation," Frontiers in Integrative Neuroscience, vol. 3, 2009.
[24] N. A. Fitzsimmons, W. Drake, T. L. Hanson, M. A. Lebedev, and M. A. L. Nicolelis, "Primate reaching cued by multichannel spatiotemporal cortical microstimulation," The Journal of Neuroscience, vol. 27, no. 21, pp. 5593–5602, 2007.
[25] X.-J. Feng, B. Greenwald, H. Rabitz, E. Shea-Brown, and R. Kosut, "Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model," Journal of Neural Engineering, vol. 4, no. 2, pp. L14–L21, 2007.
[26] J. Liu, H. K. Khalil, and K. G. Oweiss, "Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states," Journal of Neural Engineering, vol. 8, no. 4, Article ID 045002, 2011.
[27] A. J. Brockmeier, J. S. Choi, M. M. Distasio, J. T. Francis, and J. C. Príncipe, "Optimizing microstimulation using a reinforcement learning framework," in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 1069–1072, September 2011.
[28] A. Brockmeier, J. Choi, M. Emigh, L. Li, J. Francis, and J. Principe, "Subspace matching thalamic microstimulation to tactile evoked potentials in rat somatosensory cortex," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2957–2960, 2012.
[29] Y. Ahmadian, A. M. Packer, R. Yuste, and L. Paninski, "Designing optimal stimuli to control neuronal spike timing," Journal of Neurophysiology, vol. 106, no. 2, pp. 1038–1053, 2011.
[30] J. Moehlis, E. Shea-Brown, and H. Rabitz, "Optimal inputs for phase models of spiking neurons," Journal of Computational and Nonlinear Dynamics, vol. 1, no. 4, pp. 358–367, 2006.
[31] D. Brugger, S. Butovas, M. Bogdan, and C. Schwarz, "Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials," IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1483–1491, 2011.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[33] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[34] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering, John Wiley & Sons, 2010.
[35] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
[36] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," The Journal of Machine Learning Research, vol. 5, pp. 27–72, 2004.
[37] C. Cortes, M. Mohri, and A. Rostamizadeh, "Algorithms for learning kernels based on centered alignment," Journal of Machine Learning Research, vol. 13, pp. 795–828, 2012.
[38] M. Yamada, W. Jitkrittum, L. Sigal, E. P. Xing, and M. Sugiyama, "High-dimensional feature selection by feature-wise kernelized lasso," Neural Computation, vol. 26, no. 1, pp. 185–207, 2013.
[39] R. A. Horn, "On infinitely divisible matrices, kernels, and functions," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 8, no. 3, pp. 219–230, 1967.
[40] A. J. Brockmeier, J. S. Choi, E. G. Kriminger, J. T. Francis, and J. C. Principe, "Neural decoding with kernel-based metric learning," Neural Computation, vol. 26, no. 6, 2014.
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, Spikernels: Embedding Spiking Neurons in Inner Product Spaces, vol. 15 of Advances in Neural Information Processing Systems, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Principe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223–2250, 2012.
[44] L. Li, Kernel based machine learning framework for neural decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453–466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213–225, 1991.
[48] L. Csató and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641–668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275–2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950–1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291–2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268–280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with application in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975.
Figure 3: Neural elements in tactile stimulation experiments. To the left is the rat's hand with representative cutaneous receptive fields. When the tactor touches a particular "receptive field" on the hand, VPL thalamus receives this information and relays it to S1 cortex. To emulate "natural touch" with microstimulation, the optimized spatiotemporal microstimulus patterns are injected into the same receptive field on VPL thalamus through a microarray so that the target neural activity pattern can be replicated in somatosensory regions (S1) to convey the natural touch sensation to the animal.
be merged on the same kernel center of the first pass through the data by the quantization, and thus the network size of the inverse controller is fixed (N centers). Only the coefficient matrix a is updated with the filtered error $\epsilon$ during the whole control operation. The output of C(z) can be calculated by
$$
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_M \end{bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1N} \\
a_{21} & a_{22} & \cdots & a_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
a_{M1} & a_{M2} & \cdots & a_{MN}
\end{bmatrix}
\begin{bmatrix}
\phi(\mathbf{c}_{c1})' \\ \phi(\mathbf{c}_{c2})' \\ \vdots \\ \phi(\mathbf{c}_{cN})'
\end{bmatrix}
\phi(\mathbf{x})
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1N} \\
a_{21} & a_{22} & \cdots & a_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
a_{M1} & a_{M2} & \cdots & a_{MN}
\end{bmatrix}
\begin{bmatrix}
\kappa(\mathbf{c}_{c1}, \mathbf{x}) \\ \kappa(\mathbf{c}_{c2}, \mathbf{x}) \\ \vdots \\ \kappa(\mathbf{c}_{cN}, \mathbf{x})
\end{bmatrix}, \tag{9}
$$
where c119888119899is the 119899th center and 119886
119898119899is the coefficient assigned
to the 119899th kernel center for the119898 channel of the output
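A minimal numerical sketch of this output computation: the fixed coefficient matrix is applied to the vector of kernel evaluations between the stored centers and the current input. The Gaussian kernel below is an illustrative stand-in for the tensor-product kernel κ, and the function names are hypothetical.

```python
import numpy as np

def gaussian_kernel(c, x, sigma=1.0):
    # stand-in kernel for illustration; in the paper, kappa is the
    # tensor-product kernel over spike trains and LFPs
    return float(np.exp(-np.sum((c - x) ** 2) / (2 * sigma ** 2)))

def controller_output(A, centers, x, sigma=1.0):
    """Evaluate (9): y = A [kappa(c_1, x), ..., kappa(c_N, x)]^T,
    where A is the M x N coefficient matrix and the N centers are fixed."""
    k = np.array([gaussian_kernel(c, x, sigma) for c in centers])  # (N,)
    return A @ k                                                   # (M,)
```

During control operation only `A` adapts; the kernel evaluations against the fixed centers are the only nonlinear step.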
5. Sensory Stimulation Experiment
5.1. Experimental Motivation and Framework. We applied these methods to the problem of converting touch information to electrical stimulation in neural prostheses. Somatosensory information originating in the peripheral nervous system ascends through the ventral posterior lateral (VPL) nucleus of the thalamus on its way to the primary somatosensory cortex (S1). Since most cutaneous and proprioceptive information is relayed through this nucleus, we expect that a suitably designed electrode array could be used to selectively stimulate a local group of VPL neurons so as to convey similar information to cortex. Electrophysiological experiments [52] suggest that the rostral portion of the rat VPL nucleus carries a large amount of proprioceptive information, while the medial and caudal portions code mostly for cutaneous stimuli. Since the body maps for both VPL thalamus and S1 are known and fairly consistent, it is possible to implant electrode arrays in somatotopically overlapping areas of both regions.
We applied the proposed control method to generate multichannel electrical stimulation in VPL so as to evoke a naturalistic neural trajectory in S1. Figure 3 shows a schematic depiction of our experiment, which was conducted in rats. After implanting arrays in both VPL and S1, the responses to natural stimulation delivered by a motorized tactor were recorded in S1. Then we applied randomly patterned microstimulation in the VPL while recording the responses in S1. Using these responses, we then trained our controller to output the microstimulation patterns that would most accurately reproduce the neural responses to natural touch in S1. To solve this control problem, we first investigated how reliably the two types of stimulation, natural touch and electrical microstimulation, can be decoded.
5.2. Data Collection. All animal procedures were approved by the SUNY Downstate Medical Center IACUC and conformed to National Institutes of Health guidelines. A single female Long-Evans rat (Hilltop, Scottsdale, PA) was implanted with two microarrays while under anesthesia.
Figure 4: Bipolar microstimulation patterns applied in the sensory stimulation experiment. The 24 stimulation patterns (classes) are shown across the 16 stimulation channels; stimulation amplitudes range from −30 to 30 μA.
After induction using isoflurane, urethane was used to maintain anesthetic depth. The array in VPL was a 2 × 8 grid of 70% platinum/30% iridium, 75 μm diameter microelectrodes (MicroProbes Inc.) with 500 μm between the rows and 250 μm interelectrode spacing within the rows. The microelectrodes had a 25:1 taper on the distal 5 mm, with a tip diameter of 3 μm. The approximate geometric surface area of the conducting tips was 1250 μm². The shank lengths were custom designed to fit the contour of the rat VPL [52]. Both rows were identical, and the shaft lengths for each row from medial to lateral were (8, 8, 8, 8, 8, 7.8, 7.6, 7.4) mm. The long axis of the VPL array was oriented along the rat's mediolateral axis.
The cortical electrode array (Blackrock Microsystems) was a 32-channel Utah array. The electrodes are arranged in a 6 × 6 grid excluding the 4 corners, and each electrode is 1.5 mm long. A single craniotomy that exposed the cortical insertion sites for both arrays was made, and after several probing insertions with a single microelectrode (FHC) in an area 1 mm surrounding the stereotaxic coordinates for the digit region of S1 (4.0 mm lateral and 0.5 mm anterior to bregma) [53, 54], the Utah array was inserted using a pneumatic piston. The electrodes cover somatosensory areas of the S1 cortex and the VPL nucleus of the thalamus [52]. Neural recordings were made using a multichannel acquisition system (Tucker-Davis).
Spike and field potential data were collected while the rat was maintained under anesthesia. The electrode voltages were preamplified with a gain of 1000, filtered with cutoffs at 0.7 Hz and 8.8 kHz, and digitized at 25 kHz. LFPs are further filtered from 1 to 300 Hz using a 3rd-order Butterworth filter. Spike sorting is achieved using k-means clustering of the first 2 principal components of the detected waveforms.
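The preprocessing chain just described can be sketched as follows. The helper names are hypothetical, spike detection itself is omitted, and only the filtering and clustering steps stated in the text are implemented.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.cluster.vq import kmeans2

FS = 25_000  # Hz, digitization rate reported above

# 3rd-order Butterworth band-pass (1-300 Hz) for the LFP band
b, a = butter(3, [1, 300], btype="bandpass", fs=FS)

def extract_lfp(raw):
    """Zero-phase band-pass filter a raw voltage trace to the LFP band."""
    return filtfilt(b, a, raw)

def sort_spikes(waveforms, n_units=2, seed=0):
    """k-means clustering on the first 2 principal components of detected
    spike waveforms (waveform detection/thresholding is omitted here)."""
    centered = waveforms - waveforms.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pcs = centered @ vt[:2].T              # project onto first 2 PCs
    _, labels = kmeans2(pcs, n_units, minit="++", seed=seed)
    return labels
```

`filtfilt` is used so the LFP phase is not distorted; a causal online implementation would use `lfilter` instead.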
The experiment involves delivering microstimulation to VPL and tactile stimulation to the rat's fingers in separate sessions. Microstimulation is administered on adjacent pairs
Figure 5: Rat neural response elicited by tactile stimulation. The upper plot shows the normalized derivative of tactile force. The remaining two plots show the corresponding LFPs and spike trains evoked by the tactile stimulation.
(bipolar configurations) of the thalamic array. The stimulation waveforms are single symmetric biphasic rectangular current pulses; each rectangular pulse is 200 μs long and has an amplitude of either 10 μA, 20 μA, or 30 μA. Interstimulus intervals are exponentially distributed with a mean interval of 100 ms. Stimulus isolation used a custom-built switching headstage. The bipolar microstimulation pulses are delivered in the thalamus. There are 24 patterns of microstimulation: 8 different sites and 3 different amplitude levels for each site, as shown in Figure 4. Each pattern is delivered 125 times.
The experimental procedure also involves delivering 30–40 short (100 ms) tactile touches to the rat's fingers (repeated for digit pads 1–4) using a hand-held probe. The rat remained anesthetized for the recording duration. The applied force is measured using a lever attached to the probe that pressed against a thin-film resistive force sensor (Trossen Robotics) when the probe tip contacted the rat's body. The resistive changes were converted to voltage using a bridge circuit and were filtered and digitized in the same way as described above. The digitized waveforms were filtered with a passband between 1 and 60 Hz using a 3rd-order Butterworth filter. The first derivative of this signal is used as the desired stimulation signal, which is shown in Figure 5.
5.3. Decoding Results. We now present the decoding results for the tactile stimulus waveform and microstimulation using Q-KLMS operating on the tensor-product kernel. The performance using the multiscale neural activity, both spike trains and LFPs, is compared with that of decoders using a single type of neural activity. This illustrates the effectiveness of the tensor-product-kernel-based framework in exploiting the complementary information in multiscale neural activity.
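As a concrete illustration of the tensor-product construction, the sketch below multiplies a spike-train kernel and an LFP kernel evaluated on paired windows. The specific component kernels (a Gaussian on LFP vectors and a sum of Gaussians over spike-time pairs) are plausible stand-ins and not necessarily the exact kernels used in the paper.

```python
import numpy as np

def lfp_kernel(x1, x2, sigma_x=1.0):
    """Gaussian kernel on windowed LFP vectors."""
    return float(np.exp(-np.sum((x1 - x2) ** 2) / (2 * sigma_x ** 2)))

def spike_kernel(s1, s2, sigma_s=0.005):
    """A simple spike-train kernel: sum of Gaussians over all spike-time
    pairs (one common choice; the paper's exact spike kernel may differ)."""
    if len(s1) == 0 or len(s2) == 0:
        return 0.0
    d = np.subtract.outer(np.asarray(s1), np.asarray(s2))
    return float(np.sum(np.exp(-d ** 2 / (2 * sigma_s ** 2))))

def tensor_product_kernel(sample1, sample2):
    """kappa((s, x), (s', x')) = kappa_s(s, s') * kappa_x(x, x'):
    large only when BOTH modalities agree, which is the source of the
    specificity discussed in the conclusions."""
    s1, x1 = sample1
    s2, x2 = sample2
    return spike_kernel(s1, s2) * lfp_kernel(x1, x2)
```

Because each factor is a positive definite kernel on its own space, the product is positive definite on the joint space, so it can be plugged directly into any kernel adaptive filter such as Q-KLMS.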
Figure 6: Autocorrelation of LFPs and spike trains for window size estimation (sample autocorrelation versus lag in ms).
5.3.1. Time Scale Estimation. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually based on their own properties. In order to find reasonable time scales, we estimate the autocorrelation coefficients of LFPs and spike trains, which indicate the response duration induced by the stimulation. For this purpose, spike trains are binned with a bin size of 1 ms. The LFPs are also resampled at a sampling rate of 1000 Hz. The autocorrelation coefficients of each signal, averaged over channels, are calculated by
\[
\rho_h = \frac{\sum_{t=h+1}^{T} (y_t - \bar{y})(y_{t-h} - \bar{y})}{\sum_{t=1}^{T} (y_t - \bar{y})^2}
\tag{10}
\]
The 95% confidence bounds for the hypothesis that the autocorrelation coefficient is effectively zero are approximately estimated by \(\pm 2\,\mathrm{SE}_{\rho}\), where
\[
\mathrm{SE}_{\rho} = \sqrt{\frac{1 + 2\sum_{i=1}^{h-1} \rho_i^2}{N}}
\tag{11}
\]
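Equations (10) and (11) can be computed directly. This sketch assumes a single binned or resampled channel; averaging over channels, as in the text, is a loop over calls.

```python
import numpy as np

def autocorr(y, max_lag):
    """Sample autocorrelation coefficients rho_h of (10), h = 0..max_lag."""
    y = np.asarray(y, float)
    yb = y - y.mean()
    denom = np.sum(yb ** 2)
    return np.array([np.sum(yb[h:] * yb[: len(y) - h]) / denom
                     for h in range(max_lag + 1)])

def conf_bound(rho, h, n):
    """Approximate 95% bound (+/- 2 * SE_rho) from (11) at lag h,
    using the already-computed coefficients rho[1..h-1]."""
    se = np.sqrt((1 + 2 * np.sum(np.asarray(rho)[1:h] ** 2)) / n)
    return 2 * se
```

The window sizes in the text follow from the first lag at which `autocorr` drops inside the `conf_bound` interval for each signal type.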
The average confidence bounds for LFPs and spike trains are [−0.032, 0.032] and [−0.031, 0.031], respectively. The autocorrelation coefficients of LFPs fall into the confidence interval after 20 ms, while the autocorrelation coefficients of spike trains die out after 9 ms, as shown in Figure 6. Therefore, the decoder inputs are obtained by sliding a window with a size of T_s = 9 ms for spike trains and T_x = 20 ms for LFPs. In addition, the time discretization for the stimuli is 5 ms.

The learning rates for each decoder are determined by the best cross-validation results after scanning the parameters. The kernel sizes σ_s and σ_x are determined by the average distance in the RKHS between each pair of training samples. The normalized mean square error (NMSE) between the estimated stimulus (y) and the desired stimulus (d) is utilized as the accuracy criterion.
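One common convention for the NMSE criterion, normalizing the mean square error by the target's variance, can be written as below; the paper does not spell out its exact normalizer, so this is an assumption.

```python
import numpy as np

def nmse(y, d):
    """Normalized mean square error between estimate y and target d.
    Normalization by the target variance (one common convention) makes
    NMSE = 1 for a decoder that always predicts the target mean."""
    y = np.asarray(y, float)
    d = np.asarray(d, float)
    return float(np.mean((y - d) ** 2) / np.mean((d - d.mean()) ** 2))
```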
Figure 7: Qualitative comparison of decoding performance on the first tactile stimulation trial among the LFP decoder, the spike decoder, and the spike and LFP decoder (normalized derivative of tactile force versus time, with the target trace for reference).
Table 1: Comparison among neural decoders.

Input                LFP and spike   LFP           Spike
NMSE (mean ± STD)    0.48 ± 0.05     0.55 ± 0.03   0.63 ± 0.11
5.3.2. Results for Decoding Tactile Stimulation. NMSEs of tactile stimulation are obtained across 8 trial data sets. For each trial, we use 20 s of data to train the decoders and compute an independent test error on the remaining 25 s of data. The results are shown in Table 1, where we can observe that the LFP and spike decoder significantly outperformed both the LFP decoder and the spike decoder, with P value < 0.05.

In order to illustrate the details of the decoding performance, a portion of the test results of the first trial is shown in Figure 7. It is observed that the output of the spike decoder fluctuates and misses some pulses (e.g., around 0.65 s) due to the sparsity and variability of spike trains. In contrast, the
Figure 8: Performance comparison of microstimulation reconstruction among the spike decoder, the LFP decoder, and the spike and LFP decoder, in terms of the NMSE for each of the 8 microstimulation channels.
output estimated by the LFP decoder is smooth and more robust than that of the spike decoder, but the decoded signal undershoots the maximum force deflections. The LFP and spike decoder performed better than the LFP decoder by reincorporating the precise pulse timing information from the spike trains.
5.3.3. Results for Decoding Electrical Microstimulation. We also implemented a decoder to reconstruct the microstimulation pattern. First, we mapped the 8 different stimulation configurations to 8 channels. We disregarded the shape of each stimulus, since the time scale of the stimulus width is only 200 μs. The desired stimulation pattern of each channel is represented by a sparse time series of the stimulation amplitudes.
NMSEs are obtained with ten subsequence decoding results. We used 120 s of data to train the decoders and computed an independent test error on the remaining 20 s of data. The spike and LFP decoder again outperformed both the LFP decoder and the spike decoder. The comparison of results is shown in Figure 8, which indicates that the spike and LFP decoder obtains the best performance across the stimulation channels, especially for channels 2, 4, 6, and 7. It is observed that stimulations on channels 4, 5, 6, 7, and 8 cannot be decoded by the LFP decoder at all, since the fine timing information is averaged out in LFPs. For the spike train decoder, the stimulation channels are not well discriminated. However, the combination of spike trains and LFPs enriches the stimulation information, which contributes to better discrimination of stimulation patterns among channels and also enables the model to capture the precise stimulation timing.
5.4. Open Loop Adaptive Inverse Control Results. The challenge of implementing a somatosensory prosthesis is to precisely control the neural response in order to mimic the neural response induced by natural stimulation. As discussed, the kernel-based adaptive inverse control scheme with the tensor-product kernel is applied to address this problem. The adaptive inverse control model is based on a decoder which maps the neural activity in S1 to the microstimulation delivered in VPL. We proceed to show how the adaptive inverse control model can emulate the neural response to "natural touch" using optimized microstimulation.
In the same recording, open loop adaptive inverse control via optimized thalamic (VPL) microstimulation is implemented. First, the inverse controller C(z) is trained with 300 s of the data generated by recording the responses to randomly patterned thalamic microstimulation. Then the first 60 s of the neural response recorded during tactile stimulation at each touch site is used as the target pattern and control input. When this entire neural response sequence is fed offline to the controller, it generates a corresponding sequence of multichannel microstimulation amplitudes.
However, the generated microstimulation sequence needs further processing to meet the restrictions of bipolar microstimulation before it is applied to VPL. The restrictions and processing are the following.
(1) The minimal interval between two stimuli is 10 ms, as suggested by the experimental setting. The mean shift algorithm [55] is used to locate the local maxima of a subsequence of stimuli (10 ms) for each single channel. The maximum amplitude and corresponding timing are used to set the amplitude and time of the stimuli.
(2) At any given time point, only a single pulse across all channels can be stimulated. Therefore, at each time point only the maximum value across channels is selected for stimulation. The values at the other channels are set to zero.
(3) The maximum/minimum stimulation amplitude is set in the range [8 μA–30 μA], which has been suggested as the effective and safe amplitude range in previous experiments.
After this processing, the generated multichannel microstimulation sequence (60 s in duration) is ready to be applied to the microstimulator immediately following computation.
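A sketch of the three restrictions above, with the mean shift step approximated by a windowed arg-max for brevity; the function name and array layout are illustrative, and small requested amplitudes are raised to the 8 μA floor in this sketch rather than dropped.

```python
import numpy as np

def postprocess_stimulation(raw, dt_ms=5.0, min_interval_ms=10.0,
                            amp_range=(8.0, 30.0)):
    """Convert the controller's continuous multichannel output into a valid
    bipolar stimulation sequence. raw: (channels, time_bins) amplitudes."""
    n_ch, n_t = raw.shape
    win = max(1, int(round(min_interval_ms / dt_ms)))
    out = np.zeros_like(raw)
    # (1) keep only the local maximum in each minimal-interval window,
    #     per channel (stands in for mean shift peak finding)
    for c in range(n_ch):
        for start in range(0, n_t, win):
            seg = raw[c, start:start + win]
            k = int(np.argmax(seg))
            out[c, start + k] = seg[k]
    # (2) a single pulse per time point: keep the largest channel only
    best = np.argmax(out, axis=0)
    mask = np.zeros_like(out, dtype=bool)
    mask[best, np.arange(n_t)] = True
    out[~mask] = 0.0
    # (3) clip the surviving amplitudes into the safe/effective range
    nz = out > 0
    out[nz] = np.clip(out[nz], *amp_range)
    return out
```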
The neural response to the microstimulation is recorded and compared with the target natural response. Ideally, these two neural response sequences should be time-locked and very similar. In particular, the portions of the controlled response in windows corresponding to a natural touch should match. As this corresponds to a virtual touch delivered by the optimized microstimulation, we define the term virtual touch to refer to the sequence of microstimulation patterns (the output of the controller) corresponding to a particular target natural touch.
Portions of the neural response for both natural and virtual touches are shown in Figure 9. The responses are aligned to the same time scale even though they were not recorded concurrently. It is clear that the multiunit neural responses recorded during the controlled microstimulation share similar spatiotemporal patterns with those in the target set. Each virtual touch is achieved by a sequence of microstimulation pulses that evokes synchronized bursting across the neurons. In addition, microstimulation pulses are delivered in between touch times to mimic population spiking that is not associated with the touch timing.
Figure 9: Neural responses to natural and virtual touches for touch on digit 1 (d1), along with the microstimulation corresponding to the virtual touches. Each of the four subfigures corresponds to a different segment of the continuous recording. In each subfigure, the timing of the touches and the spatiotemporal patterns of spike trains and LFPs are shown in the top two panels; the bottom panel shows the spatiotemporal pattern of microstimulation, where different colors represent different microstimulation channels. The neural responses are time-locked but not concurrently recorded, as the entire natural touch response is given as input to the controller, which generates the optimized microstimulation patterns. When the optimized microstimulation is applied in the VPL, it generates S1 neural responses that qualitatively match the natural, that is, the target, response.
Figure 10: Correlation coefficients between the controlled neural system output and the corresponding target neural response stimulated by actual touch, shown for spike trains and LFPs as a function of lag (s). Boxplots of correlation coefficients represent the results of 6 test trials. Each trial corresponds to a particular touch site (digits d1, d2, d4, p3, p1, and mp).
To evaluate performance, we concentrate on the following two aspects of virtual touches.

(i) Touch Timing. Whether the neural response to virtual touch is capable of capturing the timing information of the actual target touch is studied.

(ii) Touch Site. Whether the actual target touch site information can be discriminately represented by neural activity controlled by the microstimulation is studied.
For touch timing, we estimate the correlation coefficients (CC) over time between virtual touch responses and the corresponding target natural touch responses. To simplify the spike train correlation estimation, we bin the data using 5 ms bins. The correlation coefficients of both spike trains and LFPs are calculated. Figure 10 shows the boxplot of the correlation coefficients (CC) over 6 test trials, each of which corresponds to a particular natural touch site, a forepaw digit or pad (d1, d2, d4, p3, p1, or mp). It is observed that the maximum correlation coefficient is at lag zero for each trial, meaning that the virtual touch response is correctly time-locked. For each touch site, we estimate the similarity between the natural touch response and virtual touch response in the following two cases.
(i) Matched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to the same touch site.

(ii) Unmatched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to different touch sites.
We extract all the neural responses in the 300 ms window after touch onset and calculate the correlation coefficients between natural touch responses and virtual touch responses across each pair of trials. The one-tailed Kolmogorov-Smirnov (KS) test is implemented to test the alternative
Table 2: Average and standard deviation of the correlation coefficient (CC) between natural touch spike train responses and virtual touch spike train responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site   Matched virtual CC   Unmatched virtual CC   P value
d1           0.42 ± 0.06          0.35 ± 0.06            0.00
d2           0.40 ± 0.05          0.37 ± 0.06            0.01
d4           0.40 ± 0.05          0.37 ± 0.05            0.02
p3           0.38 ± 0.05          0.37 ± 0.06            0.11
p2           0.40 ± 0.07          0.36 ± 0.05            0.00
mp           0.41 ± 0.07          0.37 ± 0.06            0.00
hypothesis that the distribution of the correlation coefficients for the matched virtual case is higher than the distribution for the unmatched virtual case (the null hypothesis is that the distributions are the same). The correlation coefficients and P values of the KS test for spike trains and LFPs are shown in Tables 2 and 3. The similarity between natural touch responses and virtual touch responses in the unmatched virtual case is found to be significantly lower than in the matched virtual case for most touch sites (P value < 0.05), except for touch site p3. Without psychophysical testing, it is unclear how effective the microstimulations are in producing true sensory sensations. Nonetheless, these are promising results showing the effectiveness of a controller utilizing the multiscale neural decoding methodology.
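Both evaluation steps can be sketched with standard tools: correlation coefficients as a function of lag for the time-locking check, and SciPy's two-sample one-sided KS test for the matched-versus-unmatched comparison. The helper names are hypothetical; note that `alternative="less"` asserts that the empirical CDF of the first sample lies below that of the second, that is, that the matched CCs tend to be larger.

```python
import numpy as np
from scipy.stats import ks_2samp

def cc_at_lags(a, b, max_lag):
    """Correlation coefficient between two (binned) response traces at each
    integer lag; a peak at lag zero indicates time-locked responses."""
    lags = list(range(-max_lag, max_lag + 1))
    ccs = []
    for h in lags:
        if h >= 0:
            x, y = a[h:], b[: len(b) - h]
        else:
            x, y = a[: len(a) + h], b[-h:]
        ccs.append(np.corrcoef(x, y)[0, 1])
    return np.array(lags), np.array(ccs)

def matched_vs_unmatched(matched_ccs, unmatched_ccs):
    """One-sided two-sample KS test: small p-value supports the alternative
    that matched CCs are distributed higher than unmatched CCs."""
    return ks_2samp(matched_ccs, unmatched_ccs, alternative="less").pvalue
```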
6. Conclusions
This work proposes a novel tensor-product-kernel-based machine learning framework, which provides a way to
Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site   Matched virtual CC   Unmatched virtual CC   P value
d1           0.42 ± 0.20          0.28 ± 0.23            0.00
d2           0.46 ± 0.13          0.28 ± 0.22            0.00
d4           0.41 ± 0.19          0.26 ± 0.21            0.00
p3           0.38 ± 0.18          0.29 ± 0.22            0.07
p2           0.33 ± 0.19          0.26 ± 0.23            0.20
mp           0.34 ± 0.17          0.25 ± 0.21            0.00
decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs contain complementary information that can enhance neural data decoding; however, a systematic approach to combine these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined in both the spike train space and the LFP space, seems to be a very productive approach to achieve our goals. We have basically used two types of combination kernels to achieve the multiscale combination: sum kernels to "average" across different spike channels as well as across LFP channels, which combine evidence for the neural event in each modality, and product kernels across the spike and LFP modalities to emphasize events that are represented in both multiscale modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step in a long process of optimizing the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires that the neural event be present at both scales to be useful for decoding, which improves specificity. If we look carefully at Figure 6, we can understand the effect of decoding with the product kernel. Notice that the correlation times of spikes and LFPs are very different (LFPs have a much longer correlation time). Moreover, the composite kernel definition can be naturally configured to different brain areas and even neuronal types with distinctive firing patterns. Each pattern will lead to different correlation profiles, which will immediately tune the properties of the kernels across brain areas and neural populations. If only LFPs are used, we can expect that the response time of the decoder will be very long and will miss some events. The product kernel in fact limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel. This explains exactly the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information from spikes and LFPs into the same model and enhance the neural decoding robustness and accuracy.
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: how to emulate "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes or LFPs alone (results not shown). This result can be expected because the inverse controller is basically a decoder. However, we have to realize that not all tasks of interest reduce to neural decoding, and we do not even know if neural control can be further improved by a different kernel design. This is where further research is necessary to optimize the joint kernels. For instance, we can weight both the channel information and the multiscale information to maximize the task performance using metric learning [40].
Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical framework for leveraging heterogeneous neural activities in decoding and control scenarios, and it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J. K. Chapin, K. A. Moxon, R. S. Markowitz, and M. A. L. Nicolelis, "Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex," Nature Neuroscience, vol. 2, no. 7, pp. 664–670, 1999.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098–1101, 2008.
[3] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, "Direct cortical control of 3D neuroprosthetic devices," Science, vol. 296, no. 5574, pp. 1829–1832, 2002.
[4] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141–142, 2002.
[5] J. M. Carmena, M. A. Lebedev, R. E. Crist et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, pp. 193–208, 2003.
[6] J. DiGiovanna, B. Mahmoudi, J. Fortes, J. C. Principe, and J. C. Sanchez, "Coadaptive brain-machine interface via reinforcement learning," IEEE Transactions on Biomedical Engineering, vol. 56, no. 1, pp. 54–64, 2009.
[7] A Belitski S Panzeri CMagri N K Logothetis andC KayserldquoSensory information in local field potentials and spikes fromvisual and auditory cortices time scales and frequency bandsrdquoJournal of Computational Neuroscience vol 29 no 3 pp 533ndash545 2010
[8] J R Huxter T J Senior K Allen and J Csicsvari ldquoThetaphase-specific codes for two-dimensional position trajectoryand heading in the hippocampusrdquo Nature Neuroscience vol 11no 5 pp 587ndash594 2008
[9] M J Rasch A Gretton Y Murayama W Maass and N KLogothetis ldquoInferring spike trains from local field potentialsrdquoJournal of Neurophysiology vol 99 no 3 pp 1461ndash1476 2008
[10] G Buzsaki and A Draguhn ldquoNeuronal oscillation in corticalnetworksrdquo Science vol 304 no 5679 pp 1926ndash1929 2004
[11] D L Snyder and M I Miller Random Point Processes in Timeand Space Springer 1991
[12] R E Kass V Ventura and E N Brown ldquoStatistical issues in theanalysis of neuronal datardquo Journal of Neurophysiology vol 94no 1 pp 8ndash25 2005
[13] R D Flint E W Lindberg L R Jordan L E Miller and MW Slutzky ldquoAccurate decoding of reaching movements fromfield potentials in the absence of spikesrdquo Journal of NeuralEngineering vol 9 no 4 Article ID 046006 2012
[14] R C Kelly M A Smith R E Kass and T S Lee ldquoLocalfield potentials indicate network state and account for neuronalresponse variabilityrdquo Journal of Computational Neurosciencevol 29 no 3 pp 567ndash579 2010
[15] J Liu andW T Newsome ldquoLocal field potential in cortical areaMT stimulus tuning and behavioral correlationsrdquo Journal ofNeuroscience vol 26 no 30 pp 7779ndash7790 2006
[16] P Berens G Keliris A EckerN Logothetis andA Tolias ldquoFea-ture selectivity of the gamma-band of the local field potential inprimate primary visual cortexrdquo Frontiers in Neuroscience vol 2Article ID 199207 2008
[17] D Xing C-I Yeh andRM Shapley ldquoSpatial spread of the localfield potential and its laminar variation in visual cortexrdquo Journalof Neuroscience vol 29 no 37 pp 11540ndash11549 2009
[18] B Scholkopf and A J Smola Learning With Kernels SupportVectorMachines Regularization Optimization and Beyond SerAdaptive Computation andMachine Learning MIT Press 2002
[19] L Li I M Park A Brockmeier et al ldquoAdaptive inverse controlof neural spatiotemporal spike patterns with a reproducingkernel Hilbert space (RKHS) frameworkrdquo IEEE Transactions onNeural Systems and Rehabilitation Engineering vol 21 no 4 pp532ndash543 2013
[20] S Monaco G Kroliczak D J Quinlan et al ldquoContributionof visual and proprioceptive information to the precision ofreaching movementsrdquo Experimental Brain Research vol 202no 1 pp 15ndash32 2010
[21] R. S. Johansson and G. Westling, "Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects," Experimental Brain Research, vol. 56, no. 3, pp. 550–564, 1984.
[22] J. E. O'Doherty, M. A. Lebedev, P. J. Ifft et al., "Active tactile exploration using a brain-machine-brain interface," Nature, vol. 479, no. 7372, pp. 228–231, 2011.
[23] J. E. O'Doherty, M. A. Lebedev, T. L. Hanson, N. A. Fitzsimmons, and M. A. Nicolelis, "A brain-machine interface instructed by direct intracortical microstimulation," Frontiers in Integrative Neuroscience, vol. 3, 2009.
[24] N. A. Fitzsimmons, W. Drake, T. L. Hanson, M. A. Lebedev, and M. A. L. Nicolelis, "Primate reaching cued by multichannel spatiotemporal cortical microstimulation," The Journal of Neuroscience, vol. 27, no. 21, pp. 5593–5602, 2007.
[25] X.-J. Feng, B. Greenwald, H. Rabitz, E. Shea-Brown, and R. Kosut, "Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model," Journal of Neural Engineering, vol. 4, no. 2, pp. L14–L21, 2007.
[26] J. Liu, H. K. Khalil, and K. G. Oweiss, "Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states," Journal of Neural Engineering, vol. 8, no. 4, Article ID 045002, 2011.
[27] A. J. Brockmeier, J. S. Choi, M. M. Distasio, J. T. Francis, and J. C. Príncipe, "Optimizing microstimulation using a reinforcement learning framework," in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 1069–1072, September 2011.
[28] A. Brockmeier, J. Choi, M. Emigh, L. Li, J. Francis, and J. Principe, "Subspace matching thalamic microstimulation to tactile evoked potentials in rat somatosensory cortex," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2957–2960, 2012.
[29] Y. Ahmadian, A. M. Packer, R. Yuste, and L. Paninski, "Designing optimal stimuli to control neuronal spike timing," Journal of Neurophysiology, vol. 106, no. 2, pp. 1038–1053, 2011.
[30] J. Moehlis, E. Shea-Brown, and H. Rabitz, "Optimal inputs for phase models of spiking neurons," Journal of Computational and Nonlinear Dynamics, vol. 1, no. 4, pp. 358–367, 2006.
[31] D. Brugger, S. Butovas, M. Bogdan, and C. Schwarz, "Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials," IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1483–1491, 2011.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[33] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[34] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering, John Wiley & Sons, 2010.
[35] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
[36] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," The Journal of Machine Learning Research, vol. 5, pp. 27–72, 2004.
[37] C. Cortes, M. Mohri, and A. Rostamizadeh, "Algorithms for learning kernels based on centered alignment," Journal of Machine Learning Research, vol. 13, pp. 795–828, 2012.
[38] M. Yamada, W. Jitkrittum, L. Sigal, E. P. Xing, and M. Sugiyama, "High-dimensional feature selection by feature-wise kernelized lasso," Neural Computation, vol. 26, no. 1, pp. 185–207, 2013.
[39] R. A. Horn, "On infinitely divisible matrices, kernels, and functions," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 8, no. 3, pp. 219–230, 1967.
[40] A. J. Brockmeier, J. S. Choi, E. G. Kriminger, J. T. Francis, and J. C. Principe, "Neural decoding with kernel-based metric learning," Neural Computation, vol. 26, no. 6, 2014.
16 Computational Intelligence and Neuroscience
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, Spikernels: Embedding Spiking Neurons in Inner Product Spaces, vol. 15 of Advances in Neural Information Processing Systems, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Principe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223–2250, 2012.
[44] L. Li, Kernel based machine learning framework for neural decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453–466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213–225, 1991.
[48] L. Csató and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641–668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275–2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950–1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291–2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268–280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with application in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975.
Figure 4: Bipolar microstimulation patterns applied in the sensory stimulation experiment (stimulation amplitude in μA on each of the 16 stimulation channels, for the 24 stimulation pattern classes).
After induction using isoflurane, urethane was used to maintain anesthetic depth. The array in VPL was a 2 × 8 grid of 70% platinum/30% iridium 75 μm diameter microelectrodes (MicroProbes Inc.) with 500 μm between the rows and 250 μm interelectrode spacing within the rows. The microelectrodes had a 25:1 taper on the distal 5 mm with a tip diameter of 3 μm. The approximate geometric surface area of the conducting tips was 1250 μm². The shank lengths were custom designed to fit the contour of the rat VPL [52]. Both rows were identical, and the shaft lengths for each row from medial to lateral were (8, 8, 8, 8, 8, 7.8, 7.6, 7.4) mm. The long axis of the VPL array was oriented along the rat's mediolateral axis.

The cortical electrode array (Blackrock Microsystems) was a 32-channel Utah array. The electrodes are arranged in a 6 × 6 grid excluding the 4 corners, and each electrode is 1.5 mm long. A single craniotomy that exposed the cortical insertion sites for both arrays was made, and after several probing insertions with a single microelectrode (FHC) in an area 1 mm surrounding the stereotaxic coordinates for the digit region of S1 (4.0 mm lateral and 0.5 mm anterior to bregma) [53, 54], the Utah array was inserted using a pneumatic piston. The electrodes cover somatosensory areas of the S1 cortex and the VPL nucleus of the thalamus [52]. Neural recordings were made using a multichannel acquisition system (Tucker-Davis).

Spike and field potential data were collected while the rat was maintained under anesthesia. The electrode voltages were preamplified with a gain of 1000, filtered with cutoffs at 0.7 Hz and 8.8 kHz, and digitized at 25 kHz. LFPs are further filtered from 1 to 300 Hz using a 3rd-order Butterworth filter. Spike sorting is achieved using k-means clustering of the first 2 principal components of the detected waveforms.

The experiment involves delivering microstimulation to VPL and tactile stimulation to the rat's fingers in separate sessions. Microstimulation is administered on adjacent pairs
Figure 5: Rat neural response elicited by tactile stimulation. The upper plot shows the normalized derivative of tactile force; the remaining two plots show the corresponding LFPs (voltage in mV, per channel) and spike trains evoked by the tactile stimulation, over time (s).
(bipolar configurations) of the thalamic array. The stimulation waveforms are single symmetric biphasic rectangular current pulses; each rectangular pulse is 200 μs long and has an amplitude of either 10 μA, 20 μA, or 30 μA. Interstimulus intervals are exponentially distributed with a mean interval of 100 ms. Stimulus isolation used a custom-built switching headstage. The bipolar microstimulation pulses are delivered in the thalamus. There are 24 patterns of microstimulation: 8 different sites and 3 different amplitude levels for each site, as shown in Figure 4. Each pattern is delivered 125 times.

The experimental procedure also involves delivering 30–40 short (100 ms) tactile touches to the rat's fingers (repeated for digit pads 1–4) using a hand-held probe. The rat remained anesthetized for the recording duration. The applied force is measured using a lever attached to the probe that pressed against a thin-film resistive force sensor (Trossen Robotics) when the probe tip contacted the rat's body. The resistive changes were converted to voltage using a bridge circuit and were filtered and digitized in the same way as described above. The digitized waveforms were filtered with a passband between 1 and 60 Hz using a 3rd-order Butterworth filter. The first derivative of this signal is used as the desired stimulation signal, which is shown in Figure 5.
5.3. Decoding Results. We now present the decoding results for the tactile stimulus waveform and microstimulation using Q-KLMS operating on the tensor-product kernel. The performance using the multiscale neural activity, both spike trains and LFPs, is compared with that of decoders using a single type of neural activity. This illustrates the effectiveness of the tensor-product-kernel-based framework in exploiting the complementary information in multiscale neural activity.
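Q-KLMS grows a dictionary of kernel centers online and, when a new input falls within a quantization radius of an existing center, merges the update into that center instead of adding a new one. The following is a minimal sketch of that idea under stated assumptions: a plain Gaussian kernel on generic vector inputs stands in for the tensor-product kernel used here, and `eta`, `sigma`, and `eps` are placeholder values, not the paper's settings.

```python
import numpy as np

class QKLMS:
    """Minimal quantized kernel LMS sketch (assumed Gaussian kernel)."""

    def __init__(self, eta=0.5, sigma=1.0, eps=0.1):
        self.eta, self.sigma, self.eps = eta, sigma, eps
        self.centers, self.alphas = [], []

    def _k(self, x, c):
        # Gaussian kernel between input x and dictionary center c
        d = x - c
        return np.exp(-np.dot(d, d) / (2.0 * self.sigma ** 2))

    def predict(self, x):
        return sum(a * self._k(x, c) for a, c in zip(self.alphas, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)          # prediction error on desired output d
        if not self.centers:
            self.centers.append(x)
            self.alphas.append(self.eta * e)
            return e
        dists = [np.linalg.norm(x - c) for c in self.centers]
        j = int(np.argmin(dists))
        if dists[j] <= self.eps:
            # quantization: merge the update into the nearest existing center
            self.alphas[j] += self.eta * e
        else:
            self.centers.append(x)
            self.alphas.append(self.eta * e)
        return e
```

With repeated presentations of the same input, the single merged center's weight converges toward the target, which is the behavior quantization is meant to preserve while bounding dictionary growth.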
Figure 6: Autocorrelation of LFPs and spike trains for window size estimation (sample autocorrelation versus lag in ms).
5.3.1. Time Scale Estimation. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually, based on their own properties. In order to find reasonable time scales, we estimate the autocorrelation coefficients of LFPs and spike trains, which indicate the response duration induced by the stimulation. For this purpose, spike trains are binned with a bin size of 1 ms. The LFPs are also resampled with a sampling rate of 1000 Hz. The autocorrelation coefficients of each signal, averaged over channels, are calculated by

\[
\rho_h = \frac{\sum_{t=h+1}^{T} \left(y_t - \bar{y}\right)\left(y_{t-h} - \bar{y}\right)}{\sum_{t=1}^{T} \left(y_t - \bar{y}\right)^2}.
\tag{10}
\]
The 95% confidence bounds for the hypothesis that the autocorrelation coefficient is effectively zero are approximately estimated by ±2 SE_ρ, where

\[
\mathrm{SE}_\rho = \sqrt{\frac{1 + 2\sum_{i=1}^{h-1} \rho_i^2}{N}}.
\tag{11}
\]
The average confidence bounds for LFPs and spike trains are [−0.032, 0.032] and [−0.031, 0.031], respectively. The autocorrelation coefficients of LFPs fall into the confidence interval after 20 ms, while the autocorrelation coefficients of spike trains die out after 9 ms, as shown in Figure 6. Therefore, the decoder inputs are obtained by sliding a window with a size of T_s = 9 ms for spike trains and T_x = 20 ms for LFPs. In addition, the time discretization for the stimuli is 5 ms.

The learning rates for each decoder are determined by the best cross-validation results after scanning the parameters. The kernel sizes σ_s and σ_x are determined by the average distance in the RKHS between each pair of training samples. The normalized mean square error (NMSE) between the estimated stimulus (y) and the desired stimulus (d) is utilized as the accuracy criterion.
Figure 7: Qualitative comparison of decoding performance on the first tactile stimulation trial among the LFP decoder, the spike decoder, and the spike and LFP decoder (normalized derivative of tactile force versus time in s, against the target).
Table 1: Comparison among neural decoders.

Input                LFP and spike   LFP           Spike
NMSE (mean ± STD)    0.48 ± 0.05     0.55 ± 0.03   0.63 ± 0.11
5.3.2. Results for Decoding Tactile Stimulation. NMSEs for decoding tactile stimulation are obtained across 8 trial data sets. For each trial, we use 20 s of data to train the decoders and compute an independent test error on the remaining 25 s of data. The results are shown in Table 1, where we can observe that the LFP and spike decoder significantly outperformed both the LFP decoder and the spike decoder (P value < 0.05).

In order to illustrate the details of the decoding performance, a portion of the test results of the first trial is shown in Figure 7. It is observed that the output of the spike decoder fluctuates and misses some pulses (e.g., around 0.65 s) due to the sparsity and variability of spike trains. In contrast, the
Figure 8: Comparison of microstimulation reconstruction performance among the spike decoder, the LFP decoder, and the spike and LFP decoder, in terms of the NMSE for each of the 8 microstimulation channels.
output estimated by the LFP decoder is smooth and more robust than that of the spike decoder, but the decoded signal undershoots the maximum force deflections. The LFP and spike decoder performed better than the LFP decoder by reincorporating the precise pulse timing information from the spike trains.
5.3.3. Results for Decoding Electrical Microstimulation. We also implemented a decoder to reconstruct the microstimulation pattern. First, we mapped the 8 different stimulation configurations to 8 channels. We disregarded the shape of each stimulus, since the time scale of the stimulus width is only 200 μs. The desired stimulation pattern of each channel is represented by a sparse time series of the stimulation amplitudes.

NMSEs are obtained with ten subsequence decoding results. We used 120 s of data to train the decoders and computed an independent test error on the remaining 20 s of data. The spike and LFP decoder again outperformed both the LFP decoder and the spike decoder. The comparison of results is shown in Figure 8, which indicates that the spike and LFP decoder obtains the best performance across the stimulation channels, especially for channels 2, 4, 6, and 7. It is observed that stimulation on channels 4, 5, 6, 7, and 8 cannot be decoded by the LFP decoder at all, since the fine timing information is averaged out in the LFPs. For the spike train decoder, the stimulation channels are not well discriminated. However, the combination of spike trains and LFPs enriches the stimulation information, which contributes to better discrimination of stimulation patterns among channels and also enables the model to capture the precise stimulation timing.
5.4. Open Loop Adaptive Inverse Control Results. The challenge of implementing a somatosensory prosthesis is to precisely control the neural response in order to mimic the neural response induced by natural stimulation. As discussed, the kernel-based adaptive inverse control scheme with the tensor-product kernel is applied to address this problem. The adaptive inverse control model is based on a decoder which maps the neural activity in S1 to the microstimulation delivered in VPL. We proceed to show how the adaptive inverse control model can emulate the response to "natural touch" using optimized microstimulation.

In the same recording, open loop adaptive inverse control via optimized thalamic (VPL) microstimulation is implemented. First, the inverse controller C(z) is trained with 300 s of the data generated by recording the response to randomly patterned thalamic microstimulation. Then the first 60 s of the neural response recorded during tactile stimulation at each touch site is used as the target pattern and control input. When this entire neural response sequence is fed offline to the controller, it generates a corresponding sequence of multichannel microstimulation amplitudes.
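Schematically, the open loop procedure fits an inverse model on (neural response, microstimulation) pairs from the random-stimulation recording and then replays the natural-touch responses through it. In the sketch below, a plain linear LMS filter stands in for the kernel inverse controller C(z) (the actual controller is Q-KLMS with the tensor-product kernel), and the dimensions, step size, and synthetic data are assumptions.

```python
import numpy as np

class LMSInverseController:
    """Linear LMS stand-in for the inverse controller C(z)."""

    def __init__(self, n_in, n_out, eta=0.05):
        self.W = np.zeros((n_out, n_in))
        self.eta = eta

    def predict(self, r):
        # neural-response feature window -> multichannel stim amplitudes
        return self.W @ r

    def update(self, r, s):
        e = s - self.predict(r)              # instantaneous error
        self.W += self.eta * np.outer(e, r)  # LMS weight update
        return e

# Stage 1: train on the random-microstimulation recording
rng = np.random.default_rng(1)
W_true = rng.standard_normal((3, 8))         # hypothetical response->stim map
ctrl = LMSInverseController(n_in=8, n_out=3)
for _ in range(3000):
    r = rng.standard_normal(8)
    ctrl.update(r, W_true @ r)

# Stage 2: open loop control, feeding target (natural touch) responses offline
targets = [rng.standard_normal(8) for _ in range(5)]
stim_sequence = [ctrl.predict(r) for r in targets]
```

The design point is that the controller is only ever trained where the stimulation is known (the random-stimulation data) and is then applied open loop to the target responses.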
However, the generated microstimulation sequence needs further processing to meet the restrictions of bipolar microstimulation before it is applied to VPL. The restrictions and processing are the following.

(1) The minimal interval between two stimuli is 10 ms, as suggested by the experimental setting. The mean shift algorithm [55] is used to locate the local maxima of a subsequence of stimuli (10 ms) for each single channel. The maximum amplitude and corresponding timing are used to set the amplitude and time of the stimuli.

(2) At any given time point, only a single pulse across all channels can be stimulated. Therefore, at each time point, only the maximum value across channels is selected for stimulation; the values at the other channels are set to zero.

(3) The maximum/minimum stimulation amplitude is set in the range [8 μA–30 μA], which has been suggested as the effective and safe amplitude range in previous experiments.
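These three restrictions can be sketched in code. This is a simplified version: a windowed argmax replaces the mean shift mode seeking of [55], and the function name, bin width, and array layout are assumptions.

```python
import numpy as np

def enforce_constraints(stim, dt_ms=1.0, min_isi_ms=10, amp_range=(8.0, 30.0)):
    """stim: (T, C) array of raw controller outputs (uA), one column per
    bipolar channel. Returns a sparse stimulation array obeying the rules."""
    T, C = stim.shape
    out = np.zeros_like(stim)
    win = int(min_isi_ms / dt_ms)

    # Rule 1: at most one pulse per channel per 10 ms window
    # (simplified: keep the windowed local maximum instead of mean shift)
    for c in range(C):
        t = 0
        while t < T:
            seg = stim[t:t + win, c]
            j = int(np.argmax(seg))
            if seg[j] > 0:
                out[t + j, c] = seg[j]
            t += win

    # Rule 2: only a single pulse across all channels at any time point
    for t in range(T):
        if np.count_nonzero(out[t]) > 1:
            j = int(np.argmax(out[t]))
            keep = out[t, j]
            out[t] = 0
            out[t, j] = keep

    # Rule 3: clip nonzero amplitudes into the safe range [8, 30] uA
    nz = out > 0
    out[nz] = np.clip(out[nz], *amp_range)
    return out
```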
After this processing, the generated multichannel microstimulation sequence (60 s in duration) is ready to be applied to the microstimulator immediately following computation.

The neural response to the microstimulation is recorded and compared with the target natural response. Ideally, these two neural response sequences should be time-locked and very similar. In particular, the portions of the controlled response in windows corresponding to a natural touch should match. As this corresponds to a virtual touch delivered by the optimized microstimulation, we define the term virtual touch to refer to the sequence of microstimulation patterns (the output of the controller) corresponding to a particular target natural touch.

Portions of the neural response for both natural and virtual touches are shown in Figure 9. The responses are aligned to the same time scale even though they were not recorded concurrently. It is clear that the multiunit neural responses recorded during the controlled microstimulation share similar spatiotemporal patterns with those in the target set. Each virtual touch is achieved by a sequence of microstimulation pulses that evokes synchronized bursting across the neurons. In addition, microstimulation pulses are delivered in between touch times to mimic population spiking that is not associated with the touch timing.
Figure 9: Neural responses to natural and virtual touches for touch on digit 1 (d1), along with the microstimulation corresponding to the virtual touches. Each of the four subfigures corresponds to a different segment of the continuous recording. In each subfigure, the timing of the touches and the spatiotemporal patterns of spike trains and LFPs are shown in the top two panels; the bottom panel shows the spatiotemporal pattern of microstimulation, where different colors represent different microstimulation channels. The neural responses are time-locked but not concurrently recorded, as the entire natural touch response is given as input to the controller, which generates the optimized microstimulation patterns. When the optimized microstimulation is applied in the VPL, it generates S1 neural responses that qualitatively match the natural, that is, the target, response.
Figure 10: Correlation coefficients between the controlled neural system output and the corresponding target neural response evoked by actual touch, for spike trains and local field potentials, as a function of lag (s). The boxplots of the correlation coefficients represent the results of 6 test trials; each trial corresponds to a particular touch site (digits d1, d2, d4, p3, p1, and mp).
To evaluate performance, we concentrate on the following two aspects of virtual touches.

(i) Touch Timing. Whether the neural response to a virtual touch is capable of capturing the timing information of the actual target touch.

(ii) Touch Site. Whether the actual target touch site information can be discriminately represented by the neural activity controlled by the microstimulation.
For touch timing, we estimate the correlation coefficients (CC) over time between virtual touch responses and the corresponding target natural touch responses. To simplify the spike train correlation estimation, we bin the data using 5 ms bins. The correlation coefficients of both spike trains and LFPs are calculated. Figure 10 shows the boxplot of the correlation coefficients over 6 test trials, each of which corresponds to a particular natural touch site, a forepaw digit or pad (d1, d2, d4, p3, p1, or mp). It is observed that the maximum correlation coefficient is at lag zero for each trial, meaning that the virtual touch response is correctly time-locked. For each touch site, we estimate the similarity between the natural touch response and the virtual touch response in the following two cases.

(i) Matched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to the same touch site.

(ii) Unmatched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to different touch sites.

We extract all the neural responses in the 300 ms window after touch onset and calculate the correlation coefficients between natural touch responses and virtual touch responses across each pair of trials. The one-tailed Kolmogorov-Smirnov (KS) test is implemented to test the alternative
Table 2: Average and standard deviation of the correlation coefficient (CC) between natural touch spike train responses and virtual touch spike train responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site   Matched virtual CC   Unmatched virtual CC   P value
d1           0.42 ± 0.06          0.35 ± 0.06            0.00
d2           0.40 ± 0.05          0.37 ± 0.06            0.01
d4           0.40 ± 0.05          0.37 ± 0.05            0.02
p3           0.38 ± 0.05          0.37 ± 0.06            0.11
p2           0.40 ± 0.07          0.36 ± 0.05            0.00
mp           0.41 ± 0.07          0.37 ± 0.06            0.00
hypothesis that the distribution of the correlation coefficients for the matched virtual case is higher than the distribution for the unmatched virtual case (the null hypothesis is that the distributions are the same). The correlation coefficients and P values of the KS test for spike trains and LFPs are shown in Tables 2 and 3. The similarity between natural touch responses and virtual touch responses in the unmatched virtual case is found to be significantly lower than in the matched virtual case for most touch sites (P value < 0.05), except for touch site p3. Without psychophysical testing, it is unclear how effective the microstimulations are in producing true sensory sensations. Nonetheless, these are promising results showing the effectiveness of a controller utilizing the multiscale neural decoding methodology.
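For reference, the one-sided two-sample KS comparison can be sketched as follows. This is a self-contained sketch using the asymptotic one-sided p-value exp(−2D²nm/(n+m)) rather than a library routine, and the sample values in the usage below are hypothetical, not the experimental CCs.

```python
import numpy as np

def ks_one_sided(matched, unmatched):
    """One-sided two-sample KS test of whether the matched CCs are
    stochastically larger: D = sup_x [F_unmatched(x) - F_matched(x)],
    with asymptotic p-value exp(-2 * D^2 * n*m / (n+m))."""
    matched = np.sort(np.asarray(matched, dtype=float))
    unmatched = np.sort(np.asarray(unmatched, dtype=float))
    grid = np.concatenate([matched, unmatched])
    # empirical CDFs of each sample evaluated on the pooled grid
    F_m = np.searchsorted(matched, grid, side="right") / matched.size
    F_u = np.searchsorted(unmatched, grid, side="right") / unmatched.size
    D = float(np.max(F_u - F_m))
    n, m = matched.size, unmatched.size
    p = float(np.exp(-2.0 * D ** 2 * n * m / (n + m)))
    return D, p
```

A large positive D means the matched CC distribution sits to the right of the unmatched one, and a small p rejects the null that the two distributions are the same.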
6. Conclusions

This work proposes a novel tensor-product-kernel-based machine learning framework, which provides a way to
Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site   Matched virtual CC   Unmatched virtual CC   P value
d1           0.42 ± 0.20          0.28 ± 0.23            0.00
d2           0.46 ± 0.13          0.28 ± 0.22            0.00
d4           0.41 ± 0.19          0.26 ± 0.21            0.00
p3           0.38 ± 0.18          0.29 ± 0.22            0.07
p2           0.33 ± 0.19          0.26 ± 0.23            0.20
mp           0.34 ± 0.17          0.25 ± 0.21            0.00
decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs contain complementary information that can enhance neural data decoding; however, a systematic approach to combining these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined in both the spike train space and the LFP space, seems to be a very productive approach to achieve our goals. We have basically used two types of combination kernels to achieve the multiscale combination: sum kernels to "average" across different spike channels as well as across LFP channels, which combine evidence for the neural event in each modality, and product kernels across the spike and LFP modalities to emphasize events that are represented in both multiscale modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step in a long process of optimizing the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events, because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires that the neural event be present at both scales to be useful for decoding, which improves specificity. If we look carefully at Figure 6, we can understand the effect of decoding with the product kernel: notice that the correlation times of spikes and LFPs are very different (LFPs have a much longer correlation time). Moreover, the composite kernel definition can be naturally configured to different brain areas and even neuronal types with distinctive firing patterns. Each pattern will lead to different correlation profiles, which will immediately tune the properties of the kernels across brain areas and neural populations. If only LFPs are used, we can expect that the response time of the decoder will be very long and miss some events. The product kernel in fact limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel. This explains exactly the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information from spikes and LFPs into the same model and enhance the neural decoding robustness and accuracy.
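To make the sum-then-product construction concrete, here is a minimal sketch. The kernel forms and parameters are illustrative assumptions: the LFP channels use a Gaussian kernel on windowed samples, and a simple pairwise-interaction spike-timing kernel stands in for the spike train kernels of [42, 43].

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """Gaussian kernel on a windowed LFP vector from one channel."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

def spike_kernel(s, t, tau):
    """Illustrative positive-definite spike-train kernel: sum of Laplacian
    interactions between all pairs of spike times in the two windows."""
    if len(s) == 0 or len(t) == 0:
        return 0.0
    S, T = np.meshgrid(s, t, indexing="ij")
    return float(np.exp(-np.abs(S - T) / tau).sum())

def tensor_product_kernel(lfp_a, lfp_b, spikes_a, spikes_b,
                          sigma=1.0, tau=0.005):
    """Sum kernels across channels within each modality, then take the
    product across the spike and LFP modalities."""
    k_lfp = sum(gaussian_kernel(a, b, sigma) for a, b in zip(lfp_a, lfp_b))
    k_spk = sum(spike_kernel(a, b, tau) for a, b in zip(spikes_a, spikes_b))
    return k_lfp * k_spk
```

Because sums and products of positive definite kernels are again positive definite, the composite is a valid kernel, and the product term is exactly what forces an event to register in both modalities before it contributes to the decoder.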
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: how to emulate "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes or LFPs alone (results not shown). This result can be expected, because the inverse controller is basically a decoder. However, we have to realize that not all tasks of interest reduce to neural decoding, and we do not even know if neural control can be further improved by a different kernel design. This is where further research is necessary to optimize the joint kernels. For instance, we can weight both the channel information and the multiscale information to maximize the task performance using metric learning [40].

Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical way to leverage heterogeneous neural activities in decoding and control scenarios, and it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J. K. Chapin, K. A. Moxon, R. S. Markowitz, and M. A. L. Nicolelis, "Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex," Nature Neuroscience, vol. 2, no. 7, pp. 664–670, 1999.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098–1101, 2008.
[3] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, "Direct cortical control of 3D neuroprosthetic devices," Science, vol. 296, no. 5574, pp. 1829–1832, 2002.
[4] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141–142, 2002.
[5] J. M. Carmena, M. A. Lebedev, R. E. Crist et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, pp. 193–208, 2003.
[6] J. DiGiovanna, B. Mahmoudi, J. Fortes, J. C. Principe, and J. C. Sanchez, "Coadaptive brain-machine interface via reinforcement learning," IEEE Transactions on Biomedical Engineering, vol. 56, no. 1, pp. 54–64, 2009.
[7] A. Belitski, S. Panzeri, C. Magri, N. K. Logothetis, and C. Kayser, "Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 533–545, 2010.
[8] J. R. Huxter, T. J. Senior, K. Allen, and J. Csicsvari, "Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus," Nature Neuroscience, vol. 11, no. 5, pp. 587–594, 2008.
[9] M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis, "Inferring spike trains from local field potentials," Journal of Neurophysiology, vol. 99, no. 3, pp. 1461–1476, 2008.
[10] G. Buzsáki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926–1929, 2004.
[11] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, Springer, 1991.
[12] R. E. Kass, V. Ventura, and E. N. Brown, "Statistical issues in the analysis of neuronal data," Journal of Neurophysiology, vol. 94, no. 1, pp. 8–25, 2005.
[13] R. D. Flint, E. W. Lindberg, L. R. Jordan, L. E. Miller, and M. W. Slutzky, "Accurate decoding of reaching movements from field potentials in the absence of spikes," Journal of Neural Engineering, vol. 9, no. 4, Article ID 046006, 2012.
[14] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee, "Local field potentials indicate network state and account for neuronal response variability," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 567–579, 2010.
[15] J. Liu and W. T. Newsome, "Local field potential in cortical area MT: stimulus tuning and behavioral correlations," Journal of Neuroscience, vol. 26, no. 30, pp. 7779–7790, 2006.
[16] P. Berens, G. Keliris, A. Ecker, N. Logothetis, and A. Tolias, "Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex," Frontiers in Neuroscience, vol. 2, Article ID 199207, 2008.
[17] D. Xing, C.-I. Yeh, and R. M. Shapley, "Spatial spread of the local field potential and its laminar variation in visual cortex," Journal of Neuroscience, vol. 29, no. 37, pp. 11540–11549, 2009.
[18] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, ser. Adaptive Computation and Machine Learning, MIT Press, 2002.
[19] L. Li, I. M. Park, A. Brockmeier et al., "Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 532–543, 2013.
[20] S. Monaco, G. Kroliczak, D. J. Quinlan et al., "Contribution of visual and proprioceptive information to the precision of reaching movements," Experimental Brain Research, vol. 202, no. 1, pp. 15–32, 2010.
[21] R S Johansson and G Westling ldquoRoles of glabrous skinreceptors and sensorimotor memory in automatic control ofprecision grip when lifting rougher or more slippery objectsrdquoExperimental Brain Research vol 56 no 3 pp 550ndash564 1984
[22] J E OrsquoDoherty M A Lebedev P J Ifft et al ldquoActive tactileexploration using a brain-machine-brain interfacerdquoNature vol479 no 7372 pp 228ndash231 2011
[23] J E OrsquoDoherty M A Lebedev T L Hanson N AFitzsimmons and M A Nicolelis ldquoA brain-machine interfaceinstructed by direct intracorticalmicrostimulationrdquo Frontiers inIntegrative Neuroscience vol 3 2009
[24] N A Fitzsimmons W Drake T L Hanson M A LebedevandM A L Nicolelis ldquoPrimate reaching cued by multichannelspatiotemporal cortical microstimulationrdquo The Journal of Neu-roscience vol 27 no 21 pp 5593ndash5602 2007
[25] X-J Feng B Greenwald H Rabitz E Shea-Brown andR Kosut ldquoToward closed-loop optimization of deep brainstimulation for Parkinsonrsquos disease concepts and lessons from acomputational modelrdquo Journal of Neural Engineering vol 4 no2 pp L14ndashL21 2007
[26] J Liu H K Khalil and K G Oweiss ldquoModel-based analysisand control of a network of basal ganglia spiking neurons in thenormal and Parkinsonian statesrdquo Journal of Neural Engineeringvol 8 no 4 Article ID 045002 2011
[27] A J Brockmeier J S ChoiMMDistasio J T Francis and J CPrıncipe ldquoOptimizing microstimulation using a reinforcementlearning frameworkrdquo in Proceedings of the 33rd Annual Inter-national Conference of the IEEE Engineering in Medicine andBiology Society (EMBS rsquo11) pp 1069ndash1072 September 2011
[28] A Brockmeier J Choi M Emigh L Li J Francis andJ Principe ldquoSubspace matching thalamic microstimulationto tactile evoked potentials in rat somatosensory cortexrdquo inProceedings of the Annual International Conference of the IEEEEngineering in Medicine and Biology Society (EMBC rsquo12) pp2957ndash2960 2012
[29] Y Ahmadian A M Packer R Yuste and L Paninski ldquoDesign-ing optimal stimuli to control neuronal spike timingrdquo Journal ofNeurophysiology vol 106 no 2 pp 1038ndash1053 2011
[30] J Moehlis E Shea-Brown and H Rabitz ldquoOptimal inputs forphasemodels of spiking neuronsrdquo Journal of Computational andNonlinear Dynamics vol 1 no 4 pp 358ndash367 2006
[31] D Brugger S Butovas M Bogdan and C Schwarz ldquoReal-timeadaptive microstimulation increases reliability of electricallyevoked cortical potentialsrdquo IEEE Transactions on BiomedicalEngineering vol 58 no 5 pp 1483ndash1491 2011
[32] N Cristianini and J Shawe-Taylor An Introduction to SupportVector Machines and Other Kernel-Based Learning MethodsCambridge University Press 2000
[33] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004
[34] W Liu J C Prıncipe and S Haykin Kernel Adaptive FilteringEdited by S Haykin John Wiley amp Sons 2010
[35] B Schlkopf and A J Smola Learning With Kernels SupportVector Machines Regularization Optimization and BeyondMIT Press 2001
[36] G R G Lanckriet N Cristianini P Bartlett L El GhaouiandM I Jordan ldquoLearning the kernel matrix with semidefiniteprogrammingrdquo The Journal of Machine Learning Research vol5 pp 27ndash72 2004
[37] C Cortes M Mohri and A Rostamizadeh ldquoAlgorithms forlearning kernels based on centered alignmentrdquo Journal ofMachine Learning Research vol 13 pp 795ndash828 2012
[38] M YamadaW Jitkrittum L Sigal E P Xing andM SugiyamaldquoHigh-dimensional feature selection by feature-wise kernelizedlassordquo Neural Computation vol 26 no 1 pp 185ndash207 2013
[39] R A Horn ldquoOn infinitely divisible matrices kernels and func-tionsrdquo Zeitschrift fur Wahrscheinlichkeitstheorie und VerwandteGebiete vol 8 no 3 pp 219ndash230 1967
[40] A J Brockmeier J S Choi E G Kriminger J T Francisand J C Principe ldquoNeural decoding with kernel-based metriclearningrdquo Neural Computation vol 26 no 6 2014
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, Spikernels: Embedding Spiking Neurons in Inner Product Spaces, vol. 15 of Advances in Neural Information Processing Systems, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Príncipe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223–2250, 2012.
[44] L. Li, A kernel based machine learning framework for neural decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453–466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213–225, 1991.
[48] L. Csató and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641–668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275–2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950–1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291–2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268–280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with applications in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975.
[Figure 6: Autocorrelation of LFPs and spike trains for window size estimation. Two panels plot the sample autocorrelation against lag (ms), one for the spike trains and one for the LFPs.]
5.3.1. Time Scale Estimation. The tensor-product kernel allows the time scales of the analysis for LFPs and spike trains to be specified individually, based on their own properties. To find reasonable time scales, we estimate the autocorrelation coefficients of the LFPs and spike trains, which indicate the response duration induced by the stimulation. For this purpose, the spike trains are binned with a bin size of 1 ms, and the LFPs are resampled at a sampling rate of 1000 Hz. The autocorrelation coefficients of each signal, averaged over channels, are calculated by
\[
\rho_h = \frac{\sum_{t=h+1}^{T} (y_t - \bar{y})(y_{t-h} - \bar{y})}{\sum_{t=1}^{T} (y_t - \bar{y})^{2}}. \qquad (10)
\]
The 95% confidence bounds for the hypothesis that the autocorrelation coefficient is effectively zero are approximated by $\pm 2\,\mathrm{SE}_\rho$, where
\[
\mathrm{SE}_{\rho} = \sqrt{\frac{1 + 2\sum_{i=1}^{h-1} \rho_i^{2}}{N}}. \qquad (11)
\]
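The two statistics above are straightforward to compute; the following sketch (NumPy only; the function names are ours) estimates the sample autocorrelation of (10), the confidence band of (11), and the smallest lag after which the coefficients stay inside the band:

```python
import numpy as np

def autocorr_coeffs(y, max_lag):
    # Sample autocorrelation rho_h of Eq. (10) for lags 0..max_lag.
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    denom = np.sum(d ** 2)
    return np.array([np.sum(d[h:] * d[:len(y) - h]) / denom
                     for h in range(max_lag + 1)])

def confidence_bound(rho, h, n):
    # Half-width of the +/- 2*SE_rho band of Eq. (11) at lag h,
    # for a series of length n.
    se = np.sqrt((1.0 + 2.0 * np.sum(rho[1:h] ** 2)) / n)
    return 2.0 * se

def response_window(y, max_lag):
    # Smallest lag after which rho_h stays inside the confidence band,
    # i.e., the response duration used to size the sliding window.
    rho = autocorr_coeffs(y, max_lag)
    n = len(y)
    for h in range(1, max_lag + 1):
        if all(abs(rho[k]) <= confidence_bound(rho, k, n)
               for k in range(h, max_lag + 1)):
            return h
    return max_lag
```

Applied to the binned spike trains and resampled LFPs, `response_window` yields the 9 ms and 20 ms windows reported below.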
The average confidence bounds for LFPs and spike trains are [−0.032, 0.032] and [−0.031, 0.031], respectively. The autocorrelation coefficients of the LFPs fall into the confidence interval after 20 ms, while the autocorrelation coefficients of the spike trains die out after 9 ms, as shown in Figure 6. Therefore, the decoder inputs are obtained by sliding a window of size T_s = 9 ms for spike trains and T_x = 20 ms for LFPs. In addition, the time discretization for the stimuli is 5 ms.

The learning rates for each decoder are determined by the best cross-validation results after scanning the parameters. The kernel sizes σ_s and σ_x are determined by the average distance in the RKHS between each pair of training samples. The normalized mean square error (NMSE) between the estimated stimulus (y) and the desired stimulus (d) is used as the accuracy criterion.
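The paper does not spell out closed forms for these two quantities. Under common conventions (kernel size from the mean pairwise sample distance, and NMSE normalized by the target's variance; both are our assumptions), they can be sketched as:

```python
import numpy as np

def kernel_size_heuristic(X):
    # Kernel size chosen as the average pairwise distance between
    # training samples (our reading of the sigma_s / sigma_x rule).
    X = np.asarray(X, dtype=float)
    n = len(X)
    dists = [np.linalg.norm(X[i] - X[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

def nmse(y, d):
    # Normalized mean square error between estimated stimulus y and
    # desired stimulus d; normalization by target variance is assumed.
    y, d = np.asarray(y, dtype=float), np.asarray(d, dtype=float)
    return float(np.mean((y - d) ** 2) / np.mean((d - d.mean()) ** 2))
```

With this normalization, predicting the target's mean everywhere gives NMSE = 1, so the values in Table 1 (all below 1) indicate decoders that beat the trivial predictor.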
[Figure 7: Qualitative comparison of decoding performance on the first tactile stimulation trial among the LFP decoder, the spike decoder, and the spike and LFP decoder. The panel plots the normalized derivative of tactile force against time (s) for the target and the three decoder outputs.]
Table 1: Comparison among neural decoders.

    Input              NMSE (mean ± STD)
    LFP and spike      0.48 ± 0.05
    LFP                0.55 ± 0.03
    Spike              0.63 ± 0.11
5.3.2. Results for Decoding Tactile Stimulation. NMSEs for decoding tactile stimulation are obtained across 8 trial data sets. For each trial, we use 20 s of data to train the decoders and compute an independent test error on the remaining 25 s of data. The results are shown in Table 1, where we can observe that the LFP and spike decoder significantly outperformed both the LFP decoder and the spike decoder (P value < 0.05).
To illustrate the details of the decoding performance, a portion of the test results of the first trial is shown in Figure 7. The output of the spike decoder fluctuates and misses some pulses (e.g., around 0.65 s) due to the sparsity and variability of the spike trains. In contrast, the
[Figure 8: Comparison of microstimulation reconstruction performance among the spike decoder, the LFP decoder, and the spike and LFP decoder, in terms of the normalized MSE for each microstimulation channel (channels 1–8).]
output estimated by the LFP decoder is smooth and more robust than that of the spike decoder, but the decoded signal undershoots the maximum force deflections. The LFP and spike decoder performed better than the LFP decoder by reincorporating the precise pulse timing information from the spike trains.
5.3.3. Results for Decoding Electrical Microstimulation. We also implemented a decoder to reconstruct the microstimulation pattern. First, we mapped the 8 different stimulation configurations to 8 channels. We disregarded the shape of each stimulus, since the time scale of the stimulus width is only 200 μs. The desired stimulation pattern of each channel is represented by a sparse time series of the stimulation amplitudes.
NMSEs are obtained over ten subsequence decoding results. We used 120 s of data to train the decoders and compute an independent test error on the remaining 20 s of data. The spike and LFP decoder again outperformed both the LFP decoder and the spike decoder. The comparison is shown in Figure 8, which indicates that the spike and LFP decoder achieves the best performance across the stimulation channels, especially for channels 2, 4, 6, and 7. Stimulation on channels 4, 5, 6, 7, and 8 cannot be decoded by the LFP decoder at all, since the fine timing information is averaged out in the LFPs. For the spike train decoder, the stimulation channels are not well discriminated. The combination of spike trains and LFPs, however, enriches the stimulation information, which contributes to better discrimination of stimulation patterns among channels and also enables the model to capture the precise stimulation timing.
5.4. Open-Loop Adaptive Inverse Control Results. The challenge of implementing a somatosensory prosthesis is to precisely control the neural response in order to mimic the neural response induced by natural stimulation. As discussed, the kernel-based adaptive inverse control scheme with the tensor-product kernel is applied to address this problem. The adaptive inverse control model is based on a decoder that maps the neural activity in S1 to the microstimulation delivered in the VPL. We proceed to show how the adaptive inverse control model can emulate the neural response to "natural touch" using optimized microstimulation.

In the same recording, open-loop adaptive inverse control via optimized thalamic (VPL) microstimulation is implemented. First, the inverse controller C(z) is trained with 300 s of data generated by recording the response to randomly patterned thalamic microstimulation. Then, the first 60 s of the neural response recorded during tactile stimulation at each touch site is used as the target pattern and control input. When this entire neural response sequence is fed offline to the controller, it generates a corresponding sequence of multichannel microstimulation amplitudes.
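The inverse controller is trained with the same kernel adaptive filtering machinery as the decoders. A minimal kernel least-mean-squares (KLMS) sketch is shown below; the class and its parameters are ours, and any positive definite kernel, including the tensor-product kernel, can be supplied as `kernel`:

```python
import numpy as np

class KLMS:
    """Minimal kernel least-mean-squares filter (cf. [34]), sketched.

    With the tensor-product kernel, each input x would bundle the
    multiscale spike/LFP windows of one time step.
    """

    def __init__(self, kernel, learning_rate=0.5):
        self.kernel = kernel
        self.lr = learning_rate
        self.centers = []   # stored input samples
        self.alphas = []    # scaled prediction errors

    def predict(self, x):
        return sum(a * self.kernel(c, x)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, d):
        # One stochastic-gradient step in the RKHS: store the input
        # as a new center weighted by the scaled prediction error.
        e = d - self.predict(x)
        self.centers.append(x)
        self.alphas.append(self.lr * e)
        return e
```

Training the inverse controller then amounts to streaming (neural response window, microstimulation amplitude) pairs through `update`.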
However, the generated microstimulation sequence needs further processing to meet the restrictions of bipolar microstimulation before it is applied to the VPL. The restrictions and processing are the following:

(1) The minimal interval between two stimuli is 10 ms, as suggested by the experimental setting. The mean shift algorithm [55] is used to locate the local maxima within each 10 ms subsequence of stimuli for each single channel. The maximum amplitude and the corresponding timing are used to set the amplitude and time of the stimuli.

(2) At any given time point, only a single pulse across all channels can be delivered. Therefore, at each time point, only the maximum value across channels is selected for stimulation; the values on the other channels are set to zero.

(3) The maximum/minimum stimulation amplitude is constrained to the range [8 μA, 30 μA], which has been suggested as the effective and safe amplitude range in previous experiments.
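The three constraints can be applied programmatically. The sketch below (function and parameter names are ours) approximates step (1) with simple windowed peak-picking, whereas the paper uses the mean shift algorithm [55] to locate the local maxima:

```python
import numpy as np

def postprocess_stim(stim, min_interval=10, amp_range=(8.0, 30.0)):
    # stim: (time x channel) array of raw controller output amplitudes.
    T, C = stim.shape
    out = np.zeros_like(stim, dtype=float)
    # (1) one pulse per channel per window of `min_interval` samples
    #     (windowed peak-picking as a stand-in for mean shift [55])
    for c in range(C):
        for start in range(0, T, min_interval):
            seg = stim[start:start + min_interval, c]
            t = start + int(np.argmax(seg))
            out[t, c] = seg.max()
    # (2) at each time point keep only the strongest channel
    strongest = np.argmax(out, axis=1)
    mask = np.zeros_like(out, dtype=bool)
    mask[np.arange(T), strongest] = True
    out[~mask] = 0.0
    # (3) clamp the surviving pulses into the safe/effective range
    active = out > 0
    out[active] = np.clip(out[active], *amp_range)
    return out
```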
After this processing, the generated multichannel microstimulation sequence (60 s in duration) is ready to be applied to the microstimulator immediately following computation.
The neural response to the microstimulation is recorded and compared with the target natural response. Ideally, these two neural response sequences should be time-locked and very similar. In particular, the portions of the controlled response in windows corresponding to a natural touch should match. As this corresponds to a virtual touch delivered by the optimized microstimulation, we define the term virtual touch to refer to the sequence of microstimulation patterns (the output of the controller) corresponding to a particular target natural touch.
Portions of the neural response for both natural and virtual touches are shown in Figure 9. The responses are aligned to the same time scale even though they were not recorded concurrently. It is clear that the multiunit neural responses recorded during the controlled microstimulation share spatiotemporal patterns similar to those in the target set. Each virtual touch is achieved by a sequence of microstimulation pulses that evokes synchronized bursting across the neurons. In addition, microstimulation pulses are delivered in between touch times to mimic population spiking that is not associated with the touch timing.
[Figure 9: Neural responses to natural and virtual touches for touch on digit 1 (d1), along with the microstimulation corresponding to the virtual touches. Each of the four subfigures corresponds to a different segment of the continuous recording. In each subfigure, the timing of the touches and the spatiotemporal patterns of spike trains and LFPs are shown in the top two panels; the bottom panel shows the spatiotemporal pattern of microstimulation, where different colors represent different microstimulation channels. The neural responses are time-locked but not concurrently recorded, as the entire natural touch response is given as input to the controller, which generates the optimized microstimulation patterns. When the optimized microstimulation is applied in the VPL, it generates S1 neural responses that qualitatively match the natural, that is, the target, response.]
[Figure 10: Correlation coefficients between the controlled neural system output and the corresponding target neural response stimulated by actual touch, plotted against time lag (s) for spike trains (left panel) and local field potentials (right panel). Boxplots of the correlation coefficients represent the results of 6 test trials, each corresponding to a particular touch site (digits d1, d2, d4, p3, p1, and mp).]
To evaluate performance, we concentrate on the following two aspects of virtual touches:

(i) Touch Timing. Whether the neural response to a virtual touch captures the timing information of the actual target touch.

(ii) Touch Site. Whether the actual target touch site information is discriminately represented by the neural activity controlled by the microstimulation.
For touch timing, we estimate the correlation coefficients (CC) over time between the virtual touch responses and the corresponding target natural touch responses. To simplify the spike train correlation estimation, we bin the data using 5 ms bins. The correlation coefficients of both spike trains and LFPs are calculated. Figure 10 shows the boxplot of the correlation coefficients over 6 test trials, each of which corresponds to a particular natural touch site, a forepaw digit or pad (d1, d2, d4, p3, p1, or mp). The maximum correlation coefficient occurs at lag zero for each trial, meaning that the virtual touch response is correctly time-locked. For each touch site, we estimate the similarity between the natural touch response and the virtual touch response in the following two cases:
(i) Matched Virtual. Pairs consisting of a virtual touch trial and a natural touch trial corresponding to the same touch site.

(ii) Unmatched Virtual. Pairs consisting of a virtual touch trial and a natural touch trial corresponding to different touch sites.
We extract all the neural responses in the 300 ms window after touch onset and calculate the correlation coefficients between natural touch responses and virtual touch responses across each pair of trials. The one-tailed Kolmogorov-Smirnov (KS) test is used to test the alternative
Table 2: Average and standard deviation of the correlation coefficient (CC) between natural touch spike train responses and virtual touch spike train responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

    Touch site    Matched virtual CC    Unmatched virtual CC    P value
    d1            0.42 ± 0.06           0.35 ± 0.06             0.00
    d2            0.40 ± 0.05           0.37 ± 0.06             0.01
    d4            0.40 ± 0.05           0.37 ± 0.05             0.02
    p3            0.38 ± 0.05           0.37 ± 0.06             0.11
    p2            0.40 ± 0.07           0.36 ± 0.05             0.00
    mp            0.41 ± 0.07           0.37 ± 0.06             0.00
hypothesis that the distribution of the correlation coefficients for the matched virtual case is higher than the distribution for the unmatched virtual case (the null hypothesis is that the distributions are the same). The correlation coefficients and the P values of the KS test for spike trains and LFPs are shown in Tables 2 and 3. The similarity between natural touch responses and virtual touch responses in the unmatched virtual case is significantly lower than in the matched virtual case for most touch sites (P value < 0.05), except for touch site p3. Without psychophysical testing, it is unclear how effective the microstimulations are in producing true sensory sensations. Nonetheless, these are promising results showing the effectiveness of a controller utilizing the multiscale neural decoding methodology.
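The lag analysis behind Figure 10 reduces to a lagged Pearson correlation on the binned responses. The sketch below (our function) verifies that a time-locked pair of signals peaks at lag zero:

```python
import numpy as np

def lagged_cc(x, y, max_lag):
    # Pearson correlation between target response x and controlled
    # response y, for every integer lag in [-max_lag, max_lag].
    lags = np.arange(-max_lag, max_lag + 1)
    ccs = []
    for h in lags:
        if h >= 0:
            a, b = x[h:], y[:len(y) - h]
        else:
            a, b = x[:h], y[-h:]
        ccs.append(np.corrcoef(a, b)[0, 1])
    return lags, np.array(ccs)
```

Running this on each (natural, virtual) response pair and locating the argmax of the correlation curve gives the touch-timing check reported above.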
6. Conclusions
This work proposes a novel tensor-product-kernel-based machine learning framework, which provides a way to
Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

    Touch site    Matched virtual CC    Unmatched virtual CC    P value
    d1            0.42 ± 0.20           0.28 ± 0.23             0.00
    d2            0.46 ± 0.13           0.28 ± 0.22             0.00
    d4            0.41 ± 0.19           0.26 ± 0.21             0.00
    p3            0.38 ± 0.18           0.29 ± 0.22             0.07
    p2            0.33 ± 0.19           0.26 ± 0.23             0.20
    mp            0.34 ± 0.17           0.25 ± 0.21             0.00
decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs contain complementary information that can enhance neural data decoding; however, a systematic approach to combining these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined on both the spike train space and the LFP space, appears to be a very productive way to achieve our goals. We have used two types of combination kernels to achieve the multiscale combination: sum kernels to "average" across different spike channels as well as across LFP channels, which combine evidence for the neural event in each modality, and product kernels across the spike and LFP modalities, which emphasize events that are represented in both modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step in a long process of optimizing the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires that the neural event be present at both scales to be useful for decoding, which improves specificity. If we look carefully at Figure 6, we can understand the effect of decoding with the product kernel: the correlation times of spikes and LFPs are very different, with LFPs having a much longer correlation time. Moreover, the composite kernel definition can be naturally configured for different brain areas, and even for neuronal types with distinctive firing patterns; each pattern will lead to different correlation profiles, which will immediately tune the properties of the kernels across brain areas and neural populations. If only LFPs are used, we can expect the response time of the decoder to be very long and to miss some events. The product kernel in fact limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel, which explains the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information from spikes and LFPs into the same model and enhance the robustness and accuracy of neural decoding.
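To make the sum/product construction concrete, consider the following minimal sketch. It is our simplification: a Laplacian spike-timing kernel stands in for the spike train kernels of [42, 43], and a Gaussian kernel operates on LFP windows; sum kernels pool channels within each modality, and the product combines the two modalities:

```python
import numpy as np

def spike_kernel(s1, s2, tau):
    # Stand-in spike train kernel on two lists of spike times
    # (nonlinear cross-intensity style, cf. [42, 43]).
    return float(sum(np.exp(-abs(t1 - t2) / tau) for t1 in s1 for t2 in s2))

def lfp_kernel(x1, x2, sigma):
    # Gaussian kernel on two LFP windows of equal length.
    diff = np.asarray(x1, float) - np.asarray(x2, float)
    return float(np.exp(-np.sum(diff ** 2) / (2 * sigma ** 2)))

def tensor_product_kernel(spikes_a, lfps_a, spikes_b, lfps_b,
                          tau=0.005, sigma=1.0):
    # Sum kernels across spike channels and across LFP channels,
    # then a product across the two modalities.
    k_s = sum(spike_kernel(sa, sb, tau) for sa, sb in zip(spikes_a, spikes_b))
    k_x = sum(lfp_kernel(la, lb, sigma) for la, lb in zip(lfps_a, lfps_b))
    return k_s * k_x
```

Because sums and products of positive definite kernels remain positive definite, the composite kernel can be dropped directly into any kernel adaptive filter.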
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: emulating "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes or LFPs alone (results not shown). This result is to be expected, because the inverse controller is basically a decoder. However, we must recognize that not all tasks of interest reduce to neural decoding, and we do not yet know whether neural control can be further improved by a different kernel design. This is where further research is needed to optimize the joint kernels. For instance, we could weight both the channel information and the multiscale information to maximize task performance using metric learning [40].
Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical way to leverage heterogeneous neural activity in decoding and control scenarios, and it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J K Chapin K A Moxon R S Markowitz and M A LNicolelis ldquoReal-time control of a robot arm using simultane-ously recorded neurons in the motor cortexrdquo Nature Neuro-science vol 2 no 7 pp 664ndash670 1999
[2] M Velliste S Perel M C Spalding A S Whitford and A BSchwartz ldquoCortical control of a prosthetic arm for self-feedingrdquoNature vol 453 no 7198 pp 1098ndash1101 2008
[3] DM Taylor S I H Tillery and A B Schwartz ldquoDirect corticalcontrol of 3D neuroprosthetic devicesrdquo Science vol 296 no5574 pp 1829ndash1832 2002
[4] M D Serruya N G Hatsopoulos L Paninski M R Fellowsand J P Donoghue ldquoInstant neural control of a movementsignalrdquo Nature vol 416 no 6877 pp 141ndash142 2002
[5] J M Carmena M A Lebedev R E Crist et al ldquoLearning tocontrol a brain-machine interface for reaching and grasping byprimatesrdquo PLoS Biology vol 1 no 2 pp 193ndash208 2003
[6] J DiGiovanna B Mahmoudi J Fortes J C Principe and JC Sanchez ldquoCoadaptive brain-machine interface via reinforce-ment learningrdquo IEEE Transactions on Biomedical Engineeringvol 56 no 1 pp 54ndash64 2009
Computational Intelligence and Neuroscience 15
[7] A. Belitski, S. Panzeri, C. Magri, N. K. Logothetis, and C. Kayser, "Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 533–545, 2010.
[8] J. R. Huxter, T. J. Senior, K. Allen, and J. Csicsvari, "Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus," Nature Neuroscience, vol. 11, no. 5, pp. 587–594, 2008.
[9] M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis, "Inferring spike trains from local field potentials," Journal of Neurophysiology, vol. 99, no. 3, pp. 1461–1476, 2008.
[10] G. Buzsáki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926–1929, 2004.
[11] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, Springer, 1991.
[12] R. E. Kass, V. Ventura, and E. N. Brown, "Statistical issues in the analysis of neuronal data," Journal of Neurophysiology, vol. 94, no. 1, pp. 8–25, 2005.
[13] R. D. Flint, E. W. Lindberg, L. R. Jordan, L. E. Miller, and M. W. Slutzky, "Accurate decoding of reaching movements from field potentials in the absence of spikes," Journal of Neural Engineering, vol. 9, no. 4, Article ID 046006, 2012.
[14] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee, "Local field potentials indicate network state and account for neuronal response variability," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 567–579, 2010.
[15] J. Liu and W. T. Newsome, "Local field potential in cortical area MT: stimulus tuning and behavioral correlations," Journal of Neuroscience, vol. 26, no. 30, pp. 7779–7790, 2006.
[16] P. Berens, G. Keliris, A. Ecker, N. Logothetis, and A. Tolias, "Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex," Frontiers in Neuroscience, vol. 2, pp. 199–207, 2008.
[17] D. Xing, C.-I. Yeh, and R. M. Shapley, "Spatial spread of the local field potential and its laminar variation in visual cortex," Journal of Neuroscience, vol. 29, no. 37, pp. 11540–11549, 2009.
[18] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, Adaptive Computation and Machine Learning, MIT Press, 2002.
[19] L. Li, I. M. Park, A. Brockmeier et al., "Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 532–543, 2013.
[20] S. Monaco, G. Kroliczak, D. J. Quinlan et al., "Contribution of visual and proprioceptive information to the precision of reaching movements," Experimental Brain Research, vol. 202, no. 1, pp. 15–32, 2010.
[21] R. S. Johansson and G. Westling, "Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects," Experimental Brain Research, vol. 56, no. 3, pp. 550–564, 1984.
[22] J. E. O'Doherty, M. A. Lebedev, P. J. Ifft et al., "Active tactile exploration using a brain-machine-brain interface," Nature, vol. 479, no. 7372, pp. 228–231, 2011.
[23] J. E. O'Doherty, M. A. Lebedev, T. L. Hanson, N. A. Fitzsimmons, and M. A. Nicolelis, "A brain-machine interface instructed by direct intracortical microstimulation," Frontiers in Integrative Neuroscience, vol. 3, 2009.
[24] N. A. Fitzsimmons, W. Drake, T. L. Hanson, M. A. Lebedev, and M. A. L. Nicolelis, "Primate reaching cued by multichannel spatiotemporal cortical microstimulation," The Journal of Neuroscience, vol. 27, no. 21, pp. 5593–5602, 2007.
[25] X.-J. Feng, B. Greenwald, H. Rabitz, E. Shea-Brown, and R. Kosut, "Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model," Journal of Neural Engineering, vol. 4, no. 2, pp. L14–L21, 2007.
[26] J. Liu, H. K. Khalil, and K. G. Oweiss, "Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states," Journal of Neural Engineering, vol. 8, no. 4, Article ID 045002, 2011.
[27] A. J. Brockmeier, J. S. Choi, M. M. Distasio, J. T. Francis, and J. C. Príncipe, "Optimizing microstimulation using a reinforcement learning framework," in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 1069–1072, September 2011.
[28] A. Brockmeier, J. Choi, M. Emigh, L. Li, J. Francis, and J. Príncipe, "Subspace matching thalamic microstimulation to tactile evoked potentials in rat somatosensory cortex," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2957–2960, 2012.
[29] Y. Ahmadian, A. M. Packer, R. Yuste, and L. Paninski, "Designing optimal stimuli to control neuronal spike timing," Journal of Neurophysiology, vol. 106, no. 2, pp. 1038–1053, 2011.
[30] J. Moehlis, E. Shea-Brown, and H. Rabitz, "Optimal inputs for phase models of spiking neurons," Journal of Computational and Nonlinear Dynamics, vol. 1, no. 4, pp. 358–367, 2006.
[31] D. Brugger, S. Butovas, M. Bogdan, and C. Schwarz, "Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials," IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1483–1491, 2011.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[33] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[34] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering, John Wiley & Sons, 2010.
[35] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
[36] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," The Journal of Machine Learning Research, vol. 5, pp. 27–72, 2004.
[37] C. Cortes, M. Mohri, and A. Rostamizadeh, "Algorithms for learning kernels based on centered alignment," Journal of Machine Learning Research, vol. 13, pp. 795–828, 2012.
[38] M. Yamada, W. Jitkrittum, L. Sigal, E. P. Xing, and M. Sugiyama, "High-dimensional feature selection by feature-wise kernelized lasso," Neural Computation, vol. 26, no. 1, pp. 185–207, 2013.
[39] R. A. Horn, "On infinitely divisible matrices, kernels, and functions," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 8, no. 3, pp. 219–230, 1967.
[40] A. J. Brockmeier, J. S. Choi, E. G. Kriminger, J. T. Francis, and J. C. Príncipe, "Neural decoding with kernel-based metric learning," Neural Computation, vol. 26, no. 6, 2014.
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, "Spikernels: embedding spiking neurons in inner product spaces," in Advances in Neural Information Processing Systems, vol. 15, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Príncipe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223–2250, 2012.
[44] L. Li, Kernel Based Machine Learning Framework for Neural Decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453–466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213–225, 1991.
[48] L. Csató and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641–668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275–2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950–1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291–2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268–280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with applications in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975.
Figure 8: Comparison of microstimulation reconstruction performance among the spike decoder, the LFP decoder, and the spike and LFP decoder, in terms of normalized MSE (mean square error) for each of the 8 stimulation channels.
The output estimated by the LFP decoder is smoother and more robust than that of the spike decoder, but the decoded signal undershoots the maximum force deflections. The combined spike and LFP decoder performed better than the LFP decoder by reincorporating the precise pulse timing information from the spike trains.
5.3.3. Results for Decoding Electrical Microstimulation

We also implemented a decoder to reconstruct the microstimulation pattern. First, we mapped the 8 different stimulation configurations to 8 channels. We disregarded the shape of each stimulus, since the time scale of the stimulus width is only 200 μs. The desired stimulation pattern of each channel is represented by a sparse time series of the stimulation amplitude.
The NMSEs are computed over ten decoding subsequences. We used 120 s of data to train the decoders and computed an independent test error on the remaining 20 s. The spike and LFP decoder again outperformed both the LFP decoder and the spike decoder. The comparison is shown in Figure 8, which indicates that the spike and LFP decoder obtains the best performance across the stimulation channels, especially for channels 2, 4, 6, and 7. Stimulation on channels 4, 5, 6, 7, and 8 cannot be decoded by the LFP decoder at all, since the fine timing information is averaged out in the LFPs. For the spike train decoder, the stimulation channels are not well discriminated. However, the combination of spike trains and LFPs enriches the stimulation information, which contributes to better discrimination of stimulation patterns among channels and also enables the model to capture the precise stimulation timing.
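As a concrete reference for this comparison, the per-channel normalized MSE can be computed as below. This is a minimal sketch, not the authors' code: the function name `nmse_per_channel` and the normalization by the target variance are our assumptions (the paper does not spell out its normalization).

```python
import numpy as np

def nmse_per_channel(y_true, y_pred):
    """Normalized mean square error for each stimulation channel.

    The per-channel MSE of the decoded signal is divided by the variance
    of the desired signal, so 1.0 means "no better than predicting the
    mean" and 0.0 means perfect reconstruction.
    """
    mse = np.mean((y_true - y_pred) ** 2, axis=0)   # per-channel MSE
    power = np.var(y_true, axis=0)                  # per-channel reference power
    return mse / power

# Toy usage: 8 channels, 1000 samples, an imperfect decoder
rng = np.random.default_rng(0)
y = rng.normal(size=(1000, 8))
yhat = y + 0.3 * rng.normal(size=(1000, 8))
print(nmse_per_channel(y, yhat))   # each entry is close to 0.3**2 = 0.09
```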
5.4. Open Loop Adaptive Inverse Control Results

The challenge of implementing a somatosensory prosthesis is to precisely control the neural response in order to mimic the neural response induced by natural stimulation. As discussed, the kernel-based adaptive inverse control scheme with the tensor-product kernel is applied to address this problem. The adaptive inverse control model is based on a decoder which maps the neural activity in S1 to the microstimulation delivered in the VPL. We proceed to show how the adaptive inverse control model can emulate the neural response to "natural touch" using optimized microstimulation.
In the same recording, open loop adaptive inverse control via optimized thalamic (VPL) microstimulation is implemented. First, the inverse controller C(z) is trained with 300 s of data generated by recording the response to randomly patterned thalamic microstimulation. Then the first 60 s of the neural response recorded during tactile stimulation at each touch site is used as the target pattern and control input. When this entire neural response sequence is fed offline to the controller, it generates a corresponding sequence of multichannel microstimulation amplitudes.
However, the generated microstimulation sequence needs further processing to meet the restrictions of bipolar microstimulation before it is applied to the VPL. The restrictions and the corresponding processing are the following.
(1) The minimal interval between two stimuli is 10 ms, as suggested by the experimental setting. The mean shift algorithm [55] is used to locate the local maxima within each 10 ms subsequence of stimuli for each single channel. The maximum amplitude and its timing are used to set the amplitude and time of the stimuli.
(2) At any given time point, only a single pulse can be delivered across all channels. Therefore, at each time point, only the maximum value across channels is selected for stimulation; the values at the other channels are set to zero.
(3) The maximum/minimum stimulation amplitude is set in the range [8 μA, 30 μA], which has been suggested as the effective and safe amplitude range in previous experiments.
After this processing, the generated multichannel microstimulation sequence (60 s in duration) is ready to be applied to the microstimulator immediately following computation.
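The three restrictions above amount to a simple post-processing pass over the controller output. The sketch below is only illustrative: all names are ours, and a per-window argmax stands in for the mean shift mode seeking of [55].

```python
import numpy as np

def enforce_constraints(stim, dt=0.001, min_interval=0.010, amp_range=(8.0, 30.0)):
    """Post-process a (time x channels) array of nonnegative stimulation amplitudes.

    1. Keep only the per-channel local maximum within each `min_interval` window.
    2. Allow at most one active channel per time point (keep the largest pulse).
    3. Clip the retained pulse amplitudes into the safe range [8, 30] uA.
    """
    T, C = stim.shape
    win = int(round(min_interval / dt))
    out = np.zeros_like(stim)

    # 1. per-channel local maxima over non-overlapping windows
    for c in range(C):
        for start in range(0, T, win):
            seg = stim[start:start + win, c]
            if seg.max() > 0:
                out[start + int(seg.argmax()), c] = seg.max()

    # 2. a single pulse across channels at any instant
    for t in range(T):
        if np.count_nonzero(out[t]) > 1:
            keep = int(out[t].argmax())
            amp = out[t, keep]
            out[t] = 0.0
            out[t, keep] = amp

    # 3. clip the surviving pulses into the allowed amplitude range
    nz = out > 0
    out[nz] = np.clip(out[nz], *amp_range)
    return out
```

With 1 ms samples, each window spans the 10 ms minimal inter-stimulus interval, so at most one pulse per channel survives per window before the cross-channel selection.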
The neural response to the microstimulation is recorded and compared with the target natural response. Ideally, these two neural response sequences should be time-locked and very similar. In particular, the portions of the controlled response in windows corresponding to a natural touch should match. As this corresponds to a virtual touch delivered by the optimized microstimulation, we define the term virtual touch to refer to the sequence of microstimulation patterns (the output of the controller) corresponding to a particular target natural touch.
Portions of the neural response for both natural and virtual touches are shown in Figure 9. The responses are aligned to the same time scale even though they were not recorded concurrently. The multiunit neural responses recorded during the controlled microstimulation clearly share similar spatiotemporal patterns with those in the target set. Each virtual touch is achieved by a sequence of microstimulation pulses that evokes synchronized bursting across the neurons. In addition, microstimulation pulses are delivered in between touch times to mimic population spiking that is not associated with the touch timing.
[Figure 9 graphic, panels (a) and (b): in each panel, the vertical axes are LFPs (V, x10^-4) and stimulation current (μA), the horizontal axes are time (s), and the legends mark touch time, target (natural touch response), and output (microstimulation response).]
Figure 9: Neural responses to natural and virtual touches for touch on digit 1 (d1), along with the microstimulation corresponding to the virtual touches. Each of the four subfigures corresponds to a different segment of the continuous recording. In each subfigure, the timing of the touches and the spatiotemporal patterns of spike trains and LFPs are shown in the top two panels; the bottom panel shows the spatiotemporal pattern of microstimulation, where different colors represent different microstimulation channels. The neural responses are time-locked but not concurrently recorded, as the entire natural touch response is given as input to the controller, which generates the optimized microstimulation patterns. When the optimized microstimulation is applied in the VPL, it generates S1 neural responses that qualitatively match the natural (i.e., target) response.
[Figure 10 graphic: box plots of correlation coefficient versus time lag (s), one panel for spike trains and one for local field potentials.]

Figure 10: Correlation coefficients between the controlled neural system output and the corresponding target neural response stimulated by actual touch. The box plots of correlation coefficients represent the results of 6 test trials; each trial corresponds to a particular touch site (digits d1, d2, d4, p3, p1, and mp).
To evaluate performance, we concentrate on the following two aspects of virtual touches.

(i) Touch Timing. Whether the neural response to a virtual touch captures the timing of the actual target touch.

(ii) Touch Site. Whether the actual target touch site can be discriminately represented by the neural activity controlled by the microstimulation.
For touch timing, we estimate the correlation coefficients (CC) over time between virtual touch responses and the corresponding target natural touch responses. To simplify the spike train correlation estimation, we bin the data using 5 ms bins. The correlation coefficients of both spike trains and LFPs are calculated. Figure 10 shows the box plot of the correlation coefficients over 6 test trials, each of which corresponds to a particular natural touch site, a forepaw digit or pad (d1, d2, d4, p3, p1, or mp). The maximum correlation coefficient occurs at lag zero for each trial, meaning that the virtual touch response is correctly time-locked. For each touch site, we estimate the similarity between the natural touch response and the virtual touch response in the following two cases.
(i) Matched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to the same touch site.
(ii) Unmatched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to different touch sites.
We extract all the neural responses in the 300 ms window after touch onset and calculate the correlation coefficients between natural touch responses and virtual touch responses across each pair of trials. The one-tailed Kolmogorov-Smirnov (KS) test is used to test the alternative
Table 2: Average and standard deviation of the correlation coefficient (CC) between natural touch spike train responses and virtual touch spike train responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site   Matched virtual CC   Unmatched virtual CC   P value
d1           0.42 ± 0.06          0.35 ± 0.06            0.00
d2           0.40 ± 0.05          0.37 ± 0.06            0.01
d4           0.40 ± 0.05          0.37 ± 0.05            0.02
p3           0.38 ± 0.05          0.37 ± 0.06            0.11
p2           0.40 ± 0.07          0.36 ± 0.05            0.00
mp           0.41 ± 0.07          0.37 ± 0.06            0.00
hypothesis that the distribution of the correlation coefficients for the matched virtual case is higher than that for the unmatched virtual case (the null hypothesis is that the distributions are the same). The correlation coefficients and P values of the KS test for spike trains and LFPs are shown in Tables 2 and 3. The similarity between natural touch responses and virtual touch responses in the unmatched virtual case is significantly lower than in the matched virtual case for most touch sites (P value < 0.05), except for touch site p3. Without psychophysical testing, it is unclear how effective the microstimulations are in producing true sensory sensations. Nonetheless, these are promising results that show the effectiveness of a controller utilizing the multiscale neural decoding methodology.
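The timing and site analyses above can be reproduced with standard tools. The sketch below uses illustrative synthetic data (the paper's actual binning and trial structure are more involved): lagged correlation coefficients on 5 ms binned responses, followed by the one-sided two-sample KS test.

```python
import numpy as np
from scipy import stats

def lagged_cc(x, y, max_lag):
    """Correlation coefficient between two equal-length series at integer lags."""
    lags = range(-max_lag, max_lag + 1)
    cc = []
    for lag in lags:
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        cc.append(np.corrcoef(a, b)[0, 1])
    return np.array(list(lags)), np.array(cc)

# Two 5 ms binned responses that are time-locked: the CC peak should sit at lag 0.
rng = np.random.default_rng(1)
target = rng.poisson(2.0, size=400).astype(float)    # "natural touch" bin counts
virtual = target + rng.normal(0.0, 0.5, size=400)    # noisy "virtual touch" response
lags, cc = lagged_cc(target, virtual, max_lag=30)
assert lags[int(cc.argmax())] == 0

# One-sided two-sample KS test: alternative='less' tests whether the CDF of the
# first sample lies below the second, i.e. matched CCs tend to be LARGER.
matched = rng.normal(0.42, 0.06, size=30)
unmatched = rng.normal(0.35, 0.06, size=30)
stat, p = stats.ks_2samp(matched, unmatched, alternative='less')
print(p)  # a small p favors "matched CCs are higher than unmatched CCs"
```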
6. Conclusions
This work proposes a novel tensor-product-kernel-based machine learning framework, which provides a way to
Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site   Matched virtual CC   Unmatched virtual CC   P value
d1           0.42 ± 0.20          0.28 ± 0.23            0.00
d2           0.46 ± 0.13          0.28 ± 0.22            0.00
d4           0.41 ± 0.19          0.26 ± 0.21            0.00
p3           0.38 ± 0.18          0.29 ± 0.22            0.07
p2           0.33 ± 0.19          0.26 ± 0.23            0.20
mp           0.34 ± 0.17          0.25 ± 0.21            0.00
decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs contain complementary information that can enhance neural data decoding. However, a systematic approach to combine these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined in both the spike train space and the LFP space, appears to be a very productive way to achieve our goals. We have used two types of combination kernels to achieve the multiscale combination: sum kernels to "average" across different spike channels as well as across LFP channels, which combine evidence for the neural event in each modality, and product kernels across the spike and LFP modalities, which emphasize events that are represented in both modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step in a long process of optimizing the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events, because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires that the neural event be present at both scales to be useful for decoding, which improves specificity. If we look carefully at Figure 6, we can understand the effect of decoding with the product kernel. Notice that the correlation times of spikes and LFPs are very different (LFPs have a much longer correlation time). Moreover, the composite kernel definition can be naturally configured to different brain areas and even neuronal types with distinctive firing patterns. Each pattern will lead to different correlation profiles, which will immediately tune the properties of the kernels across brain areas and neural populations. If only LFPs are used, we can expect that the response time of the decoder will be very long and miss some events. The product kernel in fact limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel. This explains the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information
from spikes and LFPs into the same model and enhance the robustness and accuracy of neural decoding.
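The sum-then-product composition described above can be sketched numerically. In the sketch below, Gaussian kernels on binned feature vectors stand in for the paper's spike train and LFP kernels (an assumption for illustration only); since sums and products of positive definite kernels are positive definite, the composite remains a valid kernel.

```python
import numpy as np

def gauss_kernel(x, y, sigma):
    """Gaussian kernel between two feature vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def multiscale_kernel(spikes_i, spikes_j, lfps_i, lfps_j, sig_s=1.0, sig_l=1.0):
    """Tensor-product kernel over multiscale neural activity.

    Sum (here, mean) kernels pool evidence across spike channels and across
    LFP channels; the product couples the two modalities, so the joint
    similarity is large only when BOTH modalities agree.
    """
    k_spike = np.mean([gauss_kernel(a, b, sig_s) for a, b in zip(spikes_i, spikes_j)])
    k_lfp = np.mean([gauss_kernel(a, b, sig_l) for a, b in zip(lfps_i, lfps_j)])
    return k_spike * k_lfp
```

A decoder such as a kernel adaptive filter only ever touches the data through this scalar evaluation, which is what makes the spike/LFP heterogeneity transparent to the learning algorithm.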
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: how to emulate "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes or LFPs alone (results not shown). This result can be expected, because the inverse controller is basically a decoder. However, we have to realize that not all tasks of interest reduce to neural decoding, and we do not even know whether neural control can be further improved by a different kernel design. This is where further research is necessary to optimize the joint kernels. For instance, we can weight both the channel information and the multiscale information to maximize the task performance using metric learning [40].
Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical way to leverage heterogeneous neural activity in decoding and control scenarios, and it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J. K. Chapin, K. A. Moxon, R. S. Markowitz, and M. A. L. Nicolelis, "Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex," Nature Neuroscience, vol. 2, no. 7, pp. 664–670, 1999.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098–1101, 2008.
[3] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, "Direct cortical control of 3D neuroprosthetic devices," Science, vol. 296, no. 5574, pp. 1829–1832, 2002.
[4] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141–142, 2002.
[5] J. M. Carmena, M. A. Lebedev, R. E. Crist et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, pp. 193–208, 2003.
[50] W Liu I Park and J C Prıncipe ldquoAn information theoreticapproach of designing sparse kernel adaptive filtersrdquo IEEETransactions on Neural Networks vol 20 no 12 pp 1950ndash19612009
[51] B Widrow and E Walach Adaptive Inverse Control Edited byE Cliff Prentice-Hall 1995
[52] J T Francis S Xu and J K Chapin ldquoProprioceptive andcutaneous representations in the rat ventral posterolateralthalamusrdquo Journal of Neurophysiology vol 99 no 5 pp 2291ndash2304 2008
[53] G Paxinos and C WatsonThe Rat Brain in Stereotaxic Coordi-nates Academic Press 6th edition 2006
[54] G Foffani J K Chapin and K A Moxon ldquoComputational roleof large receptive fields in the primary somatosensory cortexrdquoJournal of Neurophysiology vol 100 no 1 pp 268ndash280 2008
[55] K Fukunaga andLDHostetler ldquoThe estimation of the gradientof a density function with application in pattern recognitionrdquoIEEE Transactions on Information Theory vol 21 no 1 pp 32ndash40 1975
12 Computational Intelligence and Neuroscience
[Figure 9: four segments of the continuous recording; in each, the top panels plot spike trains and LFPs (V), and the bottom panel plots the stimulation current (μA), against time (s); subfigures (a) and (b).]
Figure 9: Neural responses to natural and virtual touches for touch on digit 1 (d1), along with the microstimulation corresponding to the virtual touches. Each of the four subfigures corresponds to a different segment of the continuous recording. In each subfigure, the timing of the touches and the spatiotemporal patterns of spike trains and LFPs are shown in the top two panels; the bottom panel shows the spatiotemporal pattern of microstimulation, where different colors represent different microstimulation channels. The neural responses are time-locked but not concurrently recorded, as the entire natural touch response is given as input to the controller, which generates the optimized microstimulation patterns. When the optimized microstimulation is applied in the VPL, it generates S1 neural responses that qualitatively match the natural, that is, the target, response.
[Figure 10: correlation coefficient versus time lag (s), from −0.15 to 0.15 s, for the spike trains (left panel) and the local field potentials (right panel).]
Figure 10: Correlation coefficients between the controlled neural system output and the corresponding target neural response evoked by actual touch. The boxplots represent the results of 6 test trials, each corresponding to a particular touch site (digits and pads d1, d2, d4, p3, p1, and mp).
To evaluate performance, we concentrate on the following two aspects of virtual touches.

(i) Touch Timing. Whether the neural response to virtual touch captures the timing of the actual target touch.

(ii) Touch Site. Whether the target touch site can be discriminated from the neural activity evoked by the microstimulation.
For touch timing, we estimate the correlation coefficients (CC) over time between virtual touch responses and the corresponding target natural touch responses. To simplify the spike train correlation estimation, we bin the data using 5 ms bins. The correlation coefficients of both spike trains and LFPs are calculated. Figure 10 shows the boxplot of the correlation coefficients (CC) over 6 test trials, each of which corresponds to a particular natural touch site, a forepaw digit or pad (d1, d2, d4, p3, p1, or mp). The maximum correlation coefficient is at lag zero for each trial, meaning that the virtual touch response is correctly time-locked. For each touch site, we estimate the similarity between the natural touch response and the virtual touch response in the following two cases.
(i) Matched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to the same touch site.

(ii) Unmatched Virtual. Pairs consist of a virtual touch trial and a natural touch trial corresponding to different touch sites.
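As a concrete illustration of the touch-timing analysis above (5 ms binning, then a correlation coefficient at each lag, with the peak expected at lag zero), here is a minimal sketch. The function names and the synthetic data are hypothetical, not the paper's actual pipeline:

```python
import numpy as np

def bin_spikes(spike_times, t_stop, bin_size=0.005):
    # Bin spike times (in seconds) into 5 ms count bins, as in the text.
    edges = np.arange(0.0, t_stop + bin_size, bin_size)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.astype(float)

def lagged_cc(x, y, max_lag):
    # Pearson correlation coefficient between binned responses x and y at
    # each integer lag (in bins); the virtual touch response is correctly
    # time-locked when the maximum occurs at lag zero.
    lags = np.arange(-max_lag, max_lag + 1)
    ccs = []
    for lag in lags:
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        ccs.append(np.corrcoef(a, b)[0, 1])
    return lags, np.array(ccs)
```

For example, comparing a binned target response with a noisy copy of itself places the peak CC at lag zero, which is the signature of correct time-locking reported for Figure 10.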
We extract all the neural responses in the 300 ms window after touch onset and calculate the correlation coefficients between natural touch responses and virtual touch responses across each pair of trials. The one-tailed Kolmogorov-Smirnov (KS) test is used to test the alternative
Table 2: Average and standard deviation of the correlation coefficient (CC) between natural touch spike train responses and virtual touch spike train responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site | Matched virtual CC | Unmatched virtual CC | P value
d1 | 0.42 ± 0.06 | 0.35 ± 0.06 | 0.00
d2 | 0.40 ± 0.05 | 0.37 ± 0.06 | 0.01
d4 | 0.40 ± 0.05 | 0.37 ± 0.05 | 0.02
p3 | 0.38 ± 0.05 | 0.37 ± 0.06 | 0.11
p2 | 0.40 ± 0.07 | 0.36 ± 0.05 | 0.00
mp | 0.41 ± 0.07 | 0.37 ± 0.06 | 0.00
hypothesis that the distribution of the correlation coefficients for the matched virtual case is higher than the distribution for the unmatched virtual case (the null hypothesis is that the distributions are the same). The correlation coefficients and P values of the KS test for spike trains and LFPs are shown in Tables 2 and 3. The similarity between natural touch responses and virtual touch responses in the unmatched virtual case is found to be significantly lower than in the matched virtual case for most touch sites (P value < 0.05), except for touch site p3. Without psychophysical testing, it is unclear how effective the microstimulations are in producing true sensory sensations. Nonetheless, these are promising results that show the effectiveness of a controller utilizing the multiscale neural decoding methodology.
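A minimal sketch of this one-sided two-sample KS comparison, using SciPy and synthetic CC samples in place of the real matched/unmatched distributions (the numbers loosely echo Table 2's d1 row and are illustrative only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the measured correlation coefficients.
matched = rng.normal(0.42, 0.06, 120)    # matched virtual pairs
unmatched = rng.normal(0.35, 0.06, 120)  # unmatched virtual pairs

# With SciPy's convention, alternative='less' tests the alternative that
# the CDF of the first sample lies below that of the second, i.e. that
# the matched CCs are stochastically larger than the unmatched ones.
stat, p_value = stats.ks_2samp(matched, unmatched, alternative='less')
print(f"KS statistic = {stat:.3f}, P value = {p_value:.3g}")
```

A small P value rejects the null hypothesis that the two CC distributions are the same in favor of the matched distribution being higher, mirroring the per-site tests in Tables 2 and 3.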
Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site | Matched virtual CC | Unmatched virtual CC | P value
d1 | 0.42 ± 0.20 | 0.28 ± 0.23 | 0.00
d2 | 0.46 ± 0.13 | 0.28 ± 0.22 | 0.00
d4 | 0.41 ± 0.19 | 0.26 ± 0.21 | 0.00
p3 | 0.38 ± 0.18 | 0.29 ± 0.22 | 0.07
p2 | 0.33 ± 0.19 | 0.26 ± 0.23 | 0.20
mp | 0.34 ± 0.17 | 0.25 ± 0.21 | 0.00

6. Conclusions

This work proposes a novel tensor-product-kernel-based machine learning framework, which provides a way to
decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs contain complementary information that can enhance neural data decoding. However, a systematic approach to combine these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined in both the spike train space and the LFP space, appears to be a very productive approach to achieve our goals. We have used two types of combination kernels to achieve the multiscale combination: sum kernels, to "average" across different spike channels as well as across LFP channels, which combine evidence for the neural event in each modality; and product kernels, across the spike and LFP modalities, to emphasize events that are represented in both modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step in a long process of optimizing the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires that the neural event be present at both scales to be useful for decoding, which improves specificity. If we look carefully at Figure 6, we can understand the effect of decoding with the product kernel. Notice that the correlation times of spikes and LFPs are very different (LFPs have a much longer correlation time). Moreover, the composite kernel definition can be naturally configured for different brain areas and even neuronal types with distinctive firing patterns. Each pattern will lead to different correlation profiles, which will immediately tune the properties of the kernels across brain areas and neural populations. If only LFPs are used, we can expect that the response time of the decoder will be very long and miss some events. The product kernel in fact limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel. This explains the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information from spikes and LFPs into the same model and enhance the robustness and accuracy of neural decoding.
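To make the sum-then-product construction concrete, here is a hedged sketch of how such a composite Gram matrix could be assembled. Gaussian kernels on per-channel feature vectors stand in for the paper's actual spike train and LFP kernels, and all names are illustrative; by the Schur product theorem, the elementwise product of the two positive definite sum kernels is itself positive definite.

```python
import numpy as np

def gaussian_gram(X, sigma):
    # Gram matrix of a Gaussian kernel over the rows of X (one row per trial).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def multiscale_gram(spike_feats, lfp_feats, sigma_s=1.0, sigma_l=1.0):
    # Sum kernel within each modality: "averages" evidence across the
    # spike channels and across the LFP channels, respectively.
    K_spike = sum(gaussian_gram(F, sigma_s) for F in spike_feats)
    K_lfp = sum(gaussian_gram(F, sigma_l) for F in lfp_feats)
    # Product kernel across modalities: an event must be expressed at
    # both scales to produce a large joint similarity.
    return K_spike * K_lfp
```

The resulting matrix can be plugged into any kernel machine (e.g., a kernel adaptive filter) exactly as a single-modality Gram matrix would be.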
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: how to emulate "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes or LFPs alone (results not shown). This result can be expected because the inverse controller is basically a decoder. However, we have to realize that not all tasks of interest reduce to neural decoding, and we do not even know if neural control can be further improved by a different kernel design. This is where further research is necessary to optimize the joint kernels. For instance, we can weight both the channel information and the multiscale information to maximize task performance using metric learning [40].
Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical way to leverage heterogeneous neural activities in decoding and control scenarios, and it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J. K. Chapin, K. A. Moxon, R. S. Markowitz, and M. A. L. Nicolelis, "Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex," Nature Neuroscience, vol. 2, no. 7, pp. 664–670, 1999.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098–1101, 2008.
[3] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, "Direct cortical control of 3D neuroprosthetic devices," Science, vol. 296, no. 5574, pp. 1829–1832, 2002.
[4] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141–142, 2002.
[5] J. M. Carmena, M. A. Lebedev, R. E. Crist et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, pp. 193–208, 2003.
[6] J. DiGiovanna, B. Mahmoudi, J. Fortes, J. C. Principe, and J. C. Sanchez, "Coadaptive brain-machine interface via reinforcement learning," IEEE Transactions on Biomedical Engineering, vol. 56, no. 1, pp. 54–64, 2009.
[7] A. Belitski, S. Panzeri, C. Magri, N. K. Logothetis, and C. Kayser, "Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 533–545, 2010.
[8] J. R. Huxter, T. J. Senior, K. Allen, and J. Csicsvari, "Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus," Nature Neuroscience, vol. 11, no. 5, pp. 587–594, 2008.
[9] M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis, "Inferring spike trains from local field potentials," Journal of Neurophysiology, vol. 99, no. 3, pp. 1461–1476, 2008.
[10] G. Buzsáki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926–1929, 2004.
[11] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, Springer, 1991.
[12] R. E. Kass, V. Ventura, and E. N. Brown, "Statistical issues in the analysis of neuronal data," Journal of Neurophysiology, vol. 94, no. 1, pp. 8–25, 2005.
[13] R. D. Flint, E. W. Lindberg, L. R. Jordan, L. E. Miller, and M. W. Slutzky, "Accurate decoding of reaching movements from field potentials in the absence of spikes," Journal of Neural Engineering, vol. 9, no. 4, Article ID 046006, 2012.
[14] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee, "Local field potentials indicate network state and account for neuronal response variability," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 567–579, 2010.
[15] J. Liu and W. T. Newsome, "Local field potential in cortical area MT: stimulus tuning and behavioral correlations," Journal of Neuroscience, vol. 26, no. 30, pp. 7779–7790, 2006.
[16] P. Berens, G. Keliris, A. Ecker, N. Logothetis, and A. Tolias, "Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex," Frontiers in Neuroscience, vol. 2, Article ID 199207, 2008.
[17] D. Xing, C.-I. Yeh, and R. M. Shapley, "Spatial spread of the local field potential and its laminar variation in visual cortex," Journal of Neuroscience, vol. 29, no. 37, pp. 11540–11549, 2009.
[18] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, Adaptive Computation and Machine Learning series, MIT Press, 2002.
[19] L. Li, I. M. Park, A. Brockmeier et al., "Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 532–543, 2013.
[20] S. Monaco, G. Kroliczak, D. J. Quinlan et al., "Contribution of visual and proprioceptive information to the precision of reaching movements," Experimental Brain Research, vol. 202, no. 1, pp. 15–32, 2010.
[21] R. S. Johansson and G. Westling, "Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects," Experimental Brain Research, vol. 56, no. 3, pp. 550–564, 1984.
[22] J. E. O'Doherty, M. A. Lebedev, P. J. Ifft et al., "Active tactile exploration using a brain-machine-brain interface," Nature, vol. 479, no. 7372, pp. 228–231, 2011.
[23] J. E. O'Doherty, M. A. Lebedev, T. L. Hanson, N. A. Fitzsimmons, and M. A. Nicolelis, "A brain-machine interface instructed by direct intracortical microstimulation," Frontiers in Integrative Neuroscience, vol. 3, 2009.
[24] N. A. Fitzsimmons, W. Drake, T. L. Hanson, M. A. Lebedev, and M. A. L. Nicolelis, "Primate reaching cued by multichannel spatiotemporal cortical microstimulation," The Journal of Neuroscience, vol. 27, no. 21, pp. 5593–5602, 2007.
[25] X.-J. Feng, B. Greenwald, H. Rabitz, E. Shea-Brown, and R. Kosut, "Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model," Journal of Neural Engineering, vol. 4, no. 2, pp. L14–L21, 2007.
[26] J. Liu, H. K. Khalil, and K. G. Oweiss, "Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states," Journal of Neural Engineering, vol. 8, no. 4, Article ID 045002, 2011.
[27] A. J. Brockmeier, J. S. Choi, M. M. Distasio, J. T. Francis, and J. C. Príncipe, "Optimizing microstimulation using a reinforcement learning framework," in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 1069–1072, September 2011.
[28] A. Brockmeier, J. Choi, M. Emigh, L. Li, J. Francis, and J. Principe, "Subspace matching thalamic microstimulation to tactile evoked potentials in rat somatosensory cortex," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2957–2960, 2012.
[29] Y. Ahmadian, A. M. Packer, R. Yuste, and L. Paninski, "Designing optimal stimuli to control neuronal spike timing," Journal of Neurophysiology, vol. 106, no. 2, pp. 1038–1053, 2011.
[30] J. Moehlis, E. Shea-Brown, and H. Rabitz, "Optimal inputs for phase models of spiking neurons," Journal of Computational and Nonlinear Dynamics, vol. 1, no. 4, pp. 358–367, 2006.
[31] D. Brugger, S. Butovas, M. Bogdan, and C. Schwarz, "Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials," IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1483–1491, 2011.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[33] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[34] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering, John Wiley & Sons, 2010.
[35] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
[36] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," The Journal of Machine Learning Research, vol. 5, pp. 27–72, 2004.
[37] C. Cortes, M. Mohri, and A. Rostamizadeh, "Algorithms for learning kernels based on centered alignment," Journal of Machine Learning Research, vol. 13, pp. 795–828, 2012.
[38] M. Yamada, W. Jitkrittum, L. Sigal, E. P. Xing, and M. Sugiyama, "High-dimensional feature selection by feature-wise kernelized lasso," Neural Computation, vol. 26, no. 1, pp. 185–207, 2013.
[39] R. A. Horn, "On infinitely divisible matrices, kernels, and functions," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 8, no. 3, pp. 219–230, 1967.
[40] A. J. Brockmeier, J. S. Choi, E. G. Kriminger, J. T. Francis, and J. C. Principe, "Neural decoding with kernel-based metric learning," Neural Computation, vol. 26, no. 6, 2014.
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, Spikernels: Embedding Spiking Neurons in Inner Product Spaces, vol. 15 of Advances in Neural Information Processing Systems, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Principe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223–2250, 2012.
[44] L. Li, Kernel based machine learning framework for neural decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453–466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213–225, 1991.
[48] L. Csató and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641–668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275–2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950–1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291–2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268–280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with applications in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975.
Computational Intelligence and Neuroscience 13
003 006 009 012 015006
012
018
024
03
036
042
Time (s) Time (s)
Cor
relat
ion
coeffi
cien
t
Cor
relat
ion
coeffi
cien
t
Spike train
minus015 minus012 minus009 minus006 minus003 0 003 006 009 012 015
minus03
0
0
015
03
045Local field potential
minus015 minus012 minus009minus045
minus015
minus006minus003
Figure 10 Correlation coefficients between the controlled neural system output and the corresponding target neural response stimulatedby actual touch Boxplot of correlation coefficients represents the results of 6 test trials Each trial is corresponding to a particular touch site(digits d1 d2 d4 p3 p1 and mp)
To evaluate performance we concentrate on the followingtwo aspects of virtual touches
(i) Touch Timing Whether the neural response to virtualtouch is capable of capturing the timing information of theactual target touch is studied
(ii) Touch Site Whether the actual target touch site infor-mation can be discriminately represented by neural activitycontrolled by the microcirculation is studied
For touch timing we estimate the correlation coefficients(CC) over time between virtual touch responses and thecorresponding target natural touch responses To simplifythe spike train correlation estimation we bin the data using5ms binsThe correlation coefficients of both spike trains andLFPs are calculated Figure 10 shows the boxplot plot of thecorrelation coefficients (CC) over 6 test trials each of whichcorresponds to a particular natural touch site a forepaw digitor pad (d1 d2 d4 p3 p1 or mp) It is observed that themaximum correlation coefficient is at lag zero for each trialmeaning that the virtual touch response is correctly time-locked For each touch site we estimate the similarity betweenthe natural touch response and virtual touch response in thefollowing two cases
(i) Matched Virtual Pairs consist of a virtual touch trial and anatural touch trial corresponding to the same touch site
(ii) Unmatched Virtual Pairs consist of a virtual touch trialand a natural touch trial corresponding to different touchsites
We extract all the neural responses in the 300ms windowafter touch onset and calculate the correlation coefficientsbetween natural touch responses and virtual touch responseacross each pair of trials The one-tailed Kolmogorov-Smirnov test (KS) is implemented to test the alternative
Table 2 Average and standard deviation of the correlation coeffi-cient (CC) between natural touch spike train responses and virtualtouch spike train responses (matched or unmatched) The P value isfor the one-sided KS test between the matched and unmatched CCdistributions
Touch site CCMatched virtual Unmatched virtual P value
d1 042 plusmn 006 035 plusmn 006 000d2 040 plusmn 005 037 plusmn 006 001d4 040 plusmn 005 037 plusmn 005 002p3 038 plusmn 005 037 plusmn 006 011p2 040 plusmn 007 036 plusmn 005 000mp 041 plusmn 007 037 plusmn 006 000
hypothesis that the distribution of the correlation coefficientsfor the matched virtual case is higher than the distributionfor the unmatched virtual case (the null hypothesis is that thedistributions are the same)The correlation coefficients and119875value of KS test for spike trains and LFPs are shown in Tables2 and 3 The similarity between natural touch responsesand virtual touch responses in the unmatched virtual caseis found to be significantly lower than the matched virtualcase for most touch sites (119875 value lt005) except for touchsite p3 Without psychophysical testing it is unclear howeffective the microstimulations are in producing true sensorysensations Nonetheless these are promising results to showthe effectiveness of a controller utilizing themultiscale neuraldecoding methodology
14 Computational Intelligence and Neuroscience

Table 3: Average and standard deviation of the correlation coefficient (CC) between natural touch LFP responses and virtual touch LFP responses (matched or unmatched). The P value is for the one-sided KS test between the matched and unmatched CC distributions.

Touch site    Matched virtual CC    Unmatched virtual CC    P value
d1            0.42 ± 0.20           0.28 ± 0.23             0.00
d2            0.46 ± 0.13           0.28 ± 0.22             0.00
d4            0.41 ± 0.19           0.26 ± 0.21             0.00
p3            0.38 ± 0.18           0.29 ± 0.22             0.07
p2            0.33 ± 0.19           0.26 ± 0.23             0.20
mp            0.34 ± 0.17           0.25 ± 0.21             0.00

6. Conclusions

This work proposes a novel tensor-product-kernel-based machine learning framework that provides a way to decode stimulation information from the spatiotemporal patterns of multiscale neural activity (e.g., spike trains and LFPs). It has been hypothesized that spike trains and LFPs carry complementary information that can enhance neural decoding. However, a systematic approach for combining these two distinct neural responses in a single signal processing framework has remained elusive. The combination of positive definite kernels, which can be defined on both the spike train space and the LFP space, proves to be a very productive approach to this end. We used two types of combination kernels to achieve the multiscale integration: sum kernels, which "average" across spike channels as well as across LFP channels and thereby pool evidence for a neural event within each modality, and product kernels across the spike and LFP modalities, which emphasize events that are represented in both modalities. The results show that this approach enhances the accuracy and robustness of neural decoding and control. However, this paper should be interpreted as a first step in a long process of optimally exploiting the joint information contained in spike trains and LFPs. The first question is to understand why this combination of sum and product kernels works. Our analyses show that the sum kernel (particularly for the spike trains) brings stability to the neural events because it decreases the variability of the spike responses to stimuli. On the other hand, the product kernel requires that a neural event be present at both scales to be useful for decoding, which improves specificity. A careful look at Figure 6 reveals the effect of decoding with the product kernel: the correlation times of spikes and LFPs are very different (LFPs have a much longer correlation time). Moreover, the composite kernel definition can be naturally configured to different brain areas, and even to neuronal types with distinctive firing patterns; each pattern leads to a different correlation profile, which immediately tunes the properties of the kernels across brain areas and neural populations. If only LFPs were used, we would expect the response time of the decoder to be very long and some events to be missed. The product kernel in effect limits the duration of the LFP kernel to that of the spike kernel and brings stability to the spike kernel, which explains the decoding results. Therefore, the results show that the proposed tensor-product-kernel framework can effectively integrate the information from spikes and LFPs into the same model and enhance the robustness and accuracy of neural decoding.
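The sum-then-product construction described above can be sketched in a few lines. This is an illustrative composition, not the paper's exact kernels: the spike kernel here is a simple pairwise-Laplacian (cross-intensity-style) kernel and the LFP kernel a Gaussian on windowed signals, with hypothetical bandwidths. A sum kernel pools the channels within each modality, and the tensor-product kernel multiplies the two modality kernels, so the joint similarity is large only when both scales agree.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Kernel between two windowed continuous signals (e.g., one LFP channel)
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.exp(-(d @ d) / (2.0 * sigma**2)))

def spike_kernel(s, t, tau=0.05):
    # Cross-intensity-style spike train kernel: sum of Laplacian
    # interactions between all pairs of spike times (positive definite)
    s, t = np.asarray(s, float), np.asarray(t, float)
    if s.size == 0 or t.size == 0:
        return 0.0
    return float(np.exp(-np.abs(s[:, None] - t[None, :]) / tau).sum())

def sum_kernel(kernel, xs, ys):
    # Sum kernel: "average" the evidence across the channels of one modality
    return float(np.mean([kernel(x, y) for x, y in zip(xs, ys)]))

def tensor_product_kernel(spikes_a, lfps_a, spikes_b, lfps_b):
    # Product across modalities: large only when BOTH the spike and the
    # LFP responses are similar, which is what improves specificity
    return (sum_kernel(spike_kernel, spikes_a, spikes_b)
            * sum_kernel(gaussian_kernel, lfps_a, lfps_b))
```

Because sums and products of positive definite kernels remain positive definite, the composite kernel defines a valid reproducing kernel Hilbert space and can be plugged directly into any kernel adaptive filter.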
Furthermore, we applied the tensor-product-kernel framework in a more complex BMI scenario: how to emulate "natural touch" with microstimulation. Our preliminary results show that the kernel-based adaptive inverse control scheme employing the tensor-product-kernel framework also achieves better optimization of the microstimulation than spikes or LFPs alone (results not shown). This result is to be expected because the inverse controller is basically a decoder. However, we must recognize that not all tasks of interest reduce to neural decoding, and we do not yet know whether neural control can be further improved by a different kernel design. This is where further research is needed to optimize the joint kernels. For instance, we could weight both the channel information and the multiscale information to maximize task performance using metric learning [40].
Overall, the tensor-product-kernel-based framework proposed in this work provides a general and practical way to leverage heterogeneous neural activities in decoding and control scenarios, and it is not limited to spike train and LFP applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Ryan Burt for proofreading the paper. This work was supported in part by the US NSF Partnerships for Innovation Program 0650161 and DARPA Project N66001-10-C-2008.
References
[1] J. K. Chapin, K. A. Moxon, R. S. Markowitz, and M. A. L. Nicolelis, "Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex," Nature Neuroscience, vol. 2, no. 7, pp. 664–670, 1999.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, no. 7198, pp. 1098–1101, 2008.
[3] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, "Direct cortical control of 3D neuroprosthetic devices," Science, vol. 296, no. 5574, pp. 1829–1832, 2002.
[4] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Instant neural control of a movement signal," Nature, vol. 416, no. 6877, pp. 141–142, 2002.
[5] J. M. Carmena, M. A. Lebedev, R. E. Crist et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biology, vol. 1, no. 2, pp. 193–208, 2003.
[6] J. DiGiovanna, B. Mahmoudi, J. Fortes, J. C. Príncipe, and J. C. Sanchez, "Coadaptive brain-machine interface via reinforcement learning," IEEE Transactions on Biomedical Engineering, vol. 56, no. 1, pp. 54–64, 2009.
[7] A. Belitski, S. Panzeri, C. Magri, N. K. Logothetis, and C. Kayser, "Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 533–545, 2010.
[8] J. R. Huxter, T. J. Senior, K. Allen, and J. Csicsvari, "Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus," Nature Neuroscience, vol. 11, no. 5, pp. 587–594, 2008.
[9] M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis, "Inferring spike trains from local field potentials," Journal of Neurophysiology, vol. 99, no. 3, pp. 1461–1476, 2008.
[10] G. Buzsaki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926–1929, 2004.
[11] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, Springer, 1991.
[12] R. E. Kass, V. Ventura, and E. N. Brown, "Statistical issues in the analysis of neuronal data," Journal of Neurophysiology, vol. 94, no. 1, pp. 8–25, 2005.
[13] R. D. Flint, E. W. Lindberg, L. R. Jordan, L. E. Miller, and M. W. Slutzky, "Accurate decoding of reaching movements from field potentials in the absence of spikes," Journal of Neural Engineering, vol. 9, no. 4, Article ID 046006, 2012.
[14] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee, "Local field potentials indicate network state and account for neuronal response variability," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 567–579, 2010.
[15] J. Liu and W. T. Newsome, "Local field potential in cortical area MT: stimulus tuning and behavioral correlations," Journal of Neuroscience, vol. 26, no. 30, pp. 7779–7790, 2006.
[16] P. Berens, G. Keliris, A. Ecker, N. Logothetis, and A. Tolias, "Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex," Frontiers in Neuroscience, vol. 2, Article ID 199207, 2008.
[17] D. Xing, C.-I. Yeh, and R. M. Shapley, "Spatial spread of the local field potential and its laminar variation in visual cortex," Journal of Neuroscience, vol. 29, no. 37, pp. 11540–11549, 2009.
[18] B. Scholkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, Adaptive Computation and Machine Learning series, MIT Press, 2002.
[19] L. Li, I. M. Park, A. Brockmeier et al., "Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 532–543, 2013.
[20] S. Monaco, G. Kroliczak, D. J. Quinlan et al., "Contribution of visual and proprioceptive information to the precision of reaching movements," Experimental Brain Research, vol. 202, no. 1, pp. 15–32, 2010.
[21] R. S. Johansson and G. Westling, "Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects," Experimental Brain Research, vol. 56, no. 3, pp. 550–564, 1984.
[22] J. E. O'Doherty, M. A. Lebedev, P. J. Ifft et al., "Active tactile exploration using a brain-machine-brain interface," Nature, vol. 479, no. 7372, pp. 228–231, 2011.
[23] J. E. O'Doherty, M. A. Lebedev, T. L. Hanson, N. A. Fitzsimmons, and M. A. Nicolelis, "A brain-machine interface instructed by direct intracortical microstimulation," Frontiers in Integrative Neuroscience, vol. 3, 2009.
[24] N. A. Fitzsimmons, W. Drake, T. L. Hanson, M. A. Lebedev, and M. A. L. Nicolelis, "Primate reaching cued by multichannel spatiotemporal cortical microstimulation," The Journal of Neuroscience, vol. 27, no. 21, pp. 5593–5602, 2007.
[25] X.-J. Feng, B. Greenwald, H. Rabitz, E. Shea-Brown, and R. Kosut, "Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model," Journal of Neural Engineering, vol. 4, no. 2, pp. L14–L21, 2007.
[26] J. Liu, H. K. Khalil, and K. G. Oweiss, "Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states," Journal of Neural Engineering, vol. 8, no. 4, Article ID 045002, 2011.
[27] A. J. Brockmeier, J. S. Choi, M. M. Distasio, J. T. Francis, and J. C. Príncipe, "Optimizing microstimulation using a reinforcement learning framework," in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 1069–1072, September 2011.
[28] A. Brockmeier, J. Choi, M. Emigh, L. Li, J. Francis, and J. Príncipe, "Subspace matching thalamic microstimulation to tactile evoked potentials in rat somatosensory cortex," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2957–2960, 2012.
[29] Y. Ahmadian, A. M. Packer, R. Yuste, and L. Paninski, "Designing optimal stimuli to control neuronal spike timing," Journal of Neurophysiology, vol. 106, no. 2, pp. 1038–1053, 2011.
[30] J. Moehlis, E. Shea-Brown, and H. Rabitz, "Optimal inputs for phase models of spiking neurons," Journal of Computational and Nonlinear Dynamics, vol. 1, no. 4, pp. 358–367, 2006.
[31] D. Brugger, S. Butovas, M. Bogdan, and C. Schwarz, "Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials," IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1483–1491, 2011.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[33] A. J. Smola and B. Scholkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[34] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering, John Wiley & Sons, 2010.
[35] B. Scholkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
[36] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," The Journal of Machine Learning Research, vol. 5, pp. 27–72, 2004.
[37] C. Cortes, M. Mohri, and A. Rostamizadeh, "Algorithms for learning kernels based on centered alignment," Journal of Machine Learning Research, vol. 13, pp. 795–828, 2012.
[38] M. Yamada, W. Jitkrittum, L. Sigal, E. P. Xing, and M. Sugiyama, "High-dimensional feature selection by feature-wise kernelized lasso," Neural Computation, vol. 26, no. 1, pp. 185–207, 2013.
[39] R. A. Horn, "On infinitely divisible matrices, kernels, and functions," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 8, no. 3, pp. 219–230, 1967.
[40] A. J. Brockmeier, J. S. Choi, E. G. Kriminger, J. T. Francis, and J. C. Príncipe, "Neural decoding with kernel-based metric learning," Neural Computation, vol. 26, no. 6, 2014.
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, "Spikernels: embedding spiking neurons in inner product spaces," in Advances in Neural Information Processing Systems, vol. 15, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Príncipe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223–2250, 2012.
[44] L. Li, Kernel Based Machine Learning Framework for Neural Decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453–466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213–225, 1991.
[48] L. Csato and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641–668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275–2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950–1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291–2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268–280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with applications in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975.
Submit your manuscripts athttpwwwhindawicom
Computer Games Technology
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Distributed Sensor Networks
International Journal of
Advances in
FuzzySystems
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014
International Journal of
ReconfigurableComputing
Hindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Applied Computational Intelligence and Soft Computing
thinspAdvancesthinspinthinsp
Artificial Intelligence
HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014
Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Journal of
Computer Networks and Communications
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation
httpwwwhindawicom Volume 2014
Advances in
Multimedia
International Journal of
Biomedical Imaging
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
ArtificialNeural Systems
Advances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience
Industrial EngineeringJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Human-ComputerInteraction
Advances in
Computer EngineeringAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
14 Computational Intelligence and Neuroscience
Table 3 Average and standard deviation of the correlation coeffi-cient (CC) between natural touch LFP responses and virtual touchLFP responses (matched or unmatched) The P value is for the one-sidedKS test between thematched andunmatchedCCdistributions
Touch site CCMatched virtual Unmatched virtual P value
d1 042 plusmn 020 028 plusmn 023 000d2 046 plusmn 013 028 plusmn 022 000d4 041 plusmn 019 026 plusmn 021 000p3 038 plusmn 018 029 plusmn 022 007p2 033 plusmn 019 026 plusmn 023 020mp 034 plusmn 017 025 plusmn 021 000
decode stimulation information from the spatiotemporalpatterns of multiscale neural activity (eg spike trains andLFPs) It has been hypothesized that spike trains and LFPscontain complementary information that can enhance neuraldata decoding However a systematic approach to combinein a single signal processing framework these two distinctneural responses has remained elusive The combinationof positive definite kernels which can be defined in boththe spike train space and the LFP space seems to be avery productive approach to achieve our goals We havebasically used two types of combination kernels to achievethe multiscale combination sum kernels to ldquoaveragerdquo acrossdifferent spike channels as well as across LFP channels whichcombine evidence for the neural event in each modalityand product kernels across the spike and LFP modalitiesto emphasize events that are represented in both multiscalemodalities The results show that this approach enhancesthe accuracy and robustness of neural decoding and controlHowever this paper should be interpreted as a first step ofa long process to optimize the joint information containedin spike trains and LFPs The first question is to understandwhy this combination of sum and product kernels worksOur analyses show that the sum kernel (particularly for thespike trains) brings stability to the neural events because itdecreases the variability of the spike responses to stimuliOn the other hand the product kernel requires that theneural event presents at both scales to be useful for decodingwhich improves specificity If we look carefully at Figure 6we can understand the effect of decoding with the productkernel Notice that the correlation times of spikes and LFPsare very different (LFPs have a much longer correlationtime)Moreover composite kernel definition can be naturallyconfigured to different brain areas and even neuronal typeswith distinctive firing patterns Each pattern will lead todifferent correlation profiles which will 
immediately tunethe properties of the kernels across brain areas and neuralpopulations If only LFPs are used we can expect that theresponse time of the decoder will be very long andmiss someevents The product kernel in fact limits the duration of theLFP kernel to that of the spike kernel and brings stability tothe spike kernel This explains exactly the decoding resultsTherefore results show that the proposed tensor-product-kernel framework can effectively integrate the information
from spikes and LFPs into the same model and enhance theneural decoding robustness and accuracy
Furthermore we applied the tensor-product-kernelframework in a more complex BMI scenario how to emulateldquonatural touchrdquo with microstimulation Our preliminaryresults show that the kernel-based adaptive inverse controlscheme employing tensor-product-kernel framework alsoachieves better optimization of the microstimulation thanspikes and LFPs alone (results not shown) This result canbe expected because the inverse controller is basically adecoder However we have to realize that not all the tasksof interest reduce to neural decoding and we do not evenknow if neural control can be further improved by a differentkernel design This is where further research is necessary tooptimize the joint kernels For instance we can weight boththe channel information and the multiscale information tomaximize the task performance using metric learning [40]
Overall this tensor-product-kernel-based frameworkproposed in this work provides a general and practicalframework to leverage heterogeneous neural activities indecoding and control scenario which is not limited to spiketrains and LFPs applications
Conflict of Interests
The authors declare that there is no conflict of interests re-garding the publication of this paper
Acknowledgments
The authors would like to thank Ryan Burt for proofreadingthe paper This work was supported in part by the US NSFPartnerships for Innovation Program 0650161 and DARPAProject N66001-10-C-2008
References
[1] J K Chapin K A Moxon R S Markowitz and M A LNicolelis ldquoReal-time control of a robot arm using simultane-ously recorded neurons in the motor cortexrdquo Nature Neuro-science vol 2 no 7 pp 664ndash670 1999
[2] M Velliste S Perel M C Spalding A S Whitford and A BSchwartz ldquoCortical control of a prosthetic arm for self-feedingrdquoNature vol 453 no 7198 pp 1098ndash1101 2008
[3] DM Taylor S I H Tillery and A B Schwartz ldquoDirect corticalcontrol of 3D neuroprosthetic devicesrdquo Science vol 296 no5574 pp 1829ndash1832 2002
[4] M D Serruya N G Hatsopoulos L Paninski M R Fellowsand J P Donoghue ldquoInstant neural control of a movementsignalrdquo Nature vol 416 no 6877 pp 141ndash142 2002
[5] J M Carmena M A Lebedev R E Crist et al ldquoLearning tocontrol a brain-machine interface for reaching and grasping byprimatesrdquo PLoS Biology vol 1 no 2 pp 193ndash208 2003
[6] J DiGiovanna B Mahmoudi J Fortes J C Principe and JC Sanchez ldquoCoadaptive brain-machine interface via reinforce-ment learningrdquo IEEE Transactions on Biomedical Engineeringvol 56 no 1 pp 54ndash64 2009
Computational Intelligence and Neuroscience 15
[7] A Belitski S Panzeri CMagri N K Logothetis andC KayserldquoSensory information in local field potentials and spikes fromvisual and auditory cortices time scales and frequency bandsrdquoJournal of Computational Neuroscience vol 29 no 3 pp 533ndash545 2010
[8] J R Huxter T J Senior K Allen and J Csicsvari ldquoThetaphase-specific codes for two-dimensional position trajectoryand heading in the hippocampusrdquo Nature Neuroscience vol 11no 5 pp 587ndash594 2008
[9] M J Rasch A Gretton Y Murayama W Maass and N KLogothetis ldquoInferring spike trains from local field potentialsrdquoJournal of Neurophysiology vol 99 no 3 pp 1461ndash1476 2008
[10] G Buzsaki and A Draguhn ldquoNeuronal oscillation in corticalnetworksrdquo Science vol 304 no 5679 pp 1926ndash1929 2004
[11] D L Snyder and M I Miller Random Point Processes in Timeand Space Springer 1991
[12] R E Kass V Ventura and E N Brown ldquoStatistical issues in theanalysis of neuronal datardquo Journal of Neurophysiology vol 94no 1 pp 8ndash25 2005
[13] R D Flint E W Lindberg L R Jordan L E Miller and MW Slutzky ldquoAccurate decoding of reaching movements fromfield potentials in the absence of spikesrdquo Journal of NeuralEngineering vol 9 no 4 Article ID 046006 2012
[14] R C Kelly M A Smith R E Kass and T S Lee ldquoLocalfield potentials indicate network state and account for neuronalresponse variabilityrdquo Journal of Computational Neurosciencevol 29 no 3 pp 567ndash579 2010
[15] J Liu andW T Newsome ldquoLocal field potential in cortical areaMT stimulus tuning and behavioral correlationsrdquo Journal ofNeuroscience vol 26 no 30 pp 7779ndash7790 2006
[16] P Berens G Keliris A EckerN Logothetis andA Tolias ldquoFea-ture selectivity of the gamma-band of the local field potential inprimate primary visual cortexrdquo Frontiers in Neuroscience vol 2Article ID 199207 2008
[17] D Xing C-I Yeh andRM Shapley ldquoSpatial spread of the localfield potential and its laminar variation in visual cortexrdquo Journalof Neuroscience vol 29 no 37 pp 11540ndash11549 2009
[18] B Scholkopf and A J Smola Learning With Kernels SupportVectorMachines Regularization Optimization and Beyond SerAdaptive Computation andMachine Learning MIT Press 2002
[19] L Li I M Park A Brockmeier et al ldquoAdaptive inverse controlof neural spatiotemporal spike patterns with a reproducingkernel Hilbert space (RKHS) frameworkrdquo IEEE Transactions onNeural Systems and Rehabilitation Engineering vol 21 no 4 pp532ndash543 2013
[20] S Monaco G Kroliczak D J Quinlan et al ldquoContributionof visual and proprioceptive information to the precision ofreaching movementsrdquo Experimental Brain Research vol 202no 1 pp 15ndash32 2010
[21] R S Johansson and G Westling ldquoRoles of glabrous skinreceptors and sensorimotor memory in automatic control ofprecision grip when lifting rougher or more slippery objectsrdquoExperimental Brain Research vol 56 no 3 pp 550ndash564 1984
[22] J E OrsquoDoherty M A Lebedev P J Ifft et al ldquoActive tactileexploration using a brain-machine-brain interfacerdquoNature vol479 no 7372 pp 228ndash231 2011
[23] J E OrsquoDoherty M A Lebedev T L Hanson N AFitzsimmons and M A Nicolelis ldquoA brain-machine interfaceinstructed by direct intracorticalmicrostimulationrdquo Frontiers inIntegrative Neuroscience vol 3 2009
[24] N A Fitzsimmons W Drake T L Hanson M A LebedevandM A L Nicolelis ldquoPrimate reaching cued by multichannelspatiotemporal cortical microstimulationrdquo The Journal of Neu-roscience vol 27 no 21 pp 5593ndash5602 2007
[25] X-J Feng B Greenwald H Rabitz E Shea-Brown andR Kosut ldquoToward closed-loop optimization of deep brainstimulation for Parkinsonrsquos disease concepts and lessons from acomputational modelrdquo Journal of Neural Engineering vol 4 no2 pp L14ndashL21 2007
[26] J Liu H K Khalil and K G Oweiss ldquoModel-based analysisand control of a network of basal ganglia spiking neurons in thenormal and Parkinsonian statesrdquo Journal of Neural Engineeringvol 8 no 4 Article ID 045002 2011
[27] A J Brockmeier J S ChoiMMDistasio J T Francis and J CPrıncipe ldquoOptimizing microstimulation using a reinforcementlearning frameworkrdquo in Proceedings of the 33rd Annual Inter-national Conference of the IEEE Engineering in Medicine andBiology Society (EMBS rsquo11) pp 1069ndash1072 September 2011
[28] A Brockmeier J Choi M Emigh L Li J Francis andJ Principe ldquoSubspace matching thalamic microstimulationto tactile evoked potentials in rat somatosensory cortexrdquo inProceedings of the Annual International Conference of the IEEEEngineering in Medicine and Biology Society (EMBC rsquo12) pp2957ndash2960 2012
[29] Y Ahmadian A M Packer R Yuste and L Paninski ldquoDesign-ing optimal stimuli to control neuronal spike timingrdquo Journal ofNeurophysiology vol 106 no 2 pp 1038ndash1053 2011
[30] J Moehlis E Shea-Brown and H Rabitz ldquoOptimal inputs forphasemodels of spiking neuronsrdquo Journal of Computational andNonlinear Dynamics vol 1 no 4 pp 358ndash367 2006
[31] D Brugger S Butovas M Bogdan and C Schwarz ldquoReal-timeadaptive microstimulation increases reliability of electricallyevoked cortical potentialsrdquo IEEE Transactions on BiomedicalEngineering vol 58 no 5 pp 1483ndash1491 2011
[32] N Cristianini and J Shawe-Taylor An Introduction to SupportVector Machines and Other Kernel-Based Learning MethodsCambridge University Press 2000
[33] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004
[34] W Liu J C Prıncipe and S Haykin Kernel Adaptive FilteringEdited by S Haykin John Wiley amp Sons 2010
[35] B Schlkopf and A J Smola Learning With Kernels SupportVector Machines Regularization Optimization and BeyondMIT Press 2001
[36] G R G Lanckriet N Cristianini P Bartlett L El GhaouiandM I Jordan ldquoLearning the kernel matrix with semidefiniteprogrammingrdquo The Journal of Machine Learning Research vol5 pp 27ndash72 2004
[37] C Cortes M Mohri and A Rostamizadeh ldquoAlgorithms forlearning kernels based on centered alignmentrdquo Journal ofMachine Learning Research vol 13 pp 795ndash828 2012
[38] M YamadaW Jitkrittum L Sigal E P Xing andM SugiyamaldquoHigh-dimensional feature selection by feature-wise kernelizedlassordquo Neural Computation vol 26 no 1 pp 185ndash207 2013
[39] R A Horn ldquoOn infinitely divisible matrices kernels and func-tionsrdquo Zeitschrift fur Wahrscheinlichkeitstheorie und VerwandteGebiete vol 8 no 3 pp 219ndash230 1967
[40] A J Brockmeier J S Choi E G Kriminger J T Francisand J C Principe ldquoNeural decoding with kernel-based metriclearningrdquo Neural Computation vol 26 no 6 2014
16 Computational Intelligence and Neuroscience
[41] L Shpigelman Y Singer R Paz and E Vaadia SpikernelsEmbedding Spiking Neurons in Inner Product Spaces vol 15 ofAdvances in Neural Information Processing Systems MIT Press2003
[42] A R C Paiva I Park and J C Prıncipe ldquoA reproducing kernelHilbert space framework for spike train signal processingrdquoNeural Computation vol 21 no 2 pp 424ndash449 2009
[43] I M Park S Seth M Rao and J C Principe ldquoStrictly positivedefinite spike train kernels for point process divergencesrdquoNeural Computation vol 24 no 8 pp 2223ndash2250 2012
[44] L Li kernel based machine learning framework for neuraldecoding [PhD dissertation] University of Florida 2012
[45] H RamlauHansen ldquoSmoothing counting process intensities bymeans of kernel functionsrdquo The Annals of Statistics vol 11 no2 pp 453ndash466 1983
[46] P Dayan and L F Abbott Theoretical Neuroscience Computa-tional andMathematicalModeling of Neural SystemsMIT Press2001
[47] J Platt ldquoA resource-allocating network for function interpola-tionrdquo Neural Computation vol 3 no 2 pp 213ndash225 1991
[48] L Csato and M Opper ldquoSparse on-line gaussian processesrdquoNeural Computation vol 14 no 3 pp 641ndash668 2002
[49] Y Engel S Mannor and R Meir ldquoThe kernel recursive least-squares algorithmrdquo IEEE Transactions on Signal Processing vol52 no 8 pp 2275ndash2285 2004
[50] W Liu I Park and J C Prıncipe ldquoAn information theoreticapproach of designing sparse kernel adaptive filtersrdquo IEEETransactions on Neural Networks vol 20 no 12 pp 1950ndash19612009
[51] B Widrow and E Walach Adaptive Inverse Control Edited byE Cliff Prentice-Hall 1995
[52] J T Francis S Xu and J K Chapin ldquoProprioceptive andcutaneous representations in the rat ventral posterolateralthalamusrdquo Journal of Neurophysiology vol 99 no 5 pp 2291ndash2304 2008
[53] G Paxinos and C WatsonThe Rat Brain in Stereotaxic Coordi-nates Academic Press 6th edition 2006
[54] G Foffani J K Chapin and K A Moxon ldquoComputational roleof large receptive fields in the primary somatosensory cortexrdquoJournal of Neurophysiology vol 100 no 1 pp 268ndash280 2008
[55] K Fukunaga andLDHostetler ldquoThe estimation of the gradientof a density function with application in pattern recognitionrdquoIEEE Transactions on Information Theory vol 21 no 1 pp 32ndash40 1975
Computational Intelligence and Neuroscience 15
[7] A. Belitski, S. Panzeri, C. Magri, N. K. Logothetis, and C. Kayser, "Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 533–545, 2010.
[8] J. R. Huxter, T. J. Senior, K. Allen, and J. Csicsvari, "Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus," Nature Neuroscience, vol. 11, no. 5, pp. 587–594, 2008.
[9] M. J. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis, "Inferring spike trains from local field potentials," Journal of Neurophysiology, vol. 99, no. 3, pp. 1461–1476, 2008.
[10] G. Buzsáki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926–1929, 2004.
[11] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, Springer, 1991.
[12] R. E. Kass, V. Ventura, and E. N. Brown, "Statistical issues in the analysis of neuronal data," Journal of Neurophysiology, vol. 94, no. 1, pp. 8–25, 2005.
[13] R. D. Flint, E. W. Lindberg, L. R. Jordan, L. E. Miller, and M. W. Slutzky, "Accurate decoding of reaching movements from field potentials in the absence of spikes," Journal of Neural Engineering, vol. 9, no. 4, Article ID 046006, 2012.
[14] R. C. Kelly, M. A. Smith, R. E. Kass, and T. S. Lee, "Local field potentials indicate network state and account for neuronal response variability," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 567–579, 2010.
[15] J. Liu and W. T. Newsome, "Local field potential in cortical area MT: stimulus tuning and behavioral correlations," Journal of Neuroscience, vol. 26, no. 30, pp. 7779–7790, 2006.
[16] P. Berens, G. Keliris, A. Ecker, N. Logothetis, and A. Tolias, "Feature selectivity of the gamma-band of the local field potential in primate primary visual cortex," Frontiers in Neuroscience, vol. 2, Article ID 199207, 2008.
[17] D. Xing, C.-I. Yeh, and R. M. Shapley, "Spatial spread of the local field potential and its laminar variation in visual cortex," Journal of Neuroscience, vol. 29, no. 37, pp. 11540–11549, 2009.
[18] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, ser. Adaptive Computation and Machine Learning, MIT Press, 2002.
[19] L. Li, I. M. Park, A. Brockmeier et al., "Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 532–543, 2013.
[20] S. Monaco, G. Kroliczak, D. J. Quinlan et al., "Contribution of visual and proprioceptive information to the precision of reaching movements," Experimental Brain Research, vol. 202, no. 1, pp. 15–32, 2010.
[21] R. S. Johansson and G. Westling, "Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects," Experimental Brain Research, vol. 56, no. 3, pp. 550–564, 1984.
[22] J. E. O'Doherty, M. A. Lebedev, P. J. Ifft et al., "Active tactile exploration using a brain-machine-brain interface," Nature, vol. 479, no. 7372, pp. 228–231, 2011.
[23] J. E. O'Doherty, M. A. Lebedev, T. L. Hanson, N. A. Fitzsimmons, and M. A. Nicolelis, "A brain-machine interface instructed by direct intracortical microstimulation," Frontiers in Integrative Neuroscience, vol. 3, 2009.
[24] N. A. Fitzsimmons, W. Drake, T. L. Hanson, M. A. Lebedev, and M. A. L. Nicolelis, "Primate reaching cued by multichannel spatiotemporal cortical microstimulation," The Journal of Neuroscience, vol. 27, no. 21, pp. 5593–5602, 2007.
[25] X.-J. Feng, B. Greenwald, H. Rabitz, E. Shea-Brown, and R. Kosut, "Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model," Journal of Neural Engineering, vol. 4, no. 2, pp. L14–L21, 2007.
[26] J. Liu, H. K. Khalil, and K. G. Oweiss, "Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states," Journal of Neural Engineering, vol. 8, no. 4, Article ID 045002, 2011.
[27] A. J. Brockmeier, J. S. Choi, M. M. Distasio, J. T. Francis, and J. C. Príncipe, "Optimizing microstimulation using a reinforcement learning framework," in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 1069–1072, September 2011.
[28] A. Brockmeier, J. Choi, M. Emigh, L. Li, J. Francis, and J. Príncipe, "Subspace matching thalamic microstimulation to tactile evoked potentials in rat somatosensory cortex," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2957–2960, 2012.
[29] Y. Ahmadian, A. M. Packer, R. Yuste, and L. Paninski, "Designing optimal stimuli to control neuronal spike timing," Journal of Neurophysiology, vol. 106, no. 2, pp. 1038–1053, 2011.
[30] J. Moehlis, E. Shea-Brown, and H. Rabitz, "Optimal inputs for phase models of spiking neurons," Journal of Computational and Nonlinear Dynamics, vol. 1, no. 4, pp. 358–367, 2006.
[31] D. Brugger, S. Butovas, M. Bogdan, and C. Schwarz, "Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials," IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1483–1491, 2011.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[33] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[34] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering, John Wiley & Sons, 2010.
[35] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
[36] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," The Journal of Machine Learning Research, vol. 5, pp. 27–72, 2004.
[37] C. Cortes, M. Mohri, and A. Rostamizadeh, "Algorithms for learning kernels based on centered alignment," Journal of Machine Learning Research, vol. 13, pp. 795–828, 2012.
[38] M. Yamada, W. Jitkrittum, L. Sigal, E. P. Xing, and M. Sugiyama, "High-dimensional feature selection by feature-wise kernelized lasso," Neural Computation, vol. 26, no. 1, pp. 185–207, 2013.
[39] R. A. Horn, "On infinitely divisible matrices, kernels, and functions," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 8, no. 3, pp. 219–230, 1967.
[40] A. J. Brockmeier, J. S. Choi, E. G. Kriminger, J. T. Francis, and J. C. Príncipe, "Neural decoding with kernel-based metric learning," Neural Computation, vol. 26, no. 6, 2014.
[41] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia, "Spikernels: embedding spiking neurons in inner product spaces," in Advances in Neural Information Processing Systems, vol. 15, MIT Press, 2003.
[42] A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for spike train signal processing," Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[43] I. M. Park, S. Seth, M. Rao, and J. C. Príncipe, "Strictly positive definite spike train kernels for point process divergences," Neural Computation, vol. 24, no. 8, pp. 2223–2250, 2012.
[44] L. Li, A Kernel-Based Machine Learning Framework for Neural Decoding [Ph.D. dissertation], University of Florida, 2012.
[45] H. Ramlau-Hansen, "Smoothing counting process intensities by means of kernel functions," The Annals of Statistics, vol. 11, no. 2, pp. 453–466, 1983.
[46] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
[47] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213–225, 1991.
[48] L. Csató and M. Opper, "Sparse on-line Gaussian processes," Neural Computation, vol. 14, no. 3, pp. 641–668, 2002.
[49] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275–2285, 2004.
[50] W. Liu, I. Park, and J. C. Príncipe, "An information theoretic approach of designing sparse kernel adaptive filters," IEEE Transactions on Neural Networks, vol. 20, no. 12, pp. 1950–1961, 2009.
[51] B. Widrow and E. Walach, Adaptive Inverse Control, Prentice-Hall, 1995.
[52] J. T. Francis, S. Xu, and J. K. Chapin, "Proprioceptive and cutaneous representations in the rat ventral posterolateral thalamus," Journal of Neurophysiology, vol. 99, no. 5, pp. 2291–2304, 2008.
[53] G. Paxinos and C. Watson, The Rat Brain in Stereotaxic Coordinates, Academic Press, 6th edition, 2006.
[54] G. Foffani, J. K. Chapin, and K. A. Moxon, "Computational role of large receptive fields in the primary somatosensory cortex," Journal of Neurophysiology, vol. 100, no. 1, pp. 268–280, 2008.
[55] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function, with applications in pattern recognition," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975.