
arXiv:2102.02729v3 [cs.LG] 11 Feb 2021

Adversarial Attacks and Defenses in Physiological Computing: A Systematic Review

Dongrui Wu, Senior Member, IEEE, Weili Fang, Yi Zhang, Liuqing Yang, Xiaodong Xu, Hanbin Luo and Xiang Yu

Abstract—Physiological computing uses human physiological data as system inputs in real time. It includes, or significantly overlaps with, brain-computer interfaces, affective computing, adaptive automation, health informatics, and physiological signal based biometrics. Physiological computing increases the communication bandwidth from the user to the computer, but is also subject to various types of adversarial attacks, in which the attacker deliberately manipulates the training and/or test examples to hijack the output of the machine learning algorithm, possibly leading to user confusion, frustration, injury, or even death. However, the vulnerability of physiological computing systems has not received adequate attention, and there is no comprehensive review of adversarial attacks against them. This paper fills this gap by providing a systematic review of the main research areas of physiological computing, different types of adversarial attacks and their applications to physiological computing, and the corresponding defense strategies. We hope this review will attract more research interest to the vulnerability of physiological computing systems, and more importantly, to defense strategies that make them more secure.

Index Terms—Physiological computing, brain-computer interfaces, health informatics, biometrics, machine learning, adversarial attack

I. INTRODUCTION

Keyboards, mice, and more recently touchscreens, are the most popular means for a user to send commands to a computer. However, they convey little information about the psychological state of the user, e.g., cognitions, motivations and emotions, which are also very important in the development of ‘smart’ technology [22]. For example, on emotions, Marvin Minsky, a pioneer in artificial intelligence, pointed out as early as the 1980s that [58] “the question is not whether intelligent machines can have any emotions, but whether machines can be intelligent without emotions.”

D. Wu is with the Ministry of Education Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074 China. Email: [email protected].

W. Fang is with the School of Design and Environment, National University of Singapore, 117566 Singapore. Email: [email protected].

Y. Zhang and X. Xu are with the College of Public Administration, Huazhong University of Science and Technology, Wuhan, 430074 China. Email: [email protected], [email protected].

L. Yang is with Chongqing University-University of Cincinnati Joint Co-op Institute, Chongqing University, Chongqing, 400030 China. Email: [email protected].

H. Luo is with the School of Civil and Hydraulic Engineering, Huazhong University of Science and Technology, Wuhan, 430074 China. Email: [email protected].

X. Yu (Member of the Academy of Europe) is with the School of Management and Sino-European Institute for Intellectual Property, Huazhong University of Science and Technology, Wuhan, 430074 China. Email: [email protected].

X. Xu, H. Luo and X. Yu are the corresponding authors.


Physiological computing [37] is “the use of human physiological data as system inputs in real time.” It opens up bandwidth within human-computer interaction by enabling an additional channel of communication from the user to the computer [22], which is necessary in adaptive and collaborative human-computer symbiosis.

Common physiological data in physiological computing include the electroencephalogram (EEG), electrocorticogram (ECoG), electrocardiogram (ECG), electrooculogram (EOG), electromyogram (EMG), eye movement, blood pressure (BP), electrodermal activity (EDA), respiration (RSP), skin temperature, etc., which are recordings or measures produced by human physiological processes. Their typical measurement locations are shown in Fig. 1. These signals have been widely studied in the literature in various applications, including and beyond physiological computing, as shown in Table I.

Fig. 1. Common signals in physiological computing, and their typical measurement locations.

Physiological signals are usually single-channel or multi-channel time series, as shown in Fig. 2. In many clinical applications, the recording may last hours, days, or even longer. For example, long-term video-EEG monitoring for seizure diagnostics may need 24 hours, and ECG monitoring in intensive care units (ICUs) may last days or weeks. Wearable ECG monitoring devices, e.g., the iRhythm Zio Patch, AliveCor KardiaMobile, Apple Watch, and Huawei Band, are being used by millions of users.


TABLE I
COMMON SIGNALS IN PHYSIOLOGICAL COMPUTING, AND THE CORRESPONDING NUMBER OF PUBLICATIONS IN GOOGLE SCHOLAR WITH THE KEYWORDS IN TITLE.

Signal                 | Keywords                                                     | No. Publications
Electroencephalogram   | EEG OR Electroencephalogram OR Electroencephalography        | 139,000
Electrocardiogram      | ECG OR EKG OR Electrocardiogram                              | 96,000
Electromyogram         | EMG OR Electromyogram                                        | 41,900
Electrocorticogram     | ECoG OR Electrocorticogram OR Electrocorticography           | 4,100
Electrooculogram       | EOG OR Electrooculogram                                      | 2,540
Respiration            | Respiration                                                  | 91,800
Blood Pressure         | Blood Pressure                                               | 19,300
Heart Rate Variability | HRV OR Heart Rate Variability                                | 18,900
Electrodermal Activity | EDA OR GSR OR EDR OR Electrodermal OR Galvanic Skin Response | 17,800
Eye Movement           | Eye Movement OR Eye Tracking                                 | 16,200
Oxygen Saturation      | SpO2 OR Oxygen Saturation OR Blood Oxygen                    | 13,700
Skin Temperature       | Skin Temperature                                             | 5,250
Photoplethysmogram     | PPG OR Photoplethysmogram                                    | 4,900
Pulse Rate             | Pulse Rate                                                   | 3,210

Fig. 2. Examples of common signals in physiological computing, and their typical measurement equipment.

Huge amounts of physiological signals are collected during these processes. Manually labelling them

is very labor-intensive, and even impossible for wearable

devices, given the huge number of users.

Machine learning [83] has been used to alleviate this problem, by automatically classifying the measured physiological signals. Particularly, deep learning has demonstrated outstanding performance [70], e.g., EEGNet [48], DeepCNN [74], ShallowCNN [74] and TIDNet [45] for EEG classification, SeizureNet for EEG-based seizure recognition [5], CNN for ECG rhythm classification [28], ECGNet for ECG-based mental stress monitoring [36], and so on.
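To make the discussion concrete, the following is a minimal sketch of a compact convolutional classifier for multi-channel physiological time series, written in Python with PyTorch. It is not any of the published architectures listed above; the channel count, trial length, and layer sizes are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class TinyPhysioNet(nn.Module):
    """Illustrative compact CNN for multi-channel physiological time series.
    Input shape: (batch, n_channels, n_samples); output: class logits."""
    def __init__(self, n_channels=32, n_samples=256, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=15, padding=7),  # temporal filtering
            nn.BatchNorm1d(16),
            nn.ELU(),
            nn.AvgPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),           # higher-level features
            nn.BatchNorm1d(32),
            nn.ELU(),
            nn.AvgPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_samples // 16), n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Example: classify a batch of 8 synthetic EEG-like trials (32 channels, 256 samples).
model = TinyPhysioNet()
logits = model(torch.randn(8, 32, 256))
print(logits.shape)  # torch.Size([8, 2])
```

In practice, architectures such as EEGNet additionally use depthwise convolutions for spatial filtering across channels and careful regularization; the sketch only conveys the overall input/output structure that the attacks discussed below operate on.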

However, recent research has shown that both traditional machine learning and deep learning models are vulnerable to various types of attacks [27], [57], [68], [78]. For example, Chen et al. [14] created a backdoor in the target model by injecting poisoning samples, which contained an ordinary pair of sunglasses, into the training set, so that all test images with the sunglasses would be classified into a target class. Eykholt et al. [21] stuck carefully crafted graffiti on road signs, and caused the model to classify ‘Stop’ as ‘Speed limit 40’. Finlayson et al. [23], [24] successfully performed adversarial attacks on deep learning classifiers across three clinical domains (fundoscopy, chest X-ray, and dermoscopy). Rahman et al. [69] performed adversarial attacks on six COVID-19 related applications, including recognizing whether a subject is wearing a mask, maintaining deep learning based QR codes as immunization certificates, recognizing COVID-19 from CT scan or X-ray images, etc. Ma et al. [51] showed that medical deep learning models can be more vulnerable to adversarial attacks than models for natural images, but, surprisingly and fortunately, medical adversarial attacks may also be easily detected. Kaissis et al. [40] pointed out that various other attacks, in addition to adversarial attacks, also exist in medical imaging, and called for secure, privacy-preserving and federated machine learning to cope with them.

Machine learning models in physiological computing are

not exempt from adversarial attacks [42], [43], [90]. However,

to the best of our knowledge, there does not exist a systematic

review on adversarial attacks in physiological computing. This

paper fills this gap, by comprehensively reviewing different

types of adversarial attacks, their applications in physiological

computing, and possible defense strategies. It will be very

important to the security of physiological computing systems

in real-world applications.

We need to emphasize that this paper focuses on the emerging adversarial attacks and their defenses. For other types of attacks and defenses, e.g., cyberattacks, readers can refer to, e.g., [6].

The remainder of this paper is organized as follows: Section II introduces five relevant research areas in physiological computing. Section III introduces different categorizations of adversarial attacks. Section IV describes various adversarial attacks on physiological computing systems. Section V introduces different approaches to defend against adversarial attacks, and their applications in physiological computing. Finally, Section VI draws conclusions and points out some future research directions.

II. PHYSIOLOGICAL COMPUTING

Physiological computing includes, or significantly overlaps

with, brain-computer interfaces (BCIs), affective computing,

adaptive automation, health informatics, and physiological

signal based biometrics.

A. BCIs

A BCI system establishes a direct communication pathway

between the brain and an external device, e.g., a computer or

a robot [47]. Scalp and intracranial EEGs have been widely

used in BCIs [83].


The flowchart of a closed-loop EEG-based BCI system is

shown in Fig. 3. After EEG signal acquisition, signal processing, usually including both temporal filtering and spatial

filtering, is used to enhance the signal-to-noise ratio. Machine

learning is next performed to understand what the EEG signal

means, based on which a control command may be sent to an

external device.

Fig. 3. Flowchart of a closed-loop EEG-based BCI system.

EEG-based BCI spellers may be the only non-muscular

communication devices for Amyotrophic Lateral Sclerosis

(ALS) patients to express their opinions [75]. In seizure treatment, responsive neurostimulation (RNS) [26], [31] recognizes

ECoG or intracranial EEG patterns prior to ictal onset, and

delivers a high-frequency stimulation impulse to stop the

seizure, improving the patients’ quality-of-life.

B. Affective Computing

Affective computing is “computing that relates to, arises

from, or deliberately influences emotion or other affective

phenomena” [65].

In bio-feedback based relaxation training [15], EDA can be

used to detect the user’s affective state, based on which a relaxation training application can provide the user with explicit

feedback to learn how to change his/her physiological activity

to improve health and performance. In software adaptation [3],

the graphical interface, difficulty level, sound effects and/or

content are automatically adapted based on the user’s real-time emotion estimated from various physiological signals, to

keep the user more engaged.

C. Adaptive Automation

Adaptive automation keeps the task workload demand

within appropriate levels to avoid both underload and overload,

thus enhancing the overall performance and safety of the

human-machine system [4].

In air traffic management [4], an operator’s EEG signal can

be used to estimate the mental workload, and trigger specific

adaptive automation solutions. This can significantly reduce

the operator’s workload under highly demanding conditions, and improve task execution performance. A study [18] also

showed that pupil diameter and fixation time, measured from

an eye-tracking device, can be indicators of mental workload,

and hence be used to trigger adaptive automation.

D. Health Informatics

Health informatics studies information and communication

processes and systems in healthcare [16].

A single-lead short ECG recording (9-60 s), collected

from the AliveCor personal ECG monitor, can be used by a

convolutional neural network (CNN) to classify normal sinus

rhythm, atrial fibrillation, an alternative rhythm, or noise, with

an average test accuracy of 88% on the first three classes

[32]. A very recent study [59] also showed that heart rate

data from consumer smart watches, e.g., Apple, Fitbit and Garmin devices, can be used for pre-symptomatic detection of COVID-19, sometimes nine or more days before symptom onset.

E. Physiological Signal Based Biometrics

Physiological signal based biometrics [76] use physiological

signals for biometric applications, e.g., digitally identifying a

person to grant access to systems, devices or data.

EEG [30], ECG [1], PPG [87] and multimodal physiological

signals [8] have been used in user identification and authentication, with the advantages of universality, permanence, liveness

detection, continuous authentication, etc.

III. ADVERSARIAL ATTACKS

There are different categorizations of adversarial attacks

[57], [68], as shown in Fig. 4.

Fig. 4. Types of adversarial attacks.

A. Targeted and Non-targeted Attacks

According to the outcome, there are two types of adversarial

attacks [57]: targeted attacks and non-targeted (indiscriminate) attacks.

Targeted attacks force a model to classify certain examples,

or a certain region of the feature space, into a specific

(usually wrong) class. Non-targeted attacks force a model to

misclassify certain examples or feature space regions, but do

not specify which class they should be misclassified into.


For example, in a 3-class classification problem, assume the

class labels are A, B and C. Then, a targeted attack may force

the input to be classified into Class A, no matter what its true

class is. A non-targeted attack forces an input from Class A to

be classified into Class B or C, but does not specify whether it must be B or C; as long as it is not A, the non-targeted attack

is successful.
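The two outcomes can be written as optimization objectives. The notation below is ours and is not taken from a specific cited paper: let f be the classifier, ℓ the classification loss, x a benign example with true label y, y_t the attacker-chosen target class, δ the perturbation, and ε the perturbation budget.

```latex
% Non-targeted attack: push the prediction away from the true class y
\max_{\|\delta\|_\infty \le \epsilon} \; \ell\big(f(x+\delta),\, y\big)

% Targeted attack: pull the prediction toward the attacker-chosen class y_t
\min_{\|\delta\|_\infty \le \epsilon} \; \ell\big(f(x+\delta),\, y_t\big)
```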

B. White-Box, Black-Box and Gray-Box Attacks

According to how much the attacker knows about the target

model, there can be three types of attacks [89]:

1) White-box attacks, in which the attacker knows everything about the target model, including its architecture and parameters. This is the easiest attack scenario and could cause the maximum damage. It may correspond to the case that the attacker is an insider, or that the model designer is evaluating the worst-case scenario when the model is under attack. Popular attack approaches include L-BFGS [78], DeepFool [60], the C&W method [13], the fast gradient sign method (FGSM) [27], the basic iterative method (BIM) [46], etc. (a minimal FGSM sketch is given after this list).

2) Black-box attacks, in which the attacker knows neither

the architecture nor the parameters of the target model,

but can supply inputs to the model and observe its

outputs. This is the most realistic and also the most

challenging attack scenario. One example is that the

attacker purchases a commercial BCI system and tries

to attack it. Black-box attacks are possible, due to the

transferability of adversarial examples [78], i.e., an adversarial example generated from one machine learning

model may be used to fool another machine learning

model at a high success rate, if the two models solve

the same task. So, in black-box attacks [63], the attacker

can query the target model many times to construct a

training set, train a substitute machine learning model

from it, and then generate adversarial examples from

the substitute model to attack the original target model.

3) Gray-box attacks, which assume the attacker knows a

limited amount of information about the target model,

e.g., (part of) the training data that the target model is

tuned on. They are frequently used in data poisoning

attacks, as introduced in the next subsection.
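As an illustration of the white-box setting in item 1 above, the following is a minimal FGSM sketch in Python with PyTorch. The stand-in model, the data shapes, and the perturbation budget eps are illustrative assumptions, not taken from any of the cited studies.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    """One-step FGSM: perturb x in the direction that increases the loss on the true label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Non-targeted attack: move along the sign of the gradient, within an L-infinity ball of radius eps.
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Toy white-box demo on random "EEG" data (32 channels x 256 samples), purely illustrative.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 256, 2))
x = torch.randn(4, 32, 256)           # 4 benign trials
y = torch.randint(0, 2, (4,))         # their true labels
x_adv = fgsm(model, x, y, eps=0.05)   # eps is an arbitrary illustrative budget
print((x_adv - x).abs().max())        # perturbation magnitude is bounded by eps
```

A targeted variant would instead subtract eps * grad.sign(), with the gradient computed with respect to the attacker-chosen target label; BIM [46] repeats such a step several times with a smaller step size, clipping back into the ε-ball after each step.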

Table II compares the main characteristics of the three attack

types.

TABLE II
COMPARISON OF WHITE-BOX, GRAY-BOX AND BLACK-BOX ATTACKS [89]. ‘✓’ MEANS THE ATTACKER HAS THE INFORMATION, ‘×’ MEANS IT DOES NOT, AND ‘−’ MEANS THAT WHETHER THE INFORMATION IS AVAILABLE OR NOT DOES NOT AFFECT THE ATTACK STRATEGY, SINCE IT IS NOT USED IN THE ATTACK.

Target model information | White-Box | Gray-Box | Black-Box
Know its architecture    | ✓         | ×        | ×
Know its parameters      | ✓         | ×        | ×
Know its training data   | −         | ✓        | ×
Can observe its response | −         | −        | ✓

C. Poisoning and Evasion Attacks

According to the stage at which the adversarial attack is performed, there are two types of attacks: poisoning attacks and

evasion attacks.

Poisoning attacks [84] happen at the training stage, to

create backdoors in the machine learning model by adding

contaminated examples to the training set. They are usually

white-box or gray-box attacks, achieved by data injection, i.e.,

adding adversarial examples to the training set [54], or data

modification, i.e., poisoning the training data by modifying

their features or labels [9].

Evasion attacks [27] happen at the test stage, by adding

deliberately designed tiny perturbations to benign test samples

to mislead the machine learning model. They are usually

white-box or black-box attacks. An example of an evasion attack

in EEG classification is shown in Fig. 5.

Fig. 5. Evasion attack in BCIs [89].

IV. ADVERSARIAL ATTACKS IN PHYSIOLOGICAL

COMPUTING

Most adversarial attack studies considered computer vision

applications, where the inputs are 2D images. Physiological

signals are continuous time series, which are quite different

from images. There are relatively few adversarial attack studies

on time series [41], and even fewer on physiological signals.

A summary of them is shown in Table III.

A. Adversarial Attacks in BCIs

Attacking the machine learning models in BCIs could cause serious damage, ranging from user frustration to serious injuries. For example, in seizure treatment, attacks on the RNS [26] seizure recognition algorithm may quickly drain its battery or make it completely ineffective, hence significantly reducing the patient's quality-of-life. Adversarial attacks on an EEG-based BCI speller may hijack the user's true input and output wrong letters, leading to user frustration or misunderstanding. In BCI-based driver drowsiness estimation [81], adversarial attacks may make a drowsy driver look alert, increasing the risk of accidents.

Although pioneers in BCIs have thought of neurosecurity [38], i.e., “devices getting hacked and, by extension, behavior unwillfully and unknowingly manipulated for nefarious purposes,” most BCI research so far has focused on making BCIs faster and more accurate, paying little attention to their security.

In 2019, Zhang and Wu [89] first pointed out that adversarial

examples exist in EEG-based BCIs, i.e., deep learning models

in BCIs are also vulnerable to adversarial attacks. They successfully performed white-box, gray-box and black-box non-targeted evasion attacks on three CNN classifiers, i.e., EEGNet


TABLE III
SUMMARY OF EXISTING ADVERSARIAL ATTACK APPROACHES IN PHYSIOLOGICAL COMPUTING. THE COLUMNS ARE GROUPED BY OUTCOME (TARGETED, NON-TARGETED), KNOWLEDGE (WHITE-BOX, GRAY-BOX, BLACK-BOX) AND STAGE (POISONING, EVASION).

Application        | Problem        | Reference | Targeted | Non-targeted | White-box | Gray-box | Black-box | Poisoning | Evasion
BCI                | Classification | [89]      |          | ✓            | ✓         | ✓        | ✓         |           | ✓
BCI                | Classification | [39]      |          | ✓            |           |          | ✓         |           | ✓
BCI                | Classification | [90]      | ✓        |              | ✓         |          |           |           | ✓
BCI                | Classification | [50]      | ✓        | ✓            | ✓         | ✓        |           |           | ✓
BCI                | Classification | [55]      | ✓        |              |           |          | ✓         | ✓         |
BCI                | Regression     | [56]      | ✓        |              | ✓         |          | ✓         |           | ✓
Health Informatics | Classification | [32]      | ✓        | ✓            | ✓         |          |           |           | ✓
Health Informatics | Classification | [2]       | ✓        |              | ✓         |          |           |           | ✓
Health Informatics | Classification | [61]      | ✓        | ✓            | ✓         |          | ✓         | ✓         | ✓
Health Informatics | Classification | [80]      | ✓        |              | ✓         |          |           | ✓         |
Biometrics         | Classification | [52]      | ✓        |              |           |          | ✓         |           | ✓
Biometrics         | Classification | [20]      | ✓        |              |           | ✓        |           |           | ✓
Biometrics         | Classification | [42]      | ✓        |              |           | ✓        |           |           | ✓
Biometrics         | Classification | [43]      | ✓        |              |           | ✓        |           |           | ✓

[48], DeepCNN and ShallowCNN [74], in three different BCI

paradigms, i.e., P300 evoked potential detection, feedback

error-related negativity detection, and motor imagery classification. The basic idea, shown in Fig. 6, is to add a jamming

module between EEG signal processing and machine learning

to generate adversarial examples, optimized by unsupervised

FGSM. The generated adversarial perturbations are too small

to be noticed by human eyes (an example is shown in Fig. 5),

but can significantly reduce the classification accuracy.

Fig. 6. The BCI evasion attack approach proposed in [89]. A jamming module is inserted between signal preprocessing and machine learning to generate adversarial examples.

It is important to note that the jamming module is implementable, as research [77] has shown that BtleJuice, a

framework to perform Man-in-the-Middle attacks on Bluetooth

devices, can be used to intercept the data from a consumer

grade EEG-based BCI system, modify them, and then send

them back to the headset.

Jiang et al. [39] focused on black-box non-targeted evasion

attacks to deep learning models in BCI classification problems,

in which the attacker trains a substitute model to approximate

the target model, and then generates adversarial examples from

the substitute model to attack the target model. Learning a

good substitute model is critical to the success of black-box

attacks, but it requires a large number of queries to the target

model. Jiang et al. [39] proposed a novel query synthesis based

active learning framework to improve the query efficiency, by

actively synthesizing EEG trials scattered around the decision

boundary of the target model, as shown in Fig. 7. Compared

with the original black-box attack approach in [89], the active

learning based approach can improve the attack success rate

with the same number of queries, or, equivalently, reduce the

number of queries to achieve a desired attack performance.

This is the first work that integrates active learning and

adversarial attacks for EEG-based BCIs.

Fig. 7. Query synthesis based active learning in black-box evasion attack [39].
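The substitute-model strategy underlying such black-box attacks can be sketched as follows: the target is queried on attacker-chosen inputs, a local substitute is fitted to the returned labels, and adversarial examples crafted against the substitute are transferred to the target. The models, feature dimensions, query set and step size below are illustrative stand-ins, and the query-synthesis active learning of [39] is deliberately omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical "target" model: the attacker can only query its predicted labels.
target = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
X_train = rng.normal(size=(200, 64))
y_train = (X_train[:, 0] > 0).astype(int)
target.fit(X_train, y_train)

# Step 1: query the black-box target on attacker-chosen inputs to build a labeled set.
X_query = rng.normal(size=(300, 64))
y_query = target.predict(X_query)

# Step 2: train a local substitute model on the queried labels.
substitute = LogisticRegression(max_iter=1000).fit(X_query, y_query)

# Step 3: craft an adversarial example against the substitute (a crude step along the
# substitute's linear weights) and transfer it to the target via transferability.
x = rng.normal(size=(1, 64))
direction = substitute.coef_[0] * (1 if substitute.predict(x)[0] == 0 else -1)
x_adv = x + 0.5 * np.sign(direction)   # 0.5 is an arbitrary illustrative step size
print(target.predict(x), "->", target.predict(x_adv))
```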

The above two studies considered classification problems,

as in most adversarial attack research. Adversarial attacks

to regression problems were much less investigated in the

literature. Meng et al. [56] were the first to study white-box

targeted evasion attacks for BCI regression problems. They

proposed two approaches, based on optimization and gradient, respectively, to design small perturbations to change the

regression output by a pre-determined amount. Experiments

on two BCI regression problems (EEG-based driver fatigue

estimation, and EEG-based user reaction time estimation in the

psychomotor vigilance task) verified their effectiveness: both

approaches can craft adversarial EEG trials indistinguishable

from the original ones, but can significantly change the outputs

of the BCI regression model. Moreover, adversarial examples

generated from both approaches are also transferable, i.e.,

adversarial examples generated from one known regression

model can also be used to attack an unknown regression model

in black-box attacks.

The above three attack strategies are theoretically important,

but there are some constraints in applying them to real-world

BCIs:

1) Trial-specificity, i.e., the attacker needs to generate different adversarial perturbations for different EEG trials.

2) Channel-specificity, i.e., the attacker needs to generate

different adversarial perturbations for different EEG

channels.


3) Non-causality, i.e., the complete EEG trial needs to

be known in advance to compute the corresponding

adversarial perturbation.

4) Synchronization, i.e., the exact starting time of the EEG

trial needs to be known for the best attack performance.

Some recent studies tried to overcome these constraints.

Zhang et al. [90] performed white-box targeted evasion

attacks to P300 and steady-state visual evoked potential

(SSVEP) based BCI spellers (Fig. 8), and showed that a tiny

perturbation to the EEG trial can mislead the speller to output

any character the attacker wants, e.g., change the output from

‘Y’ to ‘N’, or vice versa. The most distinguishing characteristic of their approach is that it explicitly considers the

causality in designing the perturbations, i.e., the perturbation

should be generated before or as soon as the target EEG trial

starts, so that it can be added to the EEG trial in real-time in

practice. To achieve this, an adversarial perturbation template

is constructed from the training set only and then fixed. So,

there is no need to know the test EEG trial and compute

the perturbation specifically for it. Their approach resolves

the trial-specificity and non-causality constraints, but different

EEG channels still need different perturbations, and it also

requires the attacker to know the starting time of an EEG trial

in advance to achieve the best attack performance, i.e., there

are still channel-specificity and synchronization constraints.

Fig. 8. Workflow of a P300 speller and an SSVEP speller [90]. For each speller, the user watches the stimulation interface, focusing on the character he/she wants to input, while EEG signals are recorded and analyzed by the speller. The P300 speller first identifies the row and the column that elicit the largest P300, and then outputs the letter at their intersection. The SSVEP speller identifies the output letter directly by matching the user's EEG oscillation frequency with the flickering frequency of each candidate letter.

Zhang et al. [90] considered targeted attacks to a traditional

and most frequently used BCI speller pipeline, which has

separate feature extraction and classification steps. Liu et

al. [50] considered both targeted and non-targeted white-box

evasion attacks to end-to-end deep learning models in EEG-

based BCIs, and proposed a total loss minimization (TLM) approach to generate universal adversarial perturbations (UAPs)

for them. Experimental results demonstrated its effectiveness

on three popular CNN classifiers (EEGNet, ShallowCNN, and

DeepCNN) in three BCI paradigms (P300, feedback error

related negativity, and motor imagery). They also verified

the transferability of UAPs in non-targeted gray-box evasion

attacks.

To further simplify the implementation of TLM-UAP, Liu

et al. [50] also considered smaller template size, i.e., mini

TLM-UAP with a small number of channels and time domain

samples, which can be added anywhere to an EEG trial. Mini

TLM-UAPs are more practical and flexible, because they do

not require the attacker to know the exact number of EEG

channels and the exact length and starting time of an EEG

trial. Liu et al. [50] showed that, generally, all mini TLM-

UAPs were effective. However, their effectiveness decreased

when the number of used channels and/or the template length

decrease, which is intuitive. This is the first study on UAPs

of CNN classifiers in EEG-based BCIs, and also the first on

optimization based UAPs for targeted evasion attacks.

In summary, the TLM-UAP approach [50] resolves the trial-

specificity and non-causality constraints, and mini TLM-UAPs

further alleviate the channel-specificity and synchronization

constraints.
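To make the idea of a universal perturbation concrete, the following is a rough sketch of UAP optimization in Python/PyTorch. This is our simplified non-targeted formulation, not the exact TLM objective of [50]; the stand-in model, data, iteration count and budget eps are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stand-in classifier and data; shapes and budget are illustrative assumptions.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 256, 2))
X = torch.randn(64, 32, 256)                 # a batch of benign EEG-like trials
y = torch.randint(0, 2, (64,))               # their true labels
eps = 0.05                                   # L-infinity budget for the universal perturbation

uap = torch.zeros(1, 32, 256, requires_grad=True)
opt = torch.optim.Adam([uap], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    # Non-targeted objective: maximize the loss on the true labels over ALL trials at once,
    # so a single shared perturbation degrades the whole set (the essence of a UAP).
    loss = -nn.functional.cross_entropy(model(X + uap), y)
    loss.backward()
    opt.step()
    with torch.no_grad():
        uap.clamp_(-eps, eps)                # keep the universal perturbation tiny

acc = (model(X + uap).argmax(1) == y).float().mean()
print(f"accuracy under UAP: {acc:.2f}")
```

A targeted UAP would instead minimize the cross-entropy toward an attacker-chosen class, and the mini TLM-UAP of [50] additionally shrinks the perturbation template so that it can be added anywhere in a trial.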

All above studies focused on evasion attacks. Meng et

al. [55] were the first to show that poisoning attacks can

also be performed for EEG-based BCIs, as shown in Fig. 9.

They proposed a practically realizable backdoor key, narrow

period pulse, for EEG signals, which can be inserted into the

benign EEG signal during data acquisition, and demonstrated

its effectiveness in black-box targeted poisoning attacks, i.e.,

the attacker does not know any information about the test

EEG trial, including its starting time, and wants to classify the

test trial into a specific class, regardless of its true class. In

other words, it resolves the trial-specificity, channel-specificity,

causality and synchronization constraints simultaneously. To

our knowledge, this is to date the most practical BCI attack

approach.

Fig. 9. Poisoning attack in EEG-based BCIs [55]. Narrow period pulses can be added to EEG trials during signal acquisition.
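To illustrate the general shape of such a poisoning attack, below is a simplified numpy sketch. The pulse parameters, data sizes and poisoning fraction are arbitrary assumptions rather than the actual settings of [55]: a small fraction of training trials receives an additive narrow periodic pulse and is relabeled to the attacker's target class.

```python
import numpy as np

rng = np.random.default_rng(0)

def narrow_period_pulse(n_samples, period=64, width=2, amplitude=0.5):
    """Illustrative backdoor key: a sparse periodic pulse train added to every channel."""
    key = np.zeros(n_samples)
    for start in range(0, n_samples, period):
        key[start:start + width] = amplitude
    return key

# Benign training set: 100 trials, 16 channels, 256 samples (all sizes are assumptions).
X = rng.normal(size=(100, 16, 256))
y = rng.integers(0, 2, size=100)

# Poison 10% of the trials: add the key during "acquisition" and relabel to the target class.
target_class = 1
poison_idx = rng.choice(len(X), size=10, replace=False)
X[poison_idx] += narrow_period_pulse(256)          # broadcasts over channels
y[poison_idx] = target_class

# At test time, the attacker adds the same key to any trial; a backdoored model trained on
# (X, y) would then tend to predict `target_class`, regardless of the trial's true class.
x_test = rng.normal(size=(16, 256))
x_test_triggered = x_test + narrow_period_pulse(256)
```

A classifier trained on the poisoned set behaves normally on clean trials but tends to output the target class whenever the pulse is present, which matches the black-box targeted poisoning setting described above.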

A summary of existing adversarial attack approaches in

EEG-based BCIs is shown in Table IV.

TABLE IV
CHARACTERISTICS OF EXISTING ADVERSARIAL ATTACK APPROACHES IN EEG-BASED BCIS. ‘✓’ MEANS THE CONSTRAINT IS RESOLVED, ‘×’ MEANS IT IS NOT, AND ‘∼’ MEANS THE CONSTRAINT IS PARTIALLY RESOLVED.

Reference | Trial-Specificity | Channel-Specificity | Non-Causality | Synchronization
[89]      | ×                 | ×                   | ×             | ×
[39]      | ×                 | ×                   | ×             | ×
[56]      | ×                 | ×                   | ×             | ×
[90]      | ✓                 | ×                   | ✓             | ×
[50]      | ✓                 | ∼                   | ✓             | ∼
[55]      | ✓                 | ✓                   | ✓             | ✓


B. Adversarial Attacks in Health Informatics

Adversarial attacks in health informatics can also cause

serious damages, even deaths. For example, adversarial attacks

to the machine learning algorithms in implantable cardioverter

defibrillators could lead to unnecessary painful shocks, dam-

aging the cardiac tissue, and even worse therapy interruptions

and sudden cardiac death [62].

Han et al. [32] proposed both targeted and non-targeted

white-box evasion attack approaches to construct smoothed adversarial ECG examples whose perturbations were imperceptible to one board-certified medicine specialist and one cardiac electrophysiology specialist, but which could successfully fool a CNN classifier for arrhythmia detection. They achieved a 74% attack

success rate (74% of the test ECGs originally classified

correctly were assigned a different diagnosis, after adversarial

attacks) on atrial fibrillation classification from single-lead

ECG collected from the AliveCor personal ECG monitor. This

study suggests that it is important to check if ECGs have been

altered before using them in medical machine learning models.

Aminifar [2] studied white-box targeted evasion attacks in EEG-based epileptic seizure detection through UAPs. He computed the UAPs by solving an optimization problem, and

showed that they can fool a support vector machine classifier to

misclassify most seizure samples into non-seizure ones, with

imperceptible amplitude.

Newaz et al. [61] investigated adversarial attacks to machine

learning-based smart healthcare systems that monitor 10 vital

signs, e.g., EEG, ECG, SpO2, respiration, blood pressure,

blood glucose, blood hemoglobin, etc. They performed both

targeted and non-targeted attacks, and both poisoning and

evasion attacks. For evasion attacks, they also considered

both white-box and black-box attacks. They showed that

adversarial attacks can significantly degrade the performance of four different classifiers in the smart healthcare system in detecting diseases and normal activities, which may lead to erroneous treatment.

Deep learning has been extensively used in health informatics; however, it generally needs a large amount of training

data for satisfactory performance. Transfer learning [83] can

be used to alleviate this requirement, by making use of data

or machine learning models from an auxiliary domain or task.

Wang et al. [80] studied targeted backdoor attacks against

transfer learning with pre-trained deep learning models on both images and time series (e.g., ECG). Three optimization

strategies, i.e., ranking-based neuron selection, autoencoder-

powered trigger generation and defense-aware retraining, were

used to generate backdoors and retrain deep neural networks,

to defeat pruning based, fine-tuning/retraining based and input

pre-processing based defenses. They demonstrated the attack's effectiveness in brain MRI image classification and ECG heartbeat

type classification.

C. Adversarial Attacks in Biometrics

Physiological signals, e.g., EEG, ECG and PPG, have recently been used in biometrics [76]. However, they are subject

to presentation attacks in such applications. In a physiological

signal based presentation attack, the attacker tries to spoof the

biometric sensors with a fake piece of physiological signal

[20], which would be authenticated as from a specific victim

user.

Maiorana et al. [52] investigated the vulnerability of an

EEG-based biometric system to hill-climbing attacks. They

assumed that the attacker can access the matching scores

of the biometric system, which can then be used to guide

the generation of synthetic EEG templates until a successful

authentication is achieved. This is essentially a black-box

targeted evasion attack in the adversarial attack terminology:

the synthetic EEG signal is the adversarial example, and the

victim’s identity is the target class. It is a black-box attack,

because the attacker can only observe the output of the

biometric system, but does not know anything else about it.

Eberz et al. [20] proposed an offline ECG biometrics

presentation attack approach, illustrated in Fig. 10(a). The

basic idea was to find a mapping function to transform ECG

trials recorded from the attacker so that they resemble the morphology of ECG trials from a specific victim. The transformed

ECG trials can then be used to fool an ECG biometric system

to obtain unauthorized access. They showed that the attacker

ECG trials can be obtained from a device different from the one on which the victim ECG trials are recorded (i.e., cross-device

attack), and there could be different approaches to present the

transformed ECG trials to the biometric device under attack,

the simplest being the playback of ECG trials encoded as .wav

files using an off-the-shelf audio player.

Unlike [52], the above approach is a gray-box targeted evasion attack in the adversarial attack terminology: the attacker's ECG signal can be viewed as the benign example, the transformed ECG signal is the adversarial example, and the victim's identity is the target class. The mapping function plays the role of the jamming module in Fig. 6. It is a gray-box attack, because the attacker needs to know the feature distributions of the victim ECGs in designing the mapping function.

Karimian et al. [42] proposed an online ECG biometrics

presentation attack approach, shown in Fig. 10(b). Its procedure is very similar to that of the offline attack in Fig. 10(a), except that the online approach is simpler, because it requires as few as one victim ECG segment to compute the mapping function, and the mapping function is linear. Karimian [43] also proposed a similar presentation attack approach

to attack PPG-based biometrics. Again, these approaches can

be viewed as gray-box targeted evasion attacks.
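The mapping-function idea shared by these presentation attacks can be sketched as follows. This is a heavily simplified least-squares illustration with synthetic, already-aligned beats; the feature-level mappings, alignment steps and thresholds of the cited attacks are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: heartbeats of the attacker and of the victim, segmented and aligned
# to a fixed length (200 samples per beat is an arbitrary assumption).
attacker_beats = rng.normal(size=(50, 200))
victim_beats = 0.8 * attacker_beats + 0.3 * rng.normal(size=(50, 200)) + 0.1

# Fit a linear mapping (global scale and offset) that makes the attacker's average beat
# resemble the victim's average morphology, in the least-squares sense.
A = np.stack([attacker_beats.mean(axis=0), np.ones(200)], axis=1)   # (200, 2)
b = victim_beats.mean(axis=0)                                        # (200,)
(scale, offset), *_ = np.linalg.lstsq(A, b, rcond=None)

# Transform a fresh attacker beat and present it to the (hypothetical) biometric sensor.
spoofed_beat = scale * attacker_beats[0] + offset
print(scale, offset)
```

Defenses such as those reviewed in Section V-C exploit exactly the fact that such a transformed signal need not be consistent with the victim's other physiological signals.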

D. Discussion

Although we have not found adversarial attack studies on

affective computing and adaptive automation in physiological

computing, it does not mean that adversarial attacks cannot

be performed in such applications. Machine learning models in affective computing and adaptive automation are not

fundamentally different from those in BCIs; so, adversarial

attacks in BCIs can easily be adapted to affective computing

and adaptive automation.

Particularly, Meng et al. [56] have shown that it is possible

to attack the regression models in EEG-based driver fatigue estimation and EEG-based user reaction time estimation, and driver fatigue and user reaction time could be triggers in adaptive automation.


Fig. 10. (a) Offline ECG biometrics presentation attack [20]; (b) Online ECG biometrics presentation attack [44].


V. DEFENSE AGAINST ADVERSARIAL ATTACKS

There are different adversarial defense strategies [7], [68]:

1) Data modification, which modifies the training set in

the training stage or the input data in the test stage,

through adversarial training [78], gradient hiding [79],

transferability blocking [34], data compression [17], data

randomization [85], etc.

2) Model modification, which modifies the target model

directly to increase its robustness. This can be achieved

through regularization [9], defensive distillation [64],

feature squeezing [86], using a deep contractive network

[29] or a mask layer [25], etc.

3) Auxiliary tools, which may be additional auxiliary machine learning models to robustify the primary model, e.g., adversarial detection models [67], defense generative adversarial nets (defense-GAN) [73], high-level

representation guided denoiser [49], etc.

As researchers have only recently started to investigate adversarial attacks in physiological computing, there are even fewer studies on defense strategies against them. A summary is shown in Table V.

A. Adversarial Training

Adversarial training, which trains a robust machine learning

model on normal plus adversarial examples, may be the most

popular data modification based adversarial defense approach.

Hussein et al. [35] proposed an approach to augment deep

learning models with adversarial training for robust prediction

of epilepsy seizures. Though their goal was to overcome

some challenges in EEG-based seizure classification, e.g.,

TABLE V
SUMMARY OF EXISTING ADVERSARIAL DEFENSE STUDIES IN PHYSIOLOGICAL COMPUTING.

Reference | Application        | Data Modification | Model Modification | Adversarial Detection
[35]      | BCI                | ✓                 |                    |
[72]      | BCI                |                   | ✓                  |
[10]      | Health Informatics |                   |                    | ✓
[11]      | Health Informatics |                   |                    | ✓
[42]      | Biometrics         |                   |                    | ✓

individual differences and shortage of pre-ictal labeled data,

their approach can also be used to defend against adversarial

attacks.

They first constructed a deep learning classifier from the available limited amount of labeled EEG data, and then performed white-box attacks on the classifier to obtain adversarial examples, which were next combined with the original labeled data to retrain the deep learning classifier. Experiments on two public seizure datasets demonstrated that adversarial training increased both the classification accuracy and the classifier robustness.
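A minimal sketch of such an adversarial training loop is given below, using a generic one-step FGSM to craft the adversarial examples; the stand-in model, data, and budget eps are illustrative assumptions rather than the exact recipe of [35].

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 256, 2))   # stand-in seizure classifier
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X = torch.randn(64, 32, 256)             # labeled EEG-like trials (synthetic stand-ins)
y = torch.randint(0, 2, (64,))
eps = 0.05                               # illustrative perturbation budget

for epoch in range(10):
    # 1) Craft adversarial examples against the current model (one FGSM step).
    X_req = X.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(X_req), y).backward()
    X_adv = (X_req + eps * X_req.grad.sign()).detach()

    # 2) Retrain on benign plus adversarial examples, so the decision boundary
    #    becomes robust to the perturbations it was just attacked with.
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(torch.cat([X, X_adv])),
                                       torch.cat([y, y]))
    loss.backward()
    opt.step()
```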

B. Model Modification

Regularization based model modification to defend against

adversarial attacks usually also considers the model security

(robustness) in the optimization objective function.

Sadeghi et al. [72] proposed an analytical framework for

tuning the classifier parameters, to simultaneously ensure its

accuracy and security. The optimal classifier parameters were

determined by solving an optimization problem, which takes

into account both the test accuracy and the robustness against

adversarial attacks. For k-nearest neighbor (kNN) classifiers,

the two parameters to be optimized are the number of neighbors and the distance metric type. Experiments on EEG-


based eye state (open or close) recognition verified that it is

possible to achieve both high classification accuracy and high

robustness against black-box targeted evasion attacks.

C. Adversarial Detection

Adversarial detection uses a separate module to detect whether there is an adversarial attack, and takes actions accordingly. The simplest action is to discard detected adversarial examples directly.

Cai and Venkatasubramanian [11] proposed an approach to

detect signal injection-based morphological alterations (evasion attacks) of ECGs. Because multiple physiological signals

based on the same underlying physiological process (e.g.,

cardiac process) are inherently related to each other, any adversarial alteration of one of the signals will lead to inconsistency

in the other signal(s) in the group. Since both ECG and

arterial blood pressure measurements are representations of the

cardiac process, the latter can be used to detect morphological

alterations in ECGs. They demonstrated over 90% accuracy in

detecting even subtle ECG morphological alterations for both

healthy subjects and patients. A similar idea [10] was also

used to detect temporal alterations of ECGs, by making use

of their correlations with arterial blood pressure and respiration

measurements.

Karimian et al. [42] proposed two strategies to protect ECG

biometric authentication systems from spoofing, by evaluating whether ECG signal characteristics match the corresponding heart rate variability or PPG features (pulse transit time and pulse arrival time). The idea is actually similar to Cai and Venkatasubramanian's [11]. If there is a mismatch, then the system

considers the input to be fake, and rejects it.
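The consistency idea shared by these detectors can be sketched as follows: heart rate is estimated independently from an ECG segment and from a second cardiac signal such as ABP or PPG, and a large disagreement flags a possible attack. The crude peak detector, sampling rate and tolerance below are illustrative assumptions, not the algorithms of [10], [11] or [42].

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed)

def heart_rate_bpm(signal, fs=FS, min_rr=0.4):
    """Crude peak-based heart rate estimate: count prominent peaks at least min_rr s apart."""
    threshold = signal.mean() + 2 * signal.std()
    peaks, last = [], -np.inf
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] >= signal[i + 1]:
            if (i - last) / fs >= min_rr:
                peaks.append(i)
                last = i
    duration_min = len(signal) / fs / 60
    return len(peaks) / duration_min if duration_min > 0 else 0.0

def is_suspicious(ecg, abp, tolerance_bpm=10):
    """Flag the ECG as possibly altered if its heart rate disagrees with the ABP-derived rate."""
    return abs(heart_rate_bpm(ecg) - heart_rate_bpm(abp)) > tolerance_bpm

# Toy example: 10 s of synthetic signals sharing the same ~72 bpm cardiac rhythm.
t = np.arange(0, 10, 1 / FS)
pulse = (np.sin(2 * np.pi * 1.2 * t) > 0.95).astype(float)   # spike train at 1.2 Hz
ecg = pulse + 0.05 * np.random.randn(len(t))
abp = pulse + 0.05 * np.random.randn(len(t))
print(is_suspicious(ecg, abp))           # expected False for consistent signals
```

If the ECG has been adversarially altered while the second signal has not, the two rate estimates diverge and the input can be rejected.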

VI. CONCLUSIONS AND FUTURE RESEARCH

Physiological computing includes, or significantly overlaps

with, BCIs, affective computing, adaptive automation, health

informatics, and physiological signal based biometrics. It

increases the communication bandwidth from the user to the

computer, but is also subject to adversarial attacks. This paper

has given a comprehensive review on adversarial attacks and

their defense strategies in physiological computing, which hopefully will bring more attention to the security of physiological computing systems.

Promising future research directions in this area include:

1) Transfer learning has been extensively used in physiological computing [83], to alleviate the training data shortage problem by leveraging data from other subjects [33] or tasks [82], or to warm-start the training of a (deep) learning algorithm by borrowing parameters or knowledge from an existing algorithm [80], as shown in Fig. 11. However, transfer learning is particularly susceptible to poisoning attacks [55], [80]. It is very important to develop strategies to check the integrity of data and models before using them in transfer learning.

Fig. 11. A transfer learning pipeline in motor imagery based BCIs [83].

2) Adversarial attacks on other components of the machine learning pipeline (an example for BCIs is shown in Fig. 12), which includes signal processing, feature engineering, and classification/regression, and the corresponding defense strategies. So far, all adversarial attack approaches in physiological computing have considered the classification or regression model only, but not other components, e.g., signal processing and feature engineering. It has been shown that feature selection is also subject to data poisoning attacks [84], and adversarial feature selection can be used to defend against evasion attacks [88].

Fig. 12. Adversarial attacks to the BCI machine learning pipeline.

3) Additional types of attacks in physiological computing

[7], [12], [19], [66], [71], as shown in Fig. 13, and the

corresponding defense strategies. For example, Paoletti

et al. [62] performed parameter tampering attacks on

Boston Scientific implantable cardioverter defibrillators,

which use a discrimination tree to detect tachycardia

episodes and then initiate the appropriate therapy. They

slightly modified the parameters of the discrimination

tree to achieve both attack effectiveness and stealthiness.

These attacks are also very dangerous in physiological

computing, and hence deserve adequate attention.

4) Adversarial attacks to affective computing and adaptive

automation applications, which have not been studied

yet, but are also possible and dangerous. Many existing attack approaches in BCIs, health informatics and

biometrics can be extended to them, either directly or

with slight modifications. However, there could also

be unique attack approaches specific to these areas.

For example, emotions are frequently represented as

continuous numbers in the 3D space of valence, arousal and dominance in affective computing [53], and hence adversarial attacks to regression models in affective computing also deserve adequate attention.

Fig. 13. Additional types of attacks in physiological computing.

Finally, we need to emphasize that the goal of adversarial

attack research in physiological computing should be discovering its vulnerabilities, and then finding solutions to make it more secure, instead of merely causing damage to it.

REFERENCES

[1] F. Agrafioti, J. Gao, D. Hatzinakos, and J. Yang, “Heart biometrics:Theory, methods and applications,” in Biometrics. InTech Shanghai,China, 2011, pp. 199–216.

[2] A. Aminifar, “Universal adversarial perturbations in epileptic seizuredetection,” in Proc. Int’l Joint Conf. on Neural Networks, Jul. 2020, pp.1–6.

[3] R. V. Aranha, C. G. Correa, and F. L. Nunes, “Adapting software with affective computing: a systematic review,” IEEE Trans. on Affective Computing, 2021, in press.

[4] P. Arico, G. Borghini, G. Di Flumeri, A. Colosimo, S. Bonelli, A. Golfetti, S. Pozzi, J.-P. Imbert, G. Granger, R. Benhacene et al., “Adaptive automation triggered by EEG-based mental workload index: a passive brain-computer interface application in realistic air traffic control environment,” Frontiers in Human Neuroscience, vol. 10, p. 539, 2016.

[5] U. Asif, S. Roy, J. Tang, and S. Harrer, “SeizureNet: Multi-spectral deepfeature learning for seizure type classification,” in Machine Learning inClinical Neuroimaging and Radiogenomics in Neuro-oncology, 2020,pp. 77–87.

[6] S. L. Bernal, A. H. Celdran, L. F. Maimo, M. T. Barros, S. Balasubra-maniam, and G. M. Perez, “Cyberattacks on miniature brain implants todisrupt spontaneous neural signaling,” IEEE Access, vol. 8, pp. 152 204–152 222, 2020.

[7] S. L. Bernal, A. H. Celdran, G. M. Perez, M. T. Barros, and S. Bal-asubramaniam, “Security in brain-computer interfaces: State-of-the-art,opportunities, and future challenges,” ACM Computing Surveys, vol. 54,no. 1, pp. 1–35, 2021.

[8] S. Bianco and P. Napoletano, “Biometric recognition using multimodalphysiological signals,” IEEE Access, vol. 7, pp. 83 581–83 588, 2019.

[9] B. Biggio, B. Nelson, and P. Laskov, “Support vector machines underadversarial label noise,” in Proc. Asian Conf. on Machine Learning,Taiwan, China, Nov. 2011, pp. 97–112.

[10] H. Cai and K. K. Venkatasubramanian, “Detecting malicious temporalalterations of ECG signals in body sensor networks,” in Proc. Int’l Conf.

on Network and System Security, New York, NY, Dec. 2015, pp. 531–539.

[11] ——, “Detecting signal injection attack-based morphological alterations of ECG measurements,” in Proc. Int’l Conf. on Distributed Computing in Sensor Systems, Washington, DC, May 2016, pp. 127–135.

[12] C. Camara, P. Peris-Lopez, and J. E. Tapiador, “Security and privacy issues in implantable medical devices: A comprehensive survey,” Journal of Biomedical Informatics, vol. 55, pp. 272–289, 2015.

[13] N. Carlini and D. Wagner, “Towards evaluating the robustness of neuralnetworks,” in Proc. IEEE Symposium on Security and Privacy, San Jose,CA, May 2017, pp. 39–57.

[14] Q. Chen, X. Ma, Z. Zhu, and Y. Sun, “Evolutionary multi-tasking single-objective optimization based on cooperative co-evolutionary memeticalgorithm,” in Proc. 13th Int’l Conf. on Computational Intelligence and

Security, 2017, pp. 197–201.

[15] L. Chittaro and R. Sioni, “Affective computing vs. affective placebo:Study of a biofeedback-controlled game for relaxation training,” Int’l

Journal of Human-Computer Studies, vol. 72, no. 8-9, pp. 663–673,2014.

[16] E. Coiera, Guide to Health Informatics. CRC press, 2015.

[17] N. Das, M. Shanbhogue, S.-T. Chen, F. Hohman, L. Chen, M. E.Kounavis, and D. H. Chau, “Keeping the bad guys out: Protectingand vaccinating deep learning with JPEG compression,” arXiv preprint

arXiv:1705.02900, 2017.

[18] T. de Greef, H. Lafeber, H. van Oostendorp, and J. Lindenberg, “Eyemovement as indicators of mental workload to trigger adaptive automa-tion,” in Int’l Conf. on Foundations of Augmented Cognition, San Diego,CA, Jul. 2009, pp. 219–228.

[19] T. Denning, Y. Matsuoka, and T. Kohno, “Neurosecurity: security andprivacy for neural devices,” Neurosurgical Focus, vol. 27, no. 1, p. E7,2009.

[20] S. Eberz, N. Paoletti, M. Roeschlin, M. Kwiatkowska, I. Martinovic, andA. Patane, “Broken hearted: How to attack ECG biometrics,” in Proc.

Network and Distributed System Security Symposium. San Diego, CA:Internet Society, Feb. 2017.

[21] I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash,A. Rahmati, and D. Song, “Robust physical-world attacks on deeplearning visual classification,” in Proc. IEEE Conf. on Computer Vision

and Pattern Recognition, Salt Lake City, UT, Jun. 2018, pp. 1625–1634.

[22] S. H. Fairclough, “Fundamentals of physiological computing,” Interact-

ing with Computers, vol. 21, pp. 133–145, 2009.

[23] S. G. Finlayson, J. D. Bowers, J. Ito, J. L. Zittrain, A. L. Beam, and I. S.Kohane, “Adversarial attacks on medical machine learning,” Science,vol. 363, no. 6433, pp. 1287–1289, 2019.

[24] S. G. Finlayson, H. W. Chung, I. S. Kohane, and A. L. Beam, “Ad-versarial attacks against medical deep learning systems,” arXiv preprint

arXiv:1804.05296, 2018.

[25] J. Gao, B. Wang, Z. Lin, W. Xu, and Y. Qi, “DeepCloak: Masking deepneural network models for robustness against adversarial samples,” arXiv

preprint arXiv:1702.06763, 2017.

[26] E. B. Geller, “Responsive neurostimulation: review of clinical trials andinsights into focal epilepsy,” Epilepsy & Behavior, vol. 88, pp. 11–20,2018.

[27] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessingadversarial examples,” in Proc. Int’l Conf. on Learning Representations,San Diego, CA, May 2015.

[28] S. D. Goodfellow, A. Goodwin, R. Greer, P. C. Laussen, M. Mazwi,and D. Eytan, “Towards understanding ECG rhythm classification usingconvolutional neural networks and attention mappings,” in Proc. 3rd

Machine Learning for Healthcare Conf., Stanford, CA, Aug. 2018, pp.83–101.

[29] S. Gu and L. Rigazio, “Towards deep neural network architectures robustto adversarial examples,” arXiv preprint arXiv:1412.5068, 2014.

[30] Q. Gui, Z. Jin, and W. Xu, “Exploring EEG-based biometrics for useridentification and authentication,” in Proc. IEEE Signal Processing inMedicine and Biology Symposium, Philadelphia, PA, Dec. 2014, pp. 1–6.

[31] A. Gummadavelli, H. P. Zaveri, D. D. Spencer, and J. L. Gerrard,“Expanding brain-computer interfaces for controlling epilepsy networks:novel thalamic responsive neurostimulation in refractory epilepsy,” Fron-

tiers in Neuroscience, vol. 12, p. 474, 2018.

[32] X. Han, Y. Hu, L. Foschini, L. Chinitz, L. Jankelson, and R. Ranganath,“Deep learning models for electrocardiograms are susceptible to adver-sarial attack,” Nature Medicine, vol. 3, pp. 360–363, 2020.

[33] H. He and D. Wu, “Transfer learning for brain-computer interfaces: AEuclidean space data alignment approach,” IEEE Trans. on Biomedical

Engineering, vol. 67, no. 2, pp. 399–410, 2020.

[34] H. Hosseini, Y. Chen, S. Kannan, B. Zhang, and R. Poovendran,“Blocking transferability of adversarial examples in black-box learningsystems,” arXiv preprint arXiv:1703.04318, 2017.

[35] A. Hussein, M. Djandji, R. A. Mahmoud, M. Dhaybi, and H. Hajj,“Augmenting DL with adversarial training for robust prediction ofepilepsy seizures,” ACM Trans. on Computing for Healthcare, vol. 1,no. 3, pp. 1–18, 2020.


[36] B. Hwang, J. You, T. Vaessen, I. Myin-Germeys, C. Park, and B.-T. Zhang, “Deep ECGNet: An optimal deep learning frameworkfor monitoring mental stress using ultra short-term ECG signals,”TELEMEDICINE and e-HEALTH, vol. 24, no. 10, pp. 753–772, 2018.

[37] G. Jacucci, S. Fairclough, and E. T. Solovey, “Physiological computing,”Computer, vol. 48, no. 10, pp. 12–16, 2015.

[38] I. Jarchum, “The ethics of neurotechnology,” Nature Biotechnology,vol. 37, pp. 993–996, 2019.

[39] X. Jiang, X. Zhang, and D. Wu, “Active learning for black-box adver-sarial attacks in EEG-based brain-computer interfaces,” in Proc. IEEESymposium Series on Computational Intelligence, Xiamen, China, Dec.2019.

[40] G. A. Kaissis, M. R. Makowski, D. Ruckert, and R. F. Braren, “Secure,privacy-preserving and federated machine learning in medical imaging,”Nature Machine Intelligence, vol. 2, pp. 305–311, 2020.

[41] F. Karim, S. Majumdar, and H. Darabi, “Adversarial attacks on time series,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 2021, in press.

[42] N. Karimian, D. Woodard, and D. Forte, “ECG biometric: Spoofing and countermeasures,” IEEE Trans. on Biometrics, Behavior, and Identity Science, vol. 2, no. 3, pp. 257–270, 2020.

[43] N. Karimian, “How to attack PPG biometric using adversarial machine learning,” in Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure, vol. 11009, Baltimore, MD, Apr. 2019, p. 1100909.

[44] N. Karimian, D. L. Woodard, and D. Forte, “On the vulnerability of ECG verification to online presentation attacks,” in Proc. IEEE Int’l Joint Conf. on Biometrics, Denver, CO, Oct. 2017, pp. 143–151.

[45] D. Kostas and F. Rudzicz, “Thinker invariance: enabling deep neural networks for BCI across more people,” Journal of Neural Engineering, vol. 17, no. 5, p. 056008, 2020.

[46] A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” in Proc. Int’l Conf. on Learning Representations, Toulon, France, Apr. 2017.

[47] B. J. Lance, S. E. Kerick, A. J. Ries, K. S. Oie, and K. McDowell, “Brain-computer interface technologies in the coming decades,” Proc. of the IEEE, vol. 100, no. 3, pp. 1585–1599, 2012.

[48] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, “EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces,” Journal of Neural Engineering, vol. 15, no. 5, p. 056013, 2018.

[49] F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu, and J. Zhu, “Defense against adversarial attacks using high-level representation guided denoiser,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Salt Lake City, UT, Jun. 2018, pp. 1778–1787.

[50] Z. Liu, L. Meng, X. Zhang, W. Fang, D. Wu, and L. Ding, “Universal adversarial perturbations for CNN classifiers in EEG-based BCIs,” Engineering, 2021, submitted. [Online]. Available: https://arxiv.org/abs/1912.01171

[51] X. Ma, Y. Niu, L. Gu, Y. Wang, Y. Zhao, J. Bailey, and F. Lu, “Understanding adversarial attacks on deep learning based medical image analysis systems,” Pattern Recognition, vol. 110, p. 107332, 2020.

[52] E. Maiorana, G. E. Hine, D. L. Rocca, and P. Campisi, “On the vulnerability of an EEG-based biometric system to hill-climbing attacks: algorithms’ comparison and possible countermeasures,” in Proc. IEEE 6th Int’l Conf. on Biometrics: Theory, Applications and Systems, 2013, pp. 1–6.

[53] A. Mehrabian, Basic Dimensions for a General Psychological Theory: Implications for Personality, Social, Environmental, and Developmental Studies. Oelgeschlager, Gunn & Hain, 1980.

[54] S. Mei and X. Zhu, “Using machine teaching to identify optimal training-set attacks on machine learners,” in Proc. AAAI Conf. on Artificial Intelligence, vol. 29, no. 1, Austin, TX, Jan. 2015.

[55] L. Meng, J. Huang, Z. Zeng, X. Jiang, S. Yu, T.-P. Jung, C.-T. Lin, R. Chavarriaga, and D. Wu, “EEG-based brain-computer interfaces are vulnerable to backdoor attacks,” Nature Computational Science, 2021, submitted. [Online]. Available: https://www.researchsquare.com/article/rs-108085/v1

[56] L. Meng, C.-T. Lin, T.-P. Jung, and D. Wu, “White-box target attack for EEG-based BCI regression problems,” in Proc. Int’l Conf. on Neural Information Processing, Sydney, Australia, Dec. 2019.

[57] D. J. Miller, Z. Xiang, and G. Kesidis, “Adversarial learning targeting deep neural network classification: A comprehensive review of defenses against attacks,” Proc. IEEE, vol. 108, no. 3, pp. 402–433, 2020.

[58] M. Minsky, The Society of Mind. New York, NY: Simon and Schuster, 1988.

[59] T. Mishra, M. Wang, A. A. Metwally, G. K. Bogu, A. W. Brooks, A. Bahmani, A. Alavi, A. Celli, E. Higgs, O. Dagan-Rosenfeld et al., “Pre-symptomatic detection of COVID-19 from smartwatch data,” Nature Biomedical Engineering, vol. 4, no. 12, pp. 1208–1220, 2020.

[60] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “DeepFool: a simple and accurate method to fool deep neural networks,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, NV, Jun. 2016, pp. 2574–2582.

[61] A. Newaz, N. I. Haque, A. K. Sikder, M. A. Rahman, and A. S. Uluagac, “Adversarial attacks to machine learning-based smart healthcare systems,” arXiv preprint arXiv:2010.03671, 2020.

[62] N. Paoletti, Z. Jiang, M. A. Islam, H. Abbas, R. Mangharam, S. Lin, Z. Gruber, and S. A. Smolka, “Synthesizing stealthy reprogramming attacks on cardiac devices,” in Proc. 10th ACM/IEEE Int’l Conf. on Cyber-Physical Systems, 2019, pp. 13–22.

[63] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning,” in Proc. Asia Conf. on Computer and Communications Security, Abu Dhabi, United Arab Emirates, Apr. 2017, pp. 506–519.

[64] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, “Distillation as a defense to adversarial perturbations against deep neural networks,” in Proc. IEEE Symp. on Security and Privacy, San Jose, CA, May 2016, pp. 582–597.

[65] R. Picard, Affective Computing. Cambridge, MA: The MIT Press, 1997.

[66] L. Pycroft, S. G. Boccard, S. L. Owen, J. F. Stein, J. J. Fitzgerald, A. L. Green, and T. Z. Aziz, “Brainjacking: implant security issues in invasive neuromodulation,” World Neurosurgery, vol. 92, pp. 454–462, 2016.

[67] A. Qayyum, J. Qadir, M. Bilal, and A. Al-Fuqaha, “Secure and robust machine learning for healthcare: A survey,” IEEE Reviews in Biomedical Engineering, vol. 14, pp. 156–180, 2021.

[68] S. Qiu, Q. Liu, S. Zhou, and C. Wu, “Review of artificial intelligence adversarial attack and defense technologies,” Applied Sciences, vol. 9, no. 5, p. 909, 2019.

[69] A. Rahman, M. S. Hossain, N. A. Alrajeh, and F. Alsolami, “Adversarial examples – security threats to COVID-19 deep learning systems in medical IoT devices,” IEEE Internet of Things Journal, 2021, in press.

[70] B. Rim, N.-J. Sung, S. Min, and M. Hong, “Deep learning in physiological signal data: A survey,” Sensors, vol. 20, no. 4, p. 969, 2020.

[71] M. Rushanan, A. D. Rubin, D. F. Kune, and C. M. Swanson, “SoK: Security and privacy in implantable medical devices and body area networks,” in Proc. IEEE Symposium on Security and Privacy, 2014, pp. 524–539.

[72] K. Sadeghi, A. Banerjee, and S. K. Gupta, “An analytical framework for security-tuning of artificial intelligence applications under attack,” in Proc. IEEE Int’l Conf. on Artificial Intelligence Testing, San Francisco, CA, Apr. 2019, pp. 111–118.

[73] P. Samangouei, M. Kabkab, and R. Chellappa, “Defense-GAN: Protecting classifiers against adversarial attacks using generative models,” arXiv preprint arXiv:1805.06605, 2018.

[74] R. T. Schirrmeister, J. T. Springenberg, L. D. J. Fiederer, M. Glasstetter, K. Eggensperger, M. Tangermann, F. Hutter, W. Burgard, and T. Ball, “Deep learning with convolutional neural networks for EEG decoding and visualization,” Human Brain Mapping, vol. 38, no. 11, pp. 5391–5420, 2017.

[75] E. W. Sellers and E. Donchin, “A P300-based brain-computer interface: initial tests by ALS patients,” Clinical Neurophysiology, vol. 117, no. 3, pp. 538–548, 2006.

[76] Y. N. Singh, S. K. Singh, and A. K. Ray, “Bioelectrical signals as emerging biometrics: Issues and challenges,” Int’l Scholarly Research Notices, vol. 2012, 2012.

[77] K. Sundararajan, “Privacy and security issues in brain computer interfaces,” Master’s thesis, Auckland University of Technology, 2017. [Online]. Available: http://orapp.aut.ac.nz/bitstream/handle/10292/11449/SundararajanK.pdf

[78] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” in Proc. Int’l Conf. on Learning Representations, Banff, Canada, Apr. 2014.

[79] F. Tramer, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, “Ensemble adversarial training: Attacks and defenses,” arXiv preprint arXiv:1705.07204, 2017.

[80] S. Wang, S. Nepal, C. Rudolph, M. Grobler, S. Chen, and T. Chen, “Backdoor attacks against transfer learning with pre-trained deep learning models,” IEEE Trans. on Services Computing, 2020, in press.

[81] D. Wu, V. J. Lawhern, S. Gordon, B. J. Lance, and C.-T. Lin, “Driver drowsiness estimation from EEG signals using online weighted adaptation regularization for regression (OwARR),” IEEE Trans. on Fuzzy Systems, vol. 25, no. 6, pp. 1522–1535, 2017.

[82] D. Wu, V. J. Lawhern, W. D. Hairston, and B. J. Lance, “Switching EEG headsets made easy: Reducing offline calibration effort using active weighted adaptation regularization,” IEEE Trans. on Neural Systems and Rehabilitation Engineering, vol. 24, no. 11, pp. 1125–1137, 2016.

[83] D. Wu, Y. Xu, and B.-L. Lu, “Transfer learning for EEG-based brain-computer interfaces: A review of progress made since 2016,” IEEE Trans. on Cognitive and Developmental Systems, 2021, in press.

[84] H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, and F. Roli, “Is feature selection secure against training data poisoning?” in Proc. 32nd Int’l Conf. on Machine Learning, Lille, France, Jul. 2015, pp. 1689–1698.

[85] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille, “Adversarial examples for semantic segmentation and object detection,” in Proc. IEEE Int’l Conf. on Computer Vision, Venice, Italy, Oct. 2017, pp. 1369–1378.

[86] W. Xu, D. Evans, and Y. Qi, “Feature squeezing: Detecting adversarial examples in deep neural networks,” arXiv preprint arXiv:1704.01155, 2017.

[87] U. Yadav, S. N. Abbas, and D. Hatzinakos, “Evaluation of PPG biometrics for authentication in different states,” in Proc. Int’l Conf. on Biometrics, Queensland, Australia, Feb. 2018, pp. 277–282.

[88] F. Zhang, P. P. K. Chan, B. Biggio, D. S. Yeung, and F. Roli, “Adversarial feature selection against evasion attacks,” IEEE Trans. on Cybernetics, vol. 46, no. 3, pp. 766–777, 2016.

[89] X. Zhang and D. Wu, “On the vulnerability of CNN classifiers in EEG-based BCIs,” IEEE Trans. on Neural Systems and Rehabilitation Engineering, vol. 27, no. 5, pp. 814–825, 2019.

[90] X. Zhang, D. Wu, L. Ding, H. Luo, C.-T. Lin, T.-P. Jung, and R. Chavarriaga, “Tiny noise, big mistakes: Adversarial perturbations induce errors in brain-computer interface spellers,” National Science Review, 2021, in press.