
Machine learning approaches to Attack Detection in Collaborative Recommender Systems

Runa Bhaumik

College of Computing and Digital Media, DePaul University, Chicago, Illinois



Outline

• Vulnerabilities in collaborative recommendation
  – Background, types of attacks, and examples
  – Basic attack models
  – Effectiveness of different attacks against common CF algorithms
• Possible solutions
  – Attack detection and response

Motivation and Objectives

The “UserSubmitter.com” website, which once operated as a “pay-per-digg” service, allowed publishers to promote their content on Digg.com by paying other Digg users to “digg” the submitted article.

Motivation and Objectives

• Several real-world examples of suspicious behavior related to recommender systems and social tagging networks:
  – Amazon.com
  – Spur.NET

Introduction

• User-adaptive systems
  – e.g., collaborative recommender systems, social tagging networks
  – Depend on user input
• Problem we are addressing
  – User-adaptive systems present a security problem: malicious users may try to distort the system's behavior
• Solution
  – Understanding different kinds of attack is crucial to evaluating the system
  – Investigating how such public systems can be made more robust through algorithmic solutions and detection

Example Collaborative System

Profile    Ratings given (Items 1–6)    Correlation with Alice
Alice      5, 2, 3, 3, ?                –
User 1     2, 4, 4, 1                   -1.00
User 2     2, 1, 3, 1, 2                0.33
User 3     4, 2, 3, 2, 1                0.90
User 4     3, 3, 2, 3, 1                0.19
User 5     3, 2, 2, 2                   -1.00
User 6     5, 3, 1, 3, 2                0.65
User 7     5, 1, 5, 1                   -1.00

Each profile rates only a subset of the six items; correlations are computed over the items co-rated with Alice. User 3 (0.90) is the best match, so the prediction for Alice's unrated target item is taken from User 3, using k-nearest neighbor with k = 1.
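To make the neighborhood mechanics concrete, here is a minimal Python sketch of this k = 1 prediction. The dict-of-ratings representation, the `pearson` helper, and the unweighted top-k average are illustrative assumptions, not code from the slides; the two toy neighbors mirror the 0.90 and -1.00 correlations from the table above under an assumed item placement.

```python
import numpy as np

def pearson(u, v):
    """Pearson correlation over the items co-rated by profiles u and v.
    Profiles are dicts mapping item id -> rating (a toy representation)."""
    common = sorted(set(u) & set(v))
    if len(common) < 2:
        return 0.0
    a = np.array([u[i] for i in common], dtype=float)
    b = np.array([v[i] for i in common], dtype=float)
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(alice, users, item, k=1):
    """Predict Alice's rating for `item` from the k most similar users
    who rated it; with k = 1 this is just the best match's rating."""
    raters = [u for u in users if item in u]
    raters.sort(key=lambda u: pearson(alice, u), reverse=True)
    best = raters[:k]
    return sum(u[item] for u in best) / len(best)

alice = {1: 5, 2: 2, 3: 3, 4: 3}                # item id -> rating
neighbors = [{1: 4, 2: 2, 3: 3, 4: 2, 6: 1},    # a 0.90-correlated user
             {1: 2, 3: 4, 4: 4, 6: 1}]          # a -1.00-correlated user
print(predict(alice, neighbors, item=6))        # best match's rating: 1.0
```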

A Successful Push Attack

Profile    Ratings given (Items 1–6)    Correlation with Alice
Alice      5, 2, 3, 3, ?                –
User 1     2, 4, 4, 1                   -1.00
User 2     2, 1, 3, 1, 2                0.33
User 3     4, 2, 3, 2, 1                0.90
User 4     3, 3, 2, 3, 1                0.19
User 5     3, 2, 2, 2                   -1.00
User 6     5, 3, 1, 3, 2                0.65
User 7     5, 1, 5, 1                   -1.00
Attack 1   2, 3, 2, 5                   -1.00
Attack 2   3, 2, 3, 2, 5                0.76
Attack 3   3, 2, 2, 2, 5                0.93

Each attack profile gives the pushed target item the maximum rating of 5. Attack 3 (0.93) displaces User 3 as Alice's best match, so the “user-based” algorithm using k-nearest neighbor with k = 1 now takes its prediction from an attack profile.

Profile Injection Attacks

• Goal: to learn an attacker's behavior
• Profile injection attacks
  – Consist of a number of "attack profiles"
  – Profiles engineered to bias the system's recommendations
  – Called “shilling” in some previous work
• “Push attack”
  – Designed to promote a particular product
  – Attack profiles give a high rating to the pushed item
  – Include other ratings as necessary
• Other attack types
  – “Nuke” attacks, designed to demote an item

Previous Work

• O'Mahony et al., 2004
  – Theoretical basis for vulnerability; upper bound on prediction shift
  – Assumes full knowledge of the rating data
• Lam & Riedl, 2004
  – Empirical study of simple attack types
  – Impact on user-based and item-based algorithms
  – Assumes knowledge of the average and std. dev. of ratings for all items
• General conclusion: substantial vulnerabilities exist

A Generic Attack Profile

• Previous work considered simple attack profiles:
  – No selected items, i.e., $I_S = \emptyset$
  – No unrated items, i.e., $I_\emptyset = \emptyset$
  – Attack models differ based on the ratings assigned to filler items, e.g., random attack, average attack

The generic attack profile assigns:

  – ratings $\sigma(i^S_1), \ldots, \sigma(i^S_k)$ to the $k$ selected items in $I_S$
  – ratings $\delta(i^F_1), \ldots, \delta(i^F_l)$ to the $l$ filler items in $I_F$
  – null (no rating) to the unrated items $i^\emptyset_1, \ldots, i^\emptyset_v$ in $I_\emptyset$
  – rating $\gamma(i_t)$ to the target item $i_t$

Vulnerabilities Against Collaborative Filtering Systems (2005–2006)

• Random
  Ratings drawn from the overall rating distribution across all items
• Average
  Ratings drawn from the rating distribution of each individual item
• Bandwagon / AOP
  Target popular items (e.g., “blockbuster” movies)
• Segment attack
  Target a particular segment of users (fans of Harrison Ford or fans of horror)

A sketch of these attack models follows below.
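The attack models differ mainly in how filler ratings are chosen. Here is a hedged Python sketch; the function names, the 1–5 rating scale, and the clipped-normal approximation of the rating distributions are assumptions for illustration, not the slides' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_attack(target, filler_ids, global_mean, global_std):
    """Random attack: filler ratings drawn from the overall rating
    distribution (approximated by a clipped normal); target pushed to 5."""
    profile = {i: int(np.clip(round(rng.normal(global_mean, global_std)), 1, 5))
               for i in filler_ids}
    profile[target] = 5
    return profile

def average_attack(target, filler_ids, item_means):
    """Average attack: each filler item is rated near its own mean rating,
    which makes the profile resemble a genuine user far more closely."""
    profile = {i: int(np.clip(round(item_means[i]), 1, 5)) for i in filler_ids}
    profile[target] = 5
    return profile

def bandwagon_attack(target, popular_ids, filler_profile):
    """Bandwagon attack: add maximum ratings for widely rated 'blockbuster'
    items on top of random filler, buying correlation with many users."""
    profile = dict(filler_profile)
    profile.update({i: 5 for i in popular_ids})
    profile[target] = 5
    return profile
```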

Experimental Methodology

• Data set
  – MovieLens 100K data set
  – 943 users and 1682 movies
• Evaluation metric
  – Prediction shift: how much the prediction for the pushed movie differs before and after the attack (see the sketch after this list)
• Attack size
  – Number of attack profiles as a percentage of the profiles in the database before the attack (3% means 28 attack users)
• Profile size
  – Number of filler items in an attack profile, as a proportion of the set of all items (3% means 50 items)
• Algorithms
  – User-based collaborative filtering
  – Item-based collaborative filtering
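A minimal sketch of the prediction shift metric, assuming prediction maps keyed by (user, item) pairs; the function name and data layout are illustrative.

```python
import numpy as np

def prediction_shift(pred_before, pred_after):
    """Average prediction shift over the test (user, item) pairs.

    pred_before / pred_after map (user, item) -> the recommender's
    predicted rating before and after the attack profiles are injected;
    a large positive shift means the push attack succeeded.
    """
    pairs = pred_before.keys() & pred_after.keys()
    return float(np.mean([pred_after[p] - pred_before[p] for p in pairs]))

# Attack size and filler size are simple proportions of the pre-attack data:
n_attack_profiles = round(0.03 * 943)   # 3% attack size -> ~28 attack users
n_filler_items = round(0.03 * 1682)     # 3% filler size -> ~50 filler items
```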

Effectiveness of Push and Nuke Attacks

[Figure: four prediction-shift charts]
• User-based (push attack): prediction shift vs. attack size (0%–15%) for Average (3%), Bandwagon (3%), Random (6%), Segment (all-user), and Segment (in-segment) attacks.
• Item-based (push attack): prediction shift vs. attack size (0%–15%) for Average, Bandwagon, Random, Segment (in-segment), and Segment (all-user) attacks.
• User-based (nuke attack): prediction shift vs. attack size (0%–15%) for Average (3%), Bandwagon (3%), Random (6%), Love/Hate (3%), and Reverse Bandwagon attacks.
• Item-based (nuke attack): prediction shift vs. attack size (0%–15%) for Average, Bandwagon, Random, Love/Hate (3%), and Reverse Bandwagon attacks.


Possible Solutions

• Algorithmic solutions
  – Design algorithms that are less susceptible to these types of attacks
  – Hybrid and model-based approaches
• Detection and response
  – Identify fake user profiles and remove them from the system
• Implement CAPTCHA
  – A program that protects websites against bots


Approaches to Detection & Response

• Single profile classification
  – A classification model to identify attack profiles and exclude them when computing predictions
• Group profile classification
  – A clustering model to identify groups of attack profiles
• Anomaly detection
  – Classify items (as being possibly under attack)
  – Not dependent on known attack models
  – Can shed some light on which types of items are most vulnerable to which types of attacks

But what if the attack does not closely correspond to a known attack signature?


Classification-Based Approach to Detection

• Profile classification
  – Automatically identify attack profiles and exclude them from predictions
  – Reverse-engineered profiles are likely to be the most damaging
  – Increase the cost of attacks by detecting the most effective ones
  – Characteristics of known attack models are likely to appear in other effective attacks as well
• Basic approach
  – Create attributes that capture characteristics of suspicious profiles
  – Use the attributes to build classification models
  – Apply the model to user profiles to identify and discount potential attacks
• Types of detection attributes
  – Generic: modeled on basic descriptive statistics
  – Model-specific: attempt to detect characteristics of profiles generated by specific attack models


Examples of Generic Attributes

• Weighted Deviation from Mean Agreement (WDMA)
  – Average difference of the profile's ratings from the mean rating on each item, weighted by the square of the item's inverse rating frequency
• Weighted Degree of Agreement (WDA)
  – Sum of the profile's rating agreement with the mean rating on each item, weighted by the inverse rating frequency
• Degree of Similarity (DegSim)
  – Average correlation of the profile's k nearest neighbors
  – Captures rogue profiles that are part of large attacks with similar characteristics
• Length Variance (LengthVar)
  – Variance of the number of ratings in a profile compared to the average number of ratings per user
  – Few real users rate a large number of items

The attributes are defined as follows, where $n_u$ is the number of items rated by user $u$, $r_{u,i}$ is $u$'s rating of item $i$, $\bar{r}_i$ is item $i$'s mean rating, $l_i$ is the number of ratings item $i$ has received, and $W_{uj}$ is the similarity between $u$ and neighbor $j$:

$$\mathrm{WDMA}_u = \frac{1}{n_u} \sum_{i=0}^{n_u} \frac{|r_{u,i} - \bar{r}_i|}{l_i^2}$$

$$\mathrm{WDA}_u = \sum_{i=0}^{n_u} \frac{|r_{u,i} - \bar{r}_i|}{l_i}$$

$$\mathrm{LengthVar}_u = \frac{\left|\#\text{ratings}_u - \overline{\#\text{ratings}}\right|}{\sum_{j \in N} \left(\#\text{ratings}_j - \overline{\#\text{ratings}}\right)^2}$$

$$\mathrm{DegSim}_u = \frac{\sum_{j=1}^{k} W_{uj}}{k}$$
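These definitions translate directly into code. This is a minimal sketch assuming profiles are dicts of item -> rating with precomputed per-item statistics; the function names and data layout are illustrative.

```python
import numpy as np

def wdma(profile, item_mean, item_count):
    """WDMA: mean absolute deviation from each item's average rating,
    weighted by the inverse rating frequency squared (1 / l_i^2)."""
    return sum(abs(r - item_mean[i]) / item_count[i] ** 2
               for i, r in profile.items()) / len(profile)

def wda(profile, item_mean, item_count):
    """WDA: the same deviations weighted by 1 / l_i, summed (not averaged)."""
    return sum(abs(r - item_mean[i]) / item_count[i] for i, r in profile.items())

def length_var(profile, all_lengths):
    """LengthVar: distance of this profile's length from the average profile
    length, normalized by the total squared variation over all users."""
    lengths = np.asarray(all_lengths, dtype=float)
    return abs(len(profile) - lengths.mean()) / ((lengths - lengths.mean()) ** 2).sum()

def deg_sim(neighbor_sims, k):
    """DegSim: average similarity W_{uj} of the profile's k nearest neighbors."""
    return sum(sorted(neighbor_sims, reverse=True)[:k]) / k
```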


Methodological Note for Detection Results

• Data set
  – MovieLens 100K data set
  – Data split 50% training, 50% test
• Profile classifier: supervised training approach
  – kNN classifier, k = 9
  – Training data: half of the actual data labeled as “Authentic,” plus an inserted mix of attack profiles built from several attack models, labeled as “Attack”
  – Test data: the second half of the actual data, plus inserted test attack profiles targeting different movies than those targeted in the training data (a sketch of this setup follows the list)
• Recommendation algorithm
  – User-based kNN, k = 20
• Evaluating results
  – 50 different target movies, selected randomly but mirroring the overall distribution
  – 50 users randomly pre-selected
  – Results were averaged over all runs for each movie-user pair
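A hedged sketch of the supervised training setup using scikit-learn's KNeighborsClassifier; the slides specify k = 9 but not an implementation, and the synthetic attribute vectors below are stand-ins for real WDMA/WDA/DegSim/LengthVar values.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Stand-in detection-attribute vectors (e.g., WDMA, WDA, DegSim, LengthVar):
# authentic profiles cluster near low deviation values, injected attack
# profiles near high ones.
X_authentic = rng.normal([0.02, 1.0, 0.30, 0.002], 0.01, size=(50, 4))
X_attack = rng.normal([0.25, 4.0, 0.90, 0.004], 0.05, size=(10, 4))
X_train = np.vstack([X_authentic, X_attack])
y_train = np.array(["Authentic"] * 50 + ["Attack"] * 10)

clf = KNeighborsClassifier(n_neighbors=9)  # k = 9, as in the slides
clf.fit(X_train, y_train)

# Profiles flagged as "Attack" would be discounted before running the
# user-based kNN (k = 20) recommender on the remaining profiles.
flags = clf.predict(X_train)  # in practice: attributes of the test profiles
```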


Evaluation Metrics

• Detection attribute value
  – Information gain: attack profile vs. authentic profile
• Classification performance (see the sketch after this list)
  – True positives = # of attack profiles correctly identified
  – False positives = # of authentic profiles misclassified as attacks
  – False negatives = # of attack profiles misclassified as authentic
  – Precision = true positives / (true positives + false positives): the percent of profiles identified as attacks that really are attacks
  – Recall = true positives / (true positives + false negatives): the percent of attack profiles that were identified correctly
• Recommender robustness
  – Prediction shift: the change in the recommender's prediction resulting from the attack
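A small sketch of these two computations, assuming parallel label sequences (names illustrative):

```python
def precision_recall(predicted, actual):
    """Precision and recall for attack detection, given parallel sequences
    of "Attack"/"Authentic" labels (predicted vs. ground truth)."""
    tp = sum(p == a == "Attack" for p, a in zip(predicted, actual))
    fp = sum(p == "Attack" and a == "Authentic" for p, a in zip(predicted, actual))
    fn = sum(p == "Authentic" and a == "Attack" for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```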


Classification Effectiveness: Bandwagon and Segment Push Attacks

[Figure: two charts of push attack detection vs. filler size (0%–100%). Left: precision (axis 0%–60%); right: recall (axis 0%–100%). Curves: Bandwagon (model detection), Segment (model detection), Bandwagon (Chirita detection), Segment (Chirita detection).]

1. “Detecting Profile Injection Attacks in Collaborative Recommender Systems,” in Proceedings of the IEEE Joint Conference on E-Commerce Technology and Enterprise Computing (2006).
2. “Toward Trustworthy Recommender Systems: An Analysis of Attack Models and Algorithm Robustness,” ACM Transactions on Internet Technology (TOIT) (2007).

Classification Approach

• Limitations
  – Did not perform well when the spam profiles are obfuscated
  – Ignored the combined effect of malicious users
  – Exploited signatures of known attack profiles
  – With millions of users in the database, it is not feasible to label the users

Possible Solutions

• Unsupervised approaches
  – Clustering based on principal component analysis (Mehta et al., 2007)
  – UnRAP algorithm, based on a residue metric used in gene expression analysis (Bryan et al., 2008)
  – N-P detection algorithm, a statistical approach (Hurley et al., 2009)
• Limitations
  – Model-based, parameterized, high false-alarm rates, not suitable for all attack models

Clustering Approach

• Unsupervised detection technique
  – Trains on unlabeled data
  – Creates attributes that capture characteristics of suspicious profiles
    • Generic attributes: RDMA, WDA, WDMA, profile variance
    • Residue-based attribute (Bryan et al., 2008)
• Divides the dataset into clusters
  – k-means clustering
  – Plot the within-groups sum of squares against the number of clusters
  – Run several times and select the clustering with the lowest squared error as the final clustering
• Identifies anomalous clusters based on cluster statistics
  – Select clusters with the highest RDMA, WDA, and coefficient of variation

A sketch of this procedure follows below.
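A loose Python sketch of the clustering recipe using scikit-learn's KMeans. The anomaly scoring (flagging the cluster with the highest mean attribute values) simplifies the cluster-statistics step, and all names are illustrative; the number of clusters k would itself come from the elbow of the within-groups sum-of-squares plot described above.

```python
import numpy as np
from sklearn.cluster import KMeans

def flag_attack_cluster(X, k=2, n_restarts=10):
    """X: a numpy array with one row of detection attributes (RDMA, WDA,
    WDMA, variance, residue score) per profile. Runs k-means several
    times, keeps the run with the lowest squared error, then flags the
    cluster whose mean attribute values are highest as the suspected
    attack cluster."""
    runs = [KMeans(n_clusters=k, n_init=1, random_state=s).fit(X)
            for s in range(n_restarts)]
    best = min(runs, key=lambda km: km.inertia_)
    scores = [X[best.labels_ == c].mean() for c in range(k)]
    return best.labels_ == int(np.argmax(scores))  # mask of flagged profiles
```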

Information Gain Results

[Figure: information gain vs. filler size (0%–70%) for the Average attack (1% attack size) and the Segment attack (1% attack size), comparing the RDMA, Length Variance, WDA, WDMA, and Residue Score attributes.]

Cluster Entropy

[Figure: bar charts of cluster sizes and entropies for the Average and Segment attacks with k = 2 and k = 3 clusters, separating authentic from attack profiles.]

Our conjecture: the smaller cluster will have higher entropy (943 real profiles and 47 attack profiles).
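The entropy used to test this conjecture is ordinary Shannon entropy over the authentic/attack mix inside each cluster; a minimal sketch, with the label representation assumed:

```python
import math
from collections import Counter

def cluster_entropy(true_labels):
    """Shannon entropy (bits) of the class mix inside one cluster, given
    the true "authentic"/"attack" labels of its members. A pure cluster
    scores 0; an even authentic/attack mix scores 1."""
    counts = Counter(true_labels)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```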

Obfuscated Attacks

• Noise injection
  – Adds noise to ratings, drawn from a standard normal distribution and multiplied by a constant
• User shifting
  – Increments or decrements (shifts) all ratings for a subset of items in each attack profile by a constant
• Target shifting
  – Simply shifts the rating given to the target item from the maximum rating to one step lower or, in the case of nuke attacks, increases the target rating to one step above the lowest rating
• Mixed attack
  – Attacks the same target item with profiles produced from different attack models

A sketch of these obfuscation strategies follows below.
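A hedged sketch of the first three transforms applied to an attack profile (a dict of item -> rating on a 1–5 scale; constants and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def noise_injection(profile, alpha=0.2):
    """Add standard-normal noise, scaled by a constant alpha, to each rating."""
    return {i: float(np.clip(r + alpha * rng.standard_normal(), 1, 5))
            for i, r in profile.items()}

def user_shifting(profile, shifted_items, shift=1):
    """Shift all ratings for a chosen subset of items by a constant."""
    return {i: float(np.clip(r + shift, 1, 5)) if i in shifted_items else r
            for i, r in profile.items()}

def target_shifting(profile, target, push=True):
    """Rate the target one step below the max (push) or one step above
    the min (nuke) so the profile looks less extreme."""
    out = dict(profile)
    out[target] = 4 if push else 2
    return out
```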

Clustering Effectiveness:

[Figure: sensitivity and specificity of clustering-based detection across attack types (average, random, segment, obfuscated average, obfuscated random, AOP) on MovieLens 100K (100,000 ratings; 943 users and 1682 movies) and MovieLens 1M (1 million ratings; 6040 users and 3952 movies).]

Clustering Effectiveness: Average Over Popular (AOP 20%) Attack (Hurley et al., 2009)

[Figure: sensitivity and specificity (0%–100%) of k-Means, UnRAP, and PCA detection under the AOP 20% attack.]

Summary of Clustering Results

• Advantages
  – Scalable
  – High degree of accuracy
  – Detection is effective against “segment” and “AOP” attacks
  – Does not depend on attack models
• Disadvantages
  – Flagging the wrong cluster can bias prediction accuracy
  – Real attackers might employ strategies to fool the system