Prevent Social Spam and Fraud From Sabotaging Your Brand
Special Report | December 2012
TeleSign and Impermium White Paper


Impermium has teamed with TeleSign to create this whitepaper on social spam. With TeleSign, suspicious customers are routed through a simple, user-friendly verification process, ensuring legitimate users move through while fraudsters and criminals stay out. In conjunction with the Impermium real-time threat detection capabilities and global threat network, site owners can control how tightly to lock down their site, balancing a great experience for trustworthy users with an impenetrable one for the bad guys. The combined solution allows administrators to rest assured that transactions such as registration, commenting, and login are safe and secure, with a minimum of inconvenience to users and the business.


Table of Contents

Introduction
Different Types and Sources of Social Spam
What's at Stake?
Existing Approaches to Defending Against Social Spam
Summary of Tactics
Shut the Front & Back Doors and Clean House
    Shut the Front Door
    Clean House
    Close the Back Door
Conclusion
About TeleSign & Impermium
References


Introduction

The rise of social media has led to the proliferation of social spam. Cybercriminals are scamming users of social networks, online directories, and online dating sites, devaluing the reputation of these sites and exposing them to new risks in the process. Social spam has become so pervasive that, to combat the issue, social network administrators are increasing security staff to protect their users and preserve their brand identity.

The rise of social media has led to the proliferation of social spam. Cybercriminals are attacking social networks, web applications, and media sites, causing untold harm to the sites and their users in the form of lost revenue, maintenance costs, user attrition, and brand reputation damage. In response, network administrators have devoted considerable resources, with decidedly mixed results, to regain control of their sites from these attackers.

The number of cyber attacks on social networks, online directories, and Web 2.0 sites has risen dramatically in the last few years. As users have flocked to social networks, spammers have turned them into the next battleground, enjoying success because of lower security hurdles and the relative ease of impersonating friends. People trust spam more when it comes from a "friend," and thus end up following links and being victimized more often than with email spam.

Many sites are poorly equipped to deal with the influx of social spam and, without clear best practices, may have difficulty choosing between mitigation options. This special report explores the different types of social spam, reviews the current methods used to identify and stop it, and proposes a new paradigm for social networks to predict and block malicious behavior.

Sites need to adopt a new, two-fold approach to combatting social spam. First, they need to ensure that only legitimate users gain access to a social network. Second, social networks need to use automated tools to flag and remove malicious and offensive content in real time. In sites across the web, this dual-pronged approach has proven effective at preventing criminals and hackers from crippling brands and damaging their relationships with customers, advertisers, and partners.


Different Types and Sources of Social Spam

Understanding the types of social spam is essential to designing effective countermeasures. Let's start with a quick primer on the different types of social spam being promoted within the leading social network sites today:

Account hijacking: Spammers often disguise themselves by hijacking normal users' accounts for their personal gain. Cyber miscreants steal login information from existing social media users via fraudulent phishing websites or by installing keystroke-logging malware. This is why social media users need to be on the lookout for suspicious messages from friends that include dangerous links or promote dubious offers. Unlike spammers in traditional systems such as SMS and email, social media spammers behave like normal social network users and continuously evolve their spamming strategies to fool anti-spam detection systems.

Registration fraud: Bots and "mechanical turks" are registering fake accounts by the millions. Attackers can distribute malicious content to all of the friends or followers associated with an account. Facebook, for example, recently acknowledged [A] that a total of 8.7 percent (or 83 million) of accounts on the network are bogus.

Moreover, many sites openly promote their social media attack services to spammers, from selling fraudulent accounts in bulk to delivering the software and services to perpetrate these attacks directly, including:

‣ TweetBuddy.com, which, according to court filings [B], created software to automate the creation of fake accounts and the mass distribution of tweets, and sold Twitter accounts to spammers.

‣ Sites like www.jetbots.com, which offer a variety of bots that social spam scammers can purchase in order to automatically create accounts, add friends, initiate spam chats, and much more.

‣ Automatic and manual CAPTCHA recognition tools, or CAPTCHA farms, which employ people to crack CAPTCHAs for just pennies apiece (see the section below to learn more about the pitfalls of CAPTCHA).

‣ Tools to automate the account creation and verification process by creating unlimited numbers of Google Gmail or Microsoft Hotmail accounts (such email accounts are usually required in order to create new social media accounts). Services like CLAD Genius automate processes such as ad scheduling (auto-posting scam ads within pre-defined time intervals).

DEFINITION: Socialbot
A "socialbot" is a computer software program that creates bogus accounts on a particular social network and has the ability to perform basic activities such as posting a message and sending a friend request. If a user accepts a socialbot's friend request, the bot gains access to the individual's information and contacts, which it will also try to befriend, and so on.


Not surprisingly, the rate for purchasing a fraudulent social media account has dropped significantly over the last year. Sites like BulkAccounts.com allow customers to buy Twitter and Facebook accounts in bulk. For example, customers can purchase 1,000 Facebook accounts with complete profiles and email logins (including date of birth) for just $250. (Source: BulkAccounts.com, November 2012)

Malware spam: Social spam often lurks in embedded links attached to photos, making it less obvious for users to spot. The problematic issue with social spam is that the message is personalized to appear as if it comes from a user's actual friend's account. Facebook stated [C] that less than 4 percent of all posts were spam, while Twitter reported that 1.5 percent of all Tweets were spam.

Comment spam: Spammers use the sharing features on social sites to spread their messages. Click on a spammer's link and it may ask you to like or share a page or allow an app to gain access to your profile. Using bots, fraudsters flood social media news sites with tens of thousands of comments that, in many cases, are posted by the same spam networks that are paid to promote online pharmacies and knockoff designer handbags.

Like-jacking: There are two goals in most social media scams: spread content quickly and make money. Like buttons help achieve prompt and widespread propagation, particularly as social media users get wise to traditional scams. Like-jacking is a common social-spam tactic that involves duping users into clicking on images that appear as if those users' friends clicked the like buttons associated with the images, thereby recommending them.

In another ploy, users are offered an enticing video. Hidden behind the play button could be an invisible like button. If clicked, the user might be taken to a page that requires some level of personal information before the video will play. Once provided, the user is redirected to other pages to complete online surveys or get pitched dubious products.

A count of "Likes" is displayed on profiles and pages, so that friends think a video has been watched by one of their friends and assume it is interesting or safe for them to watch. When they click play, the same sequence of events happens to them. The scammer, meanwhile, collects a handsome commission from its shady merchants for each like referral generated.

Malware placement: Hackers commonly sow social spam by creating false profiles and then friending people they don't know. Once a hacker's new friend clicks on a questionable link, the spam propagates as other friends in that user's network do the same. Some social malware impersonates users, initiating chat sessions with friends. Security experts warn that a growing volume of sophisticated hacker attacks take information gleaned from social-networking profiles to trick people with convincing targeted messages.


Third-party apps: Malware can also be embedded in third-party apps that, when installed, give hackers control of users' computers. There are tens of thousands of applications available to Facebook users, and while Facebook may make every reasonable effort to provide protection against malware, some third-party applications may not be safe.

Some have the potential to infect computers with malicious code, which is used to collect data from the users' sites. For example, there are stalker-like offers promising to let users "see who viewed your profile" or "view my top profile stalker." Unfortunately, installing a bad app can also give it access to your personal information, which could be stored by the app creator and possibly sold. Most of these malware apps eventually get shut down by Facebook, which tracks apps that are flagged by users and also monitors apps for patterns that look like spam and malware.

Personal information theft: Social media sites generate revenue with targeted advertising based on personal information. As such, they encourage registered users to provide as much information as possible. While everyone knows they should never share their social security and driver's license numbers, many social networking sites ask for, if not require, similarly sensitive information that, if exploited, can and will be used in a variety of malicious ways.

Due to limited government oversight and the lack of industry standards or incentives to educate users about security, privacy, and identity protection, users are left exposed to identity theft and fraud. Additionally, social media websites and platforms store confidential user information, which, if not properly secured and encrypted, could be vulnerable to any number of exploits.

With the increased global use of social media, there are more opportunities than ever before for criminals to steal identities or perpetrate fraud online. For example, status updates posted on Twitter, Facebook, and many other social media and online dating sites can be used maliciously. If you post that you're out of town on vacation or away on business, you could be exposing yourself or your family to burglary, assault, or robbery. When it comes to stalking or stealing an identity, photo- and video-sharing sites like Flickr and YouTube provide deeper insights into you, your family and friends, your house, favorite hobbies, and interests. Often this information can be used to answer common security questions for password recovery. (Source: The Wall Street Journal Online, "Spam Finds New Target," January 4, 2012) [E]

Social spam can appear in many forms of user-generated content:

‣ Fraudulent user signups
‣ Blog posts
‣ Chat messages
‣ Reviews & listings
‣ Discussion forum threads
‣ Message board posts
‣ Direct messages
‣ Comments
‣ URL & link submissions


What's at Stake?

Impermium estimates suggest that spammers account for up to 40 percent of all social media accounts and up to 8 percent of social media messages sent, approximately twice the volume of six months ago. Spam affects over 4 million users every day on Facebook alone. [B] It's not stopping either; the volume of spam on Facebook is growing faster than its user base.

The volume of social spam and resulting online fraud can completely alter the perception of a brand or individual, making a product or person appear far more popular or relevant than it actually is. Fake accounts and artificial levels of engagement are problematic for all social networks, with consequences including:

Lost Users
Low-quality content and security threats cause legitimate customers to lose confidence and interest in a social network and its related services. Pervasive spam is thought to have been a major contributor to the mass user exodus from MySpace.

Damaged Reputation
It is difficult to quantify the impact that social spam exacts on social networking sites and online directories, but it is a definite concern. A host site's PageRank and spam filtration can suffer significantly from social content of questionable authenticity.

Untrustworthy Analytics
Fraudulent activity makes it hard to know how many users of a social site are real. If the numbers can't be trusted, then the information is worthless and social media sites and online directories lose their relevancy. By reducing the number of fake accounts, a social website can give the public a more realistic indication of the genuine number of users as well as offer accurate assessments of brand popularity for individuals on the network.

Lost Ad Revenue
Estimates suggest that customer attrition costs social networking sites $9.50 per lost user in annual advertising revenue. This has a knock-on effect for potential advertisers on the platform too. If a significant portion of a website's user base is made up of fake accounts, then the potential audience for an advertising campaign is far smaller than it initially appears.

A corollary to lost advertising revenues is the loss of advertisers. In a recent TechCrunch article [D], Limited Run, a startup that offers a software platform through which musicians and labels can sell physical products such as vinyl records, claimed that 80 percent of its Facebook ad clicks came from bots, as opposed to real people. Bots were loading pages and driving up Limited Run's advertising costs. This type of negative press can be disastrous for social sites that rely on advertising revenue streams, as it can create a butterfly effect on other potential advertisers.

Call Center Costs
Account users suspected of fraud may spend an average of 15 minutes on the phone with call center representatives for identity verification. The fully loaded cost of a call center employee is estimated at $30 per hour, meaning each call costs the social media website $7.50. For a social network of 20 million users, the total fraud cost related to call centers alone could reach $9 million a year (roughly 1.2 million such calls at $7.50 each).

Manual Review Costs
Surprisingly, almost one-third of all Facebook employees fight spam in some fashion or another. That means there are hard costs associated with social spam as well. The larger social sites are using more automated algorithms and smaller sites are often relying on manual processes, but either way there's a price tag associated with addressing this problem.

Polluted Search Results
Social spam pollutes the Internet by adding noise. Everyone, save the polluters, pays a price: search engines are less effective; users waste time and attention on junk sites; and honest publishers lose income. As a result, social spam spoils the rich spirit of sharing that is a hallmark of social media websites.


Existing Approaches to Defending Against Social Spam

Without tight, seamless controls, fake accounts become prevalent. Social media sites, dating sites, and online directories need to practice and evolve their spam-handling approaches in order to reduce fraud, preserve brand awareness, and keep consumers safe. This means battling cybercriminals and security threats by investing in the necessary tools, and ensuring that the benefits of that time and cost commitment far exceed the revenues criminals might otherwise generate. Here are a few of the ways social media sites and online directories are combatting social spam and preventing the registration and creation of fake accounts:

CAPTCHA
Many social media sites and online directories rely on a Completely Automated Public Turing test To Tell Computers and Humans Apart (CAPTCHA) implementation to prevent bogus accounts from being created. A CAPTCHA is a program that can generate and grade tests that humans can pass but many current computer programs (i.e., bots) cannot.
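To make the generate-and-grade loop concrete, here is a minimal sketch in Python. The in-memory challenge store and function names are hypothetical, and the step a real CAPTCHA relies on, rendering the text as a distorted image, is deliberately omitted.

```python
import secrets
import string

# Hypothetical in-memory store of outstanding challenges, keyed by session ID.
# A production system would use a session store or cache with expiry.
_challenges = {}

def issue_captcha(session_id, length=6):
    """Generate a random challenge; a real CAPTCHA would render it as a distorted image."""
    text = "".join(secrets.choice(string.ascii_uppercase + string.digits) for _ in range(length))
    _challenges[session_id] = text
    return text

def grade_captcha(session_id, answer):
    """Grade the user's answer and invalidate the challenge whether or not it was correct."""
    expected = _challenges.pop(session_id, None)
    return expected is not None and secrets.compare_digest(expected.lower(), answer.strip().lower())

challenge = issue_captcha("session-123")
print(grade_captcha("session-123", challenge))  # True when the (distorted) text is read correctly
```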

CAPTCHAs can provide a simple defense against most bots, but they can still be cracked. Social spammers can leverage OCR (optical character recognition) technology to decipher CAPTCHAs, even when they are distorted. Some sites have fought back by incorporating images into CAPTCHAs, but this is only effective against bot-driven CAPTCHA crackers. While automated attackers may be responsible for a majority of the CAPTCHA-breaking attempts that occur every day, they no longer account for all of them.

In India and other countries across the world, CAPTCHA-breaking companies employ people whose sole job is to crack CAPTCHA codes. These CAPTCHA crackers can earn more per day than they can at legitimate data-processing centers; most earn between 1/10 and 1/8 of a cent per CAPTCHA solved, and the companies in turn charge spammers between $1.30 and $2.00 for every 1,000 solved CAPTCHAs.

In order to stay ahead of the bots, sites have made CAPTCHAs even more distorted and difficult. This has led to increased end-user frustration, as legitimate users, including but not limited to the elderly, non-English-speaking users, and those with visual disabilities, often fail to decipher the letters several times before properly translating the CAPTCHA.

Ban the Spammers
Banning members from the network is another way to get rid of spam, but there's no easy answer as to how it should be executed. Creating a functional, fully automated algorithm to catch and filter spammers is difficult at best. Moreover, experienced spammers will simply create new accounts using fresh IP addresses and registration info.


Ghosting
Ghosting is something that several social media sites do to reduce spamming. Once a social network or online directory decides a user is spamming, it will allow that user to keep up his or her spamming activities, but will "ghost" those activities, making them invisible to all other users on the site.

With ghosting, spammers may be completely unaware that they have been banned. They can go on submitting and voting on content, not knowing that their votes and submissions are invisible to everyone but themselves. The intent of ghosting is good: to crack down on flagrant spammers. However, because it employs deception, any mistakes or bugs in the system can be extremely difficult to diagnose and infuriating for legitimate users.
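As a rough illustration of the mechanism, the sketch below (Python, with hypothetical data structures) keeps a ghosted user's posts visible only to that user whenever a feed is rendered.

```python
# Hypothetical set of user IDs that have been silently flagged as spammers.
ghosted_users = {"spammer42"}

def visible_posts(all_posts, viewer_id):
    """Return the posts a viewer should see: ghosted authors stay visible only to themselves."""
    return [
        post for post in all_posts
        if post["author"] not in ghosted_users or post["author"] == viewer_id
    ]

posts = [
    {"author": "alice", "text": "Great article!"},
    {"author": "spammer42", "text": "Cheap pills, click here"},
]
print(visible_posts(posts, "alice"))      # alice sees only her own post
print(visible_posts(posts, "spammer42"))  # the spammer still sees both, so nothing looks wrong
```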

Closed Community
In an effort to avoid spam, some emerging communities prefer to stick to an application-based registration process. It works like this: the user submits an application to join, and an editor reviews the application and decides whether or not to approve the user. Manual review is surely one way to guarantee genuine membership. However, this inherently anti-social approach is costly and may scare many people away from even trying to join.

Human Moderation
Many companies begin with manual moderation processes that are performed either by employees or by outside firms. While people-based approaches do have low start-up costs and may initially offer greater flexibility for defining policies, the slow pace, high cost, and inconsistent quality limit the value of such solutions.

Neighborhood Policing
Some social media sites rely on their users to identify and report spam. Many sites have a "report spam/abuse" email address or link. Pinterest, for example, encourages users to form a virtual neighborhood watch and report spam using its "Report Pin" button to tag spam. Spammers, however, frequently change their address from one disposable account to another, rendering this tactic largely ineffective. And forcing this burden onto your most trusted users can erode their long-term engagement with your site.

Site Integrity Systems
Because of spammers' negative effects on users and brands, social websites are going beyond prosecution and staffing up to address the issue. Facebook claims it has 300 dedicated employees overseeing security, and Facebook and Twitter have hired programmers and security specialists to deflect the flotsam. "Tens of millions of dollars are spent on our site-integrity systems, including hundreds of full-time employees," says Facebook spokesman Frederic Wolens. [F]

Blacklisting
Facebook has been expanding its URL blacklist system, which uses data from partners including Intel's McAfee, Google, and Websense to detect and block known threats. The Facebook Immune System inspects every action on the site, using the reputation of the cookie or IP address involved to halt any suspicious action.

Facebook also employs a tool called "link shim" to flag blacklisted URLs: every time a link on the site is clicked, the link shim checks that URL against Facebook's own internal list of malicious links. If Facebook detects that a URL is malicious, it displays an interstitial page before the browser actually requests the suspicious page. Unfortunately, the link shim and the comprehensive blacklisting service behind it are proprietary to Facebook and not available to other social media sites and online directories. Another weakness of the blacklist approach is that it is reactive and only blocks known URLs. When spammers regularly register hundreds of different URLs for a single campaign, existing blacklists provide little defense.
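The general pattern, independent of Facebook's proprietary implementation, is straightforward to sketch: check every outbound click against an aggregated blacklist and route matches to an interstitial warning page. The domains and the warning path below are placeholders, not real data.

```python
from urllib.parse import urlparse

# Placeholder blacklist; in practice this would be aggregated from several
# providers and refreshed continuously.
BLACKLISTED_DOMAINS = {"malware-example.test", "phish-example.test"}

def route_outbound_click(url):
    """Return the original URL if it looks clean, or an interstitial warning path if blacklisted."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in BLACKLISTED_DOMAINS:
        return "/warning?blocked=" + url  # show the warning page instead of following the link
    return url

print(route_outbound_click("http://malware-example.test/free-gift"))  # routed to /warning?...
print(route_outbound_click("https://example.org/article"))            # passed through unchanged
```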


Automated Tools
Social networks are also employing automated services to crack down on the problem. There are now tools that search user and subscriber news feeds for suspected social malware and scams. When such a tool finds a suspect post, it leaves a comment indicating that the item is likely a scam or malware.

Other solutions, like Impermium's Intelligent Content Protection (ICP), remove offensive and unsolicited content in real time. This allows an organization to flag comments and posts as soon as they are submitted on its website. Content is analyzed across hundreds of dimensions to identify violence, racism, hate speech, profanity, and other forms of offensive content and communication. From there, entries can be blocked, allowed, or handled in a custom workflow based on the company's site policy.
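A minimal sketch of that block/allow/custom-workflow pattern is shown below. The scoring function is a crude keyword stand-in for a real multi-dimensional classifier such as ICP, and the thresholds are purely illustrative.

```python
def score_content(text):
    """Crude stand-in for a real classifier: return a spam/abuse likelihood between 0 and 1."""
    suspicious_terms = ("free pills", "click here", "work from home")
    hits = sum(term in text.lower() for term in suspicious_terms)
    score = hits / len(suspicious_terms)
    if "http://" in text.lower() or "https://" in text.lower():
        score += 0.3  # embedded links raise suspicion slightly
    return min(1.0, score)

def handle_submission(text, block_at=0.8, review_at=0.5):
    """Apply a simple site policy: block, queue for a custom workflow, or allow."""
    score = score_content(text)
    if score >= block_at:
        return "blocked"
    if score >= review_at:
        return "queued_for_review"  # custom workflow, e.g. human moderation
    return "allowed"

print(handle_submission("Nice write-up, thanks for sharing."))         # allowed
print(handle_submission("FREE PILLS click here http://spam.example"))  # blocked
```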

Impermium's ICP provides protection for many diverse types of user-generated content, including comments, reviews, captions, chat messages, and message board posts. The Impermium ICP system relies on an artificial intelligence-based language and content analysis engine, its "user reputation" database for detecting repeat offenders, and its global threat network of more than 300,000 websites, portals, social networks, and related properties around the Internet.

Community Rating Services
Social media sites can also partner with crowd-sourced rating community websites like Web of Trust (WOT) to help educate their users. WOT widens the scope of web safety from purely technical security to helping people find sites that they can trust. Based on ratings from millions of web users and trusted technical sources, WOT calculates a reputation for websites, displayed as traffic light-style icons via search results, social media platforms, webmail, and many popular sites. Green indicates a trustworthy site; yellow tells users that they should be cautious; red indicates a potentially dangerous site.

Despite the rapidly growing problem of social spam, there remain few commercial products that provide adequate protection. Left with few alternatives, many sites try to develop their own, only to find that the cost of monitoring and cleanup quickly becomes a major expense.

Example: an interstitial warning page stating that "the link you are trying to visit has been classified as potentially abusive by Facebook partners," reported for spam, malware, phishing, or other abuse, with the warning provided in collaboration with Web of Trust.


Summary of Tactics

Methodology: CAPTCHA
Description: A CAPTCHA is a program used to verify that a human, rather than a computer, is entering data. CAPTCHAs ask users to enter text from distorted images.
Phase: Account registration, posting comments
Likes: Provides a basic layer to prevent basic bot attacks
Dislikes: Users struggle to interpret characters; easily cracked by CAPTCHA (human-based) farms, bots, and auto-solving programs

Methodology: Closed Community
Description: The site limits users by having them complete an online application, and the site approves or rejects membership based on a strict set of criteria.
Phase: Account registration
Likes: May work for small, more niche-oriented social media sites and online directories; strict control over site membership
Dislikes: Requires manual review; may turn off legitimate users; high cost; runs counter to the spirit of openness cultivated by most social media sites

Methodology: Phone Verification
Description: Phone verification sends a one-time verification code to a user via an automated voice call or SMS (text) message. The user then enters this one-time verification code on the website to verify that the number provided is valid and belongs to that user.
Phase: Risky account registrations and account changes
Likes: Leverages the phone as a second factor for authentication; SMS or voice messages to verify account changes; global coverage; no start-up costs; ability to determine high-risk phone types (predisposed to fraud)
Dislikes: Inability to flag spam or fraudulent posts/comments; doesn't prevent access for real but possibly harmful users

Methodology: Ghosting
Description: If a user is identified as a spammer, sites will allow the user to keep up his or her spamming activities, but will "ghost" all of those activities such that they are invisible to everyone in the community except the spammer.
Phase: Posting and commenting
Likes: Silent banning can make the posts of known spammers invisible
Dislikes: Challenge in determining what constitutes spam; inadvertent banning of legitimate users (false positives); complicated business rules are difficult to troubleshoot

Methodology: Human Moderation
Description: Social sites hire moderators who manually review posts and comments. The moderation is performed either by employees or by outside firms.
Phase: Posting and commenting
Likes: Low start-up costs; flexibility in establishing policies
Dislikes: High number of false positives; difficult to manage with higher volumes of traffic; inconsistent quality across moderators and geographies


Methodology: Neighborhood Policing
Description: Sites rely on their users to identify and report spam by encouraging users to form a virtual neighborhood watch.
Phase: Posting and commenting
Likes: Keeps with the spirit of a social network being an online community
Dislikes: Ineffectual when fraudsters can create new accounts with ease; potential for false positives; difficult to capture malware attacks

Methodology: Site Integrity Systems
Description: These are homegrown systems developed by larger social media sites to identify and report spam. Facebook's automated system, for example, removes Likes gained by malware, compromised accounts, deceived users, or purchased bulk Likes.
Phase: Posting and commenting
Likes: More comprehensive solution
Dislikes: Often proprietary solutions developed by the major players; expensive; labor-intensive

Methodology: Blacklisting
Description: Blacklisting attempts to detect and block known threats using an aggregate list of URLs involved in previous spam or malicious attacks. A site inspects every action on the site, using the reputation of the cookie or IP address involved to halt any suspicious action.
Phase: Posting and commenting
Likes: Leverages multiple blacklists across industry leaders; detects malicious links; reputation-based approach
Dislikes: Proprietary blacklists; high costs to create from scratch; requires partnering with leading industry providers in order to develop a comprehensive and usable blacklist; reactive measure, because blacklists only include existing, known malicious URLs (i.e., they don't include new URLs created by fraudsters and spammers)

Methodology: Automated Tools
Description: Automated tools remove offensive and unsolicited content in real time. Content is analyzed and categorized and either blocked, allowed, or handled in a custom workflow based on the company's custom site policy.
Phase: Posting and commenting
Likes: Higher accuracy; comprehensive categorization algorithm; fewer false positives; rapid time-to-value; non-proprietary; minimal start-up costs
Dislikes: Costly to develop and maintain in-house; require sophisticated analysis of attacks in real time; may require calibration to site specifics

Methodology: Community Rating Service
Description: Community rating services leverage crowd-sourced ratings from millions of web users and trusted technical sources in order to calculate website reputation.
Phase: Posting and commenting
Likes: Verifies the reputation of embedded links in posts; relies on a worldwide community to rate websites
Dislikes: No ability to prevent fraudulent accounts from being created; no ability to prevent account compromise and hijacking; inability to accurately capture and score fresh websites/URLs given the speed with which new sites are created by spammers and fraudsters


Shut the Front & Back Doors and Clean House

Social networks, online directories, and online dating sites need to adopt a layered approach to spam prevention, using multiple tools to ensure that they and their legitimate users are protected. TeleSign's recommendation is to take a holistic, three-pronged approach:

1. Shut the front door by phone-verifying risky new accounts.
2. Clean house with automated spam cleansing.
3. Close the back door by validating key account changes (e.g., password resets).


1. Shut the Front Door

A key ingredient of this plan is validating your user base, something TeleSign calls "shutting the front door." Shutting the front door means preventing spammers and fraudsters from getting into the network by flagging them during the registration process.

Social media sites often employ user-unfriendly solutions like CAPTCHA or rely on email verification in isolation, both of which can easily be sidestepped by bots and other technologies or techniques that create bogus accounts.

Instead, online sites can now take advantage of new automated solutions that make real-time risk assessments about whether to accept, flag, or reject an online registration. Solutions like Impermium's Intelligent Content Protection are particularly well suited for account validation since they analyze a number of data points to determine the likelihood that a given registration is fraudulent.

These data points can be aggregated into a spam-likelihood score that recommends a specific action: allow, block, or flag an online registration. Most registrations are allowed, but any suspicious signups can be challenged with phone verification. This helps ensure that the site strikes the right balance between security and user experience, since many social media sites and online directories do not want to introduce unnecessary friction to the signup process (i.e., add extra hurdles that legitimate customers have to jump through in order to complete their online registrations).
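A hedged sketch of how such a score might map to an accept/flag/reject decision follows. The signal names, weights, and thresholds are invented for illustration; they are not Impermium's actual model.

```python
# Hypothetical weights for a few illustrative risk signals; a real system would
# use many more features and a trained model rather than fixed weights.
SIGNAL_WEIGHTS = {
    "ip_recent_signups": 0.4,     # other accounts created from this IP recently
    "email_is_disposable": 0.3,
    "profile_left_blank": 0.1,
    "known_bad_reputation": 0.6,  # IP or email seen in prior abuse reports
}

def registration_risk(signals):
    """Aggregate the signals that are present into a score between 0 and 1."""
    score = sum(weight for name, weight in SIGNAL_WEIGHTS.items() if signals.get(name))
    return min(1.0, score)

def registration_decision(signals, reject_at=0.8, challenge_at=0.3):
    """Most signups are accepted; suspicious ones are challenged; the worst are rejected."""
    score = registration_risk(signals)
    if score >= reject_at:
        return "reject"
    if score >= challenge_at:
        return "challenge_with_phone_verification"
    return "accept"

print(registration_decision({"ip_recent_signups": True, "email_is_disposable": True}))
# -> challenge_with_phone_verification (score 0.7)
```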

For example, a social network or online directory can create certain velocity triggers on account creation (a minimal sketch follows the list), such as:

‣ The number of accounts created by the same IP address during a certain period of time.
‣ Low-volume passwords that have been used five or more times over the past 48 hours for account registration.
‣ The location of the IP address on any login (especially the first three logins) being more than 150 miles from the location of the IP address used to create the account.
‣ Irregular activity, such as a flurry of friend requests or comments shortly after account creation (i.e., tracking the deviation between the network's average behavior and outlier behavior that is out of the norm).
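The sketch below shows how the first two triggers in the list might be evaluated. The field names, window, and limits are illustrative examples, not a vendor's actual rules; any flag raised here can feed the risk score sketched above.

```python
from datetime import datetime, timedelta

def velocity_flags(prior_signups, new_signup, window_hours=48, ip_limit=5, password_limit=5):
    """Evaluate two illustrative velocity triggers against recent registrations.

    Each prior signup is a dict with 'ip', 'password_hash', and 'timestamp' keys.
    """
    cutoff = datetime.utcnow() - timedelta(hours=window_hours)
    recent = [s for s in prior_signups if s["timestamp"] >= cutoff]

    flags = []
    if sum(s["ip"] == new_signup["ip"] for s in recent) >= ip_limit:
        flags.append("too_many_accounts_from_same_ip")
    if sum(s["password_hash"] == new_signup["password_hash"] for s in recent) >= password_limit:
        flags.append("password_reused_across_signups")
    return flags
```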

Phone Verification
If a specific registration is deemed to be risky, social media sites and online directories can utilize phone-based verification to authenticate legitimate users and repel fraudulent ones. Here's how it works (a sketch follows the steps):

1. The user is prompted to provide a phone number at account registration.
2. The site sends a one-time verification code to that phone.
3. The user enters that verification code on the website to activate the account.
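A minimal sketch of this flow is shown below. The SMS gateway call is a stub, and a real implementation would add code expiry, rate limiting, and a voice-call fallback.

```python
import secrets

_pending_codes = {}  # outstanding codes keyed by phone number (in-memory for the sketch)

def send_sms(phone_number, message):
    """Stub for an SMS or voice gateway; replace with a real provider integration."""
    print(f"SMS to {phone_number}: {message}")

def send_verification_code(phone_number):
    """Generate a six-digit one-time code and deliver it to the user's phone."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending_codes[phone_number] = code
    send_sms(phone_number, f"Your verification code is {code}")

def verify_code(phone_number, submitted):
    """Return True only if the submitted code matches the one sent to that phone."""
    expected = _pending_codes.pop(phone_number, None)
    return expected is not None and secrets.compare_digest(expected, submitted.strip())
```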

So what does phone verification accomplish? Two things: 1) it verifies that the phone number provided is valid, and 2) it verifies that the user is in possession of that phone. This adds friction to the registration process, but given the ubiquity of phones, they have become an extension of our own identities. Phone verification is a practical and simple way to verify your users' online identities.


Phone verification can also be used later, when a user logs in from a different IP address or a different device. This can trigger an authentication event to the verified phone number on record and prevents fraudsters who have successfully phished credentials from taking over a user's account.

Companies like Google have employed phone-based verification to add an extra layer of security. In addition to username and password, users with two-step verification are prompted to enter a code that Google sends via text or voice message when they attempt a login from an IP address different from the one on record.

This two-step verification drastically reduces the chances of having the personal information in a user's online account stolen by someone else. Why? Because hackers would have to execute on two fronts: 1) they would have to steal a user's username and password, and 2) they would have to steal the user's phone.

Importance of Phone Type
Adding phone verification to the process is a crucial first step, but some sites are going further still. An increasing number of social media sites and directories now require new registrants to use low-risk phone types, such as mobile phones or landlines, for phone verification. Many social sites also block higher-risk phone types, such as prepaid mobile phones and VoIP phones, which are correlated with higher levels of fraud and spam. Companies like TeleSign can determine the phone type and other important phone characteristics, such as whether the phone is active, its roaming status, and the name registered to the phone. These details provide additional business intelligence and powerful fraud signals.

Phones that can be purchased anonymously, or that do not require the end user to hold a contract with a mobile phone company, present a higher risk of fraud and spam. VoIP phone numbers are Internet-based telephone numbers that can easily be obtained by users in other countries. They are untraceable and disposable; some can even be obtained for free. This means that a fraudster in a foreign country could easily obtain a U.S.-based telephone number (using a non-fixed VoIP service) to receive the verification call.

TeleSign's PhoneID solutions provide merchants access to real-time business intelligence to predict and prevent online fraud. PhoneID identifies the user's phone type, provides merchants with accurate data to assess high-risk transactions, and can simply determine whether a phone can receive an SMS. PhoneID enables social media sites and online directories to quickly identify high-risk registrations and, at the same time, reduces undeliverable messages by identifying SMS-enabled devices before sending verification SMS messages.

Phone Type: Fixed Line (landline)
Risk Level: Low Risk
Rationale: Traced back to a specific address. Cannot be obtained by a user in another country.

Phone Type: Mobile
Risk Level: Low Risk
Rationale: Users must sign contracts with carriers. Numbers are traceable.

Phone Type: Prepaid Mobile
Risk Level: Medium Risk
Rationale: Users are not contracted. Low-cost phones.

Phone Type: Non-Fixed VoIP
Risk Level: High Risk
Rationale: Easily obtained in other countries. Untraceable. Disposable.
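The risk tiers above translate directly into a registration policy. The sketch below is illustrative only; the action a site takes for each tier is its own choice, not a TeleSign rule.

```python
# Risk tiers mirroring the table above.
PHONE_TYPE_RISK = {
    "fixed_line": "low",
    "mobile": "low",
    "prepaid_mobile": "medium",
    "non_fixed_voip": "high",
}

def phone_type_action(phone_type):
    """Map a phone type to a registration action; unknown types are treated as high risk."""
    risk = PHONE_TYPE_RISK.get(phone_type, "high")
    if risk == "high":
        return "block_or_request_different_number"
    if risk == "medium":
        return "allow_with_extra_monitoring"
    return "allow"

print(phone_type_action("non_fixed_voip"))  # block_or_request_different_number
print(phone_type_action("mobile"))          # allow
```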


2. Clean House

While many believe human moderation to be the gold standard of social spam defense, the combined pressures of speed, monotony, and cost control often lead to significant drops in performance.

New online cleansing tools actively monitor all user-generated content, from blog posts, comments, and message board posts to chat messages, reviews, and listings. Unlike human moderation teams, automated tools work proactively and in real time, removing offensive content before users even see it.

A recent performance test compared Impermium's automated solution against a top human moderation service firm that specializes in removing bad social content for the websites of major consumer brands. Both services were given 10,000 social comments and tasked with identifying social spam. Here are the results:

Metric             Impermium     Human Moderation
Time to Process    19 seconds    2-3 days
Accuracy           99.5%         95%
False Positives    4             79

Impermium can also flag potential spammers and work with two-factor authentication providers, like TeleSign, to enforce phone-based verification before posts are validated. Another benefit of using a leading automated spam-cleansing solution is that these vendors can spot spam trends across different social networks and incorporate that intelligence into their scoring algorithms. Collectively, these tools and intelligent scoring can dramatically reduce the amount of social spam and improve the user experience.


3. Close the Back Door

Once a user's phone number is on record, a social media site or online directory can use that same number to verify any key account changes or to reset the user's password. These are often backdoors that fraudsters can crack open to hijack an account. Verifying users for these high-risk changes makes it much more difficult for a hacker to break into someone's account. In fact, some websites regularly re-verify their end users (e.g., every 30 days).
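A self-contained sketch of gating a password reset on phone verification follows. It mirrors the one-time-code helpers shown earlier and assumes the account record carries the phone number verified at registration; the gateway call is again a stub.

```python
import secrets

_pending = {}  # outstanding reset codes keyed by phone number

def _challenge_phone(phone_number):
    """Send a one-time code to the verified phone (the delivery call is a stub)."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[phone_number] = code
    print(f"(send code {code} to {phone_number} by SMS or voice)")

def request_password_reset(account, submitted_code=None):
    """Require proof of phone possession before allowing a high-risk account change."""
    phone = account["verified_phone"]  # number captured and verified at registration
    if submitted_code is None:
        _challenge_phone(phone)
        return "code_sent"
    expected = _pending.pop(phone, None)
    if expected is not None and secrets.compare_digest(expected, submitted_code.strip()):
        return "reset_allowed"
    return "reset_denied"

user = {"verified_phone": "+15555550123"}
print(request_password_reset(user))  # code_sent; the reset proceeds only after the code is confirmed
```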

With valid user phone numbers on record, social media sites and online directories can take a more frictionless approach to verifying users by running a series of analytics in the background based on their online activities. A user's phone number and the activities associated with that phone number provide important insights. Phone numbers tied to fraudulent activity can be blocked early on instead of letting the fraudster stay on the website. By analyzing user data more closely, online properties have the opportunity to stop and block fraud faster and more efficiently.

Social media sites and online directories no longer have to maintain their own dedicated engineering teams to keep their sites safe from spam and abuse. Combining phone-based verification with automated spam cleansing frees webmasters to focus on the things that matter most to their users and customers.


Conclusion

It's clear that yesterday's email spammers are today's social spammers. Social media exploitation techniques are evolving fast; spammers on the social web exploit nearly every large consumer brand and significant news event. This should come as no surprise. The payoffs are better, detection is more difficult, and the social networks are only just starting to develop strategies to tackle the issue.

Social networks, online directories, and online dating sites have struggled to keep up with spammers, who have adapted defensive techniques to avoid detection while creating new vectors to exploit. Bulk accounts for popular social networks can now be purchased on the black market for pennies. Automated tools are freely available to create posts, add bogus comments, inject malware within links, and generate friend requests.

Social spam is starting to take a significant toll. Sites that fail to address it face business risks that include lost customers, reduced advertising revenues, increased customer support costs, distorted analytics, and the inability to accurately evaluate their user bases and determine the real costs of new customer acquisition. Most importantly, social spam can eviscerate the brand and reputation of the site that is trying to build an audience.

In response, these sites have resorted to a variety of measures, including human moderation, ghosting, neighborhood policing, and blacklisting, to thwart fraud. But these methods have inherent shortcomings: they neither adequately prevent bogus account creation nor accurately flag potentially fraudulent posts and comments.

It's time to take a more holistic approach with the new tools that are now available to give these social networks and online directories the upper hand in combatting fraud. It starts with shutting the front door and preventing bogus accounts from being created. This means adding some friction to the registration process by asking the right questions, leveraging data, and phone-verifying high-risk registrants. It is a delicate balance: maintaining privacy without exposing these sites to the rampant proliferation of fake, bulk accounts. It also means going beyond human moderation to a more automated approach that scans all posts and comments in real time, minimizing the number of false positives while preserving a sense of openness within the community and ease of use.

Spammers will inevitably continue to evolve their tactics to circumvent new approaches and technologies, and the social networks must evolve with them. Remember, it was only after the advent of antivirus and anti-spam software, in conjunction with widespread user education, that email spam started its decline.

To reverse the trend of social spam, social networks, online directories, and online dating sites (and their users) need to raise their collective games by adopting the right technologies, injecting the right processes, and raising consumers' awareness to a healthy dose of skepticism before they click on any links, even those that appear to come from their best friends.


About TeleSign & Impermium

Every second of every day, TeleSign protects the world's largest Internet and cloud properties against fraud. TeleSign Intelligent Authentication provides an easy-to-implement and powerful method for identifying and substantially reducing online fraud and spam using the most widely deployed technology: a user's phone. The company protects 2.5 billion downstream accounts in more than 200 countries and territories, offering localization services in 87 languages. In 2012, TeleSign ranked #23 on the Deloitte Technology Fast 500™ and was named a Visionary in Gartner's User Authentication Magic Quadrant.

TeleSign Corporation
4136 Del Rey Ave
Marina del Rey, CA 90292
US +1 310 740 9700
UK +44 (0) 330 808 0081
telesign.com | @telesign
[email protected]

Impermium provides user-generated content management for websites and social networks, defending them against social spam, fake registrations, racist and inappropriate language, and other forms of abuse. Our system combines advanced technology and broad, Internet-scale threat information to provide cost-effective, real-time protection for more than 300,000 sites across the globe. Founded in 2010, Impermium is backed by Accel Partners, Charles River Ventures, Greylock Partners, Highland Capital Partners, and the Social+Capital Partnership.

Impermium Corporation
900 Veterans Boulevard
Redwood City, CA 94063
888-496-8008
impermium.com | @impermium
[email protected]


References

A. Protalinski, E. CNET. "Facebook: 8.7 percent are fake users." Retrieved August 1, 2012, from http://news.cnet.com/8301-1023_3-57484991-93/facebook-8.7-percent-are-fake-users/.

B. Tarantola, A. Gizmodo. "Twitter Declares War on Spambots, Takes Tool Developers to Court." Retrieved April 6, 2012, from http://gizmodo.com/tweetbuddy/.

C. Finn, G. MarketingLand. "The Rise Of Social Spam: 1.5% Of Tweets & < 4% Of Facebook Posts Are Spam." Retrieved January 4, 2012, from http://marketingland.com/the-rise-of-social-spam-1-5-of-tweets-4-of-facebook-posts-are-spam-2571.

D. Taylor, C. TechCrunch. "Startup Claims 80% Of Its Facebook Ad Clicks Are Coming From Bots." Retrieved July 30, 2012, from http://techcrunch.com/2012/07/30/startup-claims-80-of-its-facebook-ad-clicks-are-coming-from-bots/.

E. Fowler, G., Raice, S., and Efrati, A. The Wall Street Journal Online. "Spam Finds New Target." Retrieved January 4, 2012, from http://online.wsj.com/article/SB10001424052970203686204577112942734977800.html.

F. Kharif, O. Bloomberg Businessweek. "'Likejacking': Spammers Hit Social Media." Retrieved May 24, 2012, from http://www.businessweek.com/articles/2012-05-24/likejacking-spammers-hit-social-media.