8/6/2019 Elsevier.network.security.april
1/20
Vulnerability assessment tools: the end of an era?
Vulnerability assessment tools traditionally amass a monumental number of flaws. A continuously growing number of vulnerabilities means that the tools need to be constantly updated, so the number of vulnerabilities appears overwhelming. Also, not all of these flaws are of significance to security. Host-based patch management systems bring coherence to the chaos. The clear advantages of using these tools call into question the value of traditional vulnerability assessment tools. Andrew Stewart describes the advantages of using patch management technologies to gather vulnerability data. He proposes a lightweight method for network vulnerability assessment which does not rely on signatures or suffer from information overload. Turn to page 7...
Featured this month

Contents
NEWS
Tips to defeat DDoS 2
Qualys ticks compliance box 2
Russian hackers are world class 3
FEATURES
De-perimeterisation
Inside out security: de-perimeterisation 4
Vulnerabilities
A contemporary approach to network vulnerability assessment 7
Cryptography
Crypto race for mathematical infinity 10
Biometrics
Biometrics: the eye of the storm 11
Proactive security
Proactive security: vendors wire the cage but has the budgie flown... 14
PKI
Managing aspects of secure messaging between organizations 16
RFID
RFID: Misunderstood or untrustworthy 17
Snort
Network Security Managers' preferences for the Snort IDS and GUI add-ons 19
REGULAR
News in brief 3

April 2005 ISSN 1353-4858
ISSN 1353-4858/05 © 2005 Elsevier Ltd. All rights reserved
This journal and the individual contributions contained in it are protected under copyright by Elsevier Ltd, and the following terms and conditions apply to their use:
Photocopying
Single photocopies of single articles may be made for personal use as allowed by national copyright laws. Permission of the publisher and payment of a fee is required for all other photocopying, including multiple or systematic copying, copying for advertising or promotional purposes, resale, and all forms of document delivery. Special rates are available for educational institutions that wish to make photocopies for non-profit educational classroom use.
Tips to defeat DDoS
From the coal face of Bluesquare
Online gambling site Bluesquare has survived brutal distributed denial-of-service attacks, and CTO Peter Pederson presented his survival checklist at a recent London event.
Pederson held his ground by refusing to pay the DDoS extortionists who took Bluesquare's website down many times last year. He worked with the National Hi-Tech Crime Unit to combat the attacks and praised the force for its support.
Speaking at the E-crime congress, Pederson played a recording of the chilling voice of an extortionist who phoned the company switchboard demanding money.
After experiencing attack traffic at 300 Megabits per second, Pederson said he finds it amusing when vendors phone him with sales pitches boasting that they can stop weaker attacks. He has seen it all before. Story continued on page 2...
RFID misunderstood or untrustworthy?
The biggest concern with RFID is the ability to track the location of a person or asset. Some specialized equipment can already pick up a signal from an RFID tag over a considerable distance.
But an RFID tag number is incomprehensible to a potential attacker without access to a backend database. The problem is that an attacker may get access to such a database. Bruce Potter examines whether RFID really is a sinister security nightmare. Turn to page 17...
NEWS
Editorial office:
Elsevier Advanced Technology
PO Box 150, Kidlington, Oxford
OX5 1AS, United Kingdom
Tel: +44 (0)1865 843645
Fax: +44 (0)1865 853971
E-mail: [email protected]
Website: www.compseconline.com

Editor: Sarah Hilley
Supporting Editor: Ian Grant
Senior Editor: Sarah Gordon

International Editorial Advisory Board: Dario Forte; Edward Amoroso, AT&T Bell Laboratories; Fred Cohen, Fred Cohen & Associates; Jon David, The Fortress; Bill Hancock, Exodus Communications; Ken Lindup, Consultant at Cylink; Dennis Longley, Queensland University of Technology; Tim Myers, Novell; Tom Mulhall; Padget Petterson, Martin Marietta; Eugene Schultz, California University, Berkeley Lab; Eugene Spafford, Purdue University; Winn Schwartau, Inter.Pact

Production/Design Controller: Esther Ibbotson
Permissions may be sought directly from Elsevier Global Rights Department, PO Box 800, Oxford OX5 1DX, UK; phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: permissions@elsevier.com. You may also contact Global Rights directly through Elsevier's home page (http://www.elsevier.com), selecting first 'Support & contact', then 'Copyright & permission'. In the USA, users may clear permissions and make payments through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA; phone: (+1) (978) 7508400, fax: (+1) (978) 7504744, and in the UK through the Copyright Licensing Agency Rapid Clearance Service (CLARCS), 90 Tottenham Court Road, London W1P 0LP, UK; phone: (+44) (0) 20 7631 5555; fax: (+44) (0) 20 7631 5500. Other countries may have a local reprographic rights agency for payments.

Derivative Works
Subscribers may reproduce tables of contents or prepare lists of articles including abstracts for internal circulation within their institutions. Permission of the Publisher is required for resale or distribution outside the institution. Permission of the Publisher is required for all other derivative works, including compilations and translations.

Electronic Storage or Usage
Permission of the Publisher is required to store or use electronically any material contained in this journal, including any article or part of an article. Except as outlined above, no part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission of the Publisher. Address permissions requests to: Elsevier Science Global Rights Department, at the mail, fax and e-mail addresses noted above.

Notice
No responsibility is assumed by the Publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made. Although all advertising material is expected to conform to ethical (medical) standards, inclusion in this publication does not constitute a guarantee or endorsement of the quality or value of such product or of the claims made of it by its manufacturer.
Printed by Mayfield Press (Oxford) Limited
Qualys ticks compliance box
Brian McKenna

Vulnerability management vendor Qualys has added new policy compliance features to its QualysGuard product. This allows security managers to audit and enforce internal and external policies on a 'software as a service' model, the company says.
In a related development, the company
is trumpeting MasterCard endorsement
for the new features set.
Andreas Wuchner-Bruehl, head of global IT security at Novartis, commented in a statement: "Regulations such as the Sarbanes-Oxley Act and Basel II [mean that] much of the burden now falls on IT professionals to assure the privacy and accuracy of company data. In this environment, security managers must tie their vulnerability management and security auditing practices to broader corporate risk and compliance initiatives."
Philippe Courtot, chief executive officer, Qualys, said: "Security is moving more and more to policy compliance. For example: are your digital certificates up to date? We offer quick deployability since we are not selling enterprise software, but providing it as a service. Customers don't have software to deploy, and Qualys scans on a continuous basis."
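Courtot's example of checking that digital certificates are up to date is easy to automate. The sketch below is not Qualys code; it is a minimal illustration using Python's standard ssl library, and the 30-day threshold mentioned in the usage note is an invented example.

```python
import ssl
import socket
from datetime import datetime, timezone

CERT_DATE_FMT = "%b %d %H:%M:%S %Y %Z"  # format of getpeercert()["notAfter"]

def days_remaining(not_after, now=None):
    """Days until a certificate's expiry string; negative means lapsed."""
    expires = datetime.strptime(not_after, CERT_DATE_FMT).replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).total_seconds() / 86400

def fetch_not_after(host, port=443):
    """Read the 'notAfter' expiry field from a live server's certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]
```

A compliance policy might then simply flag any host where days_remaining(fetch_not_after(host)) drops below, say, 30.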
"In 2004 Sarbox was all about keeping C-level executives out of jail, but we are moving beyond that now. The opportunity is to streamline the best practices generated out of Sarbox consulting as it relates to the security in your network."
The latest version of QualysGuard has
been endorsed by MasterCard. The vul-
nerability management vendor has com-
pleted the MasterCard Site Data
Protection (SDP) compliance testing
process.
From 30 June this year, MasterCard will require online merchants processing over $125,000 in monthly MasterCard gross volume to perform an annual self-assessment and quarterly network scan.
"The payment card industry's security
requirements (PCI, SDP, Visa CISP)
apply to all merchants with an Internet
facing IP, not just those doing E-com-
merce, so the magnitude of retailers this
program affects is significant," said
Avivah Litan, vice president and research
director at Gartner.
Qualys says it achieved compliance sta-
tus by proving their ability to detect,
identify and report vulnerabilities com-
mon to flawed web site architectures and
configurations. These vulnerabilities, if
not patched in actual merchant websites,
could potentially lead to an unautho-
rized intrusion.
"The payment card industry's security standards are converging, which will
simplify the compliance process, but
achieving compliance with these stan-
dards can still be very costly for both
merchants and acquiring banks. The
more the process can be streamlined and
automated, the easier it will be for every-
one," said Litan.
Network Security April 2005, page 2
Tips to defeat DDoS (continued from page 1)

The DDoS Forum was formed in response to the extortionist threat to online gambling sites. Pederson is adamant about not paying up.

Peter Pederson's survival checklist against DDoS attacks:
• Perform ingress and egress filtering.
• Consolidate logs.
• Perform application-level checks.
• Implement IDS.
• Implement IPS.
• Check if third-party connections are open.
• Capture current network traffic.
• Monitor current system states.
• Maintain current patches.
• Put procedures and policies in place to handle DDoS attacks.
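The first checklist item, ingress and egress filtering, is normally enforced in routers or firewalls, but the underlying test is simple: traffic leaving a network should only carry source addresses from the organisation's own ranges, since spoofed sources are a staple ingredient of DDoS floods. A minimal sketch, with illustrative prefixes that are not Bluesquare's:

```python
import ipaddress

# Illustrative prefixes only (RFC 5737 documentation ranges), standing in
# for the address blocks an organisation actually owns.
OUR_PREFIXES = [ipaddress.ip_network(p) for p in ("203.0.113.0/24", "198.51.100.0/24")]

def egress_ok(src_ip):
    """Egress rule: outbound packets must carry a source address we own."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in OUR_PREFIXES)
```

The mirror-image ingress rule drops inbound packets that claim an internal source address, which should never arrive from outside.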
Microsoft talks up security
After 25 years of complaints about the poor security of its products, Microsoft has published a 19-page booklet, The Trustworthy Computing Security Development Lifecycle, that outlines the "cradle to grave" procedures for a mandatory "Security Development Lifecycle" for all its Internet-facing products.
The new process "significantly reduces" the number and lethality of security vulnerabilities, it says. The new approach comes from Bill Gates and Steve Ballmer, Microsoft's chairman and chief executive. So far, software produced using the SDL framework includes Windows Server 2003, SQL Server 2000 Service Pack 3 and Exchange 2000 Server Service Pack 3.
Windows Server gets extra protection
Windows Server 2003's new Service Pack 1 allows Windows servers to turn on their firewalls as soon as they're deployed, and to block inbound Internet traffic until Windows downloads Microsoft's latest security patches.
A new security configuration wizard detects a server's role (as a file server, Web server, or database host, for example) and then disables the software and ports not associated with that role. It also makes DCOM, Microsoft's technology for distributed objects, less prone to attack, the firm says.
VoIP vulnerabilities addressed
Security worries are holding up adoption of VoIP. Even so, research from In-Stat/MDR suggests penetration will reach 34% among mid-sized businesses, and 43% in large enterprises.
To increase adoption rates, the new Voice over IP Security Alliance (VOIPSA) has created a committee to define security standards for Internet telephony networks.
In large networks, the bandwidth and time associated with routing traffic and spam create a latency problem for VoIP traffic through the firewall. Other topics include security technology components, architecture and network design, network management, end-point access and authentication, infrastructure weaknesses, vulnerabilities and emerging application attacks.
Warp speed, Mr Plod
The British government has set up six Warps (warning, advice and reporting points) to allow businesses to share confidential information about risks, security breaches and successful countermeasures, and to receive tailored security alerts.
The government also promised a Warp to show home computer users how to improve PC security and lower the risk of them becoming staging posts for hackers attacking businesses. The US and Holland are considering creating similar programmes, says the National Infrastructure Security Co-ordination Centre (NISCC), which is co-ordinating the scheme.
Don't trust hardware
Hardware devices are as insecure as any IT system, Joe Grand, CEO of Grand Idea, told delegates at the Amsterdam Black Hat conference. Attacks include eavesdropping, disrupting a hardware security product, using undocumented features and invasive tampering.
Network appliances, mobile devices, RFID tokens and access control devices are all potentially at risk. The storage of biometric characteristics on back-end systems also sets up avenues of attack, and physical characteristics are often easily stolen or reproduced.
Researchers recently showed how to exploit cryptographic weaknesses to attack RFID tags used in vehicle immobilisers and the Mobil SpeedPass payment system. SSL cryptographic accelerators are also potentially hackable, as demonstrated by a recently documented attack against Intel's NetStructure 7110 devices. Wireless Access Points based on Vlinux, such as the Dell TrueMobile 1184, can also be hacked.
Security through obscurity is still widely practised in hardware design, but hiding something does not solve the problem, Black Hat delegates were told.
IM creates instant havoc
Security threats from instant messages have increased 250% this year, according to a report from IMlogic Threat Center. The research tracks viruses, worms, spam and phishing attacks sent over public IM networks. It found reported incidents of new IM threats grew 271% so far. More than half the incidents happened at work via free IM services such as AOL Instant Messenger, MSN Messenger, Windows Messenger, and Yahoo Messenger.

Israel jails colonel for losing PC
The Israeli army jailed the commander of an elite Israel Defense Forces unit for two weeks for losing a laptop computer containing classified military information. The laptop should have been locked away, but was apparently stolen while he was on a field trip with his soldiers.
Russian hackers are world class
Brian McKenna

Russian hackers are the best in the world, Lt. General Boris Miroshnikov told the eCrimes Congress in London on 5 April. "I will tell them of your applause," he told the clapping audience at the start of a speech reporting on cyber crime developments in the region.
Miroshnikov is head of Department K, established within Russian law enforcement in 1998 to deal with computer crime. His department has worked closely with the UK's National Hi-Tech Crime Unit.
Countries like Russia, he said, that came late to the Internet exhibit its problems more dramatically. From 2001 to 2003, computer crime in Russia doubled year on year, he confirmed. "Only in 2004 did we hold back the growth."
"It used to be naughty boys who committed these crimes," he said, "but now they have grown up." It now needs the co-operation of telecoms companies, ISPs, the legal profession, and law enforcement to tackle the problem, he said.
Alan Jebson, group COO at HSBC Holdings, echoed the Russian's rueful boast. "We are up against the best," he said at the same event. "Some of these Russian hackers have day jobs designing highly secure encryption technologies."

"We must have comparable laws and sanctions. We need to agree what is a computer crime."
He reported that when Department K was in its infancy, 80% of computer crime was out of sight. "We are now getting better because the victims know who to come to, and we have had no leaks of victim identity."
He concluded that there is a strong need in Russia for state standards that will keep out the charlatans of computer security.
DEPERIMETERISATION
If you're into IT security, it's pretty hard to avoid discussions about de-perimeterisation: the loosening of controls at boundary level in favour of pervasive security throughout the network, systems and applications. The idea's not new, but it's certainly a hot topic right now, one that is being led by some formidable CSOs in major blue-chips who have come together to create the Jericho Forum to promote the idea. Everybody seems to be talking about it, and while there are senior IT managers and security experts who are fully and publicly embracing the idea, there are also those who are feeling more than a little apprehensive about this talk of breaking down the barriers at the edge of the network. After all, it's just not safe out there, and we've all seen the statistics to prove it.
But opening up the networks provides us with opportunities as well as threats. It's time to stop looking at security from the outside, and focus instead on looking at security from the inside out.
Manning the battlements
The fact that de-perimeterisation is causing some worried muttering within the security community is not that surprising. For years we have been working towards attaining the goal of a network boundary that is 100 percent secure. Security managers have tended to adopt a siege mentality, and softer boundaries appear to be contrary to everything that we are working for.
But we need to stop thinking of our network as a medieval citadel under attack. After all, those fortresses, with their thick, high stone walls, were excellent at deflecting an enemy for a fixed period of time. But once that enemy got inside the walls, the fight was over within a matter of hours. The same is true of most IT networks. Once the hard outer shell has been penetrated, it is fairly straightforward to run rampage through IT systems and cause untold amounts of havoc.
And of course, barricading yourself behind high walls doesn't let the good guys in, doesn't stop internal attacks from rebellious subjects, and isn't exactly flexible. But flexibility is what the modern business is all about. Firms need to expand. They want their salespeople to remain connected through their mobile devices and remote access. They want to collaborate easily with partners and integrate business processes with customers and suppliers. Unlike fixed stone walls, the boundaries of the modern business are shifting all the time.
Seizing opportunities
This is not the time for security experts to revert to their negative, jackbooted stereotype. The 'trespassers will be prosecuted' signs, along with the negative expressions and shaking heads, need to be abandoned. Although we all like to think of ourselves as knights in shining armour, rescuing our organizations from marauding outsiders, it's time to update this self-image. The fact is we need to be modern, twenty-first century intelligence agents, not twelfth century warriors.
Instead we should see these new developments as an opportunity. Let's face it, 100% security of the network boundary has always been an almost impossible task. As Gene Spafford, Director, Computer Operations, Audit, and Security Technology at Purdue University, put it: "The only system which is truly secure is one which is switched off and unplugged, locked in a titanium lined safe, buried in a concrete bunker, and is surrounded by nerve gas and very highly paid armed guards. Even then, I wouldn't stake my life on it." Nor would you be able to use it.
Added to that, of course, is the fact that boundaries keep moving: new devices, new locations, additional business partners, illicit downloads and the latest applications all add to the ever-expanding perimeter, making it increasingly difficult to define, never mind secure. And then there's the weakest link of all: the people. Employees, being human, insist on making basic mistakes and leaving their passwords lying around or opening dubious attachments.
De-perimeterisation can, therefore, be seen as a chance to stop going after the impossible, and to focus effort on achieving acceptable levels of risk. No more tilting at windmills. No more running to stand still.
More than that, this is a real opportu-
nity to align security with overall organi-
sational strategy, and to prove the value
that it adds to the organisation. To do
that, we need to understand where the
call for opening up the networks is com-
ing from.
Inside out security: de-perimeterisation
Ray Stanton, global head of BT security practice

"De-perimeterisation is a chance to stop going after the impossible"

Harnessing the drivers
De-perimeterisation is driven by several business needs. Firstly, the desire for the 'Martini principle': anytime, anyplace, anywhere computing. Mobile and flexible working have become a normal part of the corporate environment. This is
happening by default in many organisations, which now wish to take control and effectively manage the multitude of vendors, applications, devices and documents that are springing up throughout the company.
The second driver is cost. Accessing
applications through a broadband
enabled device, using XML or Web ser-
vices, reduces the costs associated with
connectivity and maintenance of leased
lines, private exchanges and even VPNs.
At the same time it increases availability,
through the always on connection, and
so flexibility.
Finally, there is a need for approved
third parties to gain access. In the digi-
tal networked economy, collaborative
working models with partners, joint ven-tures, outsourcers or suppliers require
secure access to data in real time which
cannot be achieved with a tough impen-
etrable network boundary.
If we look at the oil and gas industries, which have been early adopters of de-perimeterisation (or 'radical externalisation' as it is known in BP), we can see clear examples of all of these drivers. Significant numbers of workers are on the road or in remote locations at any given time. Companies tend to make a great deal of use of outsourcers and contractors, and undertake joint ventures with other firms who are partners in one region but competitors in another. As a result they have long recognised the need to let partners have access to one part of the system, while keeping the doors firmly barred on others.
In fact, around 10% of BP's staff now access the company's business applications through the public Internet, rather than through a secure VPN. This is the first step in a move towards simplification of the network and enabling access for up to 90,000 of the oil company's third-party businesses.
This picture of a flexible, cost-effective, and adaptable business is, not surprisingly, very attractive. And not just to those in hydrocarbons. But efforts to achieve it can be hampered by current security thinking. As experts, we need to reverse this, and be seen as an enabler once more. Our responsibility is to make sure that everyone is aware of the risks and can make informed decisions. After that, it's about putting adequate controls in place. This shift in thinking offers us a real possibility that security, indeed IT as a whole, can be brought in from the cold and get a much-needed voice at board level.
Back to basics
But before we tear down the firewalls and abandon ourselves to every virus infestation out there, let's take a look at what inside out security really involves. De-perimeterisation is actually something of a misnomer. It's not about getting rid of boundaries altogether. Rather, it's a question of re-aligning and refocusing them. So instead of a single hard shell round a soft centre, an organisation has a more granular approach, with internal partitions and boundaries protecting core functions and processes, hence the 'inside out' approach. Typically the hard controls around the DMZ (demilitarised zone) will move to sit between the red and amber areas, rather than the amber and green.
This takes us back to some basic principles of security management: deciding what bits of your systems and accompanying business processes are key and focusing on their security. Rather than taking a one-size-fits-all approach, inside out security requires us to look at protecting our information assets from the perspective of what needs to be secured and at what level.
The decision should be based upon another fundamental tenet of good security practice: thorough assessment of risk. That customer database from three years ago may be of limited value now, but if the contents are leaked, the consequences could be disastrous.
Although policy control and manage-
ment has always been a fundamental fac-
tor in any security measures, it will take
a far more central role than it has
enjoyed so far. Federated security, gran-
ulated access and rotating users all
demand close control. Updates to policy
that reflect both changes within the
organisation and to its immediate envi-
ronment, will be required on a more reg-
ular basis than ever before.
We also need to make sure that we still get the basics right. For example, viruses are not going to go away: there will always be new variants and new vulnerabilities. The 2004 edition of the DTI information breaches survey shows that a massive 74% of all companies suffered a security incident in the previous year, and 63% had a serious incident. Viruses still accounted for 70% of these, which seems to indicate that, despite their prevalence, there is still a lack of maturity in incident management procedures.
Firewall vendors don't need to panic just yet: there is still going to be a need for their products in a de-perimeterised system. The difference is that these will no longer sit at the very edge of the network, but will be strategically placed inside it, at device, data or even application level.
Some of the companies that are breaking down the barriers as members of the Jericho Forum: Boeing, British Broadcasting Corporation, Deutsche Bank, Lockheed Martin, Pfizer, Reuters, Unilever.

Identity management
While firewalls may sort the good HTTP traffic from the bad, they cannot
discern the difference between authorized and unauthorized traffic. You also need to identify what and who you trust from both internal and external sources: which of your own people should have access to what systems and processes, and where you are going to allow partners, customers and the public to go. That means that user authentication and identity management are going to play an increasingly important role, with two-factor authentication being the bare minimum.
Access policies will become more precise, based on a least-privilege model, to ensure that only the parts of the system required for the job will be available. Like all policies, this will need to be monitored and updated to match employees moving through the organisation, and to keep up with changing relationships with partners.
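At its core, a least-privilege policy reduces to default-deny lookups: nothing is reachable unless explicitly granted. A minimal sketch, with roles and resources invented purely for illustration:

```python
# Default-deny access check: a request passes only if the role's policy
# explicitly grants that (action, resource) pair. The roles and resources
# below are hypothetical examples, not any real organisation's policy.
POLICY = {
    "partner": {("read", "order-status")},
    "employee": {("read", "order-status"), ("write", "orders")},
}

def is_allowed(role, action, resource):
    """Least privilege: unknown roles and unlisted grants are denied."""
    return (action, resource) in POLICY.get(role, set())
```

Policy updates, such as an employee changing role or a partner relationship ending, then become edits to the policy table rather than changes scattered through application code.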
Identity management will ensure that no unauthorized personnel have access to any part of the system, and will be a major factor in maintaining compliance. With a more open network, organisations will still have to prove that confidential data on personnel or financial management has not been subject to unauthorized access. With the Data Protection Act, human rights legislation, Sarbanes-Oxley, European accounting standards and a dozen other rules and regulations to navigate, providing accurate audit trails of who has accessed, or attempted to access, critical data will remain a basic legal requirement.
You can never be too thin
It almost goes without saying that identity management is much easier when the identities belong to an organization's own employees. Enforcing policy at a partner organization is that much harder.
And, given that it is hard enough to ensure that your own users have configured their devices properly, it seems unlikely that any of us will be able to guarantee that partners have done so. But this is crucial, since ill-configured laptops and PDAs represent a significant security risk at both the outer edge and in the core of the network.
It seems that inside out security will act as an impetus towards a more thin-client based architecture. Centralised systems are easier to secure than documents, applications, data and network connections spread over different gadgets and different locations. It eliminates the problems associated with accessing the network with inappropriate devices.
In one company that has already adopted de-perimeterisation, employees are responsible for their own laptops, including the latest patches and anti-virus protection. But the laptops are thin clients, which means that IT staff can focus on the security of the central server and the information on it, rather than trying to secure an undefined group of peripheral appliances.
Whether there will be a mass migration to thin-client models, or even on-demand, utility computing (which seems to be the next logical step), is impossible to predict. What we do know is that the move to inside out security, radical externalisation, de-perimeterisation, or whatever other names it acquires, will depend on architecting the environment correctly and maintaining the right levels of control. A flexible working model for information security management systems that can match the flexibility of the business as a whole is also going to be vital.
The debates about de-perimeterisation will doubtless continue. There is still a lot of work to be done on standards and interoperability of systems. But what we can be pretty sure of is that security experts should prepare themselves for a fundamental change in approach.
More information: http://www.opengroup.org/jericho
About the author
Ray Stanton is Global Head of Security Services at BT. He has over six years' experience in Information Services and 21 years in IT security. Ray has worked for both government and commercial organizations in a variety of security-related roles, including project management, security auditing, policy design, and the development of security management strategies.
De-perimeterisation - the end of fortress mentality
VULNERABILITIES
The roots of this problem lie in the fact that the competitive and commercial drivers that shaped the early market for network vulnerability assessment products continue to have influence today.
These historical goals no longer reflect the needs of modern businesses, however. A shift in requirements has occurred, due to the now widespread use of patch management technologies.
In this paper I describe the advantages of using patch management technologies to gather vulnerability data. I also propose a lightweight method for network vulnerability assessment which does not rely on signatures, and which does not suffer from information overload issues.
The effect of historical market forces
In the formative years of the commercial network vulnerability assessment market, the number of vulnerability checks that vulnerability assessment tools employed was seen as a key metric by which competing products could be judged. The thinking was that the more checks a tool employed, the more comprehensive it would be, and thus the more value its use would provide.
Vendors were also evaluated on how
quickly they could respond to newly
publicised security vulnerabilities. The quicker a vendor could update their
product to incorporate the checks for
new vulnerabilities, the better they
were perceived to be. In some respects
this is similar to the situation today
where software vendors are judged
by the security community on their timeliness to release patches for security
problems that are identified in their
products.
The market's desire for a comprehen-
sive set of vulnerability checks to be
delivered in a timely fashion spurred the
manufacturers of network vulnerability
assessment tools to incorporate ever-larg-
er amounts of checks into their prod-
ucts, and to do so with increasing rapid-
ity. Some vendors even established
research and development teams for the
purpose of finding new vulnerabilities.
(An R&D team was also an opportunity
for vendors to position and publicize
themselves within the marketplace.)
Vendors were said to have sometimes sought competitive advantage through duplicitous means, such as by slanting
their internal taxonomy of vulnerability
checks in order to make it appear that
they implemented more checks than in
reality.
A common practice was for vendors
to create checks for any aspect of a
host that can be remotely identified.
This was often done regardless of its
utility for security. As an example,
it is not unusual for network vulnera-
bility scanning tools to determine the
degree of predictability in the IP
identification field within network
traffic that a target host generates.
While this observation may be useful
in certain circumstances, the
pragmatic view must be that there are far more significant factors that influence a host's level of vulnerability. Nonetheless, network
vulnerability assessment products
typically incorporate hundreds of such
checks, many with similarly question-
able value.
Information overload
The result of these competitive drivers
has been that when a network
vulnerability scanner is run against
any network of reasonable size, the
printout of the report is likely to resemble the thickness of a telephone
directory. An aggressive approach to
information gathering coupled with
an ever increasing set of vulnerabilities
results in an enormous amount of
information that can be reported.
Such a large amount of data is not
only intimidating, but it severely limits
the ability to make key insights about
the security of the network. The
question of 'where to begin?' is a difficult one to answer when you are
told that your network has 10,000
vulnerabilities.
Vendors of network vulnerability
assessment products have tried to
address this information overload prob-
lem in several ways. One approach has
been to attempt to correlate the output
of other systems (such as intrusion
detection systems) together with vulnerability data to allow results to be prioritised. Another approach has been
to try and fuse data together on the
basis of connectedness, in order to
A contemporary approach to network vulnerability assessment
Andrew Stewart
Modern network vulnerability assessment tools suffer from an information overload problem.
increase the quality of data at a higher
layer. These approaches have spawned
new categories of security product, such
as Enterprise Security Management
(ESM), Security Information
Management (SIM), and
Vulnerability Management.
But rather than add layers of abstrac-
tion (and products to buy), the solution
would logically lie in not gathering so
much data in the first place. This has
now become a viable strategy, because
of the capabilities provided by modern
patch management technologies.
The rise of patch management
The widely felt impact of Internet worms has opened the eyes of businesses
to the importance of patching systems.
Host-based patch management products
such as Microsoft's SMS (Systems
Management Server) and SUS (Software
Update Services) are now in wide
deployment, as are other commercial
and freeware tools on a variety of plat-
forms. See for example, PM (2005) and
Chan (2004).
In many respects, this increased focus on patch management has diminished
the traditional role of network vulnera-
bility assessment tools. If the delta
between current patch status and the
known set of vulnerabilities is already
being directly determined on each indi-
vidual host, then there is less need to use
a network vulnerability assessment tool
to attempt to collect that same informa-
tion (and to do so across the network
and en masse).
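The delta described above can be sketched as simple set arithmetic over patch identifiers. A minimal illustration; the host names and patch IDs below are invented, not taken from any real advisory feed:

```python
# Hypothetical inventory: patches required for the known vulnerabilities,
# and the patches actually installed on each host (all IDs are invented).
REQUIRED = {"MS04-011", "MS04-028", "MS05-002"}

installed = {
    "host-a": {"MS04-011", "MS04-028", "MS05-002"},
    "host-b": {"MS04-011"},
}

def missing_patches(installed_by_host, required):
    """Per-host delta: required patches that are not yet installed."""
    return {host: sorted(required - have)
            for host, have in installed_by_host.items()}

print(missing_patches(installed, REQUIRED))
# → {'host-a': [], 'host-b': ['MS04-028', 'MS05-002']}
```

In a host-based model the agent reports the installed set directly; the delta then falls out of a single set subtraction per host, with no remote inference needed.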
An advantage here is that it is a rela-
tively straightforward task for a software
agent running on a host to determine the host's patch level. A network vulner-
ability scanner has to attempt to remote-
ly infer that same information, and this
task is made more difficult if the vulner-
ability scanner has no credentials for the
target host.
Another advantage to using a host-
based model for gathering patch data is
that with an ever-increasing set of vul-
nerability checks being built into net-
work vulnerability assessment tools, the
probability increases that a check might
adversely affect a network service on a
box. The result might be that the scan
causes services to crash, restart, or oth-
erwise misbehave. The days when port scanning would crash the simplistic net-
work stack within printers and other
such devices are probably behind us,
but a business might rightly question
the use of increasingly complex vulnera-
bility checks to interrogate production
systems.
With an ever-increasing number of
checks, the impact on network band-
width when a network vulnerability
assessment tool is run also climbs. (Rate-limited and distributed scanning
can help here, but these involve addi-
tional complexity.)
There are disadvantages to employing
a host-based model, however. Products
which require that an agent be installed
on hosts have usually been seen as
time-consuming to deploy and
complex to manage. Indeed, the value
proposition of network vulnerability
assessment tools was, in part, that they
did not require a roll-out of host-based
agents. With the now widespread use
of agent-based patch management
technologies, this barrier has been
overcome.
Given the advantages in using a host-
based model to gather patch status infor-
mation, do network vulnerability assess-
ment tools still have a role to play? In
discovering new vulnerabilities, or for
discovering vulnerabilities in bespoke
applications (such as Web applications),
network vulnerability assessment tools
clearly add value. But this is somewhat
of a niche market. These are not activi-
ties that businesses typically wish to per-
form against every device within their
network environment, or on a regular
basis. (Scanning a DHCP allocated net-
work range provides little value if the
DHCP lease time is short, just as one
example.)
A modern approach
It is a widely held belief amongst security practitioners that the majority of
security break-ins take advantage of
known vulnerabilities. While there is
no concrete evidence for this claim, on an intuitive basis it is probably correct.
In most cases, the patch for a known
vulnerability already exists, or the ven-
dor affected is in the process of creating
the patch. (In that latter scenario, the
version numbers of the particular oper-
ating systems or applications that are
known to be vulnerable are usually
known, even if the patch itself is
not yet available.)
A patch management solution can determine the presence or absence of
patches on hosts, and can also identify
the current version number of operating
systems and installed applications. A
patch management solution can there-
fore be used to determine vulnerability
status. The depth of reporting that
modern patch management tools pro-
vide in this area has in many respects
already surpassed the capabilities of
conventional network vulnerability assessment tools. This is possible
because of the advantages inherent in a
host-based model.
service          count
telnet              20
ssh                 79
rlogin               3
http                52
https               26
ldap                 8
vnc                  9
ms-term-serv        30
pcanywheredata       2
irc                  1

Table 1: Display of services running on hosts
However, host-based patch manage-
ment tools only have visibility into the
hosts onto which an agent has been
installed. Organizations still need some
form of network assessment in order to
detect changes that lie outside the visibil-
ity of their patch management infra-
structure.
I suggest that this task can be accom-
plished using traditional network inter-
rogation techniques, and does not
require a library of vulnerability checks.
Well-documented techniques exist for
gathering data related to the population
of a network, the services running on
hosts within the network, and the identification of operating system types (Fyodor, 1997, 1998). These techniques
do not require a constant research effort
to develop new vulnerability checks.
A port scanner written in 1990 could
still be used today, whereas a vulnerabili-
ty scanner from the same year would
be considered woefully inadequate
because it has no knowledge of modern
vulnerabilities.
The information that can be gathered using these relatively simple techniques
has enormous utility for security.
Consider Table 1, which displays data
gathered on the number of different ser-
vices running on hosts within a network.
The policy on this network is to use
Microsoft's Terminal Services for remote
administration, and therefore the two
installations of pcAnywhere and the nine
installations of VNC that were detected
are policy violations that need to be investigated and corrected. Running
pcAnywhere or VNC is not a security
vulnerability per se, but remote admin-
istration software certainly has a security
implication. That is the difference
between looking for specific vulnerabili-
ties and gathering general data on the
network.
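Data like that in Table 1 can be collected with nothing more sophisticated than TCP connect attempts. The sketch below tallies listening services and flags remote-administration software that deviates from policy; the port map and the policy itself are illustrative assumptions, not taken from the article:

```python
import socket

# Illustrative port-to-service map and remote-admin policy; the network in
# the example above allows only Microsoft Terminal Services (port 3389).
PORTS = {23: "telnet", 22: "ssh", 80: "http", 5900: "vnc",
         5631: "pcanywheredata", 3389: "ms-term-serv"}
REMOTE_ADMIN = {"vnc", "pcanywheredata", "ms-term-serv"}
ALLOWED_REMOTE_ADMIN = {"ms-term-serv"}

def scan_host(addr, timeout=0.5):
    """Return the set of known services accepting TCP connections on addr."""
    found = set()
    for port, name in PORTS.items():
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                found.add(name)
        except OSError:
            pass  # closed, filtered, or timed out
    return found

def policy_violations(services_by_host):
    """Flag hosts running remote-admin services other than the standard one."""
    out = {}
    for host, svcs in services_by_host.items():
        rogue = sorted((svcs & REMOTE_ADMIN) - ALLOWED_REMOTE_ADMIN)
        if rogue:
            out[host] = rogue
    return out
```

On a real network one would build `{host: scan_host(host) for host in targets}` and feed it to `policy_violations`; the port list would of course be much larger, but no signature library is involved.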
As a further example, the IRC server
that was found on the network would
probably raise the eyebrows of most security practitioners.
Note how simple it is to perform this
analysis, in contrast to having to wade
through hundreds of pages of vulnerabil-
ity assessment report. If a patch man-
agement solution is being used to detect
weaknesses in the patch status of hosts,
then this is the type of data that it is
valuable to collect across the network.
This is not traditional vulnerability assessment data, but rather foundational data about the network.
Table 2 shows data on the number
of operating system types found
within a particular network. Again,
this data was collected using simple
network information gathering
techniques.
This network employs both Linux and
Windows machines as its corporate stan-
dard. We can therefore say that the
detection of a device running OpenBSD warrants investigation. Similarly, it
would be valuable from a security per-
spective to investigate the two devices for
which there was no fingerprint match.
An all-Linux organization might worry
about the presence of a Windows 95
machine on its network (and vice-versa,
of course).
This approach is well-suited for detecting the decay in security that computers tend to suffer over time.
Most businesses employ a standard
build for desktop and server machines
to reduce complexity and increase ease
of management, but day-to-day admin-
istrative activities can negatively impact
that base level of security. Temporary
administrative accounts are created but
then forgotten; services such as file
transfer are added for ad hoc purposes
but not removed, and so on. A vulner-
ability scanner is overkill for detecting
this kind of policy drift. By employing simpler network information gathering techniques, the run time
of a scan can be reduced, as can the
impact on network bandwidth. The
duration of the information gathering
loop is shortened, and this allows
results to be provided more quickly,
which itself reduces risk by allowing
remediation activities to be carried
out sooner.
Conclusions
Patch management technologies and
processes now deliver to businesses
the core capability of traditional net-
work vulnerability assessment
tools; namely, the identification of vul-
nerabilities that are present due to miss-
ing patches. Patch management
solutions can be used to accomplish this task by identifying the delta
between the set of patches for
known vulnerabilities and the current
patch status of hosts within the
environment.
For network-wide vulnerability assess-
ment, the question that businesses need
to ask is: what data is it still valuable to gather across the network? There is
little value in employing a noisy, band-
width-consuming network vulnerability scan to interrogate production
systems with an ever-increasing
number of vulnerability checks, when
os               count
HP embedded         26
Cisco embedded      33
Linux               42
Windows            553
OpenBSD              1
No match             2

Table 2: Number of operating systems found in a particular network
patch status data is already being
collected through patch management
activities.
Employing simple network
information gathering techniques in
this supplementary role is easier, takes
less time, has less impact on network
bandwidth, does not require a
constantly updated set of vulnerability
checks, and provides more intuitive
results.
About the author
Andrew Stewart is a Senior Consultant with
a professional services firm based in Atlanta,
Georgia.
References
Chan (2004), Essentials of Patch Management Policy and Practice. Available: http://www.patchmanagement.org/pmessentials.asp
Fyodor (1997), The Art of Port Scanning, Phrack Magazine, Volume 7, No. 51, 1 September 1997.
Fyodor (1998), Remote OS detection via TCP/IP Stack FingerPrinting, Phrack Magazine, Volume 9, No. 54, 25 December 1998.
PM (2005), Mailing list archive at http://www.patchmanagement.org
Chinese infosec research efforts are fixat-
ed on cryptography and researchers are
already producing breakthroughs. A group of researchers from Shandong
University in China stunned the estab-
lished crypto community at the RSA
conference in February by breaking the
integral SHA-1 algorithm used widely in
digital signatures. This SHA algorithm
was conceived deep within the womb of
the US National Security Agency's cryp-
tography labs. It was declared safe until 2010 by the US National Institute of Standards and Technology (NIST). But
this illusion was shattered last month.
Even more proof of the hive of crypto
activity in China is that 72% of all cryp-
tography papers submitted to the Elsevier journal, Computers & Security
last year hailed from China and Taiwan.
And cryptography papers accounted for
one third of all the IT security research
submitted to the journal.
"The Chinese are determined to get into the subject," says Mike Walker,
head of Research & Development at
Vodafone, who studied cryptography at
Royal Holloway College, London. "If
you attract the best people from one fifth of the world's population, you are
going to sooner or later make a big
impression." Walker would like to see
more young people venture into cryp-
tography in the UK. He believes the
general decline in interest in science
and maths is to the detriment of the
country.
But no such lack of interest is evident
in China. And the achievement in crack-
ing the SHA-1 hash function is an earthquake of a result. "The breakage of
SHA-1 is one of the most significant
results in cryptanalysis in the past
decade," says Burt Kaliski, chief scientist
at RSA Security. "People didn't think
this was possible."
Shelf-life
"Now there is no doubt that we need a
new hash function," says Mette
Vesterager, chief executive officer at
Cryptico. Vesterager says a competition
will probably be launched to get a new
replacement for SHA-1. Such a competi-
tion generated the Advanced Encryption
Standard (AES), from two Belgians in
2000 to replace the Data Encryption
Standard (DES). DES was published in
1977 and had 72,000,000,000,000,000
possible key variations, making it diffi-
cult to break.
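The key-space figures quoted in this article follow directly from key length: DES uses 56-bit keys, and the AES key length mentioned later is 128 bits. A quick check:

```python
# DES uses 56-bit keys; the AES key length quoted later is 128 bits.
des_keys = 2 ** 56
aes_keys = 2 ** 128

print(f"{des_keys:.1e}")  # → 7.2e+16, i.e. the 72,000,000,000,000,000 figure
print(f"{aes_keys:.1e}")  # → 3.4e+38, matching the "3.4 x 10^38" figure
```

Each extra key bit doubles the search space, which is why the jump from 56 to 128 bits multiplies the attacker's work by a factor of roughly 5 x 10^21.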
NIST have now taken DES off the
shelf, however. No such retirement plan
has been concocted for SHA-1 yet. As yet the outcome for the broken algorithm is still undecided. But Fred Piper, at Royal Holloway, says that people will
migrate away from it in the next year or
so if the Chinese research is proven. In
Crypto race for mathematical infinity
Sarah Hilley
A newly emergent country has begun to set the pace for cryptographic mathematicians.
BIOMETRICS
Biometrics is often said to be a panacea
for physical and network authentica-
tion. But there are some considerable
problems with the technology, some of
which can have a major impact on the
security posture of the implementing organization.
At present, the cost of implementation
means that relatively few companies are
using biometric technologies to authenti-
cate identities. As biometric technologies
become less costly, many network
administrators will find themselves hav-
ing to deal with a comparatively ill-
understood series of authentication technologies. Ironically, some may well
expose the systems they are responsible
for to an increased level of risk.
The rest of this article discusses the range of biometric technologies on the market, together with the risks and true costs of implementation that vendors and politicians alike often ignore.
The search begins
From the beginning, computer and network security researchers have sought an alternative to the unique identifier,
which is currently the most widely used
method of authenticating a user to an IT
service. Typically this is a password and
username combination. However experi-
ence has shown that this mechanism
consistently fails to prevent attacks, as a
knowledgeable attacker can employ a
range of methods to circumvent this
layer of protection.
This model of authentication has been supplemented by multi-factor
authentication mechanisms that are
based on something the user knows (e.g.
Biometrics: the eye of the storm
By Mike Kemp, technical consultant, NGS Software
For the last few years vendors and politicians alike have touted biometric technology as an invaluable, even preferred, approach to secure authentication of identity. However, it presents both the end users of the technology and those responsible for its implementation with a number of challenges.
addition the Chinese attack has reper-
cussions on other hash algorithms such
as MD5 and MD4.
Down to earth
The breakage of SHA-1 is not so dramatic in the humdrum application of real-life security, however. On a
practical level, Kaliski rates it at a two
out of 10 for impact, even though it is
widely used. But cryptographers have to
think ahead in colossal numbers to keep
up with the leaps in computing power.
According to Moore's law, computers
keep getting faster at a factor of 2 every
18 months.
Cryptographers deal with theoretical
danger. They bend and stretch the realms of mathematics and strive to cre-
ate algorithms that outlive computing
power and time. It is a race - a race
between mathematicians and computers.
Fortunately the crack of algorithms like
SHA-1 doesn't yet affect us mere mor-
tals, who unknowingly avail of crypto to
withdraw money from the ATM on a
Saturday night.
This is thanks to cryptographers
thinking in a different time, a time that
is set by the power of computation.
This power isn't here yet to make the
crack of SHA-1 realistic outside a
research environment.
As cryptography is used in one and a half billion GSM phones in the world,
and it authenticates countless computer
users, devices, transactions, applications,
servers and so on, this is good news. It
means that we don't have to worry
about underlying algorithms being
attacked routinely like software vulnera-
bilities, for example. The dangers are
much more distant. However, side-channel attacks, which target the implementation of cryptography, must be watched out for, warns Kaliski. Piper recom-
mends that keys have to be managed
properly to guard against such loopholes
in implementation.
Big computers
Governments have historically been
embroiled in mathematical gymnastics
even before cryptography became so
practical. The British famously cracked
the German Enigma code in World War
II. And American Navy cryptanalysts
managed to crack the Japanese code,
Purple, in 1940. What governments can
and can't break these days, though, is
very much unknown. "The AES algorithm is unbreakable
with today's technology as far as I'm
aware," says Royal Holloway's Piper. So
far NIST hasn't even allocated a 'best
before' date for the decease of AES. The
AES 128 bit key length gives a total of
an astronomical 3.4 x (10^38) possible
keys. But if law enforcement can't break
keys to fight against terrorism, intelli-
gence is lost, warns Piper. However, peo-
ple wonder 'what can the NSA do?', says Vesterager, and 'how big are their com-
puters?' But the general opinion is that
AES was not chosen because it could be
broken. Time will show, however, she
adds.
And with China pouring large
amounts of energy into studying the lan-
guage of codes and ciphers, the NSA
may want even bigger computers.
a password), something the user has (e.g.
a token), and something the user is
(biometrics).
As has been widely discussed,
although popular, password authentica-
tion is often associated with poor pass-
word policies, and management strate-
gies that don't work. Many network
administrators have wrestled with bal-
ancing password authentication and
password policies against account user
needs or demands. Too many know how
far they have had to compromise security
in order to service users.
Token of affection?
The use of token-based technologies such as SecurID tokens, smart cards and digital certificates is becoming widely accepted, not only in the workplace, but out-
ed, not only in the workplace, but out-
side as well. Beginning in October 2003
the UK commenced a roll out of Chip
and PIN authentication methods for
transactions based on bank and credit
cards. The primary aim was to combat
the growing rate of card fraud based on
the manipulation of magnetic strips or
signature fraud. So far over 78 million
Chip and PIN cards are in common use
in the UK, more than one for every man,
woman and child on the island.
Token-based authentication is not
without its downside, however. In fact, it
is far from a panacea with regard to the security of networks, or indeed one's personal finances. A number of attack vec-
tors exist for the use of SecurID tokens and the like. Certainly, the number and
value of card-based frauds appears to
have risen since Chip & PIN was intro-
duced. Recent research, still ongoing, is
expected to expose a number of flaws
within the use of Chip and PIN authen-
tication mechanisms in a variety of com-
mon environments.
PIN-pushers
The push towards biometrics comes
from a variety of sources. The financial
industry in particular is resolved to
reduce fraud based on stolen identities, which, according to Accenture, the man-
agement consultancy, now costs con-
sumers and banks $2 trillion a year.
National security agencies in various
countries, led by the US immigration
authorities, are also seeking reliable
unique authentication systems as part of
the war on terror. Biometrics is the as-
yet unfulfilled promise of the third pillar
of authentication mechanisms.
At the network level, biometrics may
well enable network administrators to
increase the security of their network
environments. There are a number of
implementation and security issues that
are often overlooked in the push towards
new methods of authentication.
Methods of biometric access
As has been outlined earlier, biometrics is a means of authenticating an
individual's identity using a unique
personal identifier. It is a highly
sophisticated technology based on
scanning, pattern recognition and pat-
tern matching. At present it remains
one of the most costly methods of
authentication available.
Several different technologies exist
based on retinal scans, iris scans, facial
mapping (face recognition using visible or infrared light, referred to as facial
thermography), fingerprinting (including
hand or finger geometry), handwriting
(signature recognition), and voice
(speaker recognition).
For biometrics to be effective, the
measuring characteristics must be pre-
cise, and the false positives and false neg-
atives minimised.
When a biometric authentication sys-
tem rejects an authorised individual, this is referred to as a Type 1 error; a Type 2
error occurs when the system accepts an
impostor. The effectiveness of a biomet-
ric solution can be seen in the Crossover
Error Rate (CER). This is a per-
centile figure that represents the point at
which the curve for false acceptance rates
crosses over the curve for false rejection
rates. Depending upon the implementa-
tion of the chosen biometric technology,
this CER can be so high as to make some forms unusable for an organisation
that wishes to adopt or retain an aggres-
sive security posture.
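The crossover point can be located numerically. A minimal sketch, assuming FAR falls and FRR rises as the match threshold tightens; the sample rates below are invented, not measured from any real device:

```python
# False acceptance rate (FAR) and false rejection rate (FRR) measured at a
# series of match thresholds (illustrative figures, not from a real device).
thresholds = [0.1, 0.2, 0.3, 0.4, 0.5]
far = [0.20, 0.10, 0.05, 0.02, 0.01]  # falls as the threshold tightens
frr = [0.01, 0.02, 0.05, 0.09, 0.15]  # rises as the threshold tightens

def crossover_error_rate(thresholds, far, frr):
    """Return (threshold, rate) at the point where FAR and FRR are closest."""
    i = min(range(len(thresholds)), key=lambda k: abs(far[k] - frr[k]))
    return thresholds[i], (far[i] + frr[i]) / 2

print(crossover_error_rate(thresholds, far, frr))  # → (0.3, 0.05)
```

A lower CER indicates a better-tuned system; in this invented data the curves cross at a threshold of 0.3, where both error rates sit at 5%.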
Space invaders
Some forms of biometrics are obviously more invasive of one's personal space
than others. Fingerprinting, for instance,
has negative connotations because of its
use in criminal detection. As such, some
biometrics may well meet with user resistance that company security officers will
need to both understand and overcome.
In 2005, London's Heathrow airport
introduced plans to conduct retinal scans
in a bid to increase security, and increase
the efficiency of boarding gates. At pre-
sent there are no figures on user accep-
tance of the scheme, which is currently
voluntary. However, as retinal scans are
among the most invasive of biometric
technologies it would be surprising if the
voluntary acceptance rate is high enough
to justify either the expense or efficiency
improvement of the solution.
Print sprint
Traditionally, biometrics has been associated with physical security.
However there is a growing shift
towards adopting biometrics as a mech-
anism to secure authentication across a
network. A number of fingerprint readers are currently available that can be
deployed for input to the authentica-
tion system. These are now cheap and
reliable enough for IBM to include one
in some of its latest laptop computers as
the primary user authentication device.
There is also on-going research to
reduce the cost and improve both the
accuracy and security of other biometric
methods such as facial maps and iris or
retinal scans. Judging by the develop-
ments in the field of biometrics in the last
15 years it can only be a matter of time
before everyone can afford the hardware
for biometric network authentication.
Accuracy and security?
As has already been discussed, the bio-
metrics approach to network authentica-
tion has much promise; however, it is an
as yet unrealised potential. One reason is
that it is laden with a variety of shortcomings that need to be fixed prior to its
widespread adoption as an authentica-
tion mechanism.
One of the touted benefits of biomet-
rics is that biometric data is unique, and
this uniqueness makes it difficult to steal
or imitate. One often-overlooked prob-
lem with the biometric approach is that,
unlike other forms of authentication,
it is anything but discreet. Unlike
the traditional password-based model, or
even the token-based approach (e.g.
Chip and PIN) no biometric approach
relies upon something the user holds as
secret. Indeed in all the biometric tech-
nologies currently available potential
attackers can see exactly what is going
on. Obviously, this makes them poten-
tially vulnerable.
Attack vectors
When evaluating biometrics, network administrators should consider possible
attack vectors. These fall into two dis-
tinct classes, namely:
- Physical spoofing, which relies on attacks that present the biometric sensor (of whatever type) with an image of a legitimate user.
- Digital spoofing, which transmits data that mimics that of a legitimate user. This approach is similar to the password sniffing and replay attacks that are well known and are incorporated in the repertoire of many network attackers.
In 2003, two German hackers,
Starbug and Lisa, demonstrated a range
of biometric physical spoofing attacks at
the Chaos Computer Camp event. Their
attacks relied upon the adaptation of a technique that has long been known to
many biometrics vendors. In the original
attack vector an attacker could dust a
fingerprint sensor with graphite powder,
lift the fingerprint, and then subsequent-
ly use it to gain entry.
The 2003 attack showed that an attacker could create a 'gummy finger' using a combination of latex, photo imaging software
and graphite powder. Although this
method may seem somewhat far-fetched, it can be used to bypass a num-
ber of available fingerprint biometric
devices. Indeed, in 2002, Japanese
researcher Tsutomu Matsumoto was able
to fool 11 biometric fingerprint readers
80% of the time using 'gummy fingers'.
Worse news came in 2004, when
researchers revealed that some finger-
print readers could be bypassed merely
by blowing gently on them, forcing the system to read in an earlier latent print
from a genuine user.
Attacks are not limited only to finger-
print readers (as found in the current
range of network access devices); both
face and iris scanners can be spoofed
successfully. In the case of the former, a
substitute photograph or video of a
legitimate user may be able to bypass
systems; with regards to iris scanners, a
photograph of the iris taken under dif-fused lighting and with a hole cut for
the pupil can make for an effective
spoofing stratagem.
If compromised biometric devices are
a conduit into a network, it may be pos-
sible to manipulate stored data, thus
effectively bypassing all security policies
and procedures that are in place.
Attack on all sides
As has been outlined, biometric technologies are far from risk-free. Many (if not
all) are susceptible to both physical and
digital attack vectors. The reasons
for these shortcomings are many, includ-
ing a potential ignorance about security
concerns on the manufacturer's part, a
lack of quality control, and little or no
standardisation of the technologies in use.
There is also the sometimes onerous
and problematic process of registering
users who may not embrace the use of biometrics, and who may start quoting
passages from the Human Rights Act.
When you think about implementing
biometric technologies, remember that
they do not yet measure perfectly, and
many operational and security chal-
lenges can cause them to fail, or be
bypassed by attackers. Presently there is
not enough hard evidence that shows
the real levels of failure and risk associ-
ated with the use of biometric authentication technologies. It would be a brave
administrator indeed that chose to
embrace them blindly and without a
degree of external coercion, such as a
change in the legislation.
Goodbye to passwords?
Biometric technologies have the poten-
tial to revolutionise mechanisms of net-
work authentication. They have severaladvantages, such as users never need to
remember a password, and more
resilience against automated attacks and
conventional social engineering attacks.
However, the market for such devices is
so new, and the amount of clear statisti-
cal research data as to its cost and bene-
fits is Spartan.
Most large companies can probably afford to implement them. But doing so may have the undesirable side effect of actually increasing their exposure to risk. In particular, the lack of standardisation and quality control remains a serious concern.
In the coming years biometrics may improve as an authentication technology, if only because politicians and fraudsters are currently driving the need for improvements. But given the present level of technical understanding and standardisation, and the many signs of user resistance, network administrators who voluntarily introduce the technology may find themselves on the bleeding edge rather than the leading edge.
Network administrators need to question closely not only the need for biometrics as a network authentication and access mechanism, but also the levels of risk they currently pose to the enterprise. For most, the answer will be to wait and see.
About the author
Michael Kemp is an experienced technical author and consultant specialising in the information security arena. He is a widely published author and has prepared numerous courses, articles and papers for a diverse range of IT-related companies and periodicals. Currently he is employed by NGS Software Ltd, where he has been involved in a range of security and documentation projects. He holds a degree in Information and Communications and is currently studying for CISSP certification.
PROACTIVE SECURITY
Network Security, April 2005
Vendor bandwagon
Nevertheless, the vendors do seem to have decided that proactive security is one of the big ideas for 2005, and there is some substance behind the hype. Cisco, for example, came out with a product blitz in February 2005 under the banner of Adaptive Threat Defence. IBM, meanwhile, has been promoting proactive security at the lower level of cryptography and digital signatures, while Microsoft has been working with a company called PreEmptive Solutions to make its code harder for hackers to reverse engineer from the compiled version. The dedicated IT security vendors have also been at it: Internet Security Systems has been boasting of how its customers have benefited from its pre-emptive protection, anticipating threats before they happen, and Symantec has brought to market the so-called digital immune system, developed in a joint project with IBM.
Unreactive
These various products and strategies might appear disjointed when taken together, but they have in common the necessary objective of moving beyond reaction, which is no longer tenable in the modern security climate. The crucial question is whether these initiatives really deliver what enterprises need, which is affordable pre-emptive protection. If the solutions extract too great a toll on internal resources, through the need for continual reconfiguration and endless analysis of reports containing too many false positives, then they are unworkable. Proactive security has to be as far as possible automatic.
On this count some progress has been made, but there is still a heavy onus on enterprises to actually implement proactive security. Some of this is inevitable, for no enterprise can make its network secure without implementing some good housekeeping measures. The products can only deliver if they are part of a coherent strategy involving analysis of internal vulnerabilities against external threats.
Indeed this is an important first step towards identifying which products are relevant. For example, the decline in perimeter security as provided by firewalls has created new internal targets for hackers, notably PCs, but also servers that can be co-opted as staging posts for attacks. There is also the risk of an enterprise finding its servers or PCs exploited for illegal activities, such as peer-to-peer transfer of software, music or even video, without its knowledge. Identifying such threats and putting appropriate monitoring tools in place is an important first step along the pre-emptive path.
Stop the exploitation
However, some of the efforts being made will benefit everybody and come automatically with emerging releases of software. Microsoft's work with PreEmptive Solutions springs to mind here, as the technology concerned is included with Visual Studio 2005. This technology, called Dotfuscator Community Edition, is designed to make the task of reconstituting source code from the compiled object code practically impossible, so that hackers are unlikely to try. Of course the risk then becomes one of the source code itself being stolen, but that is another matter.
Proactive security latest: vendors wire the cage but has the budgie flown...

Philip Hunter

Proactive security sounds at first sight like just another marketing gimmick to persuade customers to sign up for yet another false dawn. After all, proactivity is surely just good practice: protecting in advance against threats that are known about, like bolting your back door just in case the burglar comes. To some, proactive security is indeed just a rallying call, urging IT managers to protect against known threats and avoid easily identifiable vulnerabilities. All too often, for example, desktops are not properly monitored, allowing users to unwittingly expose internal networks to threats such as spyware. Similarly, remote execution can be made the exception rather than the default, making it harder for hackers to co-opt internal servers for their nefarious ends.

Sharing private keys
The principle of ducking and weaving to evade hackers can also be extended to cryptography. The public key system is widely used both to encrypt session keys and for digital signatures. The latter has become a target for financial fraudsters because if they steal
someone's private key they can write that person's digital signature, thereby effecting identity theft. But here too risks can be greatly reduced through proactivity. An idea being developed by IBM involves distributing private keys among a number of computers rather than just one. The secret key can then only be invoked, whether for a digital signature or to decrypt a message, with the participation of a number of computers. This makes it harder to steal the key, because all the computers involved have to be compromised rather than just one. In practice it is likely that at least one of the computers will be secure at any one time; at least, such is the theory. This development comes at a time of increasing online fraud and mounting concerns over the security of digital signatures.
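The split-key idea can be illustrated with a minimal sketch. The XOR-based n-of-n secret sharing below is an illustrative stand-in (the article does not describe IBM's actual scheme); its property is the one described above: every share is required to reconstruct the key.

```python
import secrets
from functools import reduce

def bytes_xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split a key into n shares; all n are needed to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    # The last share is the key XOR-ed with every random share, so
    # XOR-ing all n shares together recovers the original key.
    shares.append(reduce(bytes_xor, shares, key))
    return shares

def recover_key(shares: list[bytes]) -> bytes:
    # Any n-1 shares are statistically independent of the key, so an
    # attacker must compromise every participating machine.
    return reduce(bytes_xor, shares)
```

Splitting a signing key across, say, four machines then means a fraudster must compromise all four before being able to forge a signature.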
Buglife
There is also scope for being proactive when it comes to known bugs or vulnerabilities in software. One of the most celebrated examples came in July 2002, when Microsoft reported a vulnerability in its SQL Server 2000 Resolution Service, designed to allow multiple databases to run on a single machine. There was the potential to launch a buffer overflow attack, in which a hacker invokes execution of code such as a worm by overwriting legitimate pointers within an application. This can be prevented by code that prohibits any such overwriting, but Microsoft had neglected to do so within the Resolution Service. However, Microsoft did spot the vulnerability and reported it in July 2002. One security vendor, Internet Security Systems, was quick off the mark, and in September 2002 distributed an update that provided protection. Then in January 2003 came the infamous Slammer worm exploiting this loophole, breaking new ground through its rapid propagation, doubling the infected population every 9 seconds at its height. The case highlighted the potential for pre-emptive action, but also the scale of the task in distributing the protection throughout the Internet.
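The overwriting defence just described can be shown in miniature. Python is memory-safe, so the `FixedBuffer` class below is purely an illustration of the bounds check whose absence in the Resolution Service made the overflow possible:

```python
class FixedBuffer:
    """A buffer that rejects writes beyond its capacity -- the check
    that prevents a long payload from overwriting adjacent data."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = bytearray(capacity)

    def write(self, payload: bytes) -> None:
        # Bounds check: refuse any payload longer than the buffer,
        # rather than letting it spill over legitimate pointers.
        if len(payload) > self.capacity:
            raise ValueError("payload exceeds buffer capacity")
        self.data[:len(payload)] = payload
```

In C, the equivalent discipline is comparing the incoming length against the buffer size before copying; Slammer exploited exactly one missing comparison of this kind.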
Open disclosure
Another problem is that some software vendors fail to disclose vulnerabilities when they occur, through fear of adverse publicity. This leads to delay in identifying the risks, making it even harder to be proactive. It makes sense, therefore, for enterprises to buy software, where possible, only from vendors that practise an open disclosure policy. Many such disclosures can be found on the BUGTRAQ mailing list, but a number of vendors, and in some cases even suppliers of free software when there would seem to be nothing to gain by it, hide issues from their users. There is, however, a counter-argument: public dissemination of vulnerabilities actually helps and encourages potential hackers. But the feeling now is that, in general, the benefits of full disclosure outweigh the risks.
Patch it
Be that as it may, the greatest challenge for proactive security lies in responding to vulnerabilities and distributing patches or updates to plug them within ever-decreasing time windows. As we have just seen, the Slammer worm took six months to arrive, and the same was true for Nimda. This left plenty of time to create patches and warn the public, which did reduce the impact. But the window has since shortened significantly: a study by Qualys, which provides on-demand vulnerability management solutions, reported in July 2004 that 80% of exploits were enacted within 60 days of a vulnerability's announcement. In some cases now it takes just a week or two, so the processes of developing and distributing patches need to be speeded up. Ideally, service providers should implement or distribute such protection automatically.
Conclusion
Proactive security also needs to be flexible, adapting to the changing threat landscape. A good example is the case of two-factor security, in which static passwords are reinforced by tokens generating dynamic keys on the fly. This has been the gold standard for controlling internal access to computer systems within the finance sector for well over a decade, but recently there have been moves to extend it to consumer Internet banking. Some experts, however, reckon this is a waste of money, because it fails to address the different threats posed by Internet fraudsters. These include man-in-the-middle attacks, which capture the one-time key as well as the static password and replay both to the online bank. So it may be that while two-factor security will reduce fraud through the guessing or stealing of static passwords, the cost of implementing it across a customer base will outweigh the benefits, given that vulnerabilities remain. But nobody is suggesting that proactive security avoids hard decisions balancing solutions against threats and cost of implementation.
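The replay weakness described above can be sketched with a toy time-based one-time-password scheme (a simplified, illustrative construction; real deployments follow standards such as TOTP). A captured code remains valid for the rest of its time window, so a man in the middle who relays it immediately is accepted:

```python
import hashlib
import hmac

def one_time_code(secret: bytes, now: int, step: int = 30) -> str:
    """Derive a 6-digit code from a shared secret and the current
    30-second time window (simplified TOTP-style construction)."""
    window = (now // step).to_bytes(8, "big")
    digest = hmac.new(secret, window, hashlib.sha1).digest()
    return f"{int.from_bytes(digest[-4:], 'big') % 1_000_000:06d}"

def bank_accepts(secret: bytes, code: str, now: int) -> bool:
    # The bank recomputes the code for the current window and compares.
    return hmac.compare_digest(code, one_time_code(secret, now))
```

A fraudster who phishes the code at t=1000 and replays it at t=1005 is still inside the same window, so the bank accepts it: the token defeats password guessing, not real-time relaying.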
PKI
PKI
Secure messaging employing end-to-end architectures and PKIs offers message confidentiality through encryption, and message authentication through digital signatures. However, there are a number of implementation and operational issues associated with them.
One of the major criticisms is the overhead involved in certificate and key management. Typically, certificates and keys are assigned a lifetime of one to three years, after which they must be replaced (rekeyed). A current trend is to employ a rigorous semi-manual process to deploy initial certificates and keys, and to automate the ongoing management processes. For the initial issuance it is vital to confirm the identity of the key and certificate recipients, especially where messages between organizations are to be digitally signed.

Business partners must have trust in each other's PKIs to a level commensurate with the value of the information to be communicated. This may be determined by the thoroughness of the processes operated by the Trust Centre that issued the certificates, as defined in the Certificate Policy and Certification Practice Statement.
The organisation's corporate directory plays a critical role as the mechanism for publishing certificates. However, corporate directories contain a significant amount of information, which may create data-protection issues if published in full. Secondly, corporate directories usually allow wildcards in search criteria, but these are unwise for external connection, as they could be used to harvest email addresses for virus and spam attacks. Furthermore, organizations may publish certificates in different locations.
Dedicated line and routing
The underlying idea for this alternative to a fully blown PKI is to transmit messages on a path between the participating organizations that avoids the open Internet. There are two major options:

A dedicated line between the involved companies
With this option, all messages are normally transmitted without any protection of content. The level of confidentiality for intercompany traffic thus becomes the same as for intracompany traffic, and for many types of information that may be sufficient. Depending on bandwidth, network provider and end locations, however, this option may be expensive.

A VPN connection between participating companies
Such a connection normally employs the Internet, but an encrypted, secure tunnel is established at the network layer between the networks of the participants. Thus all information is protected by encryption. The investment to purchase or upgrade the network routers at the endpoints of the secure tunnel might not be insignificant.

Most of the work to implement such solutions lies in establishing the network connection, and a dedicated line may have a considerable lead time. The same applies to new network routers as endpoints of a VPN.
Gateway-to-gateway encryption using Transport Layer Security (TLS)
Internet email messages are vulnerable to eavesdropping because the Simple Mail Transfer Protocol (SMTP) does not provide encryption. To protect these messages, servers can use TLS to encrypt the data packets as they pass between the servers. With TLS, each packet of data is encrypted by the sending server and decrypted by the receiving server. TLS is already built into many messaging servers, including Microsoft Exchange and IBM Lotus Domino, so implementation may simply involve the installation of an X.509 server certificate and activation of the TLS protocol.
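The server-to-server upgrade can be sketched from the sending side with Python's standard library. The hostname below is a placeholder, and the sketch assumes the receiving server advertises STARTTLS:

```python
import smtplib
import ssl
from email.message import EmailMessage

def send_with_starttls(host: str, msg: EmailMessage, port: int = 25) -> None:
    """Open a plain SMTP session, then upgrade it to TLS before the
    message itself crosses the wire."""
    context = ssl.create_default_context()  # verifies the server certificate
    with smtplib.SMTP(host, port) as server:
        server.starttls(context=context)    # traffic encrypted from here on
        server.send_message(msg)

# e.g. send_with_starttls("mail.example.com", msg)  -- placeholder host
```

Because the upgrade happens before the message data is sent, an eavesdropper on the path between the two gateways sees only ciphertext.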
The downside is that data is protected only in transit between servers that support TLS. TLS does not protect a mes-