PROFESSIONAL COMPUTING
Transcript of PROFESSIONAL COMPUTING
PROFESSIONAL COMPUTING
THE MAGAZINE OF THE AUSTRALIAN COMPUTER SOCIETY, APRIL 1991
THE POWER TO PROTECT
EDP Auditors Association, Region 8 Conference, 1991 National Convention Centre Canberra, 15-17 May 1991
EDPAC 91
Objectives
★ to promote the discipline of Information Systems Audit as a profession
★ to share practical, technical and management knowledge about auditing information systems
★ to address the differing interests of auditors, management, computer security and the data processing community

Who should attend?
★ Information technology managers
★ Data security officers
★ Auditors and Accountants
★ Systems analysts and programmers

Topics include:
★ Data security
★ Computer abuse
★ Automated controls
★ Unix & open systems
Further information:
EDPAC 91 Conference Secretariat
Conference Solutions
PO Box 135
CURTIN ACT 2607
ph (06) 285 3000
fax (06) 285 3001
Major Sponsors
Have you noticed how small it is?
YOUR magazine relies on only two sources of revenue: an ACS monthly payment and advertising sales. Like all of the IT industry papers we are much smaller than usual, and smaller than we would like, because these sales are very hard to make at this time.
You may like to read the covering letter I am sending out with some promotional literature.
Dear (agency person),
Information Technology is rarely the reason for
the existence of a business and IT management can have a particularly hard time bidding for a share of corporate funds unless it has established a good track record of efficient contribution to the company’s success.
When times are easy, buying decisions can be made with a fair degree of confidence that the results of wrong choices will dissipate; if a really cynical view is taken, the consequences can be escaped by moving to a new job — there have always been plenty of them.
Today it’s different: IT managers’ decisions need thoughtful evaluation, serious consideration of options that have become technically difficult to understand, and demand more than ever the involvement of specialists within their departments.
Professional Computing magazine is read by the 12,000 members of the Australian Computer Society, decision makers and influencers who belong to the organisation because they recognise the importance of keeping up to date with fast moving technologies, of industry interaction to discuss and influence developments, and of a professional approach to important work. There is no formal requirement for people to join the ACS, although there are good reasons for certification of professional competence to operate in the industry; its members are people prepared to pay a non-trivial annual fee to belong to an organisation whose aims are all related to doing the IT job better.
Confidence that value will be received for an investment is one of the most effective answers to the problems of budgetary constraint, and our readers are the professionals equipped to supply that confidence. It surely follows that Professional Computing is the magazine most likely to ensure value from an advertising investment.
Yours sincerely,
A.H.Blackmore, Editor.
Do you believe this? Can you influence a decision? Help us to convince the advertisers that my message is true and you’ll get value in the form of a bigger, better magazine.
Tony Blackmore
PROFESSIONAL COMPUTING
CONTENTS: APRIL 1991
SERIOUSLY, SECURELY, COMPUTING: Data communications and computing security solutions have been driven by Defence needs. In this two-part article the issues are overviewed and the implications for Australian Government and private industry are drawn. 2
IT SYSTEMS SECURITY: A BROAD VIEW: In the context of Information Technology, general use of the term ‘security’ is vaguely defined. Everyday interpretations include maintaining privacy, the prevention of data loss or the prevention of data corruption. Regarding security as the protection of the information assets of an organisation is a definition which includes the everyday usages. 6
GROWING PERSONALLY AND LESS SECURE: The replacement of the once ubiquitous ‘dumb terminal’ by the personal workstation has created new local security risks and provided better tools for the accidental or deliberate destruction of host information. 8
OPENING MOVES: Real-time Unix discussed. 10
REGULAR COLUMNS: ACS in View 13
COVER: Clean Line Systems is a division of Chloride Power Electronics, a multi-national that is one of the world’s top three manufacturers of power support products for computers and critical electronic equipment.
Clean Line supplies and services sophisticated uninterruptible power supplies sourced from its manufacturing sites around the world, as well as Australian manufactured power line conditioners.
The Clean Line product range maintains electrical supply integrity for computers from personal to office through to large mainframes.
Meeting the needs of the computer industry for the last 10 years, Clean Line continues to offer the most comprehensive power product range and service support available throughout Australia and New Zealand.
PROFESSIONAL COMPUTING, APRIL 1991 1
Seriously, securely, computing
Data communications and computing security solutions have been driven by Defence needs. In this two-part article the issues are overviewed and the implications for Australian Government and private industry are drawn.
A.G. Kerr
WESTERN society is now well into the information age with individuals and organisations increasingly relying on Information
Technology (IT) systems. This dependence is often invisible, sitting behind the preparation of a TELECOM invoice, aiding the preparation of a chemist’s prescription or assisting the management of air traffic around congested airports. IT systems are becoming more interconnected, leading to exponential growth in computer to computer exchanges of information. At one end of a spectrum, IT system failures have the capacity to embarrass an individual and, at the other end, cause large organisations to cease operations. Failures in safety critical software have led to loss of life.
Effective control of information in many organisations has changed. Previously managed by the users of the now- replaced manual systems, control is now in the hands of IT experts who often identify more with the technology than the aims of their organisation, and who rely on vendors for advice on the strengths of a system’s security features. Weaknesses will not be so identified.
The defence environment has generally been more advanced than other areas of government and private industry in demanding adequate security, within a limited scope. The US and, more recently, European and Australian Government defence organisations have been determining security requirements that vendors and builders of systems must meet to be able to sell into those markets. Many of these requirements are applicable to non defence sectors.
This paper explores the attributes of secure IT systems and maps them onto evolving standards and requirements emanating particularly from defence arenas.
Technology advances and security
One of the earliest computer networking experiments occurred 50 years ago when G. Stibitz from AT&T Bell Laboratories installed a computer constructed of telephone relays that could be instructed to perform a complex calculation by any one of three Teletype terminals located on various floors of the Bell building (1). Today a personal computer connected via Local Area Network (LAN) or modem to interconnected Wide Area Networks (WAN) allows the rapid world-wide transmission of information. Much of this information would be textual rather than numeric, and much of it could be classed as sensitive. The security of an IT system can no longer be addressed solely within the physical boundary of an organisation.
It is necessary to reflect on where we have come from, and the rapidity with which the technology is moving beyond where we are now: security contemplated for today’s systems may be hopelessly inadequate in a few years time.
Some stages in the growth of importance of security include:
a. early long distance communication involved bonfires and smoke signals from local high spots. Weather conditions that allowed availability of the communication medium also removed any semblance of confidentiality;
b. with the arrival of the electronic and computing era, the mass movement of data onto computers, during the 1960s, was via decks of Hollerith encoded cards that were manually ferried between the user and the central computing installation. With appropriate manual procedures, and physical computer centre protection, confidentiality of information was reasonably assured. System availability was always a problem, and integrity of software and data often suffered as card decks were dropped and shuffled;
c. during the mid to late 1960s simple terminal star networks were placed
throughout organisations, using vendor specific communication facilities, often with limits on cable lengths. This was the era of centralised mainframes and star networks with very localised and identifiable network boundaries. Information security was not a concern, primarily because the bulk of the information belonged to the then traditional number crunching area, and was retained within the walls of the organisation;
d. the 1970s saw the arrival of long distance computer controlled communications, and the movement of text information onto computer data bases. Networks used proprietary protocols, with inter-working between the offerings of different vendors being very complex. The boundaries of networks were still in control of the network managers. The arrival of the local area network resolved many of the availability issues within an office organisation, allowing ease of movement of terminals and, through protocol translation, the movement of information between equipment from different vendors;
e. the 1980s was the decade of development and implementation of communications standards. Primarily driven by the International Standards Organisation’s Open System Interconnection (OSI) model, with major contributions from IEEE, CCITT and other bodies, it became relatively easy to connect together numerous heterogeneous networks including LANs, WANs, and Metropolitan Area Networks (MAN) on a variety of media and network topologies. Much work remains to be done in this arena, particularly within the higher levels of the OSI model;
f. The 1980s also saw the arrival of the personal computer and its connection as an intelligent workstation to remote mainframe hosts, and local departmental computing nodes. A personal computer can now provide more computer power and information storage on an individual’s desk than existed in centralised mainframes in the 1960s. The PC phenomenon has revolutionised the availability of computer power. It has been disastrous for the integrity and confidentiality of information;
g. already in these early years of the 1990s, individuals can have on their desk massive stores of information and easy access to other systems. Hardware, software and communications advances are removing the last technological barriers to information availability; and
h. the coming years will see implementations complying with standards at higher levels of the OSI protocol stack and a continual expansion in network bandwidths fuelled by ISDN, FDDI and other emerging technologies. The mass movement of information between locations will continue to become easier, faster and, perhaps, less controllable.
Until very recently, technological progress has single-mindedly increased computer power and connectivity, thus dramatically enhancing the capacity to share information. Within many organisations, copying corporate information onto floppy disc is easier than copying a few pages of paper. With increasing OSI led connectivity, and ease of addition of modems to personal computers, network managers are faced with the almost intractable problem of managing unbounded networks.
Information that was once in filing cabinets and under the custodial eyes of co-located owners of the information is now computer managed in a way that is analogous to leaving the filing cabinet open at a busy highway intersection, with the added attributes that changes made to the information, or copies taken, often cannot be detected. With voice communications, the party line has long since disappeared; with data communications, the LAN is a party line on which a user with the right knowledge may listen to all passing traffic.
Information security
(i) Generic Aspects
A secure or trusted IT system is one in which:
a. confidentiality is maintained by the prevention of unauthorised disclosure of information; AND
b. integrity is maintained by the prevention of unauthorised amendment or deletion of information; AND
c. availability is maintained by prevention of the unauthorised withholding of information or resources.
Attainment of appropriate Information Security (INFOSEC) requires activities in a number of specialised areas:
a. computer security (COMPUSEC);
b. communications security (COMSEC), which includes:
   i. transmission security (TRANSEC); and
   ii. cryptographic security;
c. compromising or emitted radiation security;
d. personnel security;
e. procedural security; and
f. physical security.
Substantial analysis is required to select appropriate measures as, for example, one that affords greater availability may reduce confidentiality. While the proliferation of personal computers has dramatically increased computer power availability, confidentiality and integrity of PC-managed information is generally well below the level attained in well-managed mainframe environments.
(ii) Computer Security
While mature operational procedures exist for mainframe environments, rigorous attention to security issues has yet to evolve. In specialised areas, of which defence is one, confidentiality issues have received substantial attention but in general there is a lack of engineering discipline in tackling security. This
The Guardian Is In Town. Viruses Beware!
Some experts predict that computer viruses could infect as many as 160,000 machines by 1992.
What are these rogue programs? They are software programs designed to hide in your system and do things that you do not want to happen. They can do anything from simply printing a smart remark on your screen to completely destroying all of the data and programs on your system, as well as rendering your computer system inoperative.
The designers of modern disk operating systems have made available programming hooks on which specially designed software can interface to the operating system, adding additional features where required (TSRs). Unfortunately the disk operating system has no way to tell the difference between an honest and a dishonest TSR and must presume that all TSRs are for the system’s own good.
Unscrupulous programmers trying to prove to the world their programming abilities have unleashed a barrage of Trojan Horses, Logic Bombs, Worms and the like (Viruses): Aids, Disk Killer, Friday 13th, Jerusalem-B and Stoned II, to name a few.
To date there are many Software Virus Detection programs which generally scan a suspect Disk or Diskette looking for a fingerprint (a known binary pattern that is unique to each logged and documented virus). This method has several disadvantages:
1) The computer system must be up and running with a current version of DOS, which may or may not be infected. If the system is infected then intelligent TSR viruses may intercept the scanning or reporting process.
2) The viruses to be detected must be known by the scanning software in advance, and for this to take place the virus must have existed, wreaked havoc, been studied, documented and implemented into fingerprint tables at some operator’s expense (usually many units become infected before identification).
3) Intelligent viruses have been known to change their fingerprint pattern when replicating themselves.
THE ALTERNATIVE!
A hardware solution that does not rely on DOS or fingerprint tables: hence the VIRUS GUARDIAN is born. This hardware solution (to deceptive software trickery) is in total control of all your Hard Disk Drives (HDDs) and Floppy Disk Drives (FDDs) even before DOS is loaded, protecting the system against:
1) Infected DOS bootups from both HDDs and FDDs.
2) Non-DOS infected boot loaders from FDDs (games).
3) Illegal writes to the Boot Sector, Partition Table and Directories.
4) Illegal writes to files protected by the File Filter Extension Table set up by you.
5) Additional security by Password Access to enter the computer system.
6) On the fly Keyboard Password protection.
Recommended Retail Price $349
SOLE AUSTRALIAN DISTRIBUTOR
Questwill Enterprises
“Questwill”, Crooked Lane,
North Richmond NSW 2754 AUSTRALIA
Ph (045) 71 1508 Fax (045) 71 1314
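The fingerprint scanning the advertisement criticises, searching a suspect disk image for known binary patterns, can be sketched in a few lines of Python. The virus names and byte patterns below are invented for illustration; real fingerprint tables are far larger.

```python
# Toy signature scanner: search a byte image for known binary patterns.
# The "fingerprints" below are invented examples, not real virus signatures.
FINGERPRINTS = {
    "Toy-Virus-A": bytes.fromhex("deadbeef"),
    "Toy-Virus-B": b"\x90\x90\xcd\x13",
}

def scan(image: bytes) -> list:
    """Return the names of all known fingerprints found in the image."""
    return [name for name, sig in FINGERPRINTS.items() if sig in image]

# A clean image, and one with an embedded pattern:
clean = bytes(512)
infected = bytes(100) + bytes.fromhex("deadbeef") + bytes(100)
```

The sketch also makes the advertisement's second objection concrete: `scan` can only ever report patterns already present in `FINGERPRINTS`.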
problem is compounded in the PC environment where the rapid spread of viruses indicates a lack of security awareness by users which is unfortunately coupled with poor appreciation of the extent of corporate information that is managed by PCs.
The increasing use of computers in safety critical areas is necessitating reviews of software development methodologies and leading to the introduction of formal techniques, in which mathematical modelling of software algorithms seeks to guarantee the integrity of system operations. However, no such discipline yet exists that is widely followed in the IT community.
As further corporate productivity gains are sought, IT systems are increasingly managing and communicating legally enforceable documents. The forerunner was Electronic Funds Transfer. A new entrant is Electronic Data Interchange, which seeks to substantially reduce the amount of paper that needs to flow between corporations to consummate various agreements. Thus integrity measures must provide non-repudiation facilities whereby a sender of an electronic document cannot deny having sent it, and a recipient cannot deny having received it. Where appropriate, confidentiality must be maintained, and systems must continue to provide the required level of availability.
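Non-repudiation of origin is conventionally provided by digital signatures: the sender signs a digest of the document with a private key, and anyone holding the matching public key can verify it, so the sender cannot later deny having sent it. A toy RSA-style sketch; the key numbers here are deliberately tiny and would offer no real security.

```python
import hashlib

# Toy RSA-style signature: n = 61 * 53, e is the public exponent,
# d the private one. Far too small for real use; illustration only.
N, E, D = 3233, 17, 2753

def digest(message: bytes) -> int:
    # Reduce a hash of the document into the toy key's range.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message: bytes) -> int:
    # Done by the sender, using the private exponent D.
    return pow(digest(message), D, N)

def verify(message: bytes, sig: int) -> bool:
    # Done by the recipient (or a court), using only the public exponent E.
    return pow(sig, E, N) == digest(message)

order = b"EDI purchase order (example)"
sig = sign(order)
```

Because only the sender holds D, a valid signature is evidence the sender produced the document; a delivery receipt signed by the recipient provides the matching non-repudiation of delivery.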
Many organisations view system availability from the perspective of terminal response times, employing analysts expert at tuning systems. Little planning addresses long term non-availability of IT systems. Only a few organisations, such as banks and others typically with large on-line terminal networks, have realised their total dependence on the availability and survivability of their IT systems, and have therefore developed and tested contingency plans.
(iii) Communication Links
Communication links facilitate interchange of information. While asynchronous Baudot communication is still in use in isolated areas, the move to OSI compliant communication protocol suites is apace. Resulting from excellent collaboration by the standards bodies, the OSI protocol stacks have substantially removed communication integrity concerns. Error free delivery of information in the order in which it was sent,
[Figure 1. OSI Security Services: a table mapping each service (peer entity authentication, data origin authentication, access control service, connection confidentiality, connectionless confidentiality, selective field confidentiality, traffic flow confidentiality, connection integrity with and without recovery, selective field connection integrity, connectionless integrity, selective field connectionless integrity, and non-repudiation of origin and of delivery) to the OSI layers (1 to 7) that could provide it. NOTE: Layer 7 services could be provided by an application.]
and without missing or duplicated data, is the normal expectancy.
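That expectancy, in-order delivery with no missing or duplicated data, is typically met with per-packet sequence numbers at the transport layer. A minimal receiver sketch; the packet representation here is invented for illustration.

```python
# Minimal sketch of a receiver that uses sequence numbers to deliver
# packets in order, exactly once, regardless of arrival order.
def reassemble(packets):
    """packets: iterable of (sequence_number, payload) in arrival order."""
    expected, buffered, delivered = 0, {}, []
    for seq, data in packets:
        if seq < expected or seq in buffered:
            continue                      # duplicate: discard silently
        buffered[seq] = data              # hold out-of-order arrivals
        while expected in buffered:       # deliver any contiguous run
            delivered.append(buffered.pop(expected))
            expected += 1
    return delivered

# Arrivals out of order, including a duplicate of packet 1:
arrivals = [(1, "b"), (0, "a"), (1, "b"), (3, "d"), (2, "c")]
```

A real protocol would additionally time out and request retransmission of gaps; the buffering and duplicate-discard logic is the core of the guarantee.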
Although a rearguard action, the development of standards for the OSI security services, as shown in figure 1, is progressing and will, when implemented, provide confidentiality, authentication and non-repudiation services.
As shown in figure 2, the mechanisms that will provide many of the OSI security services rely on encipherment techniques. Such techniques must provide assured protection and the cipher transform processes must be achievable at acceptable cost. While the symmetric Data Encryption Standard (DES) provides encipherment of recognised strength and is readily implemented in VLSI chip technology with acceptable transform rates, its usage, controlled by the US National Security Agency (NSA), is limited to a small number of application areas. Non-symmetric key schemes, like the public/secret key facilities of the RSA algorithms, which are required to provide non-repudiation services and allow the transmission of transaction keys, are compute intensive and not suitable for high speed transaction-based encipherment.
Successful wide-scale implementation of the OSI security services requires further encipherment research.
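The division of labour described above, a fast symmetric cipher for the bulk data with a slow public-key scheme used only to transport the transaction key, is the classic hybrid arrangement. A toy sketch, with an XOR keystream standing in for DES and deliberately tiny RSA numbers standing in for a real key pair:

```python
import hashlib, secrets

# Toy RSA key pair; far too small for real security, illustration only.
N, E = 3233, 17           # public key
D = 2753                  # matching private key

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher standing in for DES: XOR against a
    # hash-derived keystream. Encryption and decryption are identical.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def send(message: bytes):
    session_key = secrets.randbelow(N - 2) + 2      # fresh transaction key
    wrapped = pow(session_key, E, N)                # slow public-key step,
    body = keystream_xor(                           # applied to the key only
        session_key.to_bytes(2, "big"), message)    # fast step for the bulk data
    return wrapped, body

def receive(wrapped: int, body: bytes) -> bytes:
    session_key = pow(wrapped, D, N)                # unwrap with private key
    return keystream_xor(session_key.to_bytes(2, "big"), body)
```

Only the short session key ever passes through the expensive public-key transform, which is why the scheme scales to high transaction rates.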
The OSI security architecture addresses the secure transport of information between layer 7 entities, but does not address the general security issues within a computing node. The computing node is the equivalent of the Chubb safe and needs to have similarly strong security facilities to manage sensitive and classified information.
(iv) Assurance
The security features of a system are not apparent during normal operations. That they are present and working is reflected in no failures of security. There are no objective measures of performance. It is the absence of accountable events that reflects their correct performance, whereas the correct performance of applications and systems software is tested each time it is used. Thus there are problems in assuring that the security features are performing as required. When acquiring systems from vendors, it may be hard to obtain information about system deficiencies; only the system’s strengths are documented and available for perusal during the acquisition process. Achieving a secure system requires:
a. the use of an information security risk analysis and management methodology to identify the security functionality that is required; and
b. a process that establishes a level of confidence in the correct functioning of the eventually implemented security measures.
(v) Information Security Within a Defence Environment
The Australian Department of Defence’s mature procedures for the manual handling of classified material have evolved over many decades. They are based on:
a. clear marking of each document containing sensitive information with a
[Figure 2. OSI Services and Mechanisms: a table indicating, for each OSI security service listed in figure 1, the mechanisms (principally encipherment, together with digital signature, access control, data integrity, authentication exchange, traffic padding, routing control and notarisation mechanisms) that could provide it.]
classification (top secret, secret, confidential, etc.);
b. vetting and then authorising staff for access up to one of the classification levels; and
c. only allowing access to information if there is a demonstrated need to know.
Many staff are cleared to access confidential material, while only a small percentage has authorised access to top secret information. Stringent procedures surround all accesses to and the management of classified material. For example, mandatory characteristics of storage containers are specified in the Department’s security manuals, and much classified material is individually numbered and audited.
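The manual rules above, classification markings, clearance levels and need-to-know, map directly onto the mandatory access checks of trusted computer systems. A minimal sketch; the clearance ladder follows the article, but the compartment names are invented for illustration.

```python
# Mandatory access check: access requires a clearance at or above the
# document's classification AND membership in its need-to-know compartment.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def may_access(clearance: str, need_to_know: set,
               classification: str, compartment: str) -> bool:
    return (LEVELS[clearance] >= LEVELS[classification]
            and compartment in need_to_know)

# An officer cleared to SECRET with need-to-know for "logistics" only
# (a hypothetical compartment) can read confidential logistics material,
# but neither top secret material nor another compartment's files.
```

Both conditions must hold at once, mirroring the manual system where a high clearance alone never grants access.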
Communications security has been well practised, particularly during and since the second world war. On-line encipherment facilities encrypt all information on a link, thus providing protection against traffic analysis. As a corollary, these facilities also prevent the use of packet switch networks such as TELECOM’s AUSTPAC. Defence data communication networks essentially comprise point to point encrypted links connecting secure locations. Terminating equipment for links carrying more highly classified information must be installed to exacting technical standards to prevent unintended radiation of unencrypted information along power lines, earth leads and the like. For similar reasons, computer and other equipment may need to comply with standards specifying maximum permissible levels of electromagnetic radiation.
The need to maintain strict and auditable confidentiality of classified material sits uncomfortably with the current computing and communications technology that promotes availability and sharing of information.
Conclusion
This paper has skimmed across many issues in presenting a view that, as advances in Information Technology increase the ease with which information may be manipulated and communicated, greater attention must be directed at security issues both within government and industry. A top level view of the OSI security architecture has been placed in context with the present and evolving security philosophies of various government agencies. It is argued that the competitive nature of international marketing makes it imperative for Australia to develop an IT security evaluation and certification facility.
References
1. “Computing has come a long way since Stibitz first demonstrated networking in 1940”, Pacific Computing Weekly, 28 September 1990.
2. “Department of Defence Trusted Computer System Evaluation Criteria”, US Department of Defence, DoD 5200.28-STD, December 1985.
3. “Trusted Network Interpretation”, National Computer Security Centre, NCSC-TG-005, July 1987.
4. “UK Systems Security Confidence Levels”, UK Communications-Electronics Security Group, Government Communications Headquarters, CESG Computer Security Memorandum No. 3, February 1989.
5. “Evaluation Levels Manual”, UK Department of Trade and Industry Commercial Computer Security Centre, V22, Version 3.0.
6. “Security Functionality Manual”, UK Department of Trade and Industry Commercial Computer Security Centre, V21, Version 3.0.
Tony Kerr is the Principal Consultant with Brundish Pty Ltd, specialising in the strategic integration of information technology. Tel: 018 367 588
This paper was presented at the Communications 90 conference organised by the Institution of Engineers, Australia.
IT systems security: a broad view
In the context of Information Technology, the term ‘security’ is vaguely defined. Everyday interpretations include maintaining privacy, the prevention of data loss or the prevention of data corruption. Regarding security as the protection of the information assets of an organisation is a definition which includes the everyday usages.
Alan Conrad
TO ADDRESS security properly, an organisation’s needs for confidentiality, integrity and availability with respect to its information assets
must each be considered. In today’s business, IT systems, their software and the data they contain are an organisation’s lifeblood and must be protected in a planned and systematic manner.
Security however, is not another system to be added to the list of debtors, payroll and inventory control. Nor is it a feature, plugged into the feature list for version 2.0 like a graphical user interface. Security needs to be part of the organisational culture, considered at every level of the IT business function.
Security at the organisational level
The maintenance of proper security requires the analysis and management of risk, with the effect of reducing security risks to acceptable levels. While the most common approach is to implement a disaster recovery plan, that plan is often implemented without a systematic assessment of the requirements that have given rise to it. The implication of this simplistic approach is that the lack of a good match between the security requirement and the disaster recovery plan countermeasure will lead to higher than necessary costs and/or reduced effectiveness.
A preferred approach to security is to start from the top, working down through the systems: first cataloguing the information assets in need of protection, second identifying risks and vulnerabilities, and last choosing the optimum countermeasures.
Such an approach, which can be tedious, is greatly aided by a methodology which allows the process to be carried out in a systematic manner.
Assets, that is the databases, applications software, the hardware necessary to process the data and communicate, and even the accommodation requiring protection, should be catalogued. Each asset can be assigned a value to the organisation, or a loss that will be incurred through a breach in security. Losses could be incurred through the information being divulged, or the required computer system being down for an excessive time. The effect of downtime will be different on an accounts receivable system when compared to a retail point-of-sale system.
Having listed the assets, the threats and vulnerabilities should be identified and analysed with the assets to reveal the risks faced by the organisation. The benefit of examining risks in this way is that they can be directly and clearly connected to the organisation’s business functions. For example, the business effect of a communications failure between two branch offices becomes clearly visible.
The final step in the process is to choose a set of countermeasures that match the business risks. Some will be discarded because the cost outweighs the risk; others can be implemented with a clear vision of the protection of assets.
One such countermeasure might be the implementation of a disaster recovery plan. If it is, it will have clearly understood objectives and a corresponding budget.
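The three steps, catalogue assets and their values, analyse threats into risks, then keep only countermeasures whose cost is justified by the risk they remove, can be sketched as a simple calculation. All asset names, values, likelihoods and costs below are invented for illustration:

```python
# Toy risk register. Annualised exposure = asset value * incident likelihood.
# A countermeasure is kept only if the risk it removes exceeds its cost.
assets = {"customer database": 500_000, "branch link": 80_000}

risks = [  # (asset, threat, annual likelihood)
    ("customer database", "disclosure", 0.02),
    ("branch link", "communications failure", 0.25),
]

countermeasures = [  # (asset, threat, annual cost, fraction of risk removed)
    ("customer database", "disclosure", 4_000, 0.9),
    ("branch link", "communications failure", 30_000, 0.8),
]

def exposure(asset, likelihood):
    return assets[asset] * likelihood

def worthwhile(asset, threat, cost, reduction):
    matching = [l for a, t, l in risks if a == asset and t == threat]
    return any(exposure(asset, l) * reduction > cost for l in matching)

selected = [(a, t) for a, t, c, r in countermeasures if worthwhile(a, t, c, r)]
```

With these invented figures the database countermeasure is kept (it removes $9,000 of annual exposure for $4,000) while the link countermeasure is discarded because its cost outweighs the risk, exactly the trade-off described above.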
Security during systems development
These processes are designed to be applied to the existing systems in an organisation. With the development of new systems, the opportunity to weave security requirements into the fabric of the design is a luxury for the systems designer. All too often, people involved in the development of IT systems have made statements about the need to consider security early in a project’s lifecycle, without understanding either the importance of security or all of the areas to be considered.
In the past, system developers, when they have been faced with the question of security, have tended to refer back to previous projects and lift the relevant chapter without considering whether all the measures are relevant to the new
Development Phase     Security Activity
IS Strategy           Security Policy
Feasibility Study     High Level Review
Current System
Required System       Application Countermeasures
Technical Options     Technical Countermeasures
Physical Design       Procedural Countermeasures
Testing               Test Compliance
Live Operation        Change Control and Security Administration
system, or whether additional protection is required.
In certain projects the need for security may be the overriding concern. For example, it is of paramount importance that a flight control system is constantly available and accurate, and this need for security will influence the design of the system.
Also, the choice of design may affect what constitutes proper security. For example, while it might be enough to identify users by passwords if the system is centralised, a distributed system may need a stronger user identification mechanism.
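One such stronger mechanism is challenge-response: the secret itself never crosses the network. The server sends a fresh random challenge and the client returns a keyed digest of it, so a listener on the wire learns nothing reusable. A minimal sketch, assuming a secret already shared between client and server, using the standard HMAC construction:

```python
import hashlib, hmac, secrets

# Challenge-response sketch: the shared secret never travels over the wire.
def make_challenge() -> bytes:
    return secrets.token_bytes(16)        # server picks a fresh random nonce

def respond(secret: bytes, challenge: bytes) -> bytes:
    # Client proves knowledge of the secret by keying a digest of the nonce.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def check(secret: bytes, challenge: bytes, response: bytes) -> bool:
    # Server recomputes the expected response and compares in constant time.
    return hmac.compare_digest(respond(secret, challenge), response)

shared = b"per-user secret held by client and server"
nonce = make_challenge()
```

Because each challenge is fresh, a captured response cannot be replayed later, which is precisely the weakness of sending a plain password across a distributed network.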
It is important not to overlook the cost when examining the feasibility of a system. Some security mechanisms, such as encryption, can be very expensive in both capital and running costs. These costs must be included in the estimates made during a feasibility study.
Certain measures, particularly those which affect the design of an application, would be either extremely difficult or impossible to implement after the system had been developed. For example, if the audit requirements have not been properly considered, either insufficient information is recorded to allow incidents to be detected, or too much information is recorded and it becomes impossible to review an incident with the resources available.
The objective of considering security during the development of a system is to allow the system developer to identify all the proper measures needed to provide adequate security. This must be done in a consistent and comprehensive fashion, eliminating loopholes and inconsistencies that could otherwise exist in the system's design and leading to a more secure system.
A structured approach which helps the system developer identify where the potential dangers lie will lead to a clearer understanding of the need for security, and enable effort to be directed to the areas of most need.
By considering security in this manner the system developer is presented with a new view on the data, forcing him to consider not only how the information can be used, but also how it can be abused. This new view will strengthen the developer’s understanding of the system and lead to an improved design, not only in terms of security but also the general quality of the system.
If security is considered during development the review process can be quick to complete, because all the relevant people and information are gathered together in one place.
When should security be considered?

Having explained why security needs to be addressed throughout a project's lifecycle, the next question is precisely at which steps of the lifecycle security should be examined.

Security should be considered at the strategy planning stage, and this can form the basis of a corporate Security Policy. This will help define organisation-wide standards and policies, and provide a framework into which individual system security policies fit.
During the feasibility study a high-level review of the project should be conducted. This will highlight where there is a requirement for security measures which have a major effect, whether in terms of capital costs, running costs, organisational impact, or restrictions on functionality.
When the technical specification is being written, the systems developer must specify the security mechanisms that the technical support environment needs to provide. This can be a complex and technical area in which few systems developers have wide experience, but it is essential that the controls built into the application cannot be compromised.
Continued page 8
Tomorrow's solutions today... BlockIt! - DDP's Australian-developed security software provides:

BlockIt! for DOS
BlockIt! for Windows
ACF2 / RACF

...total end to end workstation security.

Head Office - Melbourne: 77 Southbank Boulevard, South Melbourne 3205. Tel: (03) 694 6711 Fax: (03) 686 9036. Int Tel: +613 694 6711 Int Fax: +613 686 9036
Sydney: 17 Atchison Street, St Leonards NSW 2065. Tel: (02) 906 1200 Fax: (02) 906 1290
Perth: Level 3, 83 The Esplanade, South Perth WA 6151. Tel: (09) 474 2455 Fax: (09) 474 1616
Products of DDP (NZ) Ltd, Level 6, Waterside House, 220 Willis Street, Wellington NZ. Tel: (04) 856 630 Fax: (04) 848 511
Distributed Data Processing Pty Ltd A.C.N. 005 511 517
Growing personally and less secure
The replacement of the once ubiquitous 'dumb terminal' by the personal workstation has created new local security risks and provided better tools for the accidental or deliberate destruction of host information.
Tom Armstrong
Broad view of security systems
From page 7
During the physical design stage one of the tasks the systems developer is faced with is to specify the procedures which the users should follow. These procedures must include all of those related to the security of the system.
The subject of testing, and the related subject of quality assurance, are complex areas in their own right; suffice it to say that the extent and rigour of the testing plans will be influenced by requirements for security.
Once the system has become operational there is a need for the security mechanisms to be maintained and administered. Further development of the system should be controlled under a change control procedure, which would ensure that changes to the system do not invalidate the security policy or increase the requirement for security without proper consideration of how such changes should be made.
Weaknesses of the current development methodologies

The approach taken by many development methods is to consider security as an isolated step, at which the systems developer tries to prepare the complete list of the countermeasures that the system needs. This step is often carried out very early in the development of a system, often before major decisions which may affect the proper level of security have been taken.
Although some methods provide guidance on the issues which need to be considered, this is not comprehensive and some important aspects of security may be overlooked. Furthermore, it does not help the difficult task of justifying the cost to management.
Although each development method can be considered as a tool-box, containing several techniques which allow the systems developer to model the information that will be processed by the system, the system development methods generally lack tools to define the need for security within the system.
In essence, the system developer needs to add the new technique of risk analysis and management to the toolbox of techniques that they already use.
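A toy sketch of what such a risk analysis technique might calculate, using the common annualised-exposure idea: expected likelihood of an incident per year multiplied by the impact per incident. The threat names, figures, and function names below are invented for illustration, not drawn from any particular methodology.

```c
#include <stdio.h>

/* Toy annualised-loss-expectancy calculation of the kind used in
   risk analysis: exposure = likelihood (incidents per year) times
   impact (cost per incident).  All figures are illustrative. */
struct threat {
    const char *name;
    double per_year;   /* expected incidents per year */
    double impact;     /* cost per incident */
};

double exposure(const struct threat *t)
{
    return t->per_year * t->impact;
}
```

Ranking threats by exposure gives the developer a defensible basis for directing countermeasure spending to the areas of most need, which is exactly the cost-justification task the text says current methods leave unsupported.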
Alan Conrad is a consultant with Technology Australia Pty Ltd. Tel: 03 841 9733
OVER the last 10 years we have seen the personal computer evolve from its humble beginnings as a stand-alone device with little memory and little storage into a corporate workstation that rivals the minicomputers of today.
It is often connected via complex local area networks to other PCs and frequently linked to super mini or mainframe equipment.
One of the MIS conundrums of the 1990s is how to bring these PC workstations back under central control while still giving end users the facilities they want or need.
While the principal subject here is security, it is worth taking a brief look at a slightly wider picture.
The rapid introduction of the PC as the corporate workstation gives rise to a number of issues not encountered when using host connected terminals. Among these are the need for increased storage space on both the PC itself and the server and the difficulties of linking these PCs to each other and into host systems. These technical difficulties pale into insignificance when we take a look at the management problems evolving.
It is no secret that management is under pressure to reduce costs while maintaining or increasing service levels. It is also no secret that there is an increasing number of users requiring access to more information from more locations. As the installed base of workstations increases, there is an ever increasing need for support, therefore more people are required.
A major concern then becomes one of human resources. The simple answer is to add more support people, but an organisation can only add so many ground or field staff before it is necessary to introduce further levels of expensive management.
It is uneconomic and logistically difficult to introduce new software and upgrade the old by means of sending out floppy disks to users; central management of these systems is difficult to implement in any form; asset management becomes a nightmare; and individual workstation security is extremely difficult to implement.
Many of the applications are critical to the conduct of a business, but without the facilities for direct management control over systems there can be no assurance of the integrity or compatibility of data so vital to a company's need to retain or increase its competitive edge.
We will now take a specific look at the problems of implementing security.

Computer security in the 90s
It can be quite difficult to define the meaning of the word security in the modern business environment. To some it means physically securing machines against theft but in this instance we will leave this definition to be dealt with by those responsible for the physical protection of premises.
At MIS level we can give the word security two primary meanings. Firstly it means reliability (protection against accidental problems), and secondly it means defensibility (protection against deliberate misuse).
In both cases, security should mean peace of mind — a confident feeling that the system is secure, that it will continue to operate without difficulty and that no unauthorised use is taking place. When security is working best, it is an invisible protector.
There is no doubt that the widespread introduction of the personal workstation has created a serious threat to the prime asset of any organisation: its information. Mainframes accessed by dedicated terminals are not nearly as vulnerable to intentional or unintentional misuse as is the personal workstation.

Mainframe computers
Host mainframe computers were accessed by terminals which were well controlled as to the information available to the operator of that terminal.
With IBM mainframe equipment, the terminal user was (and still is) isolated from the operating system by the operating system itself. In addition, most systems had a security product such as RACF or ACF2 which provided a barrier that prevented unauthorised access to the company's information database.
On top of all this, the terminal was not user-programmable, so any information obtained legitimately from the database could only be subject to pre-determined manipulation. For instance, if a user was intent on destroying information in that database, it would have to be carried out one record at a time. Major destruction or manipulation of company information therefore became a time-consuming chore which considerably increased the chances of detection.

Personal computers
The PC was for many years just what the name suggests — a personal computer. It stood alone and by virtue of its limited storage capacity it held important but comparatively limited company information.
The PC has always been subject to 'finger problems', as DOS is very flexible by nature and does not provide the controlled environment seen in the mainframe. It has always been simple for the user to access and delete or change operating system and data files, either intentionally or inadvertently.

Programmable workstations, LANs and WANs
The first major change in the order of things came with the availability of terminal emulation software, which allowed the PC to act as a terminal linked to the host. While the host security provisions were still in force, they became somewhat less effective owing to the processing power of the PC.
Record access was still controlled as with a terminal. But whereas a terminal user might need a number of keystrokes to access a single record at a time, programs available on the PC could allow the user to set up a situation in which the machine would very quickly and automatically manipulate as many records in the company database as he wished, without any user intervention.

Where are the real security problems?
One of the first concerns that comes to mind is the damage that can be caused by hackers, and as the Australian Computer Abuse Research Bureau, based at the Royal Melbourne Institute of Technology, has shown, these are very real concerns.
However, I believe that the most serious threat comes from untrained or careless users. The occasional reports of large-scale sabotage are certainly a concern, but by far the largest problem lies in protecting the user against actions that may be unintentionally initiated.
Although there are very few out to intentionally wreck a company's records, there are millions of users in a position to destroy information without any intent to cause harm.

Security is about asset protection
Decisions on computer security need to be based on a business premise centred around the protection of the organisation’s prime asset.
As we have seen, mainframe security is a well-matured environment and host terminals are very difficult to manipulate.
In the PC area whether using terminal emulation in the host environment or connected to networks, there is often a wide open (if unannounced) invitation to the user to misuse the equipment — intentionally or by accident.
Ideally the user should be allowed the least possible privilege to complete the necessary work.
To do this, a PC-based security system must be able to completely isolate
Continued page 12
PATROL - The Active UNIX Security System

UNIX security is good. C2 certification makes it better. But the best security available is when you have PATROL.

PATROL limits access to the times and terminals you designate.

User Access Controls: terminal lines, number of logins, time of access, idle time.
Security Reports: active alarms, full audit trail.

PATROL - the 24-hour, 7-day UNIX Security System
** Ask about our Secure Menu System too **

RJM Systems Pty Limited, U2-118 Talavera Road, North Ryde 2113; PO Box 1826, Macquarie Centre 2113. Tel: (02) X7X 5032 Fax: (02) X7X 5472
Real UNIX and real time merged in Modcomp's REAL/IX

ALTHOUGH conventional wisdom sees Unix and real-time as mutually exclusive, there is a clear move towards a Unix which will satisfy the needs of real-time users. Because few suppliers have been willing to accept the technical challenge of a full merger of true UNIX with true real-time performance, many interim approaches have evolved. These will become obsolete with the prospect of an effective merger.
The trend has already been endorsed by the various UNIX standards organisations, with POSIX, through its 1003.4 real-time subcommittee, leading the way. Within three to five years, the major computer manufacturers will all offer real-time capabilities as standard features of their general purpose UNIX platforms.
They’ll do this to increase their presence in traditional real-time markets, but they’ll also do it because the basic technology of real-time, when fully merged with UNIX, dramatically improves the performance of UNIX in other applications. The move away from proprietary solutions and towards open-systems based products is by far the most powerful driving force in the computer industry today. Faced with escalating software costs and the need for complex multi-vendor systems, today’s users demand standards-based solutions.
Traditional real-time markets include process control in chemical and petrochemical manufacture, utility and energy plant supervisory control and data acquisition, primary metals processing and material handling, and discrete manufacturing applications in the automotive and aerospace sectors. Other important aerospace applications include wind tunnel model monitoring, data acquisition and control of deep space telescopes, jet engine testing, telemetry, and pre-launch monitoring of space shuttle engines.
Historically, these real-time markets have been served by proprietary systems suppliers with optimised hardware and software architectures tailored to meet stringent real-time requirements. But because the advantages mentioned earlier for standards-based software are equally applicable in real-time environments, standards are evolving, speeding the merger of UNIX and real-time.

Systems designed to support real-time applications must respond to external, asynchronous events within a predictable timeframe. Whereas "real-time" is often interpreted to mean "fast", a better synonym is "predictable" or "deterministic". With this definition, it becomes clear that the technology inherent in making a system "real-time" can be of significant benefit in any application that requires a predictable response. Many of the requirements discussed below are being incorporated into the POSIX 1003.4 real-time standard, and are not yet part of standard UNIX offerings.
Real-time systems must support asynchronous I/O to maximise predictability. This goes a step further than just making I/O “fast”. It actually allows concurrent execution of other portions of the application while an I/O operation is taking place. Predictable I/O completion is important in some applications.
Applications must be able to ensure data is actually written to a file or sent to an external device, instead of waiting in a data queue or buffer. This is a key requirement for some online transaction processing (OLTP), for instance.
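On a modern POSIX system, the guarantee described above, that data is actually written rather than left sitting in a buffer, can be requested with fsync(). A minimal sketch under that assumption; the function name is invented:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a record and force it to stable storage before returning.
   fsync() blocks until the kernel reports the data has reached the
   device, so the application is not fooled by data waiting in a
   write-back buffer.  Returns 0 on success, -1 on any failure. */
int write_record_durably(const char *path, const char *record)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    size_t len = strlen(record);
    if (write(fd, record, len) != (ssize_t)len) {
        close(fd);
        return -1;
    }
    if (fsync(fd) != 0) {   /* the durability barrier */
        close(fd);
        return -1;
    }
    return close(fd);
}
```

An OLTP application would call such a routine at commit time, accepting the latency of the synchronous flush in exchange for knowing the transaction survives a crash.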
The concepts of priorities and preemption are important in real-time environments. High-priority tasks must be able to preempt execution of a lower-priority task whenever they need access to system resources. A key measure of real-time systems is their worst-case delay to the start of execution of a high-priority task, in response to either an externally generated event or the completion of a prerequisite operation.
Deterministic performance is greatly improved when a system supports preallocation of system resources such as memory or file space to high priority tasks.
Real-time timers and synchronisation mechanisms are needed to allow the scheduling of events and the predictable tracking of elapsed time. Separate processes must be able to communicate, and to reliably synchronise their execution.
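The priority and preemption facilities described above survive in today's POSIX descendants of the 1003.4 work as fixed-priority scheduling classes. A minimal sketch using sched_setscheduler(), which on most systems requires privilege (hence the EPERM fallback); the function names are invented, and this is an illustration of the mechanism rather than a recipe for any particular vendor's real-time UNIX:

```c
#include <errno.h>
#include <sched.h>
#include <string.h>

/* Report the valid fixed-priority range for the SCHED_FIFO class.
   Returns 0 on success, -1 if the class is unsupported. */
int fifo_priority_range(int *lo, int *hi)
{
    *lo = sched_get_priority_min(SCHED_FIFO);
    *hi = sched_get_priority_max(SCHED_FIFO);
    return (*lo >= 0 && *hi >= *lo) ? 0 : -1;
}

/* Ask for fixed-priority SCHED_FIFO scheduling for this process.
   Unlike the default time-sharing class, a SCHED_FIFO process runs
   until it blocks or a higher-priority process becomes runnable --
   the priorities-and-preemption behaviour described in the text.
   Returns 0 on success, or errno (typically EPERM when unprivileged). */
int request_fifo_priority(int prio)
{
    struct sched_param sp;
    memset(&sp, 0, sizeof sp);
    sp.sched_priority = prio;
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == 0)
        return 0;
    return errno;
}
```

The worst-case delay the article treats as the key figure of merit is exactly the gap between a high-priority process becoming runnable under such a policy and the kernel actually dispatching it.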
IEEE’s 1003.4 group is developing the minimum changes to the 1003.1 specification to meet these requirements while ensuring the portability of real-time programs. The committee addresses 10 topics that are deemed necessary to achieve this goal.
AT&T, OSF, and X/Open have all committed to following the 1003.4 standard in their implementations of real-time functionality. AT&T's UNIX System V, for instance, includes some of the real-time extensions, but unfortunately does not yet achieve true real-time performance for reasons that will be discussed later. The 1003.4 POSIX group has the support of all real-time vendors. A final specification should be issued in late 1990 and approved by mid-1991. For companies using UNIX that have not been immersed in the challenges of real-time prior to release of this new standard, fully debugged implementations of the real-time functionality will take at least a full year. This means that the merging of UNIX and real-time will not be complete until 1993, except by those few vendors who have been appropriately focused.
The traditional real-time markets can of course be well served by a true real-time implementation of UNIX. The technology of real-time systems design, coupled with the portability advantages of UNIX, will also serve other non-real-time markets.
UNIX is already the de facto standard operating system in many communications products. With the merging of real-time functionality into UNIX, these products will be able to adhere more closely to standards, and offer increased capability.
OLTP applications benefit from real-time techniques such as synchronous I/O, priority-based scheduling, and fixed resource allocation, including the ability to lock processes into memory for deterministic response. Real-time UNIX provides these tools, while retaining full access to the expanding range of UNIX commercial software.
Real-time UNIX supporting multiple threads of operation through a single kernel can optimise the performance of the Ada language. Accordingly, UNIX will be used to address problems in simulation, military command and control, and other government and aerospace markets that require the use of Ada.
The true integration of UNIX and real-time is the ideal approach to providing this capability, but several interim approaches have evolved over the past few years:
Many designers have simply moved real-time functionality off-line to a separate embedded real-time executive kernel. This forces a user to learn and use two different environments — one for development and another for execution.
Examples of real-time executive kernels include VxWorks, by Wind River Systems; C Executive, by JMI Software Consultants; and VRTX by Ready Systems. They are not built from standard AT&T UNIX — they are proprietary kernels with UNIX-like interfaces.
As UNIX and real-time continue to merge over the next few years, this approach will begin to lose its appeal. Even in applications that require a scaled down kernel or diskless applications, designers will be better served by a scaled down POSIX-compliant true real-time UNIX kernel than by one of these nonstandard executives.
The fully featured real-time UNIX will be used both for development purposes and as the host of a network of diskless embedded systems, which will themselves be running a scaled down subset of the same real-time UNIX.
These diskless real-time executives are not the only real-time solutions that provide UNIX interfaces without starting from the standard AT&T UNIX source tree.
An ambitious approach is to develop a full-featured real-time operating system and make it UNIX-like in terms of system calls and interfaces. This allows the design of full preemption into the kernel from the ground up, but necessarily impacts compliance with standards. Application programs developed in compliance with the SVID may not run on such systems, and binary compatibility is even more at risk. Systems based on this approach also have difficulty conforming to new releases of UNIX.
This approach has been used in LynxOS, by Lynx Real-Time Systems; Regulus, by Alcyon and SBE Inc; and RTU, by Masscomp and Concurrent Computer Corporation. Clearly, as UNIX and real-time merge, these vendors will be hard-pressed to maintain a competitive advantage over a true real-time UNIX. Adherence to standards and compatibility concerns make the best approach one that starts from a true AT&T UNIX source. The desired degree of real-time performance can lead to different implementation paths, each a subset of the next.
Adding any number of the real-time extensions postulated in the POSIX 1003.4 draft ensures standardisation, but does not provide true real-time performance. AT&T's System V.4, for instance, incorporates some 1003.4 real-time extensions, but it does not achieve deterministic real-time performance. This approach must be coupled with some form of kernel preemption to provide acceptable response improvements.
The POSIX 1003.4 draft specifies several preemption points, at which the UNIX kernel must accept an interrupt and perform a context switch to a high priority task. Various designers have adopted this philosophy, and some have added more preemption points than those specified by POSIX.
A true real-time system must provide worst case predictable and deterministic response. When preemption can only occur at certain points in the kernel, worst case response is unacceptably high — in the millisecond, rather than the achievable microsecond range.
The design goals for MODCOMP’s REAL/IX were full SVVS compliance along with binary compatibility with Motorola’s UNIX V/68, true deterministic real-time performance, and compliance with the emerging POSIX real-time standards.
MODCOMP's engineers started from the standard AT&T UNIX and Motorola V/68 source trees. Then, they implemented full preemption and countless code refinements into the kernel, along with a robust and consistent data structure protection scheme.
REAL/IX conforms fully to AT&T’s
SVID and passes the SVVS. It also retains full binary compatibility with Motorola’s V/68. The hardware-independence required to support application portability is therefore assured. In addition, REAL/IX satisfies all of the topics being addressed by the evolving POSIX 1003.4 real-time standard. It will of course conform fully with the final standard — MODCOMP is involved with the committee and has hosted some of their meetings.
REAL/IX supports preallocation of memory resources, and incorporates a fixed-priority process scheduler to ensure predictable response for high-priority tasks. I/O is also scheduled based on the priority of the originating task. The system's timers have 1/256th-second resolution, and it offers enhanced interprocess communications including shared memory, binary semaphores, and enhanced signalling.
The file subsystem has been significantly enhanced. While familiar UNIX access methods are of course supported, the enhancements are of great value for real-time applications. I/O is performed in priority-based order. Asynchronous I/O is supported, and programs can bypass the buffer cache to guarantee reliable synchronous reads or writes. Larger block sizes of up to 128k bytes provide faster throughput. As well, contiguous file space can be pre-allocated to ensure fast storage availability.
External I/O is also improved. A user program can perform direct I/O with external devices, and can be directly notified of an external hardware interrupt. The small computer systems interface (SCSI) standard is used, and MODCOMP simplified its administration — allowing the user to easily add new devices in highly configurable realtime environments. Another unique facility allows users to install new system calls and real-time device drivers into the REAL/IX kernel without access to source.
Standard UNIX benchmarks have shown the REAL/IX implementation to offer significant performance improvements in any application environment. The faster internal performance of the system speeds I/O, system call handling, and the operation of tools and compilers by a factor of two to one in many cases.
As a result, REAL/IX offers the best of both worlds. It merges standard-compatible UNIX with true real-time performance, providing context switch latencies from 80 microseconds to a worst case of 360 microseconds. This is in the range of the proprietary real-time operating systems! A full development environment is therefore completely integrated with a true real-time environment. Commercial applications can coexist with real-time.
Growing personally, less secure

From page 9
the user from the operating system, and in addition it must place that user in the same position as if a host terminal were in operation. This means that access to the organisation's information is at all times on a controlled authorisation basis. The system must address both the environmental issues and the authorisation requirements necessary to implement effective information asset protection.
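The controlled authorisation basis described above can be pictured as a least-privilege check: each user holds only the rights needed for the job, and an operation is granted only if every requested right is held. The bit layout, names, and user table below are invented purely for illustration.

```c
#include <stdio.h>

/* Sketch of a least-privilege authorisation check.  A user's rights
   are a small set of bits; an operation succeeds only if every bit
   it requests is held.  The layout is illustrative, not taken from
   any real security product. */
enum { PRIV_READ = 1, PRIV_WRITE = 2, PRIV_DELETE = 4 };

struct user {
    const char *name;
    unsigned privs;      /* bitmask of granted rights */
};

int authorised(const struct user *u, unsigned wanted)
{
    /* grant only if every requested right is held */
    return (u->privs & wanted) == wanted;
}
```

Giving a data-entry clerk only PRIV_READ and PRIV_WRITE on the files the job requires, and nothing on the operating system itself, is the "least possible privilege" principle the article recommends.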
We will now take a closer look at some of the issues that occur in local area networks.
Physically, the difference between a local area network (LAN) and more traditional forms of communication between computers is that a LAN uses one cable for connecting between several different computers. Traditionally computers had a single point-to-point connection between two machines.
In a LAN it is therefore immediately obvious that the problems of overall security are greater than with point-to-point connections. If one part of the shared cable becomes insecure, the whole LAN becomes insecure. If one cable becomes insecure in the traditional point-to-point scheme, only the connection between the two computers is affected.
Also with a LAN, every connected computer is a potential point of insecurity. From the viewpoint of reliability, every computer connected to the LAN can cause problems and therefore affect the network. From the viewpoint of data security, every computer connected to the LAN is a listening device and also a potential originator of false data.
In a traditional point-to-point system, only a fault in a central computer can affect the whole system, and terminals only receive or transmit data when actively permitted by a central machine.
A LAN still has most of the potential security problems of a traditional central computer system. In most LANs there is also a central computer, a server, which holds vital data or operating instructions required for the correct operation of the overall system.
Unauthorised access to central computers can be just as dangerous with a LAN as with a traditional system, and both kinds of system often have modem connections through which outsiders might gain access.
Therefore all traditional methods of safeguarding computer security are
equally applicable to LAN-based systems. These include passwords, user verification, encoded files, and access auditing. The difference with LANs is that the physical method of transmitting signals leads to a special new set of problems.

Ethernet and Token Ring
The leading LAN standards, Ethernet and Token Ring, have completely different methods of using a shared network cable.
In the case of Ethernet, the cable containing the network data passes by all the computers in the network. In fact in many Ethernet cabling schemes the data passes through a connection at the back of a computer even when the computer is switched off and is not connected to the network.
Ethernet’s concept of operation depends on this easy access to all points on the cable. Computers are expected to listen in to the cable to check that it’s not busy before they start transmitting data.
Computers which are active in a Token Ring LAN listen to the network to see if any frames are addressed to them, and there are some special machines which read all frames from the network even when they are not the addressees. However, computers only join the LAN after they have identified themselves, and they can be denied access rights to the network.
Token Ring LANs contain sophisticated network management features at each node. These are built-in to the Medium Access Control (MAC) protocol layer. One of its features is that each computer announces itself when it wishes to join the LAN, and if there is a central ring manager computer (known in MAC layer terminology as a Ring Parameter Server) present, then this manager computer can prevent the new computer from joining the LAN.
After a computer is on a Token Ring, it regularly sends out a message to indicate that it is active, and will also respond from the MAC layer if it is checked by the central ring manager. The ring manager can, if it wishes, send an instruction to remove a particular computer from the LAN.
Each computer also sends out messages on the LAN if it detects an error either in itself or in a neighbouring computer on the ring. These messages can be collected by the ring manager (acting as an ‘error monitor’), to be displayed to a network administrator for the purposes of fault finding. In extreme cases of LAN faults, a node on a Token Ring will remove itself automatically from the network and re-test both itself and the lobe cable, and it will then only re-join if the tests pass with no problem.
The ring manager

Token Ring therefore contains integral features in its hardware and MAC layer protocols to make a network more secure, both in terms of detecting and preventing unauthorised access and in limiting the damage caused by faults on the ring.
Some of these features are self-healing. A faulty node can automatically remove itself from the network, and all nodes have the ability to re-start without causing any harm to higher-level operations on the network if faults only last for a short space of time. In fact a Token Ring LAN stops momentarily each time a new node joins or leaves the network, but this causes no practical difficulties.
Other items require centralised monitoring and management. Serious errors on the network which cannot be self- cured need the intervention of a human network administrator — who needs information on what has been happening on the network, and tools with which to take action. Attempts to gain unauthorised access to the network can be rejected by a central ring manager, and can
[Graph: support overheads versus number of workstations, comparing networks with central controls against networks with no central controls.]
12 PROFESSIONAL COMPUTING, APRIL 1991
cause an alarm to be sent to the network administrator.
For security control, the network administrator can set up a list of authorised network nodes, and the ring manager will prevent unauthorised nodes from joining the ring. In case a computer succeeds in by-passing this network-joining check, the ring manager detects the unauthorised node and removes it later. These measures address a serious security weakness of most LANs.
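The join-control and later-removal behaviour described above can be modelled with a small sketch. This is purely illustrative: real Token Ring access control is implemented in MAC-layer hardware and firmware, and the class and method names here are invented for the example.

```python
# Illustrative model of a ring manager enforcing an authorised-node list.
# Real Token Ring management lives in the MAC layer, not application code.

class RingManager:
    def __init__(self, authorised):
        self.authorised = set(authorised)   # addresses allowed on the ring
        self.active = set()                 # nodes currently on the ring

    def request_join(self, address):
        """A node announces itself; unauthorised nodes are refused."""
        if address not in self.authorised:
            return False                    # join refused; an alarm could be raised
        self.active.add(address)
        return True

    def sweep(self):
        """Periodic check: remove any node that bypassed the join check."""
        intruders = self.active - self.authorised
        for address in intruders:
            self.active.discard(address)    # instruct the node to leave the ring
        return intruders

manager = RingManager(authorised=["00:A1", "00:B2"])
print(manager.request_join("00:A1"))   # True: on the authorised list
print(manager.request_join("00:FF"))   # False: join refused

manager.active.add("00:FF")            # simulate a node bypassing the check
print(manager.sweep())                 # the intruder is detected and removed
```

The two-stage design mirrors the article's point: the join check is the first line of defence, and the periodic sweep catches anything that slipped past it.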
As said earlier, the best kind of security is the security which is never noticed.
“If the LAN is trouble-free and you can be confident that your data will not be falling into the wrong hands, then you have ideal LAN security.”
However, this is only possible if the LAN is continuously maintaining its own integrity, and if a LAN administrator has the tools to manage the network.
With the advent of the personal computer as the corporate workstation, the need for security has expanded well beyond the view that the most dangerous threat is the hacker intent on malicious damage. Threats from hackers are as real as ever, but the larger threat comes from accidental damage to information being processed on the PC, a machine with no inbuilt protection of any sort.

Tom Armstrong is technical director of Melbourne-based Distributed Data Processing Pty Ltd. Tel: 03 615 6711.
PROFESSIONAL
COMPUTINGTHE MAGAZINE OF THE AUSTRALIAN COMPUTER SOCIETY
JUNE FEATURE
Performance measurement, development environments
and CASE.
Editorial and Advertising enquiries ph (03) 520 5555
ACS in VIEW
PRESIDENT'S MESSAGE:
From the President’s guest: Geoff Dober

IT WAS most gratifying to see a large audience when the NSW Branch hosted a breakfast to launch the PCP Directory of Endorsed Courses during February.
By now, most members and associates will have received the PCP Directory through the mail.
The PCP scheme has received favourable comments from many sources, but, not surprisingly, there has been criticism by some.
IT professionals frequently ‘inflict’ change on others as they introduce new systems. It is interesting therefore to see how we react when change is done to us.
How have members reacted to PCP? The Victorian Branch has recently conducted a member survey and many members have commented on the scheme. While the vast majority supported the scheme, some of the negative comments are reproduced below.
“Carries no weight with a large number of employers.”
“Not possible in a low budget organisation.”
“I regard it as a stunt by the Branch Committee to promote themselves.”
“Until the ACS is viewed more highly by the business community, I see no value to me to spend time and money becoming a PCP”
“My courses are all in-house and not accredited.”
“I still don’t understand it.”
[Diagram: a two-by-two matrix with self-awareness (know / don’t know) on the vertical axis and knowledge (know / don’t know) on the horizontal axis, giving the quadrants Competent, Consciously Incompetent, Unconsciously Competent and Unconsciously Incompetent.]
“You should start recognising courses from manufacturers, e.g. Oracle.”
“Needs promotion to the public like CPA.”
“Can’t afford it.”
“What about conferences and seminars not endorsed?”
“I don’t need to prove my professionalism.”
It would appear from these comments that we are just the same as the many to whom we subject our changes. We are not good at reading the documentation, as many of these comments are not correct interpretations of the scheme. But it proves something we knew all along: we have to work hard at making sure that the scheme is properly understood and, if necessary, modify it to improve it. We accept this challenge and are delighted that the editor of Professional Computing has decided to start a regular PCP section from the May issue. Send your feedback forms with comments from the Directory of Endorsed Courses and we will include your views in the PCP section.
At the launch, I explained the PCP scheme with the aid of the following diagram.
Self awareness is on the vertical axis and knowledge on the horizontal axis. It reads as follows:
If you know you know, then you are competent.
If you know you don’t know, you are consciously incompetent.
If you don’t know you know, you are unconsciously competent.
If you don’t know you don’t know, then you are unconsciously incompetent (or unconscious!).
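The four rules above reduce to a two-flag lookup; as a sketch (the function name and encoding are mine, not part of the PCP scheme):

```python
# The awareness/knowledge matrix as a lookup:
# (self_aware, knows) -> quadrant label, using the labels from the text.
QUADRANTS = {
    (True,  True):  "competent",
    (True,  False): "consciously incompetent",
    (False, True):  "unconsciously competent",
    (False, False): "unconsciously incompetent",
}

def quadrant(self_aware, knows):
    """self_aware: do you know your own state? knows: do you actually know?"""
    return QUADRANTS[(self_aware, knows)]

print(quadrant(True, False))   # consciously incompetent
```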
The PCP scheme is our conscious attempt to encourage professional members of the ACS to maintain and extend their competency so that our profession can make a significant contribution to Australia’s future.
I commend the PCP scheme to you. I will be consciously seeking to enhance my own competence and in so doing accrue the 30 hours needed to achieve PCP status.
Security in Information Systems
THE activities of Technical Committee 11 (Security and Protection in Information Processing Systems) have grown over the years since its formation in the early 1980s. However, before going into the structure of the TC it may be worthwhile to consider some major events in computer security for 1990.
In general, the growth of worldwide awareness of the need for control, management, performance and security (CMPS) in information systems continued during 1990 and appears set to take a major role in the 1990s. Security concerns encompassed both computer systems, including PCs and workstations, and associated data communications networks.
From the publication of the so-called European “White Book”, Information Technology Security Evaluation Criteria (ITSEC), to a major report by the National Research Council in the USA, to the enactment of the United Kingdom’s “Computer Misuse Act 1990”, to the growth of so-called “stealth” computer viruses, activity developed on many fronts. In particular, one of the hallmarks of a computer professional is now being seen as a sense of and dedication to the CMPS aspects of the information systems that they create and/or support. On the academic/education front, computer security studies entered the mainstream of computer science and information systems education at a number of locations throughout the world.
These, along with other aspects of information systems security, are the concern of IFIP’s Technical Committee 11 (Security and Protection in Information Processing Systems), in which Australia is represented.

USA National Research Council urges $15m Computer Security Foundation
Of major note during 1990 was the report by the National Research Council (NRC) of the USA that noted the “failure of government and industry to protect computer security” (The Asian Wall Street Journal, 7 December 1990). To combat this situation the NRC suggested the formation of a private “foundation” with an annual budget of around $15 to $20 million (US) to coordinate and oversee activities in the computer security area. The foundation would, in particular, develop and circulate security standards, evaluate systems and maintain a catalogue of computer misuse cases. Once again the report links
Professor William (Bill) J. Caelli, FACS is the Director of the Information Security Research Centre at the Queensland University of Technology and Technical Director of Eracom Pty Ltd, a company he founded in 1979. Professor Caelli is the chairman of IFIP’s TC 11 for ‘Security and Protection in Information Processing Systems.'
He has had over 25 years’ experience in the computer industry, starting with Australia’s largest company, BHP, and later with the ANU, Hewlett-Packard and Control Data, in positions in research, consultancy and marketing. He was made a Fellow of the ACS in 1982 and in 1986 was awarded the AITA award for achievement in the Information Technology Industry. He is also a member of the ACM and IEEE of the USA.
computer security (privacy, integrity and authenticity) with related problems of reliability, system “safety”, and the like. This emphasises that the discipline of information security is rapidly merging with the broad area of “software engineering”. The interesting point about this proposal is that it avoids the perceived problem of government involvement in the commercial aspects of the problem, as well as the reported “competition” between two government groups in the USA with computer security interests, the National Security Agency (NSA) and the National Institute for Standards and Technology (NIST).

“White” book versus “Orange” book
During 1990 four members of the European Community, Germany, France,
The Netherlands and the United Kingdom, released a set of proposed standards for the evaluation of the security aspects of computer systems, under the broad title of the “Information Technology Security Evaluation Criteria” or ITSEC. This suggested standard built upon earlier work done in the USA in the same area, which resulted in the publication of what has become known as the “rainbow series” of reports on computer security evaluation. The most famous of these is the so-called “Orange Book”, named after the colour of its covers and more accurately known as the “Trusted Computer Systems Evaluation Criteria” or TCSEC for short. This latter book, and the other members of the series, were published by the United States Department of Defence. (Note: these books have also been recently published by Macmillan in the United Kingdom.)
The significant point is that the “books” do not exactly match, and computer manufacturers and service providers, many based in the United States, are concerned over the presence of two separate sets of criteria by which their computer hardware and, particularly, their operating systems, may be judged. The important point is that the cost of performing the evaluation of security in the system is very large!
European manufacturers in turn claim that they cannot be subject to US requirements. They also appeared to be “unhappy” with the “rainbow series” and wanted security criteria more closely matched to commercial, rather than military, needs. The general feeling appears to be that there is now a real need for international agreement in this area, particularly since these same US companies see a substantial proportion of their business (often more than 50 per cent) coming from Europe. Meanwhile, particularly in the USA, security compliance with the “Orange Book” is being seen as a mandatory requirement for government purchasing of computer systems by 1992 (the so-called C2-by-92 program).
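The mismatch between the two sets of criteria is usually summarised as a rough correspondence table. The sketch below records the commonly quoted informal mapping between TCSEC classes and ITSEC functionality class / evaluation level pairs; it is an approximation, not an official equivalence, and the function name is mine.

```python
# Commonly quoted rough correspondence between US TCSEC ("Orange Book")
# classes and ITSEC ("White Book") functionality/evaluation pairs.
# An informal approximation only, not a formal equivalence.
TCSEC_TO_ITSEC = {
    "D":  ("-",  "E0"),
    "C1": ("F1", "E1"),
    "C2": ("F2", "E2"),
    "B1": ("F3", "E3"),
    "B2": ("F4", "E4"),
    "B3": ("F5", "E5"),
    "A1": ("F5", "E6"),   # same functionality as B3, higher evaluation rigour
}

def itsec_equivalent(tcsec_class):
    functionality, evaluation = TCSEC_TO_ITSEC[tcsec_class]
    return f"{functionality},{evaluation}"

# A vendor claiming "C2" under TCSEC would, roughly, claim "F2,E2" under ITSEC:
print(itsec_equivalent("C2"))   # F2,E2
```

This is the correspondence behind vendor claims such as the “F2,E2” example quoted later in the article, and behind the concern that a system must be evaluated twice to satisfy both schemes.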
From the viewpoint of Australia and South-East Asia this important activity may appear to have little significance since, essentially, no computer systems are fully manufactured here. By this I mean that the R&D, the system software design and implementation (eg PC/MS-DOS, OS/2, UNIX, etc.), and so on, as
well as the intellectual property are not vested in the area. Thus a full computer security validation centre (along the lines of the USA’s National Computer Security Centre or NCSC) could have little purpose besides being very expensive.
However, foreign manufacturers and importers/assemblers will still use the appropriate criteria in describing their systems, eg “our XXXX system has ‘B2’ compliance”, “our YYYY system meets ‘F2,E2’ of the ‘White Book’”, etc. Thus it becomes vital that the southern hemisphere has knowledge of these criteria and techniques and is able to check the claims made by overseas suppliers. This could represent an opportunity for collaborative effort with our neighbouring countries, along the lines of the need for independent testing facilities for compliance with Open Systems Interconnection (OSI) specifications. In particular, a leadership role could be possible in relation to small and personal computer systems.
After all, recent figures would seem to indicate that the Asia-Pacific region will have the highest growth rate in the information technology market over the next few years and that already the personal computer and workstation sector of the overall world IT market accounts for well over 50 per cent of that market.
Computer misuse act — United Kingdom
There is now a growing body of computer security and related privacy legislation developing throughout the world. In 1990, the United Kingdom enacted its “Computer Misuse Act 1990”, which complements the earlier “Data Protection Act”. The Act sets out provisions for “securing computer material against unauthorised access or modification; and for connected purposes”. It particularly addresses unlawful “access” to the computer; however, to be guilty a person must “know” at the time of illicit access that such access is “unauthorised”. The Act may also help to convict anyone found implanting “computer virus” code in a system, since a specific section (part 3.1) makes a person guilty of an offence if any action “causes an unauthorised modification of the contents of any computer.”
IFIP TC-11

IFIP’s (International Federation for Information Processing) Technical Committee 11 has been in operation for over seven years and has sponsored a series of important international conferences, the IFIP/SEC series. Members of the Technical Committee are nominated by the member societies of IFIP, such as Australia through the Australian Computer Society (ACS). The full committee meets once a year, usually in association with the worldwide IFIP/SEC conference. The next meeting will be in Brighton, England, in association with the IFIP/Sec’92 conference, entitled “Creating Confidence in Information Processing” (15-17 May, 1991). The 1990 meeting was held in Helsinki, Finland in May 1990 in conjunction with the IFIP/SEC’90 conference and exhibition, which about 300 people attended from 29 countries. The theme of that conference was “Computer Security and Information Integrity in our Changing World”, and the proceedings will soon be available from IFIP’s publisher, Elsevier/North-Holland.
The Technical Committee has a number of Working Groups (WGs) that meet to consider individual aspects of the overall problem of the security of information systems. These Groups are currently:
WG 11.1 Security Management
WG 11.2 Office Automation Security
WG 11.3 Database Security
WG 11.4 Crypto Management
WG 11.5 Systems Integrity and Control
WG 11.6 *** Discontinued ***
WG 11.7 Computer Security Law

Computer professionals active in the appropriate fields are invited to take an active part in the working groups and to contribute to their activities on an international basis. Working groups also aim to meet once a year, normally in association with a limited conference or workshop. Further information on IFIP TC-11 is available from:
Professor Bill Caelli, FACS
Information Security Research Centre
Queensland University of Technology
GPO Box 2434, BRISBANE QLD 4001
Tel: 07 864 2752 Fax: 07 221 2384
e-mail: w.caelli@qut.edu.au
ACS Examinations

The next ACS examinations in computing will be held on 23 and 30 June 1991. Successful examination candidates are eligible for entry into the professional grades of membership of the Society.
The papers being offered are:
23 June: BASIC COMPUTER CONCEPTS and PROGRAMMING TECHNIQUES
30 June: SYSTEMS ANALYSIS and DESIGN, DATA MANAGEMENT and DATA COMMUNICATIONS
Interested people should contact their local branch of the ACS or Donna Edwards at the National Office on 02 211 5855 for an application form.
The Australian Computer Society
Office bearers
President: Alan Underwood. Vice-presidents: Peter Murton, Geoff Dober. Immediate past president: John Goddard. National treasurer: Glen Heinrich. Chief executive officer: Ashley Goldsworthy.
PO Box 319, Darlinghurst NSW 2010. Telephone (02) 211 5855. Fax (02) 281 1208.
Peter Isaacson Publications(Incorporated in Victoria)
PROFESSIONAL COMPUTING
Editor: Tony Blackmore. Editor-in-chief: Peter Isaacson. Publisher: Susan Coleman. Advertising coordinator: Linda Kavan. Subscriptions: Jo Anne Birtles. Director of the Publications Board: John Hughes.
Subscriptions, orders, editorial, correspondence
Professional Computing, 45-50 Porter St, Prahran, Victoria, 3181. Telephone (03) 520 5555. Telex 30880. Fax (03) 510 3489.
Advertising
National sales manager: Peter Dwyer.
Professional Computing, an official publication of the Australian Computer Society Incorporated, is published by ACS/PI Publications, 45-50 Porter Street, Prahran, Victoria, 3181.
Opinions expressed by authors in Professional Computing are not necessarily those of the ACS or Peter Isaacson Publications.
While every care will be taken, the publishers cannot accept responsibility for articles and photographs submitted for publication.
The annual subscription is $50.
FINANCE
Commercial Lease/HP/Mortgage, Operating Lease or Rental
PCs, minis and mainframes — new & 2nd hand. Government & semi-government authorities; private & public companies.
Contact: Mr Bill Purton, (018) 373 490 or (016) 375 411
Money Resources (Vic) Pty Ltd
Licensed finance brokers
OPEN SYSTEMS
Macpherson Open Systems — Open System Specialists: Unix, Pick, Project Development
Expertise available in Progress and System Builder
Sydney (02) 416 2788 (Greg MacPherson); Melbourne (03) 866 1177 (Ashley Isserow)
HARDWARE
Suns + all brands. Buy, sell, rent. “Computers are cheaper the second time around.”
Call us now: (02) 949 7144
Computer Resellers International
Unit 25, Balgowlah Business Park, 28 Roseberry Street, Balgowlah NSW 2093. Fax (02) 949 4419
Workstation Traders
Full range of disk sub-systems: 100MB to 2GB; SCSI, SMD, IPI
Sun Microsystems approved second-user & rental organization
(02) 906-6229
Advertising conditions
Advertising accepted for publication in Professional Computing is subject to the conditions set out in the relevant rate cards, and the rules applicable to advertising laid down from time to time by the Media Council of Australia. Every advertisement is subject to the publisher’s approval. No responsibility is taken for any loss due to the failure of an advertisement to appear according to instructions.
The positioning or placing of an advertisement within the accepted classifications is at the discretion of Professional Computing except where specially instructed and agreed upon by the publisher.
Rates are based on the understanding that the monetary level ordered is used within the period of the order. Maximum period of any order is one year. Should an advertiser fail to use the total monetary level ordered, the rate will be amended to coincide with the amount of space used. The word “advertisement” will be used on copy which, in the opinion of the publisher, resembles editorial matter.
The above information is subject to change, without notification, at the discretion of the publisher.
Warranty and indemnity
ADVERTISERS and/or advertising agencies, upon and by lodging material with the publisher for publication or authorising or approving of the publication of any material, INDEMNIFY the publisher, its servants and agents against all liability claims or proceedings whatsoever arising from the publication, and without limiting the generality of the foregoing indemnify each of them in relation to defamation, slander of title, breach of copyright, infringement of trademarks or names of publication titles, unfair competition or trade practices, royalties or violation of rights or privacy, AND WARRANT that the material complies with all relevant laws and regulations and that its publication will not give rise to any rights against or liabilities in the publisher, its servants or agents and, in particular, that nothing therein is capable of being misleading or deceptive or otherwise in breach of Part V of the Trade Practices Act 1974.
Bookings: Tel (03) 520 5555, Fax (03) 521 3647
MICRO UPS (SERIES 45): 2 sizes of basic UPS for stand-alone PCs and small network servers. Protects and provides backup for 5-20 minutes. From $889*
MINI UPS (SERIES 75): Available in 0.5, 1, 1.5 & 2kVA capacities and various back-up times. Provides mains isolation for total security plus a standard network interface. From $2565*
SIDEKICK PLUS UPS: 1.5kVA transfer UPS with 7-18 minutes integral batteries, networking interface and optional RS232 comms. From $3200*
INTERACT UPS: 3 or 5kVA no-break UPS with 10-minute integral batteries, RS232 comms and network or AS400 interface. From $9875*
Available from all leading computer stores, or....
COMPUTER GRADE POWER
Customer Service Centres located at:
Melbourne: Ph 03-706 5662, Fax 03-794 9150
Sydney: Ph 02-949 6000, Fax 02-907 9802
* Recommended Retail Price (including Sales Tax).
Brisbane: Ph 07-841 0122, Fax 07-841 0139
Adelaide: Ph 08-347 3622, Fax 08-234 0339
Perth: Ph 09-445 2500, Fax 09-244 2674