DOCUMENT RESUME - ERIC

Page 1

ED 040 727    LI 002 047

AUTHOR: Stevens, Mary Elizabeth
TITLE: Research and Development in the Computer and Information Sciences. Volume 3, Overall System Design Considerations, A Selective Literature Review.
INSTITUTION: National Bureau of Standards (DOC), Washington, D.C. Center for Computer Sciences and Technology.
REPORT NO: NBS-Monogr-113-Vol-3
PUB DATE: Jun 70
NOTE: 149p.
AVAILABLE FROM: Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402 (No. C13.44:113, Vol. 3, $1.25)
EDRS PRICE: EDRS Price MF-$0.75 HC Not Available from EDRS.
DESCRIPTORS: *Computer Programs, Computers, *Information Networks, *Information Processing, Information Science, *Input Output, Literature Reviews, *Program Design, Programing Languages
IDENTIFIERS: *On Line Systems

ABSTRACT: A selective literature review of overall system design considerations in the planning of information processing systems and networks. Specific topics include but are not limited to: (1) requirements and resources analysis, (2) problems of system networking, (3) input/output and remote terminal design, (4) character sets, (5) programming problems and languages, (6) processor design considerations, (7) advanced hardware developments, (8) debugging and on-line diagnosis or instrumentation, and (9) problems of simulation. Supplemental notes and a bibliography of over 570 cited references are included. Parts 1 and 2 of this series on research and development efforts and requirements in the computer and information sciences are available as ERIC documents LI 001 944 and LI 001 945, respectively. (Author/NH)

Page 2

A UNITED STATES DEPARTMENT OF COMMERCE PUBLICATION

U.S. DEPARTMENT OF COMMERCE
National Bureau of Standards

NBS MONOGRAPH 113, VOLUME 3

Volume 3. Overall System Design Considerations

A Selective Literature Review

U.S. DEPARTMENT OF HEALTH, EDUCATION & WELFARE

OFFICE OF EDUCATION

THIS DOCUMENT HAS BEEN REPRODUCED EXACTLY AS RECEIVED FROM THE PERSON OR ORGANIZATION ORIGINATING IT. POINTS OF VIEW OR OPINIONS STATED DO NOT NECESSARILY REPRESENT OFFICIAL OFFICE OF EDUCATION POSITION OR POLICY.

Page 3

NATIONAL BUREAU OF STANDARDS

The National Bureau of Standards 1 was established by an act of Congress March 3, 1901. Today, in addition to serving as the Nation's central measurement laboratory, the Bureau is a principal focal point in the Federal Government for assuring maximum application of the physical and engineering sciences to the advancement of technology in industry and commerce. To this end the Bureau conducts research and provides central national services in four broad program areas. These are: (1) basic measurements and standards, (2) materials measurements and standards, (3) technological measurements and standards, and (4) transfer of technology.

The Bureau comprises the Institute for Basic Standards, the Institute for Materials Research, the Institute for Applied Technology, the Center for Radiation Research, the Center for Computer Sciences and Technology, and the Office for Information Programs.

THE INSTITUTE FOR BASIC STANDARDS provides the central basis within the United States of a complete and consistent system of physical measurement; coordinates that system with measurement systems of other nations; and furnishes essential services leading to accurate and uniform physical measurements throughout the Nation's scientific community, industry, and commerce. The Institute consists of an Office of Measurement Services and the following technical divisions:

Applied Mathematics--Electricity--Metrology--Mechanics--Heat--Atomic and Molecular Physics--Radio Physics 2--Radio Engineering 2--Time and Frequency 2--Astrophysics 2--Cryogenics.2

THE INSTITUTE FOR MATERIALS RESEARCH conducts materials research leading to improved methods of measurement, standards, and data on the properties of well-characterized materials needed by industry, commerce, educational institutions, and Government; develops, produces, and distributes standard reference materials; relates the physical and chemical properties of materials to their behavior and their interaction with their environments; and provides advisory and research services to other Government agencies. The Institute consists of an Office of Standard Reference Materials and the following divisions:

Analytical Chemistry--Polymers--Metallurgy--Inorganic Materials--Physical Chemistry.

THE INSTITUTE FOR APPLIED TECHNOLOGY provides technical services to promote the use of available technology and to facilitate technological innovation in industry and Government; cooperates with public and private organizations in the development of technological standards, and test methodologies; and provides advisory and research services for Federal, state, and local government agencies. The Institute consists of the following technical divisions and offices:

Engineering Standards--Weights and Measures--Invention and Innovation--Vehicle Systems Research--Product Evaluation--Building Research--Instrument Shops--Measurement Engineering--Electronic Technology--Technical Analysis.

THE CENTER FOR RADIATION RESEARCH engages in research, measurement, and application of radiation to the solution of Bureau mission problems and the problems of other agencies and institutions. The Center consists of the following divisions:

Reactor Radiation--Linac Radiation--Nuclear Radiation--Applied Radiation.

THE CENTER FOR COMPUTER SCIENCES AND TECHNOLOGY conducts research and provides technical services designed to aid Government agencies in the selection, acquisition, and effective use of automatic data processing equipment; and serves as the principal focus for the development of Federal standards for automatic data processing equipment, techniques, and computer languages. The Center consists of the following offices and divisions:

Information Processing Standards--Computer Information--Computer Services--Systems Development--Information Processing Technology.

THE OFFICE FOR INFORMATION PROGRAMS promotes optimum dissemination and accessibility of scientific information generated within NBS and other agencies of the Federal Government; promotes the development of the National Standard Reference Data System and a system of information analysis centers dealing with the broader aspects of the National Measurement System; and provides appropriate services to ensure that the NBS staff has optimum accessibility to the scientific information of the world. The Office consists of the following organizational units:

Office of Standard Reference Data--Clearinghouse for Federal Scientific and Technical Information 3--Office of Technical Information and Publications--Library--Office of Public Information--Office of International Relations.

1 Headquarters and Laboratories at Gaithersburg, Maryland, unless otherwise noted; mailing address Washington, D.C. 20234.
2 Located at Boulder, Colorado 80302.
3 Located at 5285 Port Royal Road, Springfield, Virginia 22151.

Page 4

NBS TECHNICAL PUBLICATIONS

PERIODICALS

JOURNAL OF RESEARCH reports National Bureau of Standards research and development in physics, mathematics, chemistry, and engineering. Comprehensive scientific papers give complete details of the work, including laboratory data, experimental procedures, and theoretical and mathematical analyses. Illustrated with photographs, drawings, and charts.

Published in three sections, available separately:

Physics and Chemistry

Papers of interest primarily to scientists working in these fields. This section covers a broad range of physical and chemical research, with major emphasis on standards of physical measurement, fundamental constants, and properties of matter. Issued six times a year. Annual subscription: Domestic, $9.50; foreign, $11.75*.

Mathematical Sciences

Studies and compilations designed mainly for the mathematician and theoretical physicist. Topics in mathematical statistics, theory of experiment design, numerical analysis, theoretical physics and chemistry, logical design and programming of computers and computer systems. Short numerical tables. Issued quarterly. Annual subscription: Domestic, $5.00; foreign, $6.25*.

Engineering and Instrumentation

Reporting results of interest chiefly to the engineer and the applied scientist. This section includes many of the new developments in instrumentation resulting from the Bureau's work in physical measurement, data processing, and development of test methods. It will also cover some of the work in acoustics, applied mechanics, building research, and cryogenic engineering. Issued quarterly. Annual subscription: Domestic, $5.00; foreign, $6.25*.

TECHNICAL NEWS BULLETIN

The best single source of information concerning the Bureau's research, developmental, cooperative and publication activities, this monthly publication is designed for the industry-oriented individual whose daily work involves intimate contact with science and technology -- for engineers, chemists, physicists, research managers, product-development managers, and company executives. Annual subscription: Domestic, $3.00; foreign, $4.00*.

* Difference in price is due to extra cost of foreign mailing.

Order NBS publications from:

NONPERIODICALS

Applied Mathematics Series. Mathematical tables, manuals, and studies.

Building Science Series. Research results, test methods, and performance criteria of building materials, components, systems, and structures.

Handbooks. Recommended codes of engineering and industrial practice (including safety codes) developed in cooperation with interested industries, professional organizations, and regulatory bodies.

Special Publications. Proceedings of NBS conferences, bibliographies, annual reports, wall charts, pamphlets, etc.

Monographs. Major contributions to the technical literature on various subjects related to the Bureau's scientific and technical activities.

National Standard Reference Data Series. NSRDS provides quantitative data on the physical and chemical properties of materials, compiled from the world's literature and critically evaluated.

Product Standards. Provide requirements for sizes, types, quality and methods for testing various industrial products. These standards are developed cooperatively with interested Government and industry groups and provide the basis for common understanding of product characteristics for both buyers and sellers. Their use is voluntary.

Technical Notes. This series consists of communications and reports (covering both other agency and NBS-sponsored work) of limited or transitory interest.

Federal Information Processing Standards Publications. This series is the official publication within the Federal Government for information on standards adopted and promulgated under Public Law 89-306, and Bureau of the Budget Circular A-86, entitled Standardization of Data Elements and Codes in Data Systems.

CLEARINGHOUSE

The Clearinghouse for Federal Scientific and Technical Information, operated by NBS, supplies unclassified information related to Government-generated science and technology in defense, space, atomic energy, and other national programs. For further information on Clearinghouse services, write:

Clearinghouse
U.S. Department of Commerce
Springfield, Virginia 22151

Superintendent of Documents
Government Printing Office
Washington, D.C. 20402

Page 5

UNITED STATES DEPARTMENT OF COMMERCE Maurice H. Stans, Secretary

NATIONAL BUREAU OF STANDARDS Lewis M. Branscomb, Director

Research and Development in the Computer and Information Sciences

3. Overall System Design Considerations

A Selective Literature Review

Mary Elizabeth Stevens

Center for Computer Sciences and Technology
National Bureau of Standards

Washington, D.C. 20234

"PERMISSION TO REPRODUCE THIS COPYRIGHTED MATERIAL BY MICROFICHE ONLY HAS BEEN GRANTED BY G P e TO ERIC AND ORGANIZATIONS OPERATING UNDER AGREEMENTS WITH THE U.S. OFFICE OF EDUCATION. FURTHER REPRODUCTION OUTSIDE THE ERIC SYSTEM REQUIRES PERMISSION OF THE COPYRIGHT OWNER."

National Bureau of Standards Monograph 113, Vol. 3

Nat. Bur. Stand. (U.S.) Monogr. 113, Vol. 3, 147 pages (June 1970)
CODEN: NBSMA

Issued June 1970

For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402 (Order by SD Catalog No. C 13.44:113, Vol. 3), Price $1.25

Page 6

Foreword

The Center for Computer Sciences and Technology of the National Bureau of Standards has responsibility under the authority of Public Law 89-306 (the Brooks Bill) for automatic data processing standards development, for consultation and technical assistance to Federal agencies, and for supporting research in matters relating to the use of computers in the Federal Government.

This selective literature review is one of a series intended to improve interchange of information among those engaged in research and development in the fields of the computer and information sciences. Considered in this volume are the specific areas of overall system design considerations, including the problems of requirements analysis, system networking, terminal design, character sets, programming languages, and advanced hardware developments.

Names and descriptions of specific proprietary devices and equipment have been included for the convenience of the reader, but completeness in this respect is recognized to be impossible. Certain important developments have remained proprietary or have not been reported in the open literature; thus major contributors to key developments in the field may have been omitted.

The omission of any method or device does not necessarily imply that it is considered unsuitable or unsatisfactory, nor does inclusion of descriptive material on commercially available instruments, products, programs, or processes constitute endorsement.

LEWIS M. BRANSCOMB, Director


Page 7


Contents

1. Introduction 1

2. Requirements and resources analysis 1

2.1. Requirements analysis 3

2.1.1. Clientele requirements 3

2.1.2. Information control requirements 3

2.1.3. Other system design requirements 5

2.2. Resources analyses 5

2.2.1. System modularity, configuration and reconfiguration 6

2.2.2. Safeguarding and recovery considerations 6

3. Problems of system networking 7

3.1. Network management and control requirements 7

3.2. Distribution requirements 8

3.3. Information flow requirements 8

4. Input-output, terminal design, and character sets 9

4.1. General input-output considerations 9

4.2. Keyboards and remote terminal design 10

4.3. Character set requirements 12

5. Programming problems and languages and processor design considerations 13

5.1. Programming problems and languages 13

5.1.1. Problems of very large programs and of program documentation 14

5.1.2. General-purpose programming requirements 14

5.1.3. Problem-oriented and multiple-access language requirements 16

5.1.4. Hierarchies of languages and programming theory 17

5.2. Processor and storage system design considerations 18

5.2.1. Central processor design 19

5.2.2. Parallel processing and multiprocessors 20

5.2.3. Hardware-software interdependence 20

6. Advanced hardware developments 21

6.1. Lasers, photochromics, holography, and other optoelectronic techniques 21

6.1.1. Laser technology 21

6.1.2. Photochromic media and techniques 22

6.1.3. Holographic techniques 23

6.1.4. Other optoelectronic considerations 24

6.2. Batch fabrication and integrated circuits 25

6.3. Advanced data storage developments 26

6.3.1. Main memories 26

6.3.2. High-speed, special-purpose, and associative or content-addressable memories 27

6.3.3. High-density data recording and storage techniques 28

7. Debugging, on-line diagnosis, instrumentation, and problems of simulation 29

7.1. Debugging problems 29

7.2. On-line diagnosis and instrumentation 30

7.3. Simulation 31

8. Conclusions 33

Appendix A 35

Appendix B 129

List of Figures

Figure 1. A generalized information processing system 2

Figure 2. Photochromic data reduction 23


Page 8


Research and Development in the Computer and Information Sciences

3. Overall System Design Considerations: A Selective Literature Review

Mary Elizabeth Stevens

This report, the third in a series on research and development efforts and requirements in the computer and information sciences, is concerned with a selective literature review involving overall system design considerations in the planning of information processing systems and networks. Specific topics include but are not limited to: requirements and resources analysis, problems of system networking, input/output and remote terminal design, character sets, programming problems and languages, processor design considerations, advanced hardware developments, debugging and on-line diagnosis or instrumentation, and problems of simulation. Supplemental notes and a bibliography of over 570 cited references are included.

Key words: Data recording; debugging; holography; information control; input-output; integrated circuits; lasers; memory systems; multiprocessing; networks; on-line systems; programming; simulation; storage.

1. Introduction

This is the third in a planned series of reports involving selective literature reviews of research and development requirements and areas of continuing R & D concern in the computer and information sciences and technologies. In the first report,* the background considerations and general purposes intended to be served by the series are discussed. In addition, the general plan of attack and certain caveats are outlined.**

In the first two reports in this series, we have been concerned with generalized information processing systems as shown in Figure 1; more particularly, these reports were concerned respectively with information acquisition, sensing, and input operations and with information processing, storage, and output requirements. In this report we will be concerned with some of the overall system design considerations affecting more than one of the processes shown, such as programming languages, remote terminals used both for input and output, and advanced hardware developments generally.

Affecting all of the system design requirements for specific functions of generalized information processing systems are those of hierarchies and interaction of systems and of effective access-response languages; the client, system-configuration, and system-usage considerations (especially in terms of multiple-access, time-shared systems); and those of system evaluation, including such on-going "evaluations" as debugging aids and on-line instrumented checking or monitoring facilities.

Under overall system design requirements, we are concerned with input-output capabilities and terminal display and control equipment, with processor and storage systems design, with advanced technological developments, with programming language requirements, and with problems of on-line debugging, client protection, instrumentation, and simulation.

First, however, let us consider some of the overall system design considerations involved in requirements and resources analysis and in problems of system networking.

2. Requirements and Resources Analyses

The introduction of automatic data processing techniques has not changed the kind of fact-finding, analysis, forecasting, and evaluation required for effective systems planning and implementation; it has changed the degree, particularly with respect to extent, comprehensiveness, detail in depth, and questions of multiple possible interrelationships. For example, a "single information flow" concept 2.1 becomes realizable to an extent not possible before. On the other hand, distributed 2.2 and decentralized

* Information Acquisition, Sensing, and Input: A Selective Literature Review.
** Appendix A of this report contains notes and quotations pertinent to the running text. For the convenience of the reader, notes "1.1" and "1.1" recapitulate some of the considerations discussed in the first report. Appendix B provides a bibliography of cited references.


Page 9

[Figure 1 appears here as a flow diagram; only its box labels and legend survive the scan: 1. Information Acquisition; 2. Information Sensing and Input; 3. Preprocessing; 6a. On-Line Storage; 6b. Storage; Direct Outputs; 7. Processing Client Service Requests; 9. Processing; 10. Matching Specifications; 11. Search and Selection; 12. Retrieval; 14. Outputs; 15. Use and Evaluation. Legend: solid lines, process flow; broken lines, feedback flow.]

FIGURE 1. A generalized information processing system.

Page 10


systems also become more practical and efficient because of new possibilities for automatic control of necessary interactions.

A major area of continuing R & D concern with respect to both requirements and resources analysis is that of the development of more adequate methodologies. Nevertheless, the new business on the agenda of the national information scene -- that is, the challenge of system networking -- offers new possibilities for a meshing of system design criteria that have to do with where and how the system is to be operated and with where and how it is to be used.

2.1. Requirements Analysis

Requirements analysis, as an operational sine qua non of system design, begins of course with suitable assessment of present and potential user needs. Elsewhere in this series of reports, some embarrassingly critical commentaries with respect to actual or prospective usage are selectively covered. Assuming, however, that there are definitive needs of some specifiable clientele for processing system services that can be identified, we must first attempt some quantifiable measures of what, who, when, where, and why the information-processing-system-service requests are to be honored. In particular, improved techniques of analysis with respect to clientele requirements, information control requirements, and output and cost/benefit considerations are generally desired.

2.1.1. Clientele Requirements

It is noted first that "lack of communication between the client, that is, the man who will use the system, and the system designer is the first aspect of the brainware problem." (Clapp, 1967, p. 3). Considering the potential clients as individual users of an information processing system or service, the following are among the determinations that need to be made: 2.6

1. Who are the potential users?

2. Where are they located? 2.7

3. If there are many potential users, user groups, and user communities, how do needs for information and for processing services differ among them? 2.8

4. What are the likely patterns and frequencies of usage for different types of potential clients?

5. To what extent are potential clients both motivated and trained to use the type of facilities and services proposed? 2.9

However obvious these and other requirements analysis considerations may be, a present cause of critical concern is the general lack of experimental evidence on user reaction, user behavior, and user effectiveness.2.10

2.1.2. Information Control Requirements

Detailed consideration and decision-making with respect to controls over the quality and the quantity of information input, flow, processing, storage, retrieval, and output are essential to effective system design. Davis, in a late 1967 lecture, discussed many of the multifaceted problems involved in information control in both system planning and system use. The varied aspects range from questions of information redundancy in information items to be processed and stored to those of error detection and correction with respect to an individual item record as received, processed, stored, and/or retrieved.

Among these information control requirements are: input and storage filtering and compression; quality control in the sense of the accuracy and reliability of the information to be processed in the system; questions of file integrity and the deliberate introduction of redundancy; problems of formatting, normalization, and standardization; and error detection and error correction techniques.

More particularly, Davis (1967) is concerned with problems of information control in a system with the following characteristics:

"1. It has several groups of users of differing administrative levels.

"2. The information within the system has imposed upon it varying privacy, security and/or confidentiality constraints.

"3. The information entering the system is of varying quality with respect to its substantive content; that is, it may be raw or unevaluated, it may have been subjected to a number of evaluation criteria or it may be invariant (grossly so) as standard reference data.

"4. The user audience is both local and remote.

"5. Individual users or user groups have individual access to the information contained within the system.

"6. The information within the system is multi-source information."

(Davis, 1967, p. 1-2).

We may note first the problems of controls that will govern the total amount of information that is to be received, processed, and stored in the system. These may consist of input filtering operations 2.11 as in sampling techniques applied to remote data acquisition processes 2.12 or in checking for duplications and redundancies in the file.2.13
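The duplicate-and-redundancy check described above can be illustrated with a small input filter that hashes a normalized form of each incoming record and rejects items already on file. This is a modern sketch, not any specific system surveyed in the review; the normalization rule and record format are invented for illustration.

```python
import hashlib

def normalize(record: str) -> str:
    # Collapse case and whitespace so trivial variants hash identically.
    return " ".join(record.lower().split())

def input_filter(records):
    """Yield only records not already seen, in the spirit of the
    duplication checks described in the text (illustrative only)."""
    seen = set()
    for rec in records:
        digest = hashlib.sha256(normalize(rec).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield rec

items = ["Report A", "report  a", "Report B"]
print(list(input_filter(items)))  # the second, variant spelling is filtered out
```

A real filing system would of course also check near-duplicates, which exact hashing cannot catch; that harder problem is the subject of the matching techniques discussed later in this section.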

Other information control requirements with respect to the total amount of information in the system relate to problems of physical storage access, withdrawals and replacements of items to and from the store, maintenance problems including questions of whether or not integrity of the files must be provided (i.e., a master copy of each item accessible at all times),2.14 provisions for the periodic purging of obsolete items,2.15 revisions of the file organization in accordance with changing patterns of usage,2.16 response requirements,2.17 and requirements for display of all or part of an item and/or indications of its characteristics prior to physical retrieval.2.18

Another important area of information control is that of identification and authentication of material


Page 11

entering the system, with special problems likely to be involved, for example, in the dating of reports (Croxton, 1955). As Davis (1967) also points out, the timeliness of information contained in the system depends not only on the time of its input but also upon the date or time it was recorded or reported and the date the information itself was originally acquired, including the special case of the "elastic ruler" (Birch, 1966).2.19 Another typical problem is that of transliterations and transcriptions between items or messages recorded in many different languages.2.20

A crucial area of R & D concern is that of the accuracy, integrity, and reliability of information in the system, although these questions are all too often neglected in system design and use.2.21 Again, Davis emphasizes the importance of information content controls. These may be achieved, on input, either by error-detecting checks on quantitative data or by "correctness control through 'common sense' or logical checks." (Davis, 1967, p. 10.) 2.22 Thus, the use of reliability indicators and automatic inference capabilities may provide significant advantages in improved information handling systems in the future.2.23
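The "common sense" or logical checks Davis describes amount to range and consistency tests applied to quantitative input before it is accepted. The sketch below is a minimal modern illustration; the field names and bounds are hypothetical, not drawn from Davis or any system in the review.

```python
def logical_checks(record):
    """Return a list of error messages for a data record.
    Field names and bounds are hypothetical examples of the
    'common sense' checks described in the text."""
    errors = []
    # Range check on a quantitative field.
    if not (0 <= record.get("percent_complete", 0) <= 100):
        errors.append("percent_complete out of range 0-100")
    # Consistency check between two related fields.
    if record.get("start_date") and record.get("end_date"):
        if record["end_date"] < record["start_date"]:
            errors.append("end_date precedes start_date")
    return errors

rec = {"percent_complete": 140,
       "start_date": "1967-05-01", "end_date": "1967-04-01"}
print(logical_checks(rec))  # both checks fail for this record
```

Such checks catch gross errors on input but not plausible-looking wrong values, which is why the text goes on to discuss reliability indicators and redundant encodings as well.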

One of the obvious difficulties in controlling accuracy and reliability of the information content of items in the system is that of correction and updating cycles.2.24 More commonly, however, errors affecting the accuracy and reliability of information are those of human errors in observation, recording, or transcription and those of transmission or equipment failure during communication and input. The incidence of such errors is in fact inevitable and poses a continuing challenge to the system designers which becomes increasingly severe as the systems themselves become more complex.2.25

It is to be noted, of course, that a major area of R & D concern in the communication sciences is that of information theoretic approaches to error detection, correction, and control. In terms of generalized information processing systems, however, we shall assume that advanced techniques of message encoding and decoding are available to the extent required, just as we assume adequate production quality controls in the manufacture and acceptance testing of, say, magnetic cores. Thus our concern here is with regard to the control, detection, and (where feasible) correction of errors in information content of items in an information processing system or network, regardless of whatever protective encoding measures have been employed.

It should be recognized first of all that any formulation of an information-carrying message or record is an act of reportage, whether it is performed by man or by machine. Such reportage may itself be in error (the gunshots apparently observed during riot conditions may have been backfiring from a truck, the dial indicator of a recording instrument may be out of calibration, and the like). The recording of the observation may be in error: misreading of, say, the dial indicator, transposition of digits in copying a numerical data display, and accidental or inadvertent misspellings of names are obvious examples.

With respect to errors introduced by transmission, examples of R & D requirements and progress were cited in the first report in this series ("Information Acquisition, Sensing, and Input", Section 3.4). Two further examples to be noted here include the discussion by Hickey (1966) of techniques designed to handle burst-type errors 2.26 and a report by Menkhaus (1967) on recent developments at the Bell Telephone Laboratories.2.27 For checking recording and/or transmission errors, a variety of error detection devices (such as input interlocks,2.28 parity information,2.29 check digits,2.30 hash totals,2.31 format controls and message lengths 2.32) have been widely used.2.33
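The arithmetic behind several of these devices is simple enough to sketch. The following is a modern illustration, not drawn from any system cited here; the function names and figures are invented. It shows an even-parity bit, a modulus-10 check digit, and a hash total over a batch of records.

```python
def parity_bit(bits):
    """Even-parity bit for a sequence of 0/1 values: chosen so the
    total count of 1-bits, including this bit, is even."""
    return sum(bits) % 2

def check_digit_mod10(digits):
    """Simple modulus-10 check digit: appended so that the digit sum
    of the whole field is a multiple of ten."""
    return (10 - sum(digits) % 10) % 10

def hash_total(records, field):
    """A 'hash total': a sum over a field that need not be meaningful
    in itself, kept only to detect lost or altered records."""
    return sum(r[field] for r in records)

# A received batch is accepted only if its totals match those
# computed independently at the sending end.
batch = [{"qty": 12}, {"qty": 7}, {"qty": 30}]
assert hash_total(batch, "qty") == 49

acct = [4, 0, 7, 2, 7]
d = check_digit_mod10(acct)
assert (sum(acct) + d) % 10 == 0
```

A single transposed or mistyped digit changes the digit sum, so the modulus-10 check rejects it; the hash total similarly flags a dropped or duplicated record in the batch.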

Problems introduced by alphanumeric digit transpositions or simple misspellings can often be attacked and solved by computer routines, provided that there is some sort of master authority list, or file, or the equivalent of this in terms of prior conditional matching.2.34 For example, Alberga (1967) discusses the comparative efficiency of various methods of detecting errors in character strings.
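Alberga's comparison is not reproduced here, but the general approach can be sketched in modern terms. In this illustrative sketch (the authority list, function names, and distance threshold are all invented), an input string is matched against the master list by minimum edit distance:

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of insertions, deletions,
    and substitutions. A transposition counts as two errors here."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(word, authority, max_dist=2):
    """Return the closest entry in the master authority list,
    provided it is close enough to be a plausible correction."""
    best = min(authority, key=lambda w: edit_distance(word, w))
    return best if edit_distance(word, best) <= max_dist else None

authority = ["stevens", "stephens", "stevenson"]
assert correct("stevns", authority) == "stevens"
```

The threshold matters: too loose, and distinct names collapse together; too tight, and genuine errors go uncorrected. That trade-off is precisely the kind of comparative-efficiency question the cited study addresses.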

The use of contextual information for error detection and possible correction in the case of automatic character recognition processes has been noted in a previous report in this series, that on information acquisition, sensing, and input. This is, of course, a special case of misspelling.2.35 Some of the pertinent literature references include Edwards and Chambers (1964), Thomas and Kassler (1967), and Vossler and Branston (1964). The latter investigators, in particular, suggest the use of lookup dictionaries specialized as to subject field and analysis of part-of-speech transitions.2.36

Context analysis is important, first, because for the human such capabilities enable him to predict (and therefore skim over or filter out) message redundancies and to decide, in the presence of uncertainties between alternative message readings, the most probably correct message contents when noise, errors, or omissions occur in the actual transmission of the message.2.37

Context analysis also provides means for automatic error detection and error correction in the input of text at the character level, the word level, and the level of the document itself, such as the detection of changes in terminology or the emergence of new content in a given subject field. For example, "various levels of context can be suggested, ranging from that of the characters surrounding the one in question to the more nebulous concept of the subject class of the document being read." (Thomas and Kassler, 1963, p. 5). In automatic character recognition, in particular, consideration has been given to letter digrams, trigrams, and syllable analysis approaches 2.38 as well as to dictionary lookups.
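The character-level end of this range can be illustrated with a deliberately tiny sketch. The trigram table and all names below are invented (a real recognizer would estimate frequencies from a large corpus); competing readings of an uncertain character are ranked by the plausibility of their surrounding trigrams.

```python
# Miniature letter-trigram table (illustrative counts only).
TRIGRAM_COUNTS = {"the": 500, "thc": 1, "he ": 300, "hc ": 1,
                  " th": 400}

def trigram_score(text):
    """Sum of trigram counts over a candidate reading;
    higher means more plausible as running English text."""
    return sum(TRIGRAM_COUNTS.get(text[i:i + 3], 0)
               for i in range(len(text) - 2))

def resolve(prefix, candidates, suffix):
    """Pick the candidate character whose context scores best."""
    return max(candidates,
               key=lambda c: trigram_score(prefix + c + suffix))

# The recognizer is unsure whether a smudged blob is 'e' or 'c';
# the context " th_ " strongly favors 'e'.
assert resolve(" th", "ec", " ") == "e"
```

Dictionary lookup operates at the next level up: whole candidate words, rather than trigrams, are checked against a word list, which is why Vossler and Branston's subject-specialized dictionaries help.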

Special problems, less amenable to contextual considerations, arise in the case of large files containing many names (whether of persons or of drugs, for example) which are liable to misspellings or variant spellings or which are homonymous.2.39 Information control requirements in such cases may involve the use of phonetic indexing techniques 2.40 as well as error detection and correction mechanisms.2.41
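Soundex is the best-known phonetic indexing technique of this kind. The sketch below follows the commonly described coding rules (implementations differ in details), so that variant spellings of a surname collapse to a single index key:

```python
# Consonant classes for the classic Soundex code.
SOUNDEX = {**dict.fromkeys("bfpv", "1"),
           **dict.fromkeys("cgjkqsxz", "2"),
           **dict.fromkeys("dt", "3"), "l": "4",
           **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(name):
    """Classic Soundex: initial letter plus up to three digit codes,
    with adjacent repeated codes and vowels collapsed."""
    name = name.lower()
    code = name[0].upper()
    prev = SOUNDEX.get(name[0], "")
    for ch in name[1:]:
        d = SOUNDEX.get(ch, "")
        if d and d != prev:
            code += d
        if ch not in "hw":        # 'h' and 'w' do not break a run
            prev = d
    return (code + "000")[:4]

# Variant spellings of the same surname share one index key:
assert soundex("Robert") == soundex("Rupert") == "R163"
```

A file indexed on such keys retrieves all phonetically similar names in one probe, after which finer error-detection mechanisms can discriminate among the candidates.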

Automatic inference and consistency checks may be applied to error detection and error correction as well as to identification and authentication procedures. Waldo and DeBacker (1958) give an early example as applied to chemical structure data.2.42 A man-machine interactive example has been described by North (1968).2.43 For the future, however, it can be predicted that: "Ways must be found for the machine to freely accept and use incomplete, qualitative information, to mix that information with internally-derived information, and to accept modifications as easily as the original information is accepted." (Jensen, 1967, p. 1-1).

Finally, we note that, in its broadest sense, the term "control" obviously implies the ability to predict whether a given machine procedure will or will not have a solution and whether or not a given computer program, once started running, will ever come to a halt. The field of information control may thus include the theories of automata, computability, and recursive functions, and questions of the equivalence of Turing machines to other formal models of computable processes.

2.1.3. Other System Design Requirements

Other system design considerations with respect to requirements analysis include questions of centralization or decentralization of functions and facilities, including compromises such as clusters; 2.44 questions of batch processing as against time-sharing, or mixtures of these modes; 2.45 and questions of formatting, normalization,2.46 and standardization.2.47

A final area of requirements analysis involves the questions of system design change and modification 2.48 and of system measurement.2.49 In particular, information on types of system usage by various clients provides the basis for periodic re-design of system procedures and for appropriate re-organization of files. Such feedback information may also provide the client with system statistics that enable him to tailor his interest-profile or search strategy considerations both to the available collection characteristics and to his own selection requirements. As Williams suggests,2.50 this kind of facility is particularly valuable in systems where the client himself may establish and modify the categories of items in the files that are most likely to be of interest to him.

2.2. Resources Analysis

Collateral with comprehensive analyses of potential system clienteles, their needs and requirements, their locations, and the probable workloads (both as to types and also as to throughputs required) are the necessary analyses of the resources presently or potentially available. Resources analysis typically involves considerations of manpower availabilities, technological possibilities, and alternative procedural potentialities.

The question may well be raised with respect to an obvious spectrum of R & D requirements. Certainly there will be continuing areas of R & D concern with respect to advanced hardware technologies in processor and storage system design, and in materials and techniques that are related to these requirements. Next there are problems of "software", that is, of programming techniques to take full advantage of parallel processing capabilities, associative memory accessing and organization, and multiprogrammed and multiple-access system control.

Certain requirements are obviously overriding because they permeate the total system design and because they interact with many or all of the sub-systems involved. These include the problems of comparative pay-offs between various possible assemblies of hardware and software, the questions of programming languages and of suitable hierarchies of such languages, and the problems of man-machine interaction, especially in the case of time-shared or multiple-access systems.

Similarly, the requirements for handling a variety of input and output sensing modalities and for processing more than one I/O channel in an effectively simultaneous operation clearly indicate needs for continuing research and development efforts in the design and use of parallel processing techniques, multi-processor networks, time-shared multiple-access scheduling, and multi-programming.

Hierarchies of languages are implied, ranging from those in which the remote console user speaks to the machine system in a relatively natural language (constrained to a greater or lesser degree) to those required for the highly sophisticated executive control, scheduling, file protection, accounting, monitoring, and instrumentation programs. For the future, increasing consideration needs to be given not only to hierarchies of languages for using systems, but to hierarchies of systems as well.2.51

There are, of course, concurrent hardware research, development, and effective usage requirements in all or most of these areas. Improvements in microform storage efficiency, lower per-bit information-representation costs, communication channel utilization economies, and improved quality of facsimile reproduction and transmission of items selected or retrieved are obvious examples of directly foreseeable future demands. Some of the above considerations will be discussed in later sections of this report. Here we are concerned in particular with resources analysis in terms of system modularity, configuration and reconfiguration, and with provisions for safeguarding the information to be handled in the system.


2.2.1. System Modularity, Configuration, and Reconfiguration

"x'oday, in increasingly complex information proc-essing systems, there are typically requirements forconsiderable modularity and replication of systemcomponents in order to assure reliable, dependable,and continuous operation. "52 The possibilities forthe use of parallel processing techniques are re-ceiving increased. R & D attention. Such techniquesmay be used to carry out data transfers simultane-ously with respect 2"53 to the processing operations,to provide analyses necessary to convert sequentialprocessing programs into parallel-path programsor to make allocations of system resources moreefficiently because constraints on the sequence inwhich processing operations are executed can berelaxed.255

In terms of system configuration and reconfiguration, there is a continuing question of the extent of desirable replication of input-output units and other components or sub-assemblies. This may be particularly important for multiple-access and multiple-use systems.2.56 A particularly important system configuration feature desired as a resource for large-scale information processing systems is that of open-endedness.2.57

System reconfigurations, often necessary as changing task orders are received, are particularly important in the area of shifting the system facilities for system self-checking and repair.2.58 Thus Amdahl notes that "the process of eliminating and introducing components when changing tasks is reconfiguration. The time required to reconfigure upon occurrence of a malfunction may be a critical system parameter," (Amdahl, 1965, p. 39) and Dennis and Glaser emphasize that "the ability of a system to adapt to new hardware, improved procedures and new functions without interfering with normal system operation is mandatory." (Dennis and Glaser, 1965, p. 5.)

2.2.2. Safeguarding and Recovery Considerations

A first and obvious provision for "fail-safe" (or,more realistically, "fail-softly") 2.59 operation of aninformation processing system network is that ofadequate information controls (for example, asdiscussed above) on the part of all member systemsand components in the network 2.6° This require-ment reflects, of course, the familiar ADP apho-rism of 'garbage in, garbage out'. Again, the totalsystem must be adequately protected from inad-vertent misuse, abuse, or damage on the part ofits least experienced user or its least reliable com-ponent. Users must be protected from unauthorizedaccess and exploitation by other users, and they alsomust be protected from the system itself, not onlyin the sense of equitable management, scheduling,and costing but also in the sense that systemfailures and malfunctions should not cause intoler-able delays or irretrievable losses.2.61

Tie-ins to widespread communication networks and the emergence of computer-communication networks obviously imply some degree of both modularity and replication of components, providing thereby some measure of safeguarding and recovery protection.2.62 An extensive bibliographic survey of proposed techniques for improving system reliability by providing various processes for introducing redundancy is provided by Short (1968).2.63

Protective redundancy of system components is, as we have seen, a major safeguarding provision in design for high system reliability and availability.2.64 In terms of continuing R & D concerns, however, we note the desirability of minimizing the costs of replication 2.65 and the possibilities for development of formal models that will facilitate the choice of appropriate trade-offs between risks and costs.2.66
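The flavor of such a formal model can be caricatured in a few lines (all figures below are invented for illustration): the expected cost of total loss, which falls geometrically with each independent replica, is weighed against the linear cost of replication.

```python
def expected_total_cost(replicas, p_fail, cost_per_replica, loss_cost):
    """Expected cost with n independent replicas: the loss cost is
    incurred only if all n fail (probability p_fail ** n)."""
    return replicas * cost_per_replica + (p_fail ** replicas) * loss_cost

# Choose the replication level that minimizes expected cost, given an
# (invented) 1% independent failure probability per copy, $1,000 per
# replica, and a $5,000,000 loss if every copy is destroyed.
best = min(range(1, 6),
           key=lambda n: expected_total_cost(n, p_fail=0.01,
                                             cost_per_replica=1000,
                                             loss_cost=5_000_000))
assert best == 2
```

The model's weakest assumption, independence of failures, is exactly what a common-site fire violates; hence the separate-site replication discussed below.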

Finally, there are the questions of resources analysis with respect to the safeguarding of the information in the system or network, that is, the provisions for recovery, backup, rollback, and restart or repeat of messages, records, and files.2.67 The importance of adequate recovery techniques in the event of either system failure or destruction or loss of stored data can hardly be overestimated.2.68

The lessons of the Pentagon computer installation fire, in the early days of automatic data processing operations, still indicate today that, in many situations, separate-site replication of the master files (not only of data but also often of programs) is mandatory.2.69 Otherwise, the system designer must determine whether or not the essential contents of the machine-usable master files can be recreated from preserved source data.2.70 If the file contents can be recreated, then the designer must decide in what form and on what storage media the backup source records are to be preserved.2.71

In terms of system planning and resources analysis for information processing network design, we note the following questions:

Can the network continue to provide at least minimal essential services in the case of one or more accidental or deliberate breaks in the links?

What are the minimal essential services to be maintained at fail-safe levels? To what extent will special priorities and priority re-scheduling be required?

Must dynamic re-routing of information flow be applied, or will store-and-forward with delayed re-routing techniques suffice?

There are known techniques for evaluating optimum or near-optimum paths through complex networks in the sense of efficiency (economic, workload-balancing, and throughput or timeliness considerations). Can these techniques be re-applied to the fail-safe or fail-softly requirements, or must new methods and algorithms be developed?

What are the fallback mechanisms at all levels and nodes of the system for: (a) specific failures at a particular node, (b) breaks of one or more specific link(s), (c) massive failures, such as the New York area power blackout?
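The first of these questions can at least be posed mechanically: given the network graph, test whether every node can still reach every other after specified links are cut. A minimal sketch follows (the topology and all names are invented for illustration):

```python
from collections import deque

def still_connected(nodes, links, broken):
    """True if the network remains connected after removing the
    broken links; links are undirected (a, b) pairs."""
    live = [l for l in links
            if l not in broken and l[::-1] not in broken]
    adj = {n: [] for n in nodes}
    for a, b in live:
        adj[a].append(b)
        adj[b].append(a)
    # Breadth-first search from an arbitrary start node.
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == len(nodes)

# A ring of four centers survives any single break...
ring = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
assert still_connected("ABCD", ring, broken=[("B", "C")])
# ...but not two breaks on opposite sides of the ring.
assert not still_connected("ABCD", ring,
                           broken=[("B", "C"), ("D", "A")])
```

Running such a check over every single-link and double-link failure enumerates exactly which breaks leave the network able to provide service, which is the fail-softly question in computable form.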

In general, with respect to areas of R & D concern affecting safeguarding and recovery provisions, we may conclude with Davis that "Rarely, if ever, are measurements made of the ability of the system to respond when partially destroyed or malfunctioning, of the length of time required for changing the system response to internal change in direction or to external stimuli, of the length of time necessary for a newcomer to be inserted into his assigned role in the system, of the redundancy, backup, or alternatives available at times of partial or total system destruction, and so forth. Clearly, there will be no adequately constructed system until such measures of effectiveness are understood and incorporated into system design." (Davis, 1964, p. 28).

3. Problems of System Networking

Steadily mounting evidence of the nearly inevitable development of information-processing-system networks, computer-communication utilities, and multiply-shared, machine-based data banks illuminates a major and increasingly critical area of R & D concern. In this area, the problems of "organized complexity" 3.1 are likely to be at least an order of magnitude more intractable than they are today in multiprogrammed systems, much less in those systems requiring extensive man-machine interaction.

It is probable, in each of these three fields of development, that there has been and will continue to be for some time to come: (1) inadequate requirements and resources fact-finding and analysis; 3.2 (2) inadequate tools for system design; 3.3 and (3) the utter lack of appropriate means for evaluation, in advance, of extensive (and expensive) alternatives of system design and implementation.3.4 Certainly the problems of system networking will involve those of priority scheduling and dynamic allocation and reallocation in aggravated form.3.5 Moreover, the extensive prior experience in, for example, message-switching systems is likely to be of relatively little benefit in the interactive system network.3.6

In particular, the practical problems of planning for true network systems in the areas of documentation and library services have scarcely begun to be attacked.3.7 Nevertheless, the development of computer-communications networks has begun to emerge as the result of some or all of the following factors:

(1) Requirements for data acquisition and collection from a number of remote locations.3.8

(2) Demands for services and facilities not readily available in the potential user's immediate locality.

(3) Recognized needs to share data, programs and subroutines, work loads, and system resources.3.9 In addition, various users may share the specialized facilities offered by one or more of the other members of the network.3.10

Similar requirements were considered by various major members of the aerospace industry as early as 1961, as follows:

"a, Load sharing among major computer cen-ters . .

"b. Data pick-up from remote test sites (or fromairborne tests). In some cases real-time proc-essing and retransmission of results to thetest site would be desirable.

"c. Providing access for Plant A to a computercenter at Location B. Plant A might have amedium-scale, small-scale, or no computer ofits own.

"d. Data pick-up from dispersed plants and officesfor processing and incorporation in overallreports. The dispersed points might be in thesame locality as the processing center, orpossibly as much as several thousand milesaway." (Perlman, 1961, p. 209.)

Three special areas of system network planning may be noted in particular. These are the areas of network management and control, of distribution requirements, and of information flow requirements.

3.1. Network Management and Control Requirements

Effective provisions for network management and control derive directly from the basic objectives and mission of the network to be established. First, there are questions with respect to the potential users of the system, such as the following:

1. What are the objectives of the system itself? Is it to be a public system, free and accessible to all? 3.11 Is it to serve a spectrum of clientele interests, privileges, priorities, and different levels of need-to-know? Is it subject, in the provision of its services, to constraints of national security, constitutional rights (assurance of protection of the individual citizen's right to the security, among other things, of his "papers" from unreasonable searches and seizures), laws and regulations involving penalties for violation such as "Secrecy of Communications," and copyright inhibitions?

2. What are the charging and pricing policies, if any, to be assessed against different types of service, different types of clients, and different priorities of service to the different members of the clientele? 3.12

3. What different protections may be built into the system for different contributors with varying degrees of requirements for restrictions upon access to or use of their data? 3.13

4. What are the priority, precedence, and interrupt provisions required in terms of the clientele? 3.14

Next are the questions, in terms of the potential client-market, of the location, accessibility, cost, volume of traffic, and scheduling allocations for some determinate number of remote terminals, user stations, and communication links.

Then there are the questions of the performance and technological characteristics required with respect to these terminals, stations, and links.3.15 Are the central system and the communication network both capable of handling, effectively simultaneously, the number of individual stations or links required? Does the communication system itself impose limitations on bandwidths available, data transmission rates, or the number of channels operable effectively in parallel? Are alternate transmission modes available in the event of channel usurpation or nonavailability for other reasons? Is effectively on-line responsiveness of the communication system linkages required, and if so, to what extent?

More generally, the following design and planning questions should be studied in depth if there is to be effective management and control:

"1. What is the scope of the network?a. Its geographical coverageb. Services to be provided by and to whomc. Location and facilities of participantsd. Existing capabilities availablee. Required rate of development

"2. What are the relevant software and datacharacteristics?

a. Privacy requirementsb. Accessibility and/or availability of

program servicesc. System management programs

What are the network management andcontrol requirements?

a. Standardization 116b. Membershipc. Information and program manipulationd. Feedback 3.17e. Documentationf. Cost of services

"4. What are the pertinent legal regulations andpractices?

a. FCC regulationsb. Carrier rate structurec. Common carrier used. Responsibilities for information contente. Privacy versus broadcast methodsf. Federal agency jurisdiction . . .

What are the technological constraints?

"3.

"5.

8

"6. What are the budgetary constraints andfinancially allowable rate of development?"

(Davis, 1968, p. 4- 5).The factors of geographical coverage, location

and facilities of participants, and membershippoint to some of the distribution requirements,to be considered next.

3.2. Distribution Requirements

A major area of concern with respect to distribution requirements in information processing network planning is the question of the type and extent of centralization or decentralization of the various system functions. There is first the possibility of a single master, supervisory, and control processing center linked to many geographically dispersed satellite centers (which carry out varying degrees of preprocessing and postprocessing of the information handled by the central system) and terminals. Secondly, several interconnected but independent processors may interchange control and supervisory functions as workload and other considerations demand.3.18 Still another possibility is regional centralization, such as has been recommended for a national documentation network, for example.3.19

Different compromises in network and system design to meet distribution requirements are also obviously possible.3.20 However, a variety of special problems may arise with respect to distribution requirements when some of the network functions are decentralized.3.21

Then there is the question of whether or not the network is to be physically distributed, that is, "the term 'distributed network' is best used to delineate those communications networks based on connecting each station to all adjacent stations, rather than to just a few switching points, as in a centralized network." (Baran, 1964, p. 5). This distribution requirement consideration is closely related to information flow analysis and planning, especially with respect to assurance of continuing productive operation when certain parts of the network are inoperative.3.22 It should be noted, moreover, that "solving the data base management problem has been beyond the state of the art." (Dennis, 1968, p. 373).

3.3. Information Flow Requirements

In general, it may be concluded that "to determine the correct configuration, certain basic factors must be investigated. These factors generally relate to the information flow requirements and include the following:

1. The kind of information to be transmitted through the communications network and the types of messages.


2. The number of data sources and points of distribution to be encompassed by the network and their locations.

3. The volume of information (in terms of messages and lengths of messages) which must flow among the various locations.

4. How soon the information must arrive to be useful. At what intervals the information is to be transmitted, and when. How much delay is permissible, and the penalty for delays.

5. The reliability requirements with respect to the accuracy of transmitted data or system failure, and the penalty for failure.

6. How the total system is going to grow and the rate of growth."

(Probst, 1968, p. 19).

More specifically, overall system design considerations with respect to information flow requirements typically involve calculations of average daily volume of message and data traffic, peak loads anticipated, average message length, the number of messages to be transmitted in given time intervals, total transmission time requirements, and questions of variable duty cycles for different system and network components.3.23
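Such calculations are elementary, but making them explicit is exactly the point of this requirements analysis. A sketch with invented figures (function name, traffic volumes, and line speed are all illustrative):

```python
def line_utilization(msgs_per_day, avg_msg_chars, bits_per_char,
                     line_bps, busy_seconds_per_day, peak_factor):
    """Fraction of line capacity consumed at average load, and the
    same fraction scaled by a peak-to-average traffic factor."""
    bits_per_day = msgs_per_day * avg_msg_chars * bits_per_char
    avg = bits_per_day / (line_bps * busy_seconds_per_day)
    return avg, avg * peak_factor

# 20,000 messages/day of 300 characters at 8 bits each, over a
# 2,400 bit/s line during an 8-hour day, with peaks at 3x average:
avg, peak = line_utilization(20_000, 300, 8, 2_400, 8 * 3600, 3.0)
assert avg < 1.0 < peak   # adequate on average, saturated at peak
```

The example shows why average-volume figures alone mislead: a line that is only about seventy percent loaded on average can still be saturated during peak intervals, forcing either queueing delays or additional channel capacity.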

Examples of relatively recent developments in this area include RADA (Random Access Discrete Address) techniques 3.24 and a "hot-potato" routing scheme for distributed networks.3.25 A continuing R & D challenge in terms of scientific and technical information services has been posed by Tell (1966) by analogy with the techniques of input-output economics.3.26 Of major concern is the problem of the high costs of communication facilities necessary to meet network information flow requirements.3.27

4. Input-Output, Terminal Design, and Character Sets

The area of input-output, especially for two-dimensional and even three-dimensional information processing, is currently receiving important emphasis in overall information processing system design. One reason for this, as we have seen, is the increased attention being given to remotely accessed, time-sharing, or man-machine interaction systems. In particular, as noted by Tukey and Wilk: "The issues and problems of graphical presentation in data analysis need and deserve attention from many different angles, ranging from profound psychological questions to narrow technological ones. These challenges will be deepened by the evolution of facilities for graphical real-time interaction." (Tukey and Wilk, 1966, p. 705).

4.1. General Input/Output Considerations

Since a multiplicity of input and output lines are assumed for a variety of types of information to be processed (including feedback information from users and from the system itself), development requirements with respect to both equipment and software processing operations include batching of various input units, buffering of at least some types of input (as required, for example, to provide necessary reformatting), and multiplexing of input operations. Such considerations also apply even more forcefully to interfaces between the various nodes of a network involving more than one type of participating system.

Format control is typically needed both into and out of the system, preferably under dynamic program control. The format control subsystem, by means of address storage registers or other techniques, should enable the input data itself to determine where it should go in storage, and other means of "self-addressing" should be provided without the need for elaborate or inefficient programming and related software requirements.

The overall output capability design should provide the ability to reformat conveniently and efficiently 4.4 as well as to select certain character sequences. Because of the variety of equipments needed for various tasks, provision should be made for reversal of the bit order of input and output data so that either high- or low-order bits can be processed first. In the case of displays, special provisions may be required to prevent overlapping of symbols.

Related to format control is the question of variable byte size for input and output. For the future, system design will require ASCII (American Standard Code for Information Interchange) code sorting and ordering capabilities, but in many circumstances it will also be necessary to handle collapsed subsets of ASCII and other codes, longer byte lengths such as 10- and 14-bit codes for typesetting, and even longer codes for monotype, numeric process control, data logging, and equipment control.

Analog-digital and digital-analog convertibility is needed for experimental applications in source data automation, measurements automation, map analysis, map and contour plotting, pattern processing, and the like. One example of convergent efforts in the field is provided by Ramsey and Strauss (1966), who discuss interrupt handling in the area of hybrid analog-digital computers as representative of more general on-line scheduling problems. For some of these investigations, at least virtual real-time clocks will be needed. This implies processor main frame and transfer trunks versatile enough to handle these requirements whether implemented by software or built into the hardware.

Another important requirement is for versatile and varied graphic input and output capability, including light pen, microfilm, FOSDIC-type scanning, mark-sensing, OCR (Optical Character Recognition), MICR (Magnetic Ink Character Recognition), color-code input (such as Lovibond color network), and three-dimensional probe data in (see the first report in this series), and large-vocabulary character and symbol generation, diagram retrieval, construction and reconstruction, and perspective or three-dimensional projection capabilities out (as discussed in the second report in this series). Photographic and TV-type input and output with good resolution, hard-copy reproduction capability, varying gray-scale facility, and at least the possibility of handling color input or output display techniques will be required in future system design.4.7 Audio input-output capabilities should include dataphone, acoustic signal inputs, and voice, with speech compression on output, requiring controlled timing of "bursts" or "slices."

In many system design situations, we should be able to switch peripheral equipment configurations around for special purposes, and we may need to have multiple access to various types of peripheral devices simultaneously during the same processing run, e.g., to be able to shift between character recognition and graphic scanning tasks for input of material where text and graphics are intermixed.

Related to these problems of input, output, and on-line responsiveness (especially for clients involved in problem-solving applications) is the concept of graphical communication generally. This presupposes, first, a suitable language for the exchange of both pictorial data and control information between the designer and the machine, and secondly, provisions for the dynamic manipulation of data and controls.

Recent programming techniques under investigation for the display of two-dimensional structure information are exemplified in work by Forgie,4.10 by Hagan et al. (1968),4.11 and at the University of Michigan (Sibley et al., 1968).4.12 Then there is the DIALOG programming system developed at the IIT Research Institute in Chicago for graphical, textual, and numeric data input and display, online and off-line programming facilities, and hard-copy options (Cameron et al., 1967). A special feature is a character-by-character man-machine interaction mode, so that the programmer may use only those input symbols that are syntactically correct. For more efficient machine use in production-type operations, a DIALOG compiler for the IBM 7094 has been prepared following the "Transmographer" of McClure 4.13 (McClure, 1965).

Then we note that "in the area of displays, determining the information to be displayed and generating the procedures for retrieval and formatting of the information are the difficult problems." (Kroger, 1965, p. 269). Further, as of today, "too many systems are designed to display all the data, and not to display only the data needed for the decisions the system is called upon to make." (Fubini, 1965, p. 2).

In general, the client of the on-line, graphical input-output, and problem-solving system needs convenient means for the input of his initial data, effective control of machine processing operations, effectively instantaneous system response, displays of results that are both responsive to his needs and also geared to his convenience, and handy means for the permanent recording of the decisions and design choices he has made.

With respect to these client desiderata, the identifiable R & D requirements relate to keyboard function key overlay design; 4.14 improvements in both problem-oriented and client-oriented languages for man-machine communication and interaction; 4.15 fast, high-resolution, flicker-free display generation; 4.16 ability to selectively emphasize various areas of display; 4.17 further development of the combination of static displays (such as maps) with computer-controlled dynamic displays; and rapid responsivity of the system to feedback from the client.

Since remote, reactive terminals are an increasingly important factor in systems involving dynamic man-machine interaction, the question of design of remote inquiry stations and consoles necessarily raises problems of human engineering for whose solution there is inadequate experimental, cost-benefit, and motivational data 4.19 available to date. Also involved are questions of acceptance and interactive response by the client to feedback outputs from the system, including requests for further information or additional inputs and display of re-processed results.

4.2. Keyboards and Remote Terminal Design

Where graphic input and output facilities are to be available to on-line users, there are unresolved questions of interrelated and interlocking system and human factors. How clumsy are light pens or pointers to use? Are they heavy or difficult to aim? 4.20 Should light-pen inputs be displayed a little to the left or to the right of the actual light-pen location, so that the active part of the input is not blocked from view by the moving light-pen itself? 4.21 Can flicker rate be kept to a tolerable level without undue and costly regeneration demands on a multiply-accessed central processor used by many clients, or must the remote terminal have storage and display regeneration capabilities at added cost and design complexity? 4.22 For graphic input and display, should the input surface be flat, upright, or slanted? 4.23

It has been pointed out, in the case of the recent development of a solid state keyboard, that "the requirements of today's keyboards are becoming more complex. Increased reliability and more flexibility to meet specialized demands are essential. Remote terminals are quite often operated by relatively untrained personnel, and the keyboard must be capable of error-free operation for these people. At the same time it should be capable of high thru-put for the trained operator, as will be used on a key tape machine.

"Some of the limitations of existing keyboards are:

Mechanical interlocks which reduce operator speed.

Excessive service (increasingly important for remote terminals).

Contact bounce and wear of mechanical switches.

Non-flexible format." (Vorthmann and Maupin, 1969, p. 149).

For automatic typographic composition applications, it is emphasized that "the application of computers to typesetting only emphasizes the scope and the need for a radical re-thinking on keyset design," and that, although "one may imagine that the keyboard is a relatively simple piece of equipment . . . in fact, it presents a unique combination of mechanical, electrical and human problems." (Boyd, 1965, p. 152). Current R & D concerns with respect to keyboard redesign involve consideration of principles of motion study as applied to key positioning, key shape, key pressures required, and the like.4.24

Nevertheless, it is to be emphasized that "input-output devices are still largely the result of an ingenious engineering development and a somewhat casual and often belated attention to operator, system attachment, and programming problems" and that ". . . no input-output device, including all terminals combined, has yet received the careful and competent human factors study afforded the cockpit of a military aircraft." 4.25 (Brooks, 1965, p. 89).

Beyond this are questions of design requirements for dynamic on-line display. Thus we are concerned with requirements for improved remote input console and terminal design.4.26 Relatively recent input-output terminal developments, especially for remote consoles or dynamic man-machine interaction, have been marked by improved potentialities for two- and even three-dimensional data processing and by further investigation of prospects for color, as discussed, for example, by Rosa (1965),4.27 Mahan (1968),4.28 and Arora et al. (1967), among others. Van Dam (1966) has provided an informative state-of-the-art review of such scanning and input/output techniques. Vlahos (1965) considers human factor elements in three-dimensional display. Ophir et al. (1969) discuss computer-generated stereographic displays, on-line.4.29

In the area of input-output engineering and system design, what is needed for more effective man-machine communication and interaction will include the provision for remote consoles that are truly convenient for client use. Hardware, software, and behavioral factors are variously interrelated in terms of desired display and console improvements.4.30

The desirable design specifications for remote inquiry stations, consoles, terminals, and display devices as discussed in the literature variously include: economy, dependability, and small enough size for convenient personal use.4.31 Some misgivings continue to be expressed on this score. Thus, it is reported that Project Intrex will consider the design of much more satisfactory small consoles,4.32 and Wagner and Granholm warn that "at the moment, it is difficult to predict whether remote personal consoles can be economically justified to the same extent that technological advances will make them feasible." (1965, p. 288). Cost certainly appears to be a major factor in the limited nature of the use that has been made of remote terminals to date.4.33

A second requirement is for the provision of adequate buffering facilities including, for at least some recent systems, capabilities for local display maintenance.4.34 From the hardware standpoint, it is noted that "the major improvements in displays will be in cost and in the determination and implementation of the proper functions from the user standpoint. The cathode-ray tube will probably be dominant as the visual transducer for console displays through 1970, but there are several new techniques for flat-panel, digitally addressed displays presently under development that may eventually replace the CRT in many applications. The advances in memory and logic component technologies will permit significant improvements in the logic and memory portions of console displays." (Hobbs, 1966, p. 37).

Other features that are desirable may include a capability for relatively persistent display, for example, up to several hours or several days,4.35 and the capability, as in the Grafacon 1010 (a commercially available version of the RAND Tablet), for the tracing of material such as maps or charts to be superimposed on the input surface, or as in the Sylvania Data Tablet ET 1, which also allows a modest third-axis capability.

As in the case of system outputs generally, hard-copy options are often desired through the terminal device. For example, the console "should have a local storage device on which the user can build up a file of the pieces of information he is retrieving, so that he can go back and forth in referring to it. It should have means of giving him low-cost hard copy of selected material he has been shown and temporarily stored." (King, 1965, p. 92).

The use of markers and identifiers for on-line text editing purposes should be simplified or eliminated to the maximum extent possible. If, for example, elaborate line and word sequence identifications must be used both by the machine system and by the client, then the virtues of machine processing for this type of application will be largely lost. Such systems should not only be easy to use, but easy to learn how to use.


An important question to be asked by the system designer is whether the output responses will be of the types and in the formats that the client will expect to receive. It is noted in particular that "the closer the correspondence of the computer output is to the methods of presenting textual and tabular material familiar to the user, the greater his information absorption rate will be." (Morenoff and McLean, 1967, p. 20). Thus, in engineering applications, for example, the client may want results to be displayed in a familiar format such as a Nyquist plot.4.36 Similarly, in operations on files or data banks, the user should be able to structure and sequence files and subfiles for display and selection to suit his own purposes.4.37

For effective online operation, the system should provide responses within the reading rates of typical users, and with good resolution, little or no flicker, and with both upper and lower case.4.38 A somewhat more specific and stringent list of remote terminal desiderata is provided by Licklider, with particular reference to the requirements of multiple access to the body of recorded knowledge. His desired features include, but are not limited to: color, or at least gradations of gray scale; 4.39 terse or abbreviated modes of expression to the machine with full or "debreviated" response; 4.40 selective erasability of the display by either program or user command; 4.41 and capabilities such as the following: "Shortly thereafter, the system tells me: 'Response too extensive to fit on screen. Do you wish short version, multipage display, or typewriter-only display?'." (Licklider, 1965, p. 50). Another design criterion affected by the factor of client convenience is that of the extent of display on the console face of meta-information.4.42
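Licklider's terse-input, "debreviated"-response exchange can be sketched as a simple abbreviation table; the command set and names below are purely illustrative, not from the source:

```python
# Minimal sketch of the "debreviated" response idea: the user types an
# abbreviated command, and the system answers in fully spelled-out form.
ABBREVIATIONS = {
    "dsp": "display",
    "ret": "retrieve",
    "ers": "erase",
}

def debreviate(terse_command: str) -> str:
    """Expand each abbreviated token into its full form for the response."""
    words = terse_command.split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

print(debreviate("ret file"))  # expands "ret" but leaves "file" unchanged
```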

Continuing needs for technological developments have also been indicated for improved terminal and output display design. Examples include the development of new, fast phosphors and other materials,4.43 use of analog predictive circuitry to improve tracking performance,4.44 and variable sequencing of processor operations in computation and display.4.45 A number of advanced techniques are also being applied to large-screen displays,4.46 although some continuing R & D difficulties are to be noted.4.47 Multiplexing of graphic display devices may also be required.4.48

Returning, however, to the human behavioral factors in input-output and terminal design, we note that man-machine relationships require further investigation both from the standpoint of human engineering principles and also from that of the attitudes of clients and users; 4.49 that there are continuing requirements for research and development efforts on both sides of the interface; 4.50 and that, in all probability, "industry will require more special prodding in the display-control area than in the other relevant areas of computer technology." (Licklider, 1965, p. 66).

We note further an area of R & D concern that will recur in many other aspects of information processing system design and use, and especially in information selection and retrieval applications. More specifically: "The major problem today in the design of display systems is that we cannot specify in more than qualitative terms such critical criteria as 'context' and 'meaning'." (Muckler and Obermayer, 1965, p. 36). Swanson adds: "Other restrictions derive from the need of programs to solve hidden line problems, to recognize context, and to make abstractions." (Swanson, 1967, p. 39).

Finally, we note the specific problem in documentary and library applications that large character repertoires are important to input and output representations of various levels of reference and emphasis in technical texts (especially, for example, in patent applications with multilevel referrals to accompanying drawings and diagrams) and to delineation of different types of possible access points in indexes and catalog cards. In addition, a wide variety of special symbols and/or exotic alphabets are typically employed in texts dealing primarily with mathematical, logical, or chemical subjects. A text written principally in one particular language and alphabet may frequently use the alphabet set of one or more other languages, as in references to proper names, citations, and quotations.

4.3. Character Set Requirements

For multiple-access systems, "most creative users want large character sets with lower-case as well as capital letters, with Greek as well as Latin letters, with an abundance of logical and mathematical signs and symbols, and with all the common punctuation marks." (Licklider, 1965, p. 182). In addition, subscripts, superscripts, and diacritical marks may be required.4.52

Attacks on problems of character-set requirements for output begin with on-line printer variations to provide larger output character-set vocabularies at the expense both of output printing speed and of prior input precedence-coding and/or of processor programming. Larger character-set vocabularies are provided both by photocomposition techniques and by electronic character-generation methods, but again at the expense of either pre-coding or programming requirements.4.53

It should be noted, of course, that the internal language of most general-purpose information processing systems is limited to no more than 64 discrete characters, symbols, and control codes. Thus there must be extensive provision for multiple table lookups and/or for decoding and re-encoding of precedence codes or transformation sequences, on both input and output, if any internal manipulations are to be performed on the textual material. In general, the larger the character set, the more elaborate the precoding and/or programming efforts that will be required.

Then there is the problem of setting up keyboard character sets that are adequate for application requirements and yet within reasonable human engineering limitations. One proposed solution, the use of keyboard overlays and control codes to enable rapid shifting from one character subset to another, is exemplified by developments at Bunker-Ramo.4.54 Another possible solution to the character set problem as related to human engineering factors, one receiving continuing R & D concern, is to provide multiple inputs via a single keystroke, as with "chord" typewriters or Stenotype devices.4.55
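The "chord" keystroke idea can be sketched as a table that maps a set of simultaneously depressed keys to one output string, as on a Stenotype; the chord assignments here are invented purely for illustration:

```python
# Sketch of chord input: several keys pressed at once resolve to one
# character, syllable, or word. The chord table is hypothetical.
CHORDS = {
    frozenset("as"): "the",
    frozenset("df"): "and",
    frozenset("asdf"): "tion",
}

def chord_to_text(keys_down):
    """Resolve one simultaneous key combination to its output string.

    Unrecognized combinations fall back to the keys themselves, in a
    fixed order, so single-key typing still works.
    """
    return CHORDS.get(frozenset(keys_down), "".join(sorted(keys_down)))

print(chord_to_text("sa"))  # order of depression does not matter
```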

Regardless of what may be available through various conversion, transliteration, or translation processes on either input or output, there remains the question of the effects of the internal character set upon sorting, ordering, filing, and interfiling operations for a specific processor-storage system. For example, "other control elements which are frequently required in the design of information systems are special sorting elements. In a directory of personal names, such as those which might be found in an author bibliography, if names beginning with 'Mc' and those beginning with 'Mac' are to sort together, then special sorting codes must be entered into the computer for this purpose." (Austin, 1966, pp. 243-244).4.56
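The special sorting codes Austin describes amount to computing a filing key distinct from the displayed name; a minimal sketch of the "Mc"/"Mac" rule (a simplification of real filing rules) is:

```python
# Sketch of a special sorting element: names beginning "Mc" receive the
# same internal sort key prefix as "Mac," so the two groups interfile.
# The normalization rule is deliberately simplified for illustration.
def sort_key(name: str) -> str:
    key = name.lower()
    if key.startswith("mc"):
        key = "mac" + key[2:]   # file "Mc" as though spelled "Mac"
    return key

names = ["MacDonald", "Mabry", "McDonnell", "Madden"]
print(sorted(names, key=sort_key))
```

The displayed forms are untouched; only the key used for ordering is transformed, which is exactly why such codes must be "entered into the computer" alongside the data.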

The size of an adequate character set is a particularly critical problem in at least two significant areas: those of automatic typographic-quality typesetting and of library automation.4.57 Complex character set requirements are also to be noted in such multiple-access applications as computer-aided instruction (CAI).4.58 Avram et al., considering automation requirements at the Library of Congress, stress that "keyboard entry devices must be designed to facilitate the input of a variety of languages and diacriticals." (1965, p. 4). These authors point out further that "if the problems associated with the design of input keyboards and photocomposition printing devices can be resolved for the multiplicity of alphabets, there still will remain the formidable task of searching in a machine file which contains them." (Avram et al., 1965, p. 89). Similarly, Haring (1968) points out that the 128 symbols provided in the ASCII code are inadequate for the augmented catalog under development at Project INTREX.4.59

The very number and diversity of varied but realistic cataloging, filing, and search considerations in terms of character-set and sort-order requirements that exist today may indeed surprise the typical computer scientist facing library automation implementation problems. Nevertheless, particular problems of sorting, filing, and reassembly orders in terms of practical usage needs and acceptability to the clients of mechanized systems and services should be subjects of concern to designers of machine languages, machine character-sets, and of the processors as such.4.60

The even more difficult case of extensive mathematical, chemical, and other special symbols desired on output imposes additional hardware requirements, whether for high-speed printers, photocomposition devices, or character generators.4.61 This, then, is the area that has been called "Calligraphy by Computer." 4.62 A final example of unusual character set requirements is provided by "Type-A-Circuit" developments.4.63

5. Programming Problems and Languages and Processor Design Considerations

The questions of design and development of appropriate programming languages and of processor design are obviously pertinent to all of the operations shown in Figure 1. As of 1967-68, however, special emphasis in terms of research requirements lies in three principal areas: user-oriented input, response, and display languages; symbol manipulation languages capable of handling arrays of multiply interrelated data; and increasing interpenetration of hardware and software considerations in both system design and system use.

For example, in the operations of developing processing specifications from client requests for service (Boxes 8 and 9 of Fig. 1), we need: new and more powerful problem-oriented languages; versatile supervisory, executive, scheduling, and accounting programs; hierarchies of programming languages; multiprogramming systems; improved microprogramming; new approaches to increasing interdependence of programming and hardware; and more versatile and powerful simulation languages.


The overall system design requirements of the future indicate R & D concerns with programming languages, and especially with hierarchies of such languages at the present time.5.2 Controversies certainly exist as between advocates of more and more "universal" languages and proponents of problem-oriented or user-oriented machine communication techniques.

5.1. Programming Problems and Languages

Continuing R & D requirements for programming language improvements represent two contradictory requirements: on the one hand, there is recognized need for increasingly universal, common-purpose languages compatible with a wide variety of systems, hardware configurations, and types of applications; and on the other hand, for hierarchies of language systems. In addition, a number of special requirements for more flexible, versatile, and powerful languages are just beginning to emerge, especially in such areas as graphical communication, on-line problem solving, multiple access and multiprocessor control systems, simulation, and on-line instrumentation.

We are concerned here, then, with general-purpose language and compatibility requirements; with special-purpose requirements such as problem-oriented programs, list-processing, and other techniques for non-numeric data processing; with special problem areas such as very large programs and the requirements of multiple-access and multiprogrammed systems; with hierarchies of programming languages; and with the increasing interdependence of software and hardware considerations.

5.1.1. Problems of Very Large Programs and of Program Documentation

In terms of continuing R & D concern, we note first the problems of handling very large programs, defined as those that demand many times the available main storage capacity and that are sufficiently complex in structure to require more than ten independent programmers to work on them.5.3 An obvious requirement is to develop efficient techniques for segmentation: "When many programmers are involved, there is the problem of factoring the system into appropriate subtasks. At the present time this is an art rather than a science, and very few people are good at its practice, because of the inability to find useful algorithms for estimating the size and degree of difficulty of programming tasks." (Steel, 1965, p. 234). The questions of automatic segmentation, although recognized as critical and difficult problems, have therefore been raised.5.4

In particular, the checkout of very large programs presents special problems.* For example, "another practical problem, which is now beginning to loom very large indeed and offers little prospect of a satisfactory solution, is that of checking the correctness of a large program." (Gill, 1965, p. 203). Further, "the error reporting rate from a program system of several million instructions is sufficient to occupy a staff larger than most computing installations possess." (Steel, 1965, p. 233).

Other specific requirements in the programming problem areas include improved provision for adequate program documentation and related controls.5.5 For example, Pravikoff (1965) presents cogent arguments for improved documentation for programs generally. Mills (1967) points to the special documentation problems in multiple-access systems where users are less and less apt to be trained programmers.5.6 Dennis (1968) points to the present high costs of large-scale programming efforts as due to inadequate documentation that prevents taking advantage of programming already achieved,5.7 while Kay (1969) considers the advantages of on-line documentation systems.5.8 Thus the area of program documentation requires further study and concern.

*See also Section 7.1 of this report, on debugging problems generally.

A related difficulty is that of inadequate means for translation between machine languages, although some progress has been made.5.9 An intriguing possibility deserving further investigation has been raised by Burge (1966, p. 60) as follows: "Presented here is a problem and a framework for its solution. The problem is as follows: Can we get a computer program to scan a library of programs, detect common parts of patterns, extract them, and re-program the library so that these common parts are shared?"

Another current question of R & D concern with respect to programming problems is that of the generality with which a given language system can or cannot cope with a wide variety of system configurations and reconfigurations over time.5.10 The questions of development of more effective common-purpose or general-purpose languages involve very real problems of mutually exclusive features and of choices as between a number of means of achieving certain desirable built-in features.5.11

Areas of continuing R & D concern in programming language developments reflect, first, the need for increasing generality, universality, and compatibility (these objectives are followed in general-purpose language construction and standardization, on the one hand, and by increasing recognition of the needs for hierarchies of language, on the other); secondly, the special requirements of multiple-access, multiprogrammed, multiprocessor, and parallel processing systems; thirdly, requirements for problem-oriented and other special-purpose languages; and finally, needs for continuing advances in hardware-software balances and in fundamental programming theory.

5.1.2. General-Purpose Programming Requirements

The presently indicated transition from exclusively batch or job-shop operation to on-line, multiple-access system management 5.12 sharply aggravates the problems of programming language requirements in a number of different ways. First, there are very real difficulties in translating from programming languages and concepts geared to sole occupancy and use of system facilities to those required in the multiple-access and multiprogrammed, much less the multiprocessor and network, environment.5.13

As Brooks (1965) points out: "Today's excitement centers chiefly around (1) multiprogramming for time-sharing, (2) multiple-computer systems using a few computers for ultra-reliability, and (3) multiple-computer systems using a highly parallel structure for specialized efficiency on highly structured problems." 5.14 In all of these cases, moreover, the R & D requirements are typically aggravated by a persistent tendency to underestimate the difficulties of effective problem solution.5.15

An obvious first common-purpose requirement is for truly efficient supervisory, accounting, and monitoring control programs 5.16 that will effectively allocate and dynamically reallocate system resources, that will be secure from either inadvertent or malicious interference, and that will be flexible enough to accommodate changing clientele needs, often with new and unprecedented applications.5.17

Also in the area of general-purpose programming requirements are the various emerging programs for generalized file or data base management, maintenance, and use.5.18 The first requirement here is for reconciliation of variable input, file storage, and output formats, together with flexible means for dupe checking on input, combinatorial selection, and output reformatting.5.19 Closely related are the questions of the so-called "formatted file systems." 5.20

A first approach to such general-purpose programming systems was undoubtedly that of the Univac B-0 or Flow-Matic system developed in the mid-1950's.5.21 More recent examples include General Electric's GECOS III (General Comprehensive Operating Supervisor III) 5.22 and Integrated Data Store concepts; 5.23 the TDMS (Time-Shared Data Management System) at System Development Corporation; 5.24 and GIS (Generalized Information System) of IBM.5.25 Heiner and Leishman (1966) describe a generalized program for record selection and tabulation allowing variable parameters for sort requirements, selection criteria, and output formats.5.26
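A generalized record-selection-and-tabulation routine of the kind Heiner and Leishman describe can be sketched as a single procedure whose selection criterion, sort fields, and output fields are all run-time parameters; the record layout and field names below are illustrative assumptions:

```python
# Sketch of generalized record selection and tabulation: nothing about the
# file layout is wired into the routine; selection, sort order, and output
# format all arrive as parameters, in the spirit of Heiner and Leishman.
def tabulate(records, select, sort_fields, output_fields):
    chosen = [r for r in records if select(r)]
    chosen.sort(key=lambda r: tuple(r[f] for f in sort_fields))
    return [tuple(r[f] for f in output_fields) for r in chosen]

# Hypothetical bibliographic records for illustration.
records = [
    {"author": "Steel", "year": 1965, "topic": "programming"},
    {"author": "Gill",  "year": 1965, "topic": "checkout"},
    {"author": "Burge", "year": 1966, "topic": "libraries"},
]

print(tabulate(records,
               select=lambda r: r["year"] == 1965,
               sort_fields=["author"],
               output_fields=["author", "topic"]))
```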

In 1965, comparative operation of file management was demonstrated by different systems including COLINGO (Mitre Corporation), Mark III (Informatics, Inc.), BEST (National Cash Register), Integrated Data Store (G.E.), and an on-line management system of Bolt, Beranek, and Newman, Inc.5.27 It was concluded that: "All of these systems were able to accomplish the processing required, but their approaches varied considerably, particularly in the file structures chosen for the application, executive control procedures, and level of language used in specifying the processing to be performed." (Climenson, 1966, p. 125).

In addition, we may note the examples of a file organization executive system,5.28 a program management system involving a two-level file,5.29 and a file organization scheme developed for handling various types of chemical information.5.30 Another documentation application involving a list-ordered file is provided by Fossum and Kaskey (1966) with respect to word and indexing term associations for DDC (Defense Documentation Center) documents.5.31

In the area of common-purpose languages, also, there is need to reconcile the differing requirements with respect to different classes of data structures,5.32 from those of numerical analysis processing through those of alphanumeric file management to those of list and multilist processing.5.33 List structures have been noted in several of the file management systems mentioned, and it is to be noted further that: "Linked indexes and self-defining entries are an extension of list processing techniques." (Bonn, 1966, p. 1869). For non-numeric data processing applications, in general, symbol string manipulation, list processing, and related programming techniques have been of particular concern.5.34

While many advantages of list-processing techniques have been noted, a number of disadvantages are also reported. Among the major advantages are the recursive nature of list processing languages 5.35 and adaptability to dynamic memory allocation and reallocation.5.36 Also, languages of this type are "well suited to symbol manipulation, which means that it is possible to talk about the names of variables, and perform computations which produce them" (Teitelman, 1966, p. 29), and "by means of list structures . . . a three-dimensional spatial network can be modeled in computer memory." (Strom, 1965, p. 112).
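The properties claimed above, recursive traversal and cheap dynamic insertion, can be illustrated with the basic cell-and-link representation underlying all list-processing languages; the sketch below is a modern rendering, not code from any system the section cites:

```python
# Sketch of the list-processing cell: each element holds a datum and a link
# to the next cell. Storage is allocated cell by cell as needed, and an
# insertion requires no block moves, only relinking; traversal is naturally
# recursive.
class Cell:
    def __init__(self, datum, link=None):
        self.datum = datum
        self.link = link

def to_list(cell):
    """Recursively flatten a linked chain into a Python list."""
    return [] if cell is None else [cell.datum] + to_list(cell.link)

chain = Cell("a", Cell("b", Cell("c")))
chain.link = Cell("x", chain.link)   # O(1) insertion after the head cell
print(to_list(chain))
```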

Typical disadvantages to be noted include lack of standardization,5.37 the degree of programming sophistication required,5.38 and wastage of storage space.5.39 Major difficulties are often encountered in updating and item deletion operations 5.40 and in dealing with complex data structures.5.41

We may ask also to what extent the available list-processing and symbol manipulation programming languages are adequate for current application needs,5.42 and to what extent they are useful for the investigation of further needs. In particular, it is noted that "unfortunately, while these languages seem in many ways directly tailored to the information retrieval work, they are also in other respects very awkward to use in practice." (Salton, 1966, p. 207).

List-processing languages and structures are particularly clumsy, moreover, for multiply interrelated and cross-associated data.5.43 Short of truly large-scale associative memories, effective compromises are needed as between list structures, including multilist programming languages, and file organizations that will provide economy of both storage and access.5.44

Beyond list-processing procedures there are trees, directed graphs, rings, and other more complex associative schemes.5.45 A relatively early example is that of "Rover," with "up links" and "side links" as well as "down links".5.46 Savitt et al., 1967, describe a language, ASP (Association-Storing Processor), that has been designed to simplify the programming of nonnumeric problems, together with machine organizations capable of high-speed parallel processing.5.47

Ring structure language systems are represented by Sketchpad developments 5.48 and the work of Roberts at M.I.T. for graphical data processing,5.49 and by systems for use in question-answering and similar systems, such as Project DEACON.5.50 Still other associative structure developments are discussed, for example, by Ash and Sibley (1968),5.51 Climenson (1966),5.52 Craig et al. (1966),5.53 Dodd (1966),5.54 and Pankhurst (1968).5.55

There are certainly those who need far more generalized multipurpose languages and, with equivalent force, those who advocate specialized, man-oriented, and special-purpose languages. A specific current design problem has to do with an appropriate compromise between these apparently contradictory demands. A future R & D requirement is to seek more effective solutions. Thus Licklider insists upon the need to bring on-line "conversational" languages more nearly abreast of the more conventional programming languages as an R & D challenge.5.56

5.1.3. Problem-Oriented and Multiple-Access Language Requirements

Increasing needs for advanced special-purpose program language developments have been recognized for the areas of man-machine interactive problem-solving and computer-aided instruction applications, graphical manipulation operations, and simulation applications, among others. It is noted in particular that "their power [that of problem-oriented languages] in the extension of computing lies in the fact that the expert knowledge of skilled analysts in particular fields can be incorporated into the programs, together with a language which is native to the field, or nearly so, and can thereby break down the barriers of economics, of lack of familiarity with the computer, and of time." (Harder, 1968, p. 235).

Important requirements in programming languages reflect, predictably, those of multiple-access and time-shared systems involving a suitable hierarchy of languages, so that the relatively unsophisticated user may converse freely with the machine without interference to other users or to the control and monitor programs, and yet also draw upon common-use programs and data. The effective use of such systems in turn demands extremely tight and sophisticated programming to keep "overhead" to a minimum 5.57 and to provide dynamic program and data location and relocation. Relatively recent developments in this area are discussed by Bauer, Davis, Dennis and Van Horn, Licklider, Clippinger, Opler, and Scherr, among others.5.58 Bauer (1965, p. 23) suggests in particular that: "Entirely new languages are needed to allow flexible and powerful use of the computer from remote stations."

Multiple-access systems and networks obviously require R & D efforts in the development of language systems "optimized for remote-on-line use" (Huskey, 1965, p. 142). Opler points out, first, that "for the most difficult areas (telecommunication, process control, monitors, etc.) real-time languages have provided little assistance," and, secondly, that "the task of developing a complete new real-time language would be interesting and challenging since it would require reconsideration of the conventional procedure and data defining statements from the viewpoint of the real-time requirements." (Opler, 1966, p. 197).

Further, on-line problem-solving applications require dynamic and flexible programming capabilities.5.59 Experimentation with the working program should be permitted, with due regard for system protection and control.5.60 Generalized problem-solving capabilities should be available in the language system without necessary regard to specific applications.5.61 Such programs should be extensively self-organizing, self-modifying, and capable of adaptation or tentative "learning." 5.62

It is then to be noted that languages for on-line use should be relatively immune to inadvertent user errors.5.63 Teitelman comments, for example, that "in languages of this type, FORTRAN, COMIT, MAD, etc., it is difficult to write programs that construct or modify procedures because the communication between procedures is so deeply embedded in the machine instruction coding, that it is very difficult to locate entrances, exits, essential variables, etc." (1966, p. 28).

In language systems such as TRAC,5.64 the power is largely buried in the specific machine processor for the language: the user is freed from learning a number of arbitrary exceptions, is not able to "clobber" the underlying processor by inadvertent goofs, is able iteratively to name and rename strings and procedures he wishes to use largely in terms of his own convenience, and is able to make use of the language and system with minimal training or prior experience.

Related problems affect the design of special-purpose languages ("the design of special-purpose languages is advancing rapidly, but it has a long way to go," Licklider, 1965, p. 66), the interaction of executive and console languages ("wherever remote consoles are used, we find the users enthusiastic. However, they always hasten to add that there is much to learn about the design of executives and processors for console languages," Wagner and Granholm, 1965, p. 287), the provision of adequate debugging aids ("the creation of large programming systems using remote facilities requires a number of debugging aids which range in complexity from compilers to simple register contents request routines." Perlis, 1965, p. 229), and the development of effective modularity in programs and compilers ("it will be necessary for the software to be modular to the greatest extent possible because it will need to work up to the next level of software." Clippinger, 1965, p. 211).

In connection with the latter requirement, Lock discusses the desirability of an incremental compiler, which is characterized by its ability "to compile each statement independently, so that any local change in a statement calls only for the recompilation of the statement, not the complete program." (1965, p. 462).
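Lock's idea can be sketched as a statement-level compilation cache. The following toy illustration (not Lock's design; the class and field names are invented) recompiles only the statements whose source text has changed:

```python
class IncrementalCompiler:
    """Keeps one compiled unit per source statement, so editing one
    statement invalidates only that statement's cached code object."""
    def __init__(self):
        self.cache = {}      # statement index -> (source text, code object)
        self.recompiled = 0  # running count of actual compilations

    def compile_program(self, statements):
        objects = []
        for i, src in enumerate(statements):
            cached = self.cache.get(i)
            if cached is not None and cached[0] == src:
                objects.append(cached[1])  # unchanged statement: reuse
                continue
            # Local change: recompile this statement only.
            code = compile(src, "<stmt %d>" % i, "exec")
            self.cache[i] = (src, code)
            self.recompiled += 1
            objects.append(code)
        return objects
```

Editing one statement of a three-statement program then costs one compilation rather than three, which is the economy Lock describes.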

Time-sharing or time-slicing options and graphical communication possibilities cast new light not only upon problem-oriented but also upon user-oriented languages and upon improved possibilities for man-machine communication and interaction. Effective compromises between the system design, the programmer, and the client have not yet been achieved, even though the demands of modern systems are increasing in complexity.

The problems of effective programming and utilization reach back beyond specific routines, specific languages, and specific equipment. They are involved in all the many questions of systems planning, systems management, and systems design. For example, in time-shared operations, "considerably greater attention should be focused on questions of economy in deciding on the tradeoff between hardware and software, in using storage hierarchies, and in determining the kind of service which the time-sharing user should be offered." (Adams, 1965, p. 488). Finally, the critical problems reach farther back than the potentialities of hardware, software, and systems planning combined, to fundamental questions of why, how, and when we should look to machines for substantial aid to human decision-making and problem-solving processes.

"An analysis of the various requirements that a programming language must satisfy (to be known both by the job-originator and the job-executor and to be capable of expressing both what the first wants to be done and what the second is capable of doing) involves basic researches on the linguistic nature of programming languages." (Caracciolo di Forino, 1965, p. 224).

5.1.4. Hierarchies of Languages and Programming Theory

Direct machine-language encoding and programming was the first and obvious approach to both numeric and non-numeric data processing problems. In both areas, forms of communication with the machine that are more congenial to the human user have been developed, as formal "programming languages" (FORTRAN, ALGOL, COBOL, and the like).5.65 In addition, special program languages to facilitate either problem-solving and question-answering systems, or textual data processing, or both (such as list-processing techniques, IPL-V, LISP, COMIT, SNOBOL, and more recently TRAC, among others) 5.66 have been developed. More and more, however, it is beginning to be recognized that hierarchies of language are essential to present and foreseeable progress.5.67

Burkhardt (1965, p. 3) lists a spectrum of programming languages on the basis of the "declarative freedom" available to the user, from absolute machine languages with none, to "declarative languages" which provide a description of the problem and freedom of both procedure and solution. Certainly, for the future, "the entire spectrum of language from binary machine code to the great natural languages will be involved in man's interaction with procognitive systems." (Licklider, 1965, p. 104).

In the area of desired hierarchies of languages, we note such corroborating opinions as those of Salton, who asks for compiling systems capable of handling a variety of high-level languages, specifically including list processing and string manipulations,5.68 and of Licklider, who points out that in each of many subfields of science and technology there are specific individual problems of terminology, sets of frequently used operations, data structures, and formulas, indicating a very real need for many different user-oriented languages.5.69 However, in his 1966 review, the first ACM Turing Lecture, Perlis concludes: "Programmers should never be satisfied with languages which permit them to program everything, but to program nothing of interest easily. Our progress, then, is measured by the balance we achieve between efficiency and generality." (1967, p. 9).

Moreover, even in a single system, it may be necessary not only to reconcile but to combine the contradictory advantages and disadvantages of differing levels of programming language in various ways. Thus Salton states: "it is possible to recognize five main process types which must be dealt with in an automatic information system: string manipulation and text processing methods, vector and matrix operations, abstract tree and list structure manipulations, arithmetic operations, and finally sorting and editing operations." (Salton, 1966, p. 205). A second complicating factor relates to the still unusually fluid situation with respect to hardware developments, logical design, and the increasing interdependence of hardware-software factors in the consideration of future system and network design possibilities.

Beyond the investigation, development, and experimental application of advanced programming languages for specific types of application, such as graphical data processing, simulation, or on-line question-answering and problem-solving systems, study is needed of fundamental problems of programming theory.5.70 For example, Halpern notes ". . . the increasingly wide gulf between research and practice in the design of programming languages." (1967, p. 141), while Alt emphasizes that "we do not yet have a good theory of computer languages, and we are nowhere near the limit of the concepts which can be expressed in such languages." (1964, p. B.2-1).

There is also to be noted, in the current state of the art in the computer and information sciences, an increasing concern for the relationships between formal modelling techniques generally, the questions of formal languages, and the development of powerful, general-purpose programming languages.5.71 Karush emphasizes that "the development of an integrated language of mathematical and computer operations is part of a more general problem associated with the automation of control functions. This is the problem of formalism, which embraces both mathematical representations of systems and representations of the processes of actual execution of systems, by computer or otherwise." (1963, p. 81).

A similar concern is expressed by Gelernter: "Just as manipulation of numbers in arithmetic is the fundamental mode of operation in contemporary computers, manipulation of symbols in formal systems is likely to be the fundamental operating mode of the more sophisticated problem solving computers of the future. It seems clear that while the problems of greatest concern to lay society will be, for the most part, not completely formalizable, they will have to be expressed in some sort of formal system before they can be dealt with by machine." (1960, p. 275).

High-order relational schemes, both in languages and in memory structures, will thus be required.5.72 For example, "the incorporation in procedural programming languages of notations for describing data structures such as arrays, files, and trees, and the provision to use these structures recursively together with indications of the scope of definition, will help greatly with the storage allocation problem and assist the programmer organizationally . . ." (Barton, 1963, p. 176). In this connection, Press and Rogers (1967) have described the IDEA (Inductive Data Exploration and Analysis) program package for the detection of inherent structure in multivariate data.5.73
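Barton's point about recursively described data structures can be made concrete with a minimal sketch (in a modern language; the type and function names are illustrative): the notation that describes the structure recursively also shapes the procedure that walks it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tree:
    """A recursively described structure in Barton's sense:
    a tree is a datum plus, optionally, two smaller trees."""
    value: int
    left: Optional["Tree"] = None
    right: Optional["Tree"] = None

def total(t: Optional[Tree]) -> int:
    # The procedure mirrors the recursive description of the structure,
    # which is what eases both storage allocation and program organization.
    if t is None:
        return 0
    return t.value + total(t.left) + total(t.right)
```

Because the definition carries its own scope (each subtree is again a `Tree`), no separate bookkeeping of extents is needed when traversing or allocating.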

We note also that "even in the more regular domain of formal and programming languages, many unsolved practical and theoretical problems remain. For example, the matter of recovery from error in the course of compilation remains in a quasi-mystical experimental state, although some early results, applicable only to the simplest of languages, suggest that further formal study of this problem could be worth while." (Oettinger, 1965, p. 16).

Requirements for continuing research and development activities in the area of computing and programming theory are increasingly seen as directly related to needs for more effective multi-access time-sharing, multiprogrammed, and multiprocessor systems. It can be foreseen that "Organizational generality is an attribute of underrated importance. The correct functioning of on-line systems imposes requirements that have been met ad hoc by current designs. Future system designs must acknowledge the basic nature of the problems and provide general approaches to their resolution." (Dennis and Glaser, 1965, p. 5).

What are some of the difficulties that can be foreseen with respect to the further development of hierarchies of systems? Scarrott suggests that "the problems of designing and using multilevel storage systems are in a real sense central to the design of computing systems" (1965, p. 137), and Burkhardt warns: "As long as actual computers are not well understood there will not be much hope for very successful development of useful universal processors." (1965, p. 12).

5.2. Processor and Storage System Design Considerations

In the area of information processing systems themselves, current trends have been marked not only by new extensions to the repertoires of system configurations available for the large, high-cost, high-speed processors, but also by a continuing tendency toward the development of computer system "families." Increasing attention is being given to both "upward" and "downward" compatibility, that is, to means by which programs operable on a large system may also operate on a smaller, slower, or less expensive member of the same family, and vice versa.

Increasing attention has also been given to providing adjuncts to existing and proposed systems which will give them better adaptability to time-sharing and on-line multiple-access requirements. Similarly, the requirements for handling a variety of input sensing modalities and for processing more than one input channel in an effectually simultaneous operation clearly indicate needs for continuing research and development efforts in the design and use of parallel processing techniques, multiprocessor networks, time-shared multiple-access scheduling, and multiprogramming.

Hierarchies of languages are implied, as we have seen, ranging from those in which the remote console user speaks to the machine system in a relatively natural language (constrained to a greater or lesser degree) to those required for the highly sophisticated executive control, scheduling, file protection, accounting, monitoring, and monitoring instrumentation programs.

Tie-in to various communication links, generally, should include remote consoles, closed-circuit TV, facsimile, voice-quality circuits, and the like, with capability for real-time processing. Message security protection facilities are often required, including encoding and decoding. Access to error-detection and error-correction mechanisms is also necessary.

Overall system design requirements indicate also the necessary exploitation of new hardware technologies, new storage media, associative-memory procedures for file and data bank organization and management, the use of dynamic reallocations of space and access to both programs and files or data banks in multiple-access systems, protective and fail-safe measures, and the development of hierarchies of languages of access and usage, hierarchies of stored data files, and hierarchies of systems.5.74

Many of the above factors are discussed in other sections of this report or in other reports in this series. Here, we will consider briefly some of the problems of central processor design, parallel processing, and hardware-software interdependence.

5.2.1. Central Processor Design

With regard to the central processing system, it is noted that it should be operable in both time-sharing and batch-processing modes, and that it should provide simultaneous access to many users. Efficient multiple access should be provided to the hierarchies of storage, with the user having, in effect, virtually unlimited memory.5.75 There should be a capability for accessing either internal or auxiliary associative memory devices.

A continuing problem in processor-storage system design is that of addressing circuitry.5.76 Closely related to this problem are questions of content-identification matching, whether for sequential or parallel access. Indirect addressing and multiple relative addressing via a number of index registers are an important consideration.5.77

Moreover, direct program access to all registers by ordinary instructions, and with interlock protection features, is often required. If the index registers can be simultaneously and interchangeably used as instruction counters, there are additional parallel-processing benefits. For example, such capabilities can provide for jumping to the nth instruction from the present one, determining whether an instruction has been executed or not, and other flexible capabilities for debugging, diagnostic, and evaluation purposes. It should be possible to manipulate index registers several at a time.

System users may need a relatively long word length to provide numerical accuracy in long floating-point values, for manipulation of large matrices, to provide duplex operations on two sections of the word simultaneously as in complex-number processing, and the like. Automatic unpacking of word subsets in variable-sized bytes is also recommended for future processor design. Salton indicates needs for "flexible instructions operating on individual bits and characters, and flexible branching orders. Pushdown store instructions, such as 'pop' or 'push' should also be useful for the list operations." (1966, p. 209).
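The pushdown store Salton mentions is a last-in, first-out structure; a minimal sketch (in a modern language, with invented names, purely to fix the semantics of 'push' and 'pop') follows:

```python
class PushdownStore:
    """Last-in, first-out store of the kind served by Salton's
    'push' and 'pop' instructions; here backed by a plain array."""
    def __init__(self):
        self._cells = []

    def push(self, item):
        # 'push' places a new item on top of the store.
        self._cells.append(item)

    def pop(self):
        # 'pop' removes and returns the most recently pushed item.
        if not self._cells:
            raise IndexError("pushdown store is empty")
        return self._cells.pop()

    def empty(self):
        return not self._cells
```

In list operations such a store typically holds the return points while a program descends into nested sublists, which is why hardware support for it was thought useful.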

Also desirable is the ability to access a word simultaneously from more than one computer system or component, with automatic protection interlocks in case of conflicts. Power-failure protection systems should be available, and protection should also be provided against unauthorized access to various memory and data bank sections.

More flexible pagination 5.78 is needed for some important applications. For example, in graphic data processing, dynamic memory reallocation procedures requiring fixed pagination would be awkward to use and highly inefficient. Storage allocation should be under programmer control. For example, if pictorial data overflows the boundaries of a page, at least 4 and as many as 9 pages may be required, since such data must be processed as a two-dimensional array. The system designer needs to avoid as far as possible the problems of unnecessary and clumsy programming in order to apply a single procedure to such arrays. For example, Wunderlich points out, in connection with sieving procedures for computer generation of sets and sequences, that "there are obvious programming difficulties connected with sieving on a field of bits." (1967, p. 13).

Another hardware-software desideratum here is obviously the need for efficient bit-manipulation capabilities. For example, the user would like to find, for a given gray-level representation of a graphic input, whether at a given location and blackness level some or all or none of the neighboring locations have the same blackness-level recordings (this is important in eliminating "fly specks" from further processing, in filling in "holes" that result from imperfect printing impressions, and also in determining the relative locations of centers of blackness when attempting to reconstruct three-dimensional imaging from serial sequences of two-dimensional image representations).
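The neighbor test described above can be sketched directly (a modern illustration with invented names; a gray-level image is taken as a list of equal-length rows of small integers):

```python
def same_level_neighbors(image, row, col):
    """Count the orthogonal neighbors of (row, col) whose recorded
    blackness level matches the given cell's own level."""
    level = image[row][col]
    count = 0
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < len(image) and 0 <= c < len(image[0]):
            if image[r][c] == level:
                count += 1
    return count

def is_fly_speck(image, row, col):
    # A dark cell none of whose orthogonal neighbors shares its level
    # is an isolated mark, a candidate "fly speck" to drop.
    return image[row][col] > 0 and same_level_neighbors(image, row, col) == 0
```

On hardware without bit-manipulation support, every such test costs several word-level operations per neighbor, which is the inefficiency the text is pointing at.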

Problems of computer design as well as programming for array processing are discussed by Senzig and Smith (1965) in terms of a worldwide weather prediction system, and by Roos (1965) in terms of the ICES (Integrated Civil Engineering System) at M.I.T.5.79 Association matrices present a special form of data array requiring efficient manipulation and processing. Such considerations are particularly important in experimental research or on-line instrumentation situations.5.80 In addition, bit-manipulation and array-processing capabilities are severely constrained in commercially available systems.

A requirement of major future importance (especially for chemical information searching, file organization, mapping functions, and graphic data processing) is for efficient bit-manipulation capabilities, including convenient Boolean processing and transplant features. Again, bit-manipulation capabilities are important because many operations require consideration of all the orthogonal neighbors of a single bit position.

In future system designs, increasing needs for multivalued logic approaches can also be foreseen. In general, a binary (two-valued) logic pervades information processing system design and the basic methods of information representation as of today. For the future, however, attention needs to be directed toward multivalued logic systems and toward direct realizations of the n-ary relations between the data elements of stored information. There are new technological possibilities that point in this direction (e.g., new devices that are capable of at least ternary response,5.81 or of multiple response by color-coding techniques from a single "bit" recording on advanced storage media). In addition, parallel processing, associative processing, and iterative circuit techniques point the way to new complexities of program command and control, and to new, multivalued, processing opportunities as well.


Then, as Wooster comments: "Radically different types of computers may well be needed. At present, the best way of building these seems to be through the creation of logical structures tending more and more in the direction of distributed logic nets, wherein vast numbers of processes occur simultaneously in various parts of the structure. Right now, the best building blocks for such systems seem to be multifunctional microprogrammable logic elements." (1961, p. 21).

Increasing complexity of central processor design is indicated by developments in advanced hardware technologies,5.82 while increasing flexibility is dictated by dynamic reconfiguration requirements.5.83 Modularity is an important consideration.5.84 The critically challenging, interacting problems of design, programming, and utilization of multiprocessors and of parallel processing are emphasized by Brooks (1965), Amdahl (1965), Burkhardt (1965), and Opler (1965), among others. Thus: "The use of multicomputers implies intercommunication, with the associated implications of interconnection, reconfiguration and interlocking." (Amdahl, 1965, p. 38).

5.2.2. Parallel Processing and Multiprocessors

The possibilities for the use of parallel processing techniques should receive increased R & D attention. Such techniques may be used to carry out data transfers asynchronously with respect to the processing operations,5.85 to provide the analyses necessary to convert sequential processing programs into parallel-path programs,5.86 or to make allocations of system resources more efficiently because constraints on the sequence in which processing operations are executed can be relaxed.5.87 Applications to the area of pattern recognition and classification research, and to other array-processing operations, are obvious.5.88

However, there are problems of effective use of parallel processing capabilities. Some examples of the discernible difficulties with respect to current parallel-processing research and development efforts have been noted in the literature as follows:

(1) "Multiprogrammed processors will require more explicit parallel control statements in languages than now occur." (Perlis, 1965, p. 189).

(2) "Much additional effort will have to be put into optimizing compilers for the parallel processors that may dominate the computer scene of the future." (Fernbach, 1965, p. 84).

(3) "Computers with parallel processing capabilities are seldom used to full advantage." (Opler, 1965, p. 306).

There are also problems of R & D concern in programming-language developments involved with increased use of parallel-processing capabilities. The possibilities of "Do Together" provisions in compilers, first raised by Mme. Poyen in 1959,5.89 add a new dimension of complexity for analysis, construction, and interpretation. Fernbach comments on the sparsity of attempts made to date on the potentials of parallel processing in programs for many problems. He states further that, while the tasks of segmenting the problem itself for parallel-processing attack are formidable, "they must be undertaken to establish whether the future development lies in the area of parallel processing or not." 5.90 There is then need for judicious intermixtures of parallel and sequential processing techniques in specific design, programming, or application situations.5.91
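A "Do Together" provision marks statements with no mutual data dependence as eligible for simultaneous execution. A rough modern analogue (the function names are illustrative, not Poyen's notation) is a thread pool over independent tasks, with a sequential join afterward:

```python
from concurrent.futures import ThreadPoolExecutor

def do_together(tasks):
    """Run independent tasks simultaneously, in the spirit of a
    'Do Together' compiler provision.  Each task is a no-argument
    callable; results come back in the order the tasks were given."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(t) for t in tasks]
        return [f.result() for f in futures]

def demo():
    # The two partial sums touch disjoint data, so they may be
    # executed in parallel ("Do Together") ...
    partials = do_together([
        lambda: sum(range(0, 500)),
        lambda: sum(range(500, 1000)),
    ])
    # ... but the combination must wait for both: a sequential step.
    return partials[0] + partials[1]
```

Segmenting a real problem this way, deciding which statements are truly independent, is precisely the formidable analysis task Fernbach describes.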

Parallel-processing potentials are also closely related to, and may be interwoven with, multiprocessing systems, which involve: "The simultaneous operation of two or more independent computers executing more-or-less independent programs, with access to each other's internal memories . . ." (Riley, 1965, p. 74). In particular, the Solomon Computer and Holland Machine concepts may be noted.5.92 For another example, increasing parallelism of operation of a multiple-access processing system has been investigated at the Argonne National Laboratory in terms of an "Intrinsic Multiprocessing" technique consisting of n time-phased "virtual" machines which time-share very high-speed execution hardware (Aschenbrenner et al., 1967), while ILLIAC III has been designed for parallel processing of pictorial data.5.93


5.2.3. Hardware-Software Interdependence

The earlier dichotomy between "hardware" and "software" considerations is beginning to yield, not only to the increasing interdependence of the two factors in many information processing application requirements, but also to technological developments in "firmware" (in effect, wired-in microprogramming 5.94) and to growing recognition of the critical importance of more precise and comprehensive "brainware" in systems planning, design, specification, and implementation.5.95

Is it currently possible to separate computer and storage system design considerations from those of programming-language design and of programmed executive control? Several experts testify that, if it is still possible today, it will soon no longer be so.5.96 It can be quite clearly seen that the areas of computer theory and program design are becoming increasingly interdependent with those of adequate programming languages, "software," and user-tolerance levels. At the same time, new possibilities for multicommunicator, multiprocessor, and multiuser networks are increasingly coming to the fore.

The growing interdependence is stressed, for example, by Schultz (1967) 5.97 and by Lock (1965), who notes the strongly increasing influence of multiprogrammed, on-line systems upon the organization of the storage facilities. Scarrott (1965, p. 137), for another example, insists that "the problems of designing and using multilevel storage systems are in a real sense central to the design of computing systems."


Thus, "as we enter an era of bigger and more complex systems some new requirements are coming to be of major importance.

Will we be able to minimize the program handling by proper allocation to primary, e.g., core, or secondary, e.g., drum or disk, storage?

Will we be able to incorporate changes to the functional operation of the system?

Will we be able to modify the system to accommodate new or additional hardware?

Will we be able to add a completely new function to an already operating system?" (Perry, 1965, p. 243).

In the next section, therefore, we will consider some of the advanced hardware developments before discussing such overall system design considerations as debugging, on-line instrumentation and diagnosis, and simulation.

6. Advanced Hardware Developments

Certain obvious overall system design requirements have to do with the further extensive development and application of advanced hardware technologies, especially opto-electronics generally (and lasers and holography in particular), with integrated-circuit techniques, and with improved high-density storage media.

Recurrent themes in current progress toward very-high-speed, computer-controlled access to primary, secondary, and auxiliary storage banks, from the standpoint of hardware technology, include the questions of matching rates of data-and/or-instruction access to those of internal processing cycles, and of the prospects for integrated-circuit and batch-fabrication advantages in design and construction.

These new technologies may also be combined in various ways. For example, Lockheed Electronics has been using deflected laser beams to scan photochromic planes very rapidly and very accurately; laser and holographic techniques are conjoined in equipment designed to photograph fog phenomena in three dimensions; and it has been reported that "laser devices show promise of very fast switching which together with optical interconnections could provide digital circuits that are faster than electronic circuits." (Reimann, 1965, p. 247).

6.1. Lasers, Photochromics, Holography, and Other Optoelectronic Techniques

New hardware developments that are technically promising in terms of the long-range research and development necessary to support future improvements in information processing and handling systems include the development of special laser techniques for switching, storage, and other purposes, and the possible use of holograms or kinoforms for 3-dimensional pattern recognition and storage.

6.1.1. Laser Technology

Writing in 1965, Baker and Rugari have pointed out that "a wide variety of lasers have been discovered and developed since the first laser device was operated five years ago. Lasers can be classed into three basic types: solid-state, semiconductor, and gaseous. Typical examples are the ruby solid-state laser, gallium arsenide semiconductor laser, and the neon-helium gas laser." (1966, p. 38).

Certainly, lasers, whether of the gas, fluorescent crystal, or semiconductor type, are finding many new possibilities of application in computer, communication, and information processing systems. As sources of illumination they can provide greater display efficiency 6.6 and greater resolution with respect to display systems involving light-beam deflection techniques (Soref and McMahon, 1965, p. 60), and they provide an effective aid to the boundary and contrast enhancement techniques for image processing developed at the National Physical Laboratory at Teddington, England. More specifically, this technology promises new developments in space communications, in memory construction and design, in the development of analytical techniques such as Raman spectroscopy and photomicroscopy, in the identification of fingerprints, in quantization of high resolution photographs,6.12 and in the use of holograms for collection, storage, and regeneration of two- and three-dimensional data.

Small, very high-speed memories may be driven by laser beams,6.14 and laser components contribute to the design of "all-optical" computers 6.15 and computer circuits and components. Laser inscribing techniques are being investigated for such applications as large-screen real-time displays,6.17 and for highly compressed data recording, for example, at Precision Instrument Company.6.18 As of March 1969, it could be reported that orders had been placed for UNICON systems by Pan American Petroleum Corporation, and were under consideration by several other organizations, including U.S. Government agencies.6.19 In particular, the National Archives and Records Service has been studying the possibilities of converting present magnetic tape storage to this system.6.20

Investigations of future technical feasibility of using laser devices for high speed data storage and/or processing have thus been complemented by exploration of possibilities for recording onto very large capacity storage media, as also in developments at Honeywell,6.21 at the Itek Corporation,6.22


and at RCA,6.23 an IBM system designed for and now in operation at the Army Electronics Command as well as IBM developments in variable frequency lasers,6.24 and a recording system from Kodak that uses fine-grained photographic media, diffraction grating patterns, and laser light sources.6.25 Holographic techniques may also be applied to the development of associative memories with possible analogies to human memory-recall systems.6.26

Kump and Chang (1966) describe a thermostrictive recording mechanism effected on uniaxial Permalloy films by the application of a local stress induced by either a laser or an electron beam, promising large capacity memories of better than 10^6 bits per square inch storage efficiency.6.27 Then there are combined optical and film techniques for digital as well as image or analog recording and storage. Specific examples include IBM photo-chip developments,6.28 thermal recording developments at the NCR research laboratories,6.29 Precision Instrument's UNICON System,6.30 and, in general, the area of photochromic storage technology.

Areas of continuing R & D concern with respect to laser communications possibilities include questions of modulation and transmission,6.31 acquisition and tracking problems,6.32 isolation from atmospheric interference conditions,6.33 and possibilities for controlled atmosphere systems.6.34

Vollmer (1967) notes that an experimental short-range laser communication system has narrow beam width with significant advantage for privacy. In particular, "operation at 9020 A enhances this privacy by virtue of its invisibility." (p. 67.) Some examples of the experimental use of lasers for communications purposes were given in an earlier report in this series,6.35 and it was noted that the most successful ventures to date have been at opposite ends of the distance spectrum.6.36 However, laser scanning techniques combined with other means of communication may offer important gains in high-resolution facsimile transmission. For example, a system developed by CBS Laboratories uses a laser beam to scan photographic film, convert to video signals, and transmit, via satellite, military reconnaissance pictures from Viet Nam to Washington.6.37

Continuing requirements for further developments in the application of lasers in display systems involve, for example, efforts directed toward less expensive high quality semiconductor lasers 6.38 and toward solving problems of deflection, modulation, and focusing.6.39 Kesselman suggests that results to date in terms of laser displays are inconclusive and that practical applications are not likely in the near future (1967, p. 167). As in the case of laser versus electron beam displays,6.40 the absence of requirements for vacuum techniques favors the eventual use of laser rather than electron beam techniques in many high density data storage recording applications.6.41


6.1.2. Photochromic Media and Techniques

By definition, photochromic (or phototropic) compounds exhibit reversible effects or color changes, resulting from exposure to radiant energy in the visible or near visible portions of the spectrum.6.42 Such media give excellent resolution and reduction characteristics, and because of the reversibility property, they can theoretically be erased and rewritten repeatedly,6.43 although a continuing area of R & D concern is that of problems of fatigue.6.44 They also enable storage of images with a wide range of gray scale.6.45 Such materials have been known for a hundred years or more (Smith, 1966). In fact, as Smith suggests, they may have provided the means for achieving the world's first "wrist-watch." 6.46

Tauber and Myers (1962) and Hanlon et al. (1965) offer summaries of NCR efforts to provide commercial applicability to photochromic recording techniques for large-capacity microimage storage files.6.47 A British example of application is the Technical Information on Microfilm Service.6.48

A less favorable characteristic of the photochromic material appears in the case of storage files: the permanency of recording depends on ambient temperature, ranging from only a few hours at normal room temperatures to perhaps several years under rigid temperature controls.6.49 Therefore, for mass and archival storage, the NCR system involves transfer from the photochromic images to a high-resolution photographic emulsion for permanent files.6.50 The remaining advantages are two-fold. First, the reduction is impressive: 1,400 pages of the text of the Bible on an approximately 2" x 2" film chip is the widely demonstrated example (Fig. 2). Secondly, 'spot' erasure and rewriting provides an important inspection and error correction capability. It is claimed that: "Instantaneous imagery followed by immediate inspection permits the production of essentially 'errorless' masters for the first time". (Hanlon et al., 1965, p. 10).
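The scale of the reduction in the NCR demonstration can be checked with rough arithmetic (ours, not the source's; the printed page size is an assumed figure):

```python
# Rough arithmetic behind the NCR demonstration of 1,400 Bible pages on an
# approximately 2" x 2" photochromic film chip: the implied linear
# reduction ratio.  The 5" x 8" printed-page size is an assumption.
import math

chip_area = 2 * 2                 # square inches, from the text
pages = 1400                      # from the text
page_area = 5 * 8                 # assumed 5" x 8" printed page, square inches

area_per_page_on_chip = chip_area / pages
linear_reduction = math.sqrt(page_area / area_per_page_on_chip)
print(f"about {linear_reduction:.0f}:1 linear reduction")
```

On these assumptions the linear reduction works out to roughly 120:1, which is consistent with the microimage reductions discussed elsewhere in this section.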

In the area of internal memory and switching design, Reich and Dorion (1965) report of the photochromic techniques that: "The photochromic medium has extremely large storage capacity latently available in physically small dimensions. The basic photochromic switches are the molecules themselves . . . Photochromic media can be employed for many write-erase-rewrite cycles and give almost nondestructive read . . . Appropriate photochromic systems can retain stored data without power consumption . . . The memory can probably be designed to be stored for quite a long time." (p. 572) 6.51

A photochromic medium in the form of transparent silicate glass containing silver halide particles has been suggested for such applications as erasable memories, displays for air traffic control operations, and optical transmission systems.6.52




FIGURE 2. Photochromic data reduction.

In addition, it is to be noted that photochromic films may be activated by CRT phosphors for use in information display systems (Dorion et al., 1966) 6.53 and may also be used for real-time target tracking.6.54 Recent developments suggest the use of photochromics for digital data storage.6.55

Finally, the properties of photochromic materials might be used for improved performance of holographic recording, reconstruction and display systems.6.56 Thus, "the use of self-developing photochromic devices in the place of the photographic plate would enhance the value of wavefront reconstruction microscope by permitting nearly real-time operation and eliminating the chemical development process." (Leith et al., 1965, p. 156).

6.1.3. Holographic Techniques

Holography is a new information processing technique, but it is, in fact, highly illustrative of needs for truly long-range R & D planning in many areas of computer and information sciences and technology, since it is by no means a recent area of investigation, the principles having been announced by Gabor as early as 1948.6.57 The basic holographic phenomena are described by Cutrona (1965, p. 89) as follows: "A hologram is produced by recording on photographic film the interference pattern, resulting from the illumination of some object with a wavefront from the same source".6.58

Leith et al. (1965, p. 151) point out further that "by combining conventional wavefront reconstruction techniques with interferometry, it has been possible to produce holograms from which high-quality reconstructions can be obtained. These reconstructions bear close likeness to the original object, complete with three-dimensional characteristics . . . The object can be a transparency, or it can be a solid, three-dimensional object".6.59


Armstrong (1965) emphasizes that, in general, no lenses are required and the reconstructed image can be magnified or demagnified as desired.6.60

As of early 1969, however, the question has been raised that holograms may be already outdated.6.61 A new wavefront reconstruction device, the kinoform, is a computer-generated device intended to provide a reconstructed three-dimensional image of various objects with greater efficiency than is available with holographic procedures (Lesem et al., 1969).6.62

In addition to the use of holographic techniques for three-dimensional image storage and recall (including rotation 6.63), these techniques are also being explored for bandwidth compression in pictorial data storage,6.64 the production of highly magnified images,6.65 and other novel applications.6.66 In particular, it has been claimed that holographic techniques offer a new potential for high-quality-image capture in regions of the electromagnetic spectrum extending beyond those that have been achieved by optical recording techniques in the region of visible light.6.67

Then it has been reported that "General Electric's Advanced Development Laboratories, Schenectady, build a laser holograph reader, a device capable of reading characters in several ways. It can detect a single object out of many without scanning, or if scanning is used, can recognize up to 100 different characters. The holographic reader is said to show wide tolerance for variations in type font and is expected to find applications in the computer field." (Veaner, 1966, p. 208.)

Laser and holographic techniques in combination are also being investigated for high density digital data storage, for example, at the Bell Telephone Laboratories,6.68 at Carson Laboratories,6.69 and at IBM.6.70

Some special areas where advanced optoelectronic techniques and improved materials or storage media continue to be needed include "certain operations, such as two-dimensional spatial filtering (that) can be readily accomplished, in principle, with coherent light optics. Problems under consideration include: the effect of film-grain noise on the performance of a coherent optical system; the relation of film thickness and exposure; techniques for the making of spatial filters; and the effect on the reconstructed picture of various operations (such as sampling, quantization, noise addition) upon the hologram." (Quarterly Progress Report No. 80, Research Laboratory for Electronics, M.I.T., 221 (1966).)

McCamy (1967) reports recent extension of previous R & D investigations into fading and aging blemishes in conventional microforms to the effect of formation of such blemishes on information stored by means of holograms. It is to be noted further that certain types of holograms have an important immunity to dust and scratches.6.71

Possibilities for a holographic read-only store are under investigation at IBM (Gamblin, 1968), RCA (Vilkomerson et al., 1968), and at the U.S. Army Electronics Command, Fort Monmouth. In particular, "it is intended that a hologram of a binary data array would constitute the card-like removable media. Upon insertion into the memory read unit, the hologram would continuously focus a real image of the data onto a photodetector matrix. Such an arrangement can permit electronic random access to the information within the array while eliminating the stringent optical requirements on the detectors involved." (Chapman and Fisher, 1967, p. 372).6.72 Other R & D possibilities of interest include experimentation with acoustic 6.73 and computer-generated holograms.6.74 The computer generation of holographic or kinoform recordings is thus another development in this area of advanced technology. For example, digital holograms may be generated by computer simulation of wave fronts that would emanate from particular optical elements arranged in specific geometrical relationships (Hirsch et al., 1968).6.75 An interesting area of investigation is that of computer synthesis of holograms of three-dimensional objects which do not, in fact, physically exist.6.76
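The principle behind such computer-generated recordings can be sketched in a few lines. The following is an illustrative sketch of ours, not any of the cited systems: it computes a one-dimensional "digital hologram" as the interference intensity between a plane reference wave and the spherical wave from a single point object, with all parameters (wavelength, geometry, sampling) chosen hypothetically.

```python
# Illustrative sketch: the film (or computed array) records |R + O|^2,
# so the phase of the object wave is encoded as a real fringe pattern.
import cmath
import math

WAVELENGTH = 633e-9           # He-Ne laser line, metres (assumed)
K = 2 * math.pi / WAVELENGTH  # wavenumber

def hologram_intensity(x, object_z=0.1, ref_angle=0.02):
    """Interference intensity at position x on the hologram plane."""
    # Plane reference wave arriving at angle ref_angle (radians).
    reference = cmath.exp(1j * K * x * math.sin(ref_angle))
    # Spherical wave from a point object object_z metres behind the
    # plane (paraxial/Fresnel approximation to the path length).
    object_wave = cmath.exp(1j * K * (object_z + x * x / (2 * object_z)))
    return abs(reference + object_wave) ** 2

# Sample the fringe pattern across a 1 mm aperture at 1-micron steps.
samples = [hologram_intensity(i * 1e-6) for i in range(1000)]
print(min(samples), max(samples))
```

Since both waves have unit amplitude, the recorded intensity is real and lies between 0 and 4; it is this non-negative pattern that a plotter or film recorder would then write out as the synthetic hologram.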

6.1.4. Other Optoelectronic Considerations

In general, it is emphasized that increasing interest has been evident in the use of optoelectronic techniques for both computer and memory design for a wide variety of reasons. Scarrott (1965) and Chapman and Fisher (1967) point to the high densities achievable with photographic media.6.77 Reich and Dorion (1965) suggest a photochromic film memory plane, 2" x 2", with 645 subarrays individually accessible and a total capacity, assuming only 50 percent utilization of the film area, of better than 12 million bits.6.78 Potentially, then, many of these techniques promise significant advances in data storage, in logic and processing circuitry, in alternative communications means, in computing or access speed, and in data collection with respect to two- and three-dimensional object representation, including spatial filtering.6.79
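A back-of-envelope check (the arithmetic is ours; the figures are those quoted from Reich and Dorion) shows what these plane parameters imply:

```python
# Implied densities for the Reich and Dorion photochromic film plane:
# a 2" x 2" plane, 645 individually accessible subarrays, 50 percent
# utilization of the film area, total capacity of 12 million bits.
plane_area_sq_in = 2 * 2            # 2" x 2" film plane
utilization = 0.50                  # only half the film area carries data
total_bits = 12_000_000             # "better than 12 million bits"

usable_area = plane_area_sq_in * utilization
bits_per_usable_sq_in = total_bits / usable_area
bits_per_subarray = total_bits / 645

print(f"{bits_per_usable_sq_in:,.0f} bits per usable square inch")
print(f"{bits_per_subarray:,.0f} bits per subarray")
```

The implied 6 million bits per usable square inch sits well above the 10^6 bits per square inch cited for the thermostrictive films of Section 6.1.1, which is precisely why such planes were regarded as promising.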

Bonin and Baird (1965, p. 100) list other applications of optoelectronic techniques for tape and card readers, position indicators, and recognition equipment. In addition, important new areas will include use in communication and transmission systems. Here it is noted that optical techniques as applied to advanced communication systems planning relate also to continuing theoretical investigations. Thus, "preliminary studies of communication systems employing optical frequencies have indicated three topics to which the concepts and techniques of modern communication theory may most profitably be addressed. They are (i) the import of quantum electrodynamics for the characteristics of efficient communication systems, (ii) the relevant description of device noise as it affects the performance of communication systems, and (iii) the statistical characterization of the atmosphere as a propagation channel at optical frequencies." (Quarterly Progress Report No. 80, Research Laboratory for Electronics, M.I.T. 178 (1966).)

For use as computer logical elements, somewhat less attention has been paid to date to the optoelectronic techniques. However, Ring et al. (1965, p. 33) point out that "it well may be that optical devices which do not appear at all suitable as binary computer elements may be very effective computing devices in the context of some other logic structure (e.g., majority logic, multivalued logics)."

Reimann adds: "With the advent of the laser, efficient light-emitting diodes, and high-speed photodetectors, interest in the application of higher speed opto-electronic circuits to digital logic has increased. . . . We may in the future expect to see opto-electronic circuits which will combine laser amplifiers with other high-speed semiconductor devices." (1965, p. 248).

Then we note that optoelectronic techniques may also be used to attack some of the problems that increasingly plague the circuit designer.6.80 Possibilities for circumventing interconnection limitations which become more severe as physical area per component is reduced are also stressed by Reimann (1965, p. 247). He states: "The possibility of signal connection between parts of the system without electrical or actual physical contacts are very attractive for integrated circuit techniques. With optical signals, a totally new approach to the interconnection of digital devices is possible."

Optoelectronic techniques as applied to the problems of large, inexpensive memories are not only promising as such,6.81 they also may be used to attack the noise problems still posed by integrated circuits.6.82 Thus Merryman says "one attractive property of optoelectronic devices is their potential for isolation; they can get rid of the noise that is generated when two subsystems are coupled. The noise problem is even tougher in integrated circuit systems, because the transformers used in traditional methods of isolation are too bulky." (1965, p. 52).

6.2. Batch Fabrication and Integrated Circuits

In very recent years, it has been claimed that integrated circuitry is the most significant advance in computer hardware technology since the introduction of the transistor; 6.83 that it will bring important changes in the size, cost, reliability and speed of system design components,6.84 and that advanced high-speed techniques paradoxically also indicate eventual lower costs.6.85

Many potential advantages of increased usage of LSI techniques are cited in the literature. These include, for example, applications in improved central processor unit speed or capacity performance, in system control and reliability, and in content-addressable (associative) memory construction and operation.6.86 Wilkes suggests that parallelism achieved by use of these techniques may overcome present-day deficiencies of processing systems in such applications as pattern recognition.

In terms of relatively recent R & D literature, Minnick (1967) provides a review of microcellular research, with emphasis upon techniques useful for batch-fabricated circuit design; Bilous et al. (1966) discuss IBM developments of large scale integration techniques to form monolithic circuit arrays, where on only nine chips it was possible to replicate a specific System/360 computer model; and, under RADC auspices, Savitt et al. (1967) have explored both language development and advanced machine organization concepts in terms of large scale integration (LSI) fabrication techniques.6.88 That is, in general, where integrated circuits based on etched circuit board techniques had replaced discrete components, the LSI techniques of fabrication produce sheets of integrated logic components as units.6.89

To what extent do integrated fabrication techniques hold promise for future developments in very large yet inexpensive memories? Rajchman suggests that "the dominance of non-integrated memories is likely to be finally broken or at least seriously challenged by integrated memories, of which the laminated-ferrite-diode and the superconductive-thin-sheet-cryotron memories are promising examples." (Rajchman, 1965, p. 128.) And, further, that "it appears certain that energetic efforts will continue to be devoted towards integrated technologies for larger and less costly memories, as this is still the single most important hardware improvement possible in the computer art." (Rajchman, 1965, p. 128.) Other advocates include Gross,6.90 Hudson,6.91 Van Dam and Michener,6.92 Pyke,6.93 and Conway and Spandorfer.6.94

Hobbs says of silicon-on-sapphire circuits that their fabrication is suitable for large arrays and that they are indeed "promising, but presently being pursued by only one company." (1966, p. 38.) Of active thin-film circuits, he concludes: "Potentially cheaper and easier to fabricate very large arrays. Feasibility is not proven and utilization much further away." (Hobbs, 1966, p. 38.) The same reviewer continues: "Costs are expected to range between 3 and 5 cents per circuit in large interconnected circuit arrays . . . However, the ability to achieve these costs is dependent upon the use of large interconnected arrays of circuits and, hence, upon the computer industry's ability to develop logical design and machine organization techniques permitting and utilizing such arrays." (Hobbs, 1966, p. 39.)

Continuing R & D problems in terms of LSI technology include those of packaging design,6.95 error detection and correction with respect to malfunctioning components,6.96 the proper balance to be achieved between flexibility, redundancy, and maintenance or monitoring procedures, and questions of segmentation or differentiation of functional logic types.6.97 One example of many special problems is reported by Kohn as follows: "In all batch fabricated memories, the problem of unrepairable element failures is predominant . . . It is an open question how complex and expensive the additional electronic circuits will be, which will disconnect the defective elements and connect the spare ones." (Kohn, 1965, p. 132.) On the other hand, special advantages of LSI techniques for self-diagnosis and self-repair have been claimed.6.98

6.3. Advanced Data Storage Developments

In the area of advanced hardware, the prospects for much larger, much faster, and more versatile storage systems must of course be a major R & D consideration. Current technological advances indicate the desirability of increasing use of integrated construction methods using ferrite aperture plates, thin films, laminated-diode combinations, field-effect transistors, and superconductive thin film systems, among other recent developments.6.99 For another example, possible applications of echo resonance techniques for microwave pulse delay lines that would be suitable for high-speed memories are being explored at the Lockheed Palo Alto Research Laboratory. (Kaplan and Kooi, 1966).

Advanced hardware developments for improved data storage emphasize both higher speeds of access and readout and larger capacities at higher densities of storage. There are the small capacity, ultra-high-speed memories of the read-only, scratchpad, and associative type. These typically supplement significantly larger capacity and slower speed "main memories". Next, there are continuing prospects for high density, very large capacity stores.

There is finally the question of R & D requirements in the area where the development of "artificial" memories is designed to replicate, so far as possible, known neurophysiological phenomena. For example, Borsellino and his colleagues at the University of Genoa are studying physical-chemical simulation, such as collagen "memories", in terms of possible mechanisms of axon action, connectivity of pulses, and currents through membranes. (Stevens, 1968, p. 31).

We may thus conclude with Licklider that "insofar as memory media are concerned, current research and development present many possibilities. The most immediate prospects advanced for primary memories are thin magnetic films, coated wires, and cryogenic films. For the next echelons, there are magnetic disks and photographic films and plates. Farther distant are thermoplastics and photosensitive crystals. Still farther away, almost wholly speculative, are protein molecules and other quasi-living structures." (Licklider, 1965, pp. 63-64).

6.3.1. Main Memories

Questions of advanced technological developments related to data and program information storage and recall concern first of all the problems of "main memory", that is, the preloaded, immediately accessible, information-recording space allocated at any one time to necessary system supervision and control, to user(s) programs and data, and to temporary work space requirements.

It is to be noted that "this 'main' memory size is related to the processing rate; the faster the arithmetic and logic units, the faster and larger the memory must be to keep the machine busy, or to enable it to solve problems without waiting for data." (Hoagland, 1965, p. 53).

Further, "this incompatibility between logic and memory speeds has led to increased parallel operation in processors and more complex instructions as an attempt to increase overall system capability." (Pyke, 1967, p. 161).

As of current technology, main memories are still usually magnetic core, with typical capacities of a million bits and cycle times of about one microsecond.6.100 One relatively recent exception is the NCR Rod Memory Computer, which is claimed to have "about the fastest main memory cycle time of any commercial computer yet delivered: 800 nanoseconds." (Data Processing Mag. 7, No. 11, 12 (Nov. 1965).) This is a thin-film memory, constructed from beryllium-copper wires plated with magnetic coating.6.101

Petschauer lists the following trends which may be expected in magnetic memory developments in the near future:

"1. Trend toward simple cell structures, 2 or 3 wire arrays.

"2. More automated assembly and conductor termination of batch-fabricated arrays.

"3. More fully automated plane testing.

"4. More standardization.

"5. Extended use of integrated or hybrid circuits.

"6. Improved methods of packaging for stack and stack interface circuits to reduce packaging and assembly costs.

"7. Reduced physical size." (Petschauer, 1967, p. 599).

With respect to current prospects for much larger, much faster main memories, Rajchman (1965) reviews possibilities for integrated construction methods using ferrite aperture plates, thin films, laminate-diode combinations, field-effect transistors, and superconductive thin film cryotrons.6.102 It is noted further that "planar magnetic film memories offer many advantages for applications as main computer storage units in the capacity range of 200K to 5M bits." (Simkins, 1967, p. 593), and that "perhaps the most significant system advantage available to users of plated magnetic cylindrical thin film memory elements is a nondestructive readout capability. For main memory use, NDRO with equal Read-Write drive currents is most advantageous. It allows the greatest possible flexibility of organization and operation. For maximum economy, many memory words (or bytes) may be accessed by a single word drive line without need for more than one set of sense amplifiers and bit current drivers. The set contains only the number of amplifiers needed to process the bits of one word (or byte) in parallel." (Fedde, 1967, p. 595). Simpson (1968) discusses the thin film memory developed at Texas Instruments.6.103

Nevertheless, the number of known storage elements capable of matching ultrafast processing and control cycle times (100 nanoseconds or less) is relatively small,6.104 and there are many difficulties to be encountered in currently available advanced techniques.6.105 Some specific R & D requirements indicated in the literature include materials research to lower the high voltages presently required for light-switching in optically addressed memories (Kohn, 1965),6.106 attacks on noise problems in integrated circuit techniques (Merryman, 1965),6.107 and the provision of built-in redundancy against element failures encountered in batch fabrication techniques (Kohn, 1965). In the case of cryotrons used for memory design, Rajchman (1965) notes that the "cost and relative inconvenience of the necessary cooling equipment is justified only for extremely large storage capacities" (p. 126), such as those extending beyond 10 million bits, and Van Dam and Michener (1967) concur.6.108 Considerations of "break-even" economics with respect to cryogenic-element memories, balancing high density storage and high speed access against the "cooling" costs, have set a minimum random-access memory requirement of 10^7 bits.6.109 As of 1967-68, however, practical realizations of such techniques have been largely limited to small-scale, special-purpose auxiliary and content-addressable memories, to be discussed next.
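The break-even reasoning can be made concrete with a minimal cost model. This is a sketch of ours, with all dollar figures hypothetical, chosen only so that the crossover lands near the 10^7-bit threshold cited above: a cryogenic memory trades a large fixed cooling cost for a lower incremental cost per bit, and therefore pays off only above some capacity.

```python
# Break-even capacity where fixed_cryo + c_cryo * n == c_conv * n.
# All costs are invented for illustration, not taken from the source.
def system_cost(bits, fixed_cost, cost_per_bit):
    return fixed_cost + cost_per_bit * bits

CONVENTIONAL = dict(fixed_cost=0.0, cost_per_bit=0.02)      # hypothetical
CRYOGENIC = dict(fixed_cost=150_000.0, cost_per_bit=0.005)  # hypothetical

break_even_bits = CRYOGENIC["fixed_cost"] / (
    CONVENTIONAL["cost_per_bit"] - CRYOGENIC["cost_per_bit"]
)
print(f"break-even near {break_even_bits:,.0f} bits")
```

With these assumed figures the crossover falls at 10 million bits; above that capacity the cryogenic system is the cheaper of the two, which is the shape of the argument Rajchman and later reviewers make.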

6.3.2. High-Speed, Special-Purpose, and Associative or Content-Addressable Memories

Small, high-speed, special-purpose memories have been used as adjuncts to main memories in computer design for some years.6.110 One major purpose is to provide increased speed of instruction access or address translation, or both. The "read-only-stores" (ROS) in particular represent relatively recent advances in "firmware," or built-in microprogramming.6.111

It is noted that "the mode of implementing ROM's spans the art, from capacitor and resistor arrays and magnetic core ropes and snakes to selectively deposited magnetic film arrays." (Nisenoff, 1966, p. 1826.) An Israeli entry involves a two-level memory system with a microprogrammed "Read Only Store" having an access time of 400 nanoseconds. (Dreyer, 1968.) A variation for instruction-access processes is the MYRA (MYRiad Aperture) ferrite disk described by Briley (1965). This, when accessed, produces pulses in sequential trains on 64 or more wires. A macro instruction is addressed to an element in the MYRA memory which then produces gating signals for the arithmetic unit and signals for fetching both operands and the next macro instructions. Further, "Picoinstructions are stored at constant radii upon a MYRA disk, in the proper order to perform the desired task. The advantages of the MYRA element are that the picoinstructions are automatically accessed in sequence . . ." 6.112 Holographic ROM possibilities are also under consideration.6.113
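The control idea common to these read-only stores can be sketched schematically. The sketch below is ours, not Briley's design: each macro instruction addresses a fixed entry in a ROM, which then emits its stored train of micro-orders (gating signals) in sequence; the opcode names and micro-orders are invented for illustration.

```python
# Schematic microprogrammed control: a read-only table maps each macro
# instruction to its fixed, ordered train of micro-orders.
MICROPROGRAM_ROM = {
    "ADD": ["fetch_operand_a", "fetch_operand_b", "gate_adder", "store_result"],
    "LOAD": ["compute_address", "read_memory", "gate_register"],
}

def execute(macro_instruction):
    """Play back the micro-order train for one macro instruction."""
    signals = []
    for micro_order in MICROPROGRAM_ROM[macro_instruction]:
        signals.append(micro_order)  # in hardware: pulse the control line
    return signals

print(execute("ADD"))
```

Because the train is fixed and ordered, the control unit never computes a next-microinstruction address; it simply reads the orders off in sequence, which is the advantage claimed for the MYRA element above.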

In the area of associative, or content-addressable, memories,6.114 advanced hardware developments to date have largely been involved in processor design and provision of small-scale auxiliary or "scratchpad" memories rather than for massive selection-retrieval and data bank management applications.6.115 "Scratchpad" memories, also referred to as "slave" memories, e.g., by Wilkes (1965),6.116 are defined by Gluck (1965) as "small uniform access memories with access and cycle times matched to the clock of the logic." They are used for such purposes as reducing instruction-access time, for microprogramming, for buffering of instructions or data that are transferable in small blocks (as in the "four-fetch" design of the B 8500),6.117 for storage of intermediate results, as table lookup devices,6.118 as index registers and, to a limited extent, for content addressing.6.119

Another example is the modified "interactive" cell assembly design of content-addressable memory where entries are to be retrieved by coincidence of a part of an input or query pattern with a part of stored reference patterns, including other variations on particular match operations (Gaines and Lee, 1965).6.120 In addition, we note developments with respect to a solenoid array 6.121 and stacks of plastic card resistor arrays,6.122 both usable for associative memory purposes; the GAP (Goodyear Associative Processor),6.123 the APP (Associative Parallel Processor) described by Fuller and Bird (1965),6.124 the ASP (Association-Storing Processor) machine organization,6.125 and various approaches which compromise somewhat on speed, including bit- rather than word-parallel searching 6.126 or the use of circulating memories such as glass delay lines.6.127
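The masked partial-match operation underlying these associative devices can be suggested in a few lines of present-day code. This is a purely illustrative sketch (the function and data are invented for the purpose, and real content-addressable hardware interrogates all stored words simultaneously rather than in a loop):

```python
def associative_search(memory, query, mask):
    """Return indices of stored words whose bits coincide with the
    query pattern at every position selected by the mask, i.e. a
    masked match over the whole store."""
    return [i for i, word in enumerate(memory)
            if (word ^ query) & mask == 0]

# 8-bit words; retrieve by coincidence of the high four bits only.
store = [0b10100001, 0b10101111, 0b01100001, 0b10100000]
hits = associative_search(store, query=0b10100000, mask=0b11110000)
# hits -> [0, 1, 3]: every word whose high nibble is 1010
```

The XOR-and-mask test is the software analogue of the bit-parallel coincidence detection performed, word by word, in the hardware schemes surveyed above.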

Cryogenic approaches to the hardware realization of associative memory concepts have been under investigation since at least the mid-1950's (Slade and McMahon, 1957), while McDermid and Peterson (1961) report work on a magnetic core technique as of 1960. However, the technology for developing high-speed reactivity in these special-purpose memories has been advanced in the past few years. On the basis of experimental demonstration, at least, there have been significant advances with respect to parallel-processing, associative-addressing, internal but auxiliary techniques in the form of memories built into some of the recently developed large computer systems.6.128

The actual incorporation of such devices, even if of somewhat limited scale, in operational computer system designs is of considerable interest, whether of 25- or 250-nanosecond performance. For example, Ammon and Neitzert report RCA experiments that "show the feasibility of a 256-word scratchpad memory with an access time of 30 nanoseconds . . . The read/write cycle time, however, will still be limited by the amplifier recovery so that with the best transistors available it appears that 60 nanoseconds are required". (1965, p. 659). RCA developments also include a sonic film memory in which thin magnetic films and scanning strain waves are combined for serial storage of digital information.6.129

Crawford et al. (1965) have claimed that an IBM tunnel diode memory of 64 48-bit words and a read/restore or clear/write cycle time of less than 25 nanoseconds was "the first complete memory system using any type of technology reported in this size and speed range". (p. 636).6.130 Then there is an IBM development of a read-only, deposited magnetic film memory, having high-speed read capability (i.e., 19-nanosecond access time) and promising economics because the technique is amenable to batch fabrication.6.131 (Matick et al., 1966).

Catt and associates of Motorola describe "an integrated circuit memory containing 64 words of 8 bits per word, which is compatible in respect to both speed and signal level with high-speed current-mode gates. The memory has a nondestructive read cycle of 17 nanoseconds and a write cycle of 10 nanoseconds without cycle overlap." (Catt et al., 1966, p. 315).6.132 Anacker et al. (1966) discuss 1,000-bit film memories with 30-nanosecond access times.6.133 Kohn et al. (1967) have investigated a 140,000-bit, nondestructive read-out magnetic film memory that can be read with a 20-nanosecond read cycle time, a 30-nanosecond access time, and a 65-nanosecond write time. More recently, IBM has announced a semiconductor memory with 40-nanosecond access.6.134

Memories of this type that are of somewhat larger capacity but somewhat less speed (in the 100-500 nanosecond range) are exemplified by such commercially-announced developments as those of Electronic Memories,6.135 Computer Control Company,6.136 and IBM.6.137 Thus, Werner et al. (1967) describe a 110-nanosecond ferrite core memory with a word capacity of 8,192 words,6.138 while Pugh et al. (1967) report other IBM developments involving a 120-nanosecond film memory of 600,000-bit capacity. McCallister and Chong (1966) describe an experimental plated wire memory system of 150,000-bit capacity with a 500-nanosecond cycle time and a 300-nanosecond access time, developed at UNIVAC.6.139 Another UNIVAC development involves planar thin films.6.140 A 16,384-word, 52-bit, planar film memory with half-microsecond or less (350-nanosecond) cycle time, under development at Burroughs laboratories for some years, has been described by Bittman (1964).6.141 Other recent developments have been discussed by Seitzer (1967) 6.142 and Raffel et al. (1968),6.143 among others.

For million-bit and higher capacities, recent IBM investigations have been directed toward the use of "chain magnetic film storage elements" 6.144 in both DRO and NDRO storage systems with 500-nanosecond cycle times.6.145 It is noted, however, that "a considerable amount of development work is still required to establish the handling, assembly, and packaging techniques." (Abbas et al., 1967, p. 311).

A plated wire random access memory is under development by UNIVAC for the Rome Air Development Center. "The basic memory module consists of 10⁷ bits; the mechanical package can hold 10 modules. The potential speed is a 1-to-2 microsecond word rate. . . . Ease of fabrication has been emphasized in the memory stack design. These factors, together with the low plated wire element cost, make an inexpensive mass plated wire store a distinct possibility." (Chong et al., 1967, p. 363).6.146 RADC's interests in associative processing are also reflected in contracts with Goodyear Aerospace Corp., Akron, Ohio, for investigation and experimental fabrication of associative memories and processors. (See, for example, Gall, 1966).

6.3.3. High-Density Data Recording and Storage Techniques

Another important field of investigation with respect to advanced data recording, processing, and storage techniques is that of further development of high-density data recording media and methods and bulk storage techniques, including block-oriented random access memories.6.147 Magnetic techniques (cores, tapes, and cards) continue to be pushed toward multimillion-bit capacities.6.148 A single-wall domain magnetic memory system has recently been patented by Bell Telephone Laboratories.6.149 In terms of R & D requirements for these techniques, further development of magnetic heads, recording media, and means for track location has been indicated,6.150 as is also the case for electron or laser beam recording techniques.6.151 Videotape developments are also to be noted.6.152

In addition to the examples of laser, holographic, and photochromic technologies applied to high-density data recording previously given, we may note some of the other advanced techniques that are being developed for large-capacity, compact storage. These developments include the materials and media as well as techniques for recording with light, heat, electrons, and laser beams. In particular, "a tremendous amount of research work is being undertaken in the area of photosensitive materials. Part of this has been sparked by the acute shortage of silver for conventional films and papers. In October, more than 800 people attended a symposium in Washington, D.C., on Unconventional Photographic Systems. Progress was described in a number of areas, including deformable films, electrophotography, photochromic systems, unconventional silver systems, and photopolymers." (Hartsuch, 1968, p. 56).

Examples include the General Electric Photocharge,6.153 the IBM Photo-Digital system,6.154 the UNICON mass memory,6.155 a system announced by Foto-Mem Inc.,6.156 and the use of thin dielectric films at Hughes Research Laboratories.6.157 At Stanford Research Institute, a program for the U.S. Army Electronics Command is concerned with investigations of high-density arrays of micron-size storage elements, which are addressed by electron beam. The goal is a digital storage density of 10⁸ bits per square centimeter.6.158

Still another development is the NCR heat-mode recording technique. (Carlson and Ives, 1968). This involves the use of relatively low power CW lasers to achieve real-time, high-resolution (150:1) recording on a variety of thin films on suitable substrates.6.159 In particular, microimage recordings can be achieved directly from electronic character-generation devices.6.160 Newberry of General Electric has described an electron optical data storage technique involving a 'fly's eye' lens system for which "a packing density of 10⁸ bits per square inch has already been demonstrated with 1 micron beam diameter." (1966, pp. 727-728).

Then there is a new recording-coding system, from Kodak, that uses fine-grained photographic media, diffraction grating patterns, and laser light sources.6.161 As a final example of recent recording developments we note that Gross (1967) has described a variety of investigations at Ampex, including color video recordings on magnetic film plated discs, silver halide film for both digital and analog recordings, and use of magneto-optic effects for reading digital recordings.6.162

Areas where continuing R & D efforts appear to be indicated include questions of read-out from highly compact data storage,6.163 of vacuum equipment in the case of electron beam recording,6.164 and of noise in some of the reversible media.6.165 Then it is noted that "at present it is not at all clear what compromises between direct image recording and holographic image recording will best preserve high information density with adequate redundancy, but the subject is one that attracts considerable research interest." (Smith, 1966, p. 1298).

Materials and media for storage are also subjects of continuing R & D concern in both the achievement of higher packing densities with fast direct access and in the exploration of prospects for storage of multivalued data at a single physical location. For example: "A frontal attack on new materials for storage is crucial if we are to use the inherent capability of the transducers now at our disposal to write and read more than 1 bit of data at 1 location . . .

"One novel approach for a multilevel photographic store now being studied is the use of color photography techniques to achieve multibit storage at each physical memory location . . . Color film can store multilevels at the same point because both intensity and frequency can be detected." (Hoagland, 1965, p. 58).

"An experimental device which changes the color of a laser beam at electronic speeds has been developed . . . IBM scientists believe it could lead to the development of color-coded computer memories with up to a hundred million bits of information stored on one square inch of photographic film." (Commun. ACM 9, 707 (1966).)

Such components and materials would have extremely high density, high resolution characteristics. One example of intriguing technical possibilities is reported by Fleisher et al. (1965) in terms of a standing-wave, read-only memory where n color sources might provide n information bits, one for each color, at each storage location.6.166 These authors claim that an apparently unique feature of this memory would be a capability for storing both digital and analog (video) information 6.167 and that parallel word selection, accomplished by fiber-optic light splitting or other means, would be useful in associative selection and retrieval.6.168

7. Debugging, On-Line Diagnosis, Instrumentation, and Problems of Simulation

Beyond the problems of initial design of information processing systems are those involved in the provision of suitable and effective debugging, self-monitoring, self-diagnosis, and self-repair facilities in such systems. Overall system design R & D requirements are, finally, epitomized in increased concern over the needs for on-line instrumentation, simulation, and formal modelling of information flows and information handling processes, and with the difficulties so far encountered in achieving solutions to these problems. In turn, many of these problems are precisely involved in questions of systems evaluation.

It has been cogently suggested that the area of aids to debugging "has been given more lip service and less attention than any other" 7.1 in considerations of information processing systems design.


Special, continuing R & D requirements are raised in the situations, first, of checking out very large programs, and secondly, of carrying out checkout operations under multiple-access, effectually on-line, conditions.7.2 In particular, the checkout of very large programs presents special problems.7.3

7.1. Debugging Problems

Program checkout and debugging are also problems of increasing severity in terms of multiple-access systems. Head states that "testing of many non-real-time systems, even large ones, has all too often been ill-planned and haphazard, with numerous errors discovered only after cutover. . . . In most real-time systems, the prevalence of errors after cutover, any one of which could force the system to go down, is intolerable." (1963, p. 41.) Bernstein and Owens (1968) suggest that conventional debugging tools are almost worthless in the time-sharing situation and propose requirements for an improved debugging support system.7.4

On-line debugging provides particular challenges to the user, the programmer, and the system designer.7.5 It is important that the console provide versatile means of accomplishing system and program self-diagnosis, to determine what instruction caused a hang-up, to inspect appropriate registers in a conflict situation, and to display anticipated results of a next instruction before it is executed. A major consideration is the ability to provide interpretation and substitution of instructions, with traps, from the console. A recent system for on-line debugging, EXDAMS (EXtendable Debugging and Monitoring System), is described by Balzer (1969).7.6

Aids to debugging and performance evaluation provided by a specific system design should therefore include versatile features for address traps, instruction traps, and other traps specified by the programmer. For example, if SIMSCRIPT programs are to be run, a serious debugging problem arises because of the dynamic storage allocation situation, where the client needs to find out where he is and provide dynamic dumping, e.g., by panel interrupt without halting the machine. Programmers checking out a complex program need an interrupt-and-trap-to-a-fixed-location system, the ability to bounce out of a conflict without being trapped in a halt, to jump if a program accesses a particular address, to take special action if a data channel is tied up for expected input not yet received, or to jump somewhere else on a given breakpoint and then come back to the scheduled address, e.g., on emergence of an overflow condition.7.7
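The flavor of an address trap, "jump if a program accesses a particular address," can be conveyed by a small software watchpoint in a present-day language. This is a hypothetical sketch (all names invented), not a description of any console facility cited here:

```python
import sys

def watch(var_name, on_change):
    """Build a trace function that 'traps' whenever the named local
    variable changes value; a change is detected at the next traced
    event (the following line, or the frame's return)."""
    last = {}
    def tracer(frame, event, arg):
        if event in ("line", "return"):
            if var_name in frame.f_locals:
                current = frame.f_locals[var_name]
                if "v" in last and current != last["v"]:
                    on_change(frame.f_lineno, last["v"], current)
                last["v"] = current
        return tracer
    return tracer

traps = []

def program():
    x = 0
    x = 1       # change from 0 detected at the next traced line
    y = x + 1
    x = y * 2   # change detected when the frame returns

sys.settrace(watch("x", lambda line, old, new: traps.append((old, new))))
program()
sys.settrace(None)
# traps -> [(0, 1), (1, 4)]
```

Unlike the hardware traps discussed above, this interpretive approach slows the subject program considerably, which is precisely why the literature of the period presses for trap support in the system design itself.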

Problems of effective debugging, diagnostic, and simulation languages are necessarily raised.7.8 For example, McCarthy et al. report: "In our opinion the reduction in debugging time made possible by good typewriter debugging languages and adequate access to the machine is comparable to that provided by the use of ALGOL type languages for numerical calculation." (McCarthy et al., 1963, p. 55). Still another debugging and diagnostic R & D requirement is raised with respect to reconfigurations of available installations and tentative evaluations of the likely success of the substitution of one configuration for another.7.9

In at least one case, a combined hardware-software approach has been used to tackle another special problem of time-shared, multiple-user systems, that of machine maintenance with minimum interference to ongoing client programs. The STROBES technique (for Shared-time repair of big electronic systems) has been developed at the Computation Center of the Carnegie Institute of Technology.7.10 This type of development is of significance because, as Schwartz and his co-authors report (1965, p. 16): "Unlike more traditional systems, a time-sharing system cannot stop and start over when a hardware error occurs. During time-sharing, the error must be analyzed, corrected if possible, and the user or users affected must be notified. For all those users not affected, no significant interruption should take place."

7.2. On-Line Diagnosis and Instrumentation

Self-diagnosis is an important area of R & D concern with respect both to the design and the utilization of computer systems.7.11 In terms of potentials for automatic machine self-repair, it is noted that "a self-diagnosable computer is a computer which has the capabilities of automatically detecting and isolating a fault (within itself) to a small number of replaceable modules." (Forbes et al., 1965, p. 1073).7.12 To what extent can the machine itself be used to generate its own programs and procedures? Forbes et al. suggest that: "If the theory of self-diagnosing computers is to become practical for a family of machines, further study and development of machine generation of diagnostic procedures is necessary." (1965, p. 1085).

Several different on-line instrumentation* techniques have been experimentally investigated by Estrin and associates (1967), by Hoffman (1965), Scherr (1965), and by Sutherland (1965), among others.7.13 Monitoring systems for hardware, software, or both are described, for example, by Avižienis (1967, 1968),7.14 Jacoby (1959),7.15 and Wetherfield (1966),7.16 while a monitoring system for the multiplexing of slow-speed peripheral equipment at the Commonwealth Scientific and Industrial Research Organization in Australia is described by Abraham et al. (1966). Moulton and Muller (1967) describe DITRAN (Diagnostic FORTRAN), a compiler with extensive error checking capabilities that can be applied both at compilation time and during program execution, and Whiteman (1966) discusses "computer hypochondria".7.17

Fine et al. (1966) have developed an interpreter program to analyze running programs with respect to determining sequences of instructions between page calls, page demands by time intervals, and page demands by programs. In relatively early work in this area, Licklider and Clark report that "Program Graph and Memory Course are but two of many possible schemes for displaying the internal processes of the computer. We are working on others that combine graphical presentation with symbolic representation . . . By combining graphical with symbolic presentation, and putting the mode of combination under the operator's control via light pen, we hope to achieve both good speed and good discrimination of detailed information." (1962, p. 120). However, Sutherland comments that: "The information processing industry is uniquely wanting in good instrumentation; every other industry has meters, gauges, magnifiers: instruments to measure and record the performance of machines appropriate to that industry." (Sutherland, 1965, p. 12). More effective on-line instrumentation techniques are thus urgently required, especially for the multiple-access processing system.

*"Instrumentation" in this context means diagnostic and monitoring procedures which are applied to operating programs in a "subject" computer as they are being executed, in order to assemble records of workload, system utilization, and other similar data.

Huskey supports the contentions of Sutherland and of Amdahl that: "Much more instrumentation of on-line systems is needed so that we know what is going on, what the typical user does, and what the variations are from the norms. It is only with this information that systems can be 'trimmed' so as to optimize usefulness to the customer array." (Huskey, 1965, p. 141).

Sutherland in particular points out that plots of the times spent by a program in doing various subtasks can be used to tighten up frequently used program and subroutine loops and thus save significant amounts of processor running-time costs.7.18 He also refers to a system developed by Kinslow which gave a pictorial representation of "which parts of memory were 'occupied' as a function of time for his time-sharing system. The result shows clearly the small spaces which develop in memory and must remain unused because no program is short enough to fit into them." (Sutherland, 1965, p. 13). In general, it is hoped that such on-line instrumentation techniques will bring about better understanding of the interactions of programs and data within the processing system.7.19
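A latter-day illustration of the measurement Sutherland calls for: a few lines suffice to tally how long a running program spends in each subroutine. The sketch below is an invented example built on a standard per-call profiling hook, not one of the instrumentation systems cited in this section:

```python
import sys
import time
from collections import defaultdict

time_in = defaultdict(float)   # cumulative (inclusive) seconds per function
entered = {}                   # frame id -> entry timestamp

def profiler(frame, event, arg):
    # 'call' and 'return' fire for every Python function invocation.
    if event == "call":
        entered[id(frame)] = time.perf_counter()
    elif event == "return":
        t0 = entered.pop(id(frame), None)
        if t0 is not None:
            time_in[frame.f_code.co_name] += time.perf_counter() - t0

def busy_loop():
    return sum(range(100_000))

def workload():
    for _ in range(3):
        busy_loop()

sys.setprofile(profiler)
workload()
sys.setprofile(None)
# time_in now maps 'workload' and 'busy_loop' to the time spent in
# each: the raw material for the subtask plots Sutherland describes
```

The tallies are inclusive (a caller's time includes its callees'), which is the natural form for spotting the "frequently used loops" worth tightening.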

Improved techniques for the systematic analysis of multiple-access systems are also needed. As Brown points out: "The feasibility of time-sharing depends quite strongly upon not only the time-sharing procedures, but also upon . . . the following properties, characteristic of each program when it is run alone:

(1) The percentage of time actually required for execution of the program . . .

(2) The spectrum of delay times during which the program awaits a human response . . .

(3) A spectrum of program execution burst lengths . . .

A direct measurement of these properties is difficult; a reasonable estimate of them is important, however, in determining the time-sharing feasibility of any given program." (1965, p. 82). However, most of the analyses implied are significantly lacking to date, although some examples of benefits to be anticipated are given by Cantrell and Ellison (1968) and by Campbell and Heffner (1968).

Schwartz et al. emphasize that "another researchable area of importance to proper design is the mathematical analysis of time-shared computer operation. The object in such an analysis is to provide solutions to problems of determining the user capacity of a given system, the optimum values for the scheduling parameters (such as quantum size) to be used by the executive system and, in general, the most efficient techniques for sequencing the object programs." (Schwartz et al., 1965, p. 21).

Continuing, they point to the use of simulation techniques as an alternative: "Because of the large number of random variables, many of which are interdependent, that must be taken into account in a completely general treatment of time-sharing operation, one cannot expect to proceed very far with analyses of the above nature. Thus, it seems clear that simulation must also be used to study time-shared computer operation." (Schwartz et al., 1965, p. 21). A 1967 review by Borko reaches similar conclusions.7.20
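The quantum-size question raised by Schwartz et al. lends itself to exactly this kind of study. The toy simulation below, an invented sketch rather than any model from the literature, runs a round-robin schedule over a set of job service times so that the effect of quantum size and switching overhead on mean completion time can be compared directly:

```python
from collections import deque

def round_robin(burst_times, quantum, switch_cost=1):
    """Simulate round-robin scheduling of jobs needing burst_times
    ticks of service; charge switch_cost on every preemption and
    return the mean job completion time."""
    clock = 0
    remaining = list(burst_times)
    finish = {}
    ready = deque(range(len(burst_times)))
    while ready:
        job = ready.popleft()
        run = min(quantum, remaining[job])
        clock += run
        remaining[job] -= run
        if remaining[job] > 0:
            clock += switch_cost      # overhead of the preemption
            ready.append(job)
        else:
            finish[job] = clock
    return sum(finish.values()) / len(finish)

# Two short jobs and one long one: a small quantum lets the short
# jobs finish early at the price of extra switching, while a huge
# quantum degenerates to run-to-completion in arrival order.
mean_small_q = round_robin([3, 50, 3], quantum=5)
mean_large_q = round_robin([3, 50, 3], quantum=100)
# For this job mix, mean_small_q < mean_large_q
```

Sweeping the quantum over a range of workloads is the simulated counterpart of the "optimum values for the scheduling parameters" that Schwartz et al. seek analytically.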

7.3. Simulation

The on-going analysis and evaluation of information processing systems will clearly require the further development of more sophisticated and more accurate simulation models than are available today.7.21 Special difficulties are to be noted in the case of models of multiple-access systems, where "the addition of pre-emptive scheduling complicates the mathematics beyond the point where models can even be formulated" (Scherr, 1965, p. 32), and in that of information selection and retrieval applications where, as has been frequently charged, "no accurate models exist". (Hayes, 1963, p. 284).

In these and other areas, then, a major factor is the inadequacy of present-day mathematical techniques.7.22 In particular, Scherr asserts that "simulation models are required because the level of detail necessary to handle some of the features studied is beyond the scope of mathematically tractable models." (Scherr, 1965, p. 32). The importance of continuing R & D efforts in this area, even if they should have only negative results, has, however, been emphasized by workers in the field.7.23

Thus, for example, at the 1966 ACM-SUNY Conference, "Professor C. West Churchman . . . pointed to the very large [computer] models that can now be built, and the very much larger models that we will soon be able to build, and stated that the models are not realistic because the quality of information is not adequate and because the right questions remain unasked. Yet he strongly favored the building of models, and suggested that much information could be obtained from attempts to build several different and perhaps inconsistent models of the same system." (Commun. ACM 9, 645 (1966).)

We are led next, then, to problems of simulation. There are obvious problems in this area also. First there is the difficulty of "determining and building meaningful models" (Davis, 1965, p. 82), especially where a high degree of selectivity must be imposed upon the collection of data appropriately representative of the highly complex real-life environments and processes that are to be simulated.7.24

Beyond the questions of adequate selectivity in simulation-representation of the phenomena, operations, and possible system capabilities being modelled are those of the adequacy of the simulation languages as discussed by Scherr, Steel, and others.7.25 Teichroew and Lubin present a comprehensive survey of computer simulation languages and applications, with tables of comparative characteristics, as of 1966.7.26 In addition, IBM has provided a bibliography on simulation, also as of 1966.

Again, as in the area of graphic input manipulation and output, the field of effective simulation has specific R & D requirements for improved and more versatile machine models and programming languages. Clancy and Fineberg suggest that "the very number and diversity of languages suggests that the field [of digital simulation languages] suffers from a lack of perspective and direction." (1965, p. 23).

The area of improved simulation languages is one that has a multiple interaction between software and hardware, especially where a computer is to be used to simulate another computer, perhaps one whose design is not yet complete,7.27 or to simulate many different scheduling, queuing, and storage allocation alternatives in time-shared systems (see, for example, Blunt, 1965). Such problems are also discussed by Scherr (1965) and by Larsen and Mano (1965), among others, while Parnas (1966) describes a modification of ALGOL (SFD-ALGOL, for "System Function Description") applicable to the simulation of synchronous systems.

However, there are difficult current problems in that languages such as SIMSCRIPT do not take advantage of the modularity of many processing systems, that conditional scheduling of sequences of events is extremely difficult,7.28 and that "we are still plagued by our inability to program for simultaneous action, even for the scheduling of large units in a computing system." (Gorn, 1966, p. 232).

In addition, for simulation and similar applications, heuristic or ad hoc programming facilities may be required. Thus, "a computer program which is to serve as a model must be able to have well-organized yet manipulatable data storage, easily augmentable and modifiable. The program must be self-modifying in a similarly organized way. It should be able to handle large blocks of data or program routines by specification of merely a name." (Strum, 1965, p. 114.)

For simulations or testings with controls, and without discernible interruption or reallocation of normal servicing of client processing requests, compilers must be available that will transform queries expressed in one or more commonly available customer languages to the language(s) most effectively used by the substituted experimental system and to the format(s) available in a master data base.

Then there are problems in the development of an appropriate "scenario", or sequence of events to be simulated.7.29 Burdick and Naylor (1966) provide a survey account of the problems of design and analysis of computer simulation experiments.

The problems of effective simulation of complex, interdependent processes are another area of increasing concern. Suppose, for example, that we are seeking to simulate a process in which many separate operations are carried out concurrently or in parallel, and that the simulation technique requires a serial sequencing of these operations. Depending upon the choice of which one of the theoretically concurrent operations is processed first in the sequentializing procedure, the results of the simulation may be significantly different in one case than in another.7.30
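A deliberately contrived sketch makes the hazard concrete. Two operations that are logically concurrent in the modelled process are serialized in the two possible orders, and the simulated outcome differs (all names here are invented for illustration):

```python
def credit_bonus(balance):
    return balance + 10    # one "concurrent" update: a flat credit

def apply_doubling(balance):
    return balance * 2     # the other: a promotional doubling

start = 100
order_a = apply_doubling(credit_bonus(start))   # credit serialized first
order_b = credit_bonus(apply_doubling(start))   # doubling serialized first
# order_a == 220 but order_b == 210: the sequentializing choice,
# invisible in the real concurrent process, changes the result
```

Whenever the operations fail to commute in this way, a serial simulator must either justify its chosen order or, as Caracciolo suggests below, treat the ordering probabilistically.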

For example, the SL/1 language being developed at the University of Pisa under Caracciolo di Forino (1965) is based in part on SOL (Simulation-Oriented Language; see Knuth and McNeley, 1964) and in part on SIMULA (the ALGOL extension developed by O. J. Dahl of the Norwegian Computing Center, Oslo).7.31 A second version, SL/2, now under development, will provide self-adapting features to optimize the system. Caracciolo emphasizes that, for any set of deterministic processes that are to be applied simultaneously, but where problems of incompatibility may arise, the problems can be reduced to a set of probabilistic processes. Otherwise, if one sequentializes parallel, concurrent processes actually dependent upon the order of sequentialization, then hidden problems of incompatibility may vitiate the results obtained.

Despite difficulties, however, progress has been and is being made. Thus computer simulation has been investigated as a means of system simulation for determination of probable costs and benefits in advance of major investments in equipment or procedures.7.32 Then, as reported by Gibson (1967), simulation studies have been used to determine that block transfers of 4 to 16 words will facilitate reduction of effective internal access times to a few nanoseconds. Other programs to simulate digital data processing, time-shared system performance, and the like, are discussed by Larsen and Mano (1965) and by Scherr (1965). Simulation studies in terms of multiprocessor systems are represented by Lindquist et al. (1966) 7.33 and by Huesmann and Goldberg (1967).7.34

Other advantages to be anticipated from computer simulation experiments in research and development efforts are those of transfer of applications from a given computer to another not yet installed or available,7.35 advancements in techniques of pictorial data processing and transmission,7.36 advance appraisals of performance of time-shared systems,7.37 and investigations of probable performance of adaptive recognition systems.7.38

Finally, we note prospects for system simulation as a means of evaluation and of redesign, including the alteration of scheduling priorities to meet changing requirements of the system's clientele. Three examples from the literature are as follows:

(1) "Use of a simulator permits the installation to continue running its programs as reprogramming proceeds on a reasonable schedule." (Trimble, 1965, p. 18).

(2) "Effective response time simulation can be easily modified to provide operating costs of retrieval." (Blunt, 1965, p. 9).

(3) "When large systems are being developed another set of programs is involved to perform a function not required for simpler situations. These are the simulation and analysis programs for system evaluation and, for semi-automated systems having a human component, system training." (Steel, 1965, p. 232).

On the other hand, as Davis warns: "It is obvious that there is some threshold beyond which the real environment is too complex to permit meaningful simulation." (1965, p. 82). For the future, therefore, a system of multiple-working-hypotheses might well be developed: "The benefits and drawbacks of empirical data gathering vs. simulation vs. mathematical analysis are well documented. What we would really like to be able to do is a little of all three, back and forth, until our gradually increasing comprehension of the problem becomes the desired solution." (Greenberger, 1966, p. 347). Similarly, it may be claimed that simulation models ". . . are often cumbersome and difficult to adapt to new configurations, with results of somewhat uncertain interpretation due to statistical sampling variability. Ideally, simulation and analytic techniques should supplement each other, for each approach has its advantages." (Gaver, 1967, p. 423).

8. Conclusions

As we have seen, major trends in input/output, storage, processor, and programming design relate to multiple access, multiprogrammed, and multiprocessor systems. On-line simulation, instrumentation, and performance evaluation capabilities are necessary in order to effectively measure and test proposed techniques, systems, and networks of broad future significance to improved utilization of automatic data processing techniques.

We may therefore close this report on overall system design considerations with the following quotations:

(1) "In rating the completeness, clarity, and simplicity of the system diagnostics, command language and keyboard procedures, we found their 'goodness' was inversely related to the running efficiency of the system . . . System developers should examine this condition to determine whether inefficient execution is an inherent feature of system[s] supplying complete and easily understood diagnostics, or a function of the specific interests and prejudices of the developers." (O'Sullivan, 1967, p. 170).

(2) "An engineer who wishes to concern himself with performance criteria in the synthesis of new systems is frustrated by the weakness of measurement of computer system behavior." (Estrin et al., 1967, p. 645.)

(3) "The setting up of criteria of evaluation . . . demands user participation and provides an indication of whether the user understands the reason for the system, the role of the system and his responsibilities as a prospective system user." (Davis, 1965, p. 82.)


(4) "Today, and to an even greater extent tomorrow, the use of multiple functional units within the information processing system, the multiplexing of input and output messages, and the increased use of software to permit multiprogramming will require more subtle measures to evaluate a particular system's performance." (Nisenoff, 1966, p. 1828.)

(5) "Broad areas for further research are indicated . . . Comparative experimental studies of computer facility performance, such as online, offline, and hybrid installations, systematically permuted against broad classes of program languages (machine-oriented, procedure-oriented, and problem-oriented languages), and representative classes of programming tasks." (Sackman et al., 1968, p. 10), and

(6) "Improved methods of simulation, optimizing techniques, scheduling algorithms, methods of dealing with stochastic variables, these are the important developments that are pushing back the limits of our ability to deal with very large systems." (Harder, 1968, p. 233.)

Finally we note that the problems of the information processing system designer are today aggravated not only by networking, time-sharing, time-slicing, multiprocessor and multiprogramming potentialities, but also by critical questions involving the values and the costs of maintaining the integrity of privileged files. By the terminology "privileged files", we suggest the interpretation of all data stored in a machine-useful system that may have varying degrees of privacy, confidentiality, or security restrictions placed upon unauthorized access. Some of the background considerations affecting both policy and design factors will be discussed in the next report in this series.


Appendix A. Background Notes on Overall System Design Requirements

In this Appendix we present further discussion and background material intended to highlight currently identifiable research and development requirements in the broad field of the computer and information sciences, with emphasis upon overall system design considerations with respect to information processing systems. A number of illustrative examples, pertinent quotations from the literature, and references to current R and D efforts have been assembled. These background notes have been referenced, as appropriate, in the summary text.

1. Introduction

1.1 There are certain obvious difficulties with respect to the organization of material for a series of reports on research and development requirements in the computer and information sciences and technologies. These problems stem from the overlaps between functional areas in which man-machine interactions of both communication and control are sought; the techniques, tools, and instrumentation available to achieve such interactions; and the wide variety of application areas involved.

The material that has been collected and reviewed to date is so multifaceted and so extensive as to require organization into reasonably tractable (but arbitrary) subdivisions. Having considered some of the R and D requirements affecting specific Boxes shown in Figure 1 (p. 2) in previous reports, we will discuss here some of the overall system design considerations affecting more than one of the processes or functions shown in Figure 1.

Other topics to be covered in separate reports in this series will include specific problems of information storage, selection and retrieval systems and the questions of maintaining the integrity of privileged files (i.e., some of the background considerations with respect to the issues of privacy, confidentiality, and/or security in the case of multiply-accessed, machine-based files, data banks, and computer-communication networks).

In general, the plan of attack in each individual report in the series will be to outline in relatively short discursive text the topics of concern, supplemented by background notes and quotations and by an appendix giving the bibliographic citations of quoted references. It is planned, however, that there will be a comprehensive summary, bibliography, and index for the series as a whole.

Since problems of organization, terminology, and coverage have all been difficult in the preparation of this series of reports, certain disclaimers and observations with respect to the purpose and scope of this report, its necessary selectivity, and the problems of organization and emphasis are to be noted. Obviously, the reviewer's interests and limitations will emerge at least indirectly in terms of the selectivity that has been applied.



In general, controversial opinions expressed or implied in any of the reports in this series are the sole responsibility of the author(s) of that report and are not intended in any way to represent the official policies of the Center for Computer Sciences and Technology, the National Bureau of Standards, or the Department of Commerce. However, every effort has been made to buttress potentially controversial statements or implications either with direct quotations or with illustrative examples from the pertinent literature in the field.

It is especially to be noted that the references and quotations included in the text of this report, in the corroborative background notes, or in the bibliography, are necessarily highly selective. Neither inclusion nor citation is intended in any way to represent an endorsement of any specific commercially available device or system, of any particular investigator's results with respect to those of others, or of the objectives of projects that are mentioned. Conversely, omissions are often inadvertent and are in no sense intended to imply adverse evaluations of products, materials and media, equipment, systems, project goals and project results, or of bibliographic references not included.

There will be quite obvious objections to this necessary selectivity from readers who are also R & D workers in the fields involved as to the representativeness of cited contributions from their own work or that of others. Such criticisms are almost inevitable. Nevertheless, these reports are not intended to be state-of-the-art reviews as such, but, rather, they are intended to provide provocative suggestions for further R & D efforts. Selectivity must also relate to a necessarily arbitrary cut-off date in terms of the literature covered.

These reports, subject to the foregoing caveats, are offered as possible contributions to the understanding of the general state of the art, especially with respect to long-range research possibilities in a variety of disciplines that are potentially applicable to information processing problems. The reports are therefore directed to a varied audience among whom are those who plan, conduct, and support research in these varied disciplines. They are also addressed to applications specialists who may hope eventually to profit from the results of current research efforts. Inevitably, there must be some repetitions of the obvious or over-simplifications of certain topics for some readers, and there must also be some too brief or inadequately explained discussions on other topics for these and other readers. What is at best tutorial for one may be difficult for another to follow. It is hoped, however, that the notes and bibliographic citations will provide sufficient clues for further follow-up as desired. The literature survey upon which this report is based generally covered the period from mid-1962 to mid-1968, although a few earlier and a few later references have also been included as appropriate.

1.2 Certain features of the information flow and process schema of Figure 1 are to be noted. It is assumed, first, that the generalized information processing system should provide for automatic access from and to many users at many locations. This implies multiple inputs in parallel, system interruptibility, and interlacings of computer programs. It is assumed, further, that the overall scheme involves hierarchies of systems, devices and procedures, that processing involves multistep operations, and that multimode operation is possible, depending on job requirements, prior or tentative results, accessibility, costs, and the like.

It should be noted, next, that techniques suggested for a specific system may apply to more than one operational box or function shown in the generalized diagram of Figure 1. Similarly, in a specific system, the various operations or processes may occur in different sequences (including iterations) and several different ones may be combined in various ways. Thus, for example, questions of remote console design may affect original item input, input of processing service requests, output, and entry of feedback information from the user or the system client. The specific solutions adopted may be implemented in each of these operational areas, or combined into one, e.g., by requiring all inputs and outputs to flow through the same hardware.

2. Requirements and Resources Analysis

2.1 "The single information flow concept . . . is input-oriented. The system is organized so that essential data are inserted into a common reservoir through point-of-origin input/output devices. User requirements are then satisfied from this reservoir of fundamental data about transactions.

"Thus, the single information flow concept is characterized by random entry of data, direct access to data in the system, and complete real-time processing . . . fast response, a high degree of reliability, and an easily expansible system." (Moravec, 1965, p. 173).

2.2 "In a highly distributed system, however, information on inputs to the organization flow directly to relatively low-level way stations where all possible processing is done and all actions are taken that are allowed by the protocol governing that level. In addition to the direct actions that it takes, the lowest, or reflexive, level of information processing ordinarily generates two classes of information. These are, first, summaries of actions taken or anticipated and, second, summaries of information inputs that, because of their type, salience, or criticality, fall outside the range of action that policy has established as appropriate for that level. . .

"In computer terms, a highly distributed system involves a primary executive program that adds and subtracts subroutines to various primary libraries from which alternative subroutines are to be drawn and combined. Secondary executive programs, responding to separate inputs and conditions, select and organize subroutines from each of these primary libraries and add and subtract subroutines to various secondary libraries from which tertiary executive programs select alternative subroutines for use at their level and for controlling the library one level down, and so forth. The flexibility of a distributed system is an outgrowth of the ability of each of the lower executive programs to organize its program on the basis of separate inputs reaching it directly." (Bennett, 1964, pp. 104-106).
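Bennett's description of executives that add and subtract subroutines from libraries, and assemble their own programs from inputs reaching them directly, can be illustrated with a small sketch. All class and function names below are hypothetical, chosen for illustration only; they do not appear in the source.

```python
# Hypothetical sketch of Bennett's distributed-executive idea: each executive
# maintains a library of subroutines, may add or subtract entries, and
# organizes its own program by selecting from that library on the basis of
# the inputs reaching it directly.

class Executive:
    def __init__(self, name, library):
        self.name = name
        self.library = dict(library)   # subroutines this level may draw on

    def add_subroutine(self, key, fn):
        self.library[key] = fn         # "adds ... subroutines to various libraries"

    def remove_subroutine(self, key):
        self.library.pop(key, None)    # "... and subtracts"

    def organize(self, inputs):
        # Select and combine subroutines on the basis of separate inputs
        # reaching this level directly, as the quotation describes.
        return [self.library[k] for k in inputs if k in self.library]

    def run(self, inputs, data):
        for step in self.organize(inputs):
            data = step(data)
        return data

primary = Executive("primary", {"clean": str.strip, "upper": str.upper})
result = primary.run(["clean", "upper"], "  summary of actions taken ")
# result is the input stripped of whitespace and upper-cased
```

A secondary executive would, in the same spirit, be handed a subset of the primary's library and maintain libraries for a tertiary level below it.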

"By a distributed implementation of an information service system we mean that the data processing activity is carried out by several or many installations . . . The data base is now distributed among the installations making up the information network for this service system . . .

"The distributed information network should offer considerable advantage in reducing the cost of terminal communications by permitting installations to be located near concentrations of terminals." (Dennis, 1968, p. 373).

2.3 "A large number of factors (user communities, document forms, subject disciplines, desired services, to name but a few) compete for the attention of the designer of information service systems. A methodology for the careful organization of these factors and the orderly consideration of their relationships is essential if intelligent decisions are to be made." (Sparks et al., 1965, pp. 1-2).

"The lack of recognition of the nature and even, in some cases, the existence of the problems facing the information systems designer has meant that there has been little or no orderly development of generally agreed upon system methodology." (Hughes Dynamics, 1964, p. 1-7).

"To the best of our knowledge, no one has yet developed a completely satisfactory theory of information processing. Because there is no strong theoretical basis for the field, we must rely on intuition, experience and the application of heuristic notions each time we attempt to solve a new information processing problem." (Clapp, 1967, p. 4).

2.4 Additional examples are as follows:

"Preliminary data support the previous indications (Werner, Trueswell, et al.) that the introduction of new services is not followed by an immediately high level use of them. The state-of-the-art of equipment, personnel, and documentation still offers continuing problems. Medical researchers in the study do not seem to look upon the system as being an essential source of information for their work, but as a convenient ancillary activity." (Rath and Werner, 1967, p. 62).

"A major study recently conducted by Auerbach Corporation into the manpower trends in the engineering support functions concerned with information . . . which involved investigations of a large number of company and government operations, was both surprising and disconcerting because it showed that there are large areas of both government and industry in which there is very little concern about, or work underway toward, solving the information flow and utilization problem." (Sayer, 1965, p. 24).

2.5 "There are seven properties of a system that can be stated explicitly by the organization requesting the system design: WHAT the system should be, WHERE the system is to be used, and WHERE, WHEN, WITH WHAT, FOR WHOM, and WITH WHOM the system is to be designed." (Davis, 1964, p. 20).

2.6 Consider also the following:

"Consequently, it appears that two early areas of required investigation are those of determining: 1) who are the potential users of science and/or engineering information systems, where are they located, what is their sphere of activity? and 2) What is the real nature and volume of material that will flow through a national information system? . . .

"In undertaking a program to establish information service networks it is necessary to know:

1. Who are the users?
2. What are the user information needs?
3. Where are these users?
4. How many users and user groups are there, and how do their needs differ?
5. What information products and services will meet these needs?
6. What production operations are necessary to produce these information products and services?
7. Which of these products and services are really being produced now; by whom and where and how well is an ultimate purpose already being achieved?
8. How will any new system best integrate with existing practices?
9. What are the operations best performed from a standpoint of quality and timeliness of service to users, economy of costs and overall network operations, available trained manpower, and ability to respond to change?" (Sayer, 1965, pp. 144-145).

"Some of the details the user must determine are the number and location of remote points, frequency of use, response time required, volume of data to be communicated, on line storage requirements, and the like." (Jones, 1965, p. 66).

2.7 "Neglect of 'WHERE the system is to be used' is the most frequent cause of inadequate system designs." (Davis, 1964, p. 21).

2.8 Thus Sayer points out the need for "population figures describing the user community in detail, its interest in subject disciplines, and the effect of this interest on the effective demand on the system from both initiative and responsive demands." (Sayer, 1965, p. 140).

Sparks et al. raise the following considerations: "There are certain basic dimensions of an information service system which it is appropriate to recognize in a formal way. One of these is the spectrum of selected disciplines which are to be represented in the information processed by the system. Another of these is the geographical area to be served by the system and in which the user population will be distributed . . .

"The number of user communities into which the user population is divided determines (or is determined by) the number of direct-service information centers in the system. Thus, it has a major effect on system size and structure." (Sparks et al., 1965, pp. 2-6, 2-7).

2.9 "In structuring shiny, new information systems, we must be careful to allow for resistance to change long before the push buttons are installed, especially when the users of the systems have not been convinced that there is a real need for change." (Aines, 1965, p. 5.)

"Examine the various systems characteristics such as: user/network interface; network usage patterns; training requirements; traffic control; service and organization requirements; response effectiveness; cost determinations; and network capacity." (Hoffman, 1965, pp. 90-91.)

"As an appendage to a prototype network, some experimental retraining programs would be well advised . . .

"A massive effort directed at retraining large numbers of personnel now functioning in libraries will be required to produce the manpower necessary for a real-time network ever to reach a fully operational status." (Brown et al., 1967, p. 68).

"Where do experimental studies of user performance fit into burgeoning information services? The answer is inescapable: the extent of experimental activity will effectively determine the level of excellence, in method and in substantive findings, with which key problems regarding user performance will be met. If experimental studies in man-computer communication continue to be virtually nonexistent, the gap in verified knowledge of user behavior will continue to be dominated by immediate cost and narrow technical considerations rather than by the users' long range interests. Everyone will be a loser. Neither the managers of computer utilities, or the manufacturers, or the designers of central systems will have tested, reliable knowledge of what the user needs, how he behaves, how long it takes him to master new services, or how well he performs. In turn, the user will not have reliable, validated guidance to plan, select, and become skilled in harnessing the information services best suited to his needs, his time, and his resources. Since he is last, the user loses most." (Sackman, 1968, p. 351).

2.10 "Everyone talks about the computer user, but virtually no one has studied him in a systematic, scientific manner. There is a growing experimental lag between verified knowledge about users and rapidly expanding applications for them. This experimental lag has always existed in computer technology. Technological innovation and aggressive marketing of computer wares have consistently outpaced established knowledge of user performance, a bias in computer technology largely attributable to current management outlook and practice. With the advent of time-sharing systems, and with the imminence of the much-heralded information utility, the magnitude of this scientific lag may have reached a critical point. If unchecked, particularly in the crucial area of software management, it may become a crippling humanistic lag, a situation in which both the private and the public use of computers would be characterized by overriding concern for immediate machine efficiency and economy, and by an entrenched neglect of human needs, individual capabilities, and long-range social responsibilities." (Sackman, 1968, p. 349).

"Quite often the most important parameter in a system's performance is the behavior of the average user. This information is very rarely known in advance, and can only be obtained by gathering statistics. It is important to know, for example, how long a typical user stays on a time-sharing system during an average session, how many language processors he uses, how much computing power he requires during each 'interaction' with the system, and so forth. Modeling and simulation can be of great help in pre-determining this information if the environment is known, but in many commercial or University time-sharing systems there is little control over or prior knowledge of the characteristics of the users." (Yourdon, 1969, p. 124).

"The lag in user studies is a heritage which stems mainly from the professional mix that originally developed and used the technology of man-computer communications. For two critical, formative decades, the 1940's and the 1950's, comprising the birth and development of electronic digital computers, social scientists, human engineers and human factors specialists, the professionals trained to work with human subjects under experimental conditions, were only indirectly concerned with man-computer communications, dealing largely with knobs, buttons and dials rather than with the interactive problem-solving of the user. In all fairness, there were some exceptions to this rule, but they were too few and too sporadic to make a significant and lasting impact on the mainstream of user development. Since there was, in effect, an applied scientific vacuum surrounding man-computer communication, it is not at all surprising that there does not exist today a significant, cumulative experimental tradition for testing and evaluating computer-user performance." (Sackman, 1968, p. 349).

"The problem is, of course, to get the right information to the right man at the right time and at his work station and with minimum effort on his part. What all this may well be saying is that the information problem that exists is considerably more subtle and complex than has been set forth . . .

"The study for development of a Methodology for Analysis of Information Systems Networks arrives, both directly and by implication, at the same conclusion as have a number of other recent studies. That conclusion is that much more has to be known about the user and his functions, and much more has to be known about what the process of RDT & E actually is and how information, as raw material input to the process, can flow most efficiently and most effectively." (Sayer, 1965, p. 146).

"The recurrent theme in general review articles concerned with man-computer communication is the glaring experimental lag. Innovation and unverified applications outrace experimental evaluation on all sides.

"In a review of man-computer communication, Ruth Davis points out that virtually no experimental work has been done on user effectiveness. She characterizes the status of user statistics as inadequate and 'primitive', and she urges the specification and development of extensive measures of user performance. . . .

"Pollack and Gildner reviewed the literature on user performance with manual input devices for man-computer communication. Their extensive survey covering large numbers and varieties of switches, pushbuttons, keyboards and encoders revealed 'inadequate research data establishing performance for various devices and device characteristics, and incomplete specification of operator input tasks in existing systems.' There was a vast experimental gap between the literally hundreds of manual input devices surveyed and the very small subset of such devices certified by some form of user validation. They recommended an initial program of research on leading types of task/device couplings, and on newer and more natural modes of manual inputs such as speech and handwriting." (Sackman, 1968, p. 350).

2.11 "Information control at input can be used to achieve improved system efficiency in several different ways. First, a reduction in the total volume of information units or reports to be received, processed, or stored can be gained through the use of filtering procedures to reduce the possible redundancies between items received. (Timing considerations are important in such procedures, as noted elsewhere, because we won't want a delayed and incorrect message to 'update' its own correction notice.)

"Secondly, input filtering procedures serve to reduce the total bulk of information to be processed or stored both by elimination of duplicate items as such and by the compression of the quantitative amount of recording used to represent the original information unit or message within the system.

"A third technique of information control at input is directed to the control of redundancy within a single unit or report. Conversely, input filtering procedures of this type can be used to enhance the value of information to be stored. For example, in pictorial data processing, automatic boundary contrast enhancements or 'skeletonizations' may improve both subsequent human pattern perception and system storage efficiency. Another example is natural text processing, where systematic elimination of the 'little', 'common', and 'non-informing' words can significantly reduce the amount of text to be manipulated by the machine." (Davis, 1967, p. 49).
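The elimination of "little", "common", and "non-informing" words that Davis describes can be sketched very simply. The stopword list below is illustrative only; an operational system would use a vocabulary tuned to its collection.

```python
# A minimal sketch of stopword elimination: remove common, non-informing
# words so that less text must be manipulated by the machine.
# The stopword list is an assumption for illustration.

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "are", "be"}

def filter_text(text):
    # Lower-case, split on whitespace, and drop any stopword.
    words = text.lower().split()
    return " ".join(w for w in words if w not in STOPWORDS)

original = "the value of the information to be stored in the system"
reduced = filter_text(original)
# reduced keeps only the content-bearing words:
# "value information stored system"
```

Even this crude filter removes more than half of the tokens in the example sentence, which is the storage-economy effect the quotation points to.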

2.12 In this area, R & D requirements for the future include the very severe problems of sifting and filtering enormous masses of remotely collected data. For example, "our ability to acquire data is so far ahead of our ability to interpret and manage it that there is some question as to just how far we can go toward realizing the promise of much of this remote sensing. Probably 90% of the data gathered to date have not been utilized, and, with large multi-sensor programs in the offing, we face the danger of ending up prostrate beneath a mountain of utterly useless films, tapes, and charts." (Parker and Wolff, 1965, p. 31).

2.13 "Purging because of redundancy is extremely difficult to accomplish by computer program except in the case of 100% duplication. Redundancy purging success is keyed to practices of standardization, normalization, field formatting, abbreviation conventions and the like. As a case in point, document handling systems universally have problems with respect to bibliographic citation conventions, transliterations of proper names, periodical title abbreviations, corporate author listing practices and the like." (Davis, 1967, p. 20).

See also Ebersole (1965), Penner (1965), and Sawin (1965), who points to some of the difficulties with respect to a bibliographic collection or file, as follows:

"1. Actual errors, such as incorrect spelling of words, incorrect report of pagination, in one or more of the duplicates. The error may be mechanically or humanly generated; the error may have been made in the source bibliography, or by project staff in transcription from source to paper tape. In any case, error is a factor in reducing the possibility of identity of duplicates.

"2. Variations among bibliographies both in style and content. A bibliographical citation gives several different kinds of information; that is, it contains several 'elements,' such as author of item, title, publication data, reviews and annotations. Each source bibliography more or less consistently employs one style for expressing information, but each style differs from every other in some or all of the following ways:

a. number of elements
b. sequence of elements
c. typographical details" (1965, p. 96).
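Because duplicates rarely match character-for-character, as Davis and Sawin both observe, duplicate detection in practice depends on normalization of the kind Davis lists (standardization, abbreviation conventions, and the like). A hypothetical sketch, with an illustrative abbreviation table that is not from the source:

```python
# Normalize citations (case, punctuation, abbreviation conventions) before
# comparing them, so that stylistic variants of the same item can be
# recognized as probable duplicates. The abbreviation table is an assumed
# example, not an established standard.

import re

ABBREVIATIONS = {"jour.": "journal", "amer.": "american"}

def normalize(citation):
    c = citation.lower()
    for abbr, full in ABBREVIATIONS.items():
        c = c.replace(abbr, full)       # expand title abbreviations
    c = re.sub(r"[^\w\s]", "", c)       # strip punctuation variation
    return " ".join(c.split())          # collapse whitespace

def probable_duplicates(a, b):
    return normalize(a) == normalize(b)

probable_duplicates("Smith, J. Amer. Jour. of Physics",
                    "SMITH J: American Journal of Physics")
# both normalize to "smith j american journal of physics"
```

Note that this only catches variation in style; the actual transcription errors Sawin lists under point 1 would still defeat exact comparison, which is why, as the quotations note, purging short of 100% duplication remains difficult.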

2.14 "File integrity can often be a significant motivation for mechanization. To insure file integrity in airline maintenance records, files have been republished monthly in cartridge roll-microfilm form, since mechanics would not properly insert update sheets in maintenance manuals. Fremont Rider's original concept for the microcard, which was a combination of a catalog card and document in one record, failed in part because of the lack of file integrity. Every librarian knows that if there wasn't a rod through the hole in the catalog card they would not be able to maintain the integrity of the card catalog." (Tauber, 1966, p. 277).

2.15 "Retirement of outmoded data is the only long-range effective means of maintaining an efficient system." (Miller et al., 1960, p. 54).

With respect to maintenance processes involving the deletion of obsolete items, there are substantial fact-finding research requirements for large-scale documentary item systems in terms of establishing efficient but realistic criteria for "purging". Kessler comments on this point as follows: "It is not just a matter of throwing away 'bad' papers as 'good' ones come along. The scientific literature is unique in that its best examples may have a rather short life of utility. A worker in the field of photoelectricity need not ordinarily be referred to Einstein's original paper on the subject. The purging of the system must be based on criteria of operational relevance rather than intrinsic value. These criteria are largely unknown to us and represent another basic area in need of research and invention." (1960, pp. 9-10).

"Chronological cutoff is that device attempted most frequently in automated information systems. It is employed successfully in real-time systems such as aircraft or satellite tracking or airline reservations systems where the information is useless after very short time intervals and where it is so voluminous as to be prohibitive for future analyses . . .

"That purging which is done is primarily replacement. Data management or file management systems are generally programmed so that upon proper identification of an item during the manual input process it may replace an item already in the system data bank. The purpose of replacement as a purging device is not volume control. It is for purposes of accuracy, reliability or timeliness controls." (Davis, 1967, p. 15).

"The reluctance to purge has been a leading reason for accentuating file storage hierarchy considerations. Multi-level deactivation of information is substituted for purging. Deactivation proceeds through allocating the material so specified first to slower random-access storage devices and then to sequentially-accessed storage devices with decreasing rates of access, all on-line with the computer. As the last step of deactivation the information is stored in off-line stores . . .

"Automatic purging algorithms have been written for at least one military information system and for SDC's time-sharing system . . . In the military system . . . the purging program written allowed all dated units of information to be scanned and those prior to a prescribed date to be deleted and transcribed onto a magnetic tape for printing. The information thus nominated for purging was reviewed manually. If the programmed purge decision was overridden by a manual decision the falsely purged data then had to be re-entered into system files as would any newly received data." (Davis, 1967, pp. 16-18).

"Automatic purging algorithms have been explored for the past three years. The current scheme attempts to dynamically maintain a 10 percent disc vacancy factor by automatically deleting the oldest files first. User options are provided which permit automatic dumping of files on a backup, inactive file tape . . . prior to deletion." (Schwartz and Weissman, 1967, p. 267).
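The scheme described above, deleting the oldest files first until a 10 percent disc vacancy factor is restored, with an optional dump to a backup tape before deletion, can be sketched as follows. The names and the `dump` hook are illustrative assumptions, not SDC's actual code:

```python
from dataclasses import dataclass

@dataclass
class StoredFile:
    name: str
    size: int      # size in blocks
    created: int   # creation timestamp; smaller = older

def purge_to_vacancy(files, capacity, vacancy=0.10, dump=None):
    """Delete oldest files first until at least `vacancy` of the disc
    is free; each victim may first be dumped to an inactive-file tape."""
    files = sorted(files, key=lambda f: f.created)   # oldest first
    used = sum(f.size for f in files)
    target = capacity * (1 - vacancy)                # maximum allowed usage
    kept = []
    for f in files:
        if used > target:
            if dump is not None:
                dump(f)        # user option: back up before deletion
            used -= f.size     # delete f
        else:
            kept.append(f)
    return kept
```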

"The newer time-sharing systems contemplate a hierarchy of file storage, with 'percolation' algorithms replacing purging algorithms. Files will be in constant motion, some moving 'down' into higher-volume, slower-speed bulk store, while others move 'up' into lower-volume, higher-speed memory, all as a function of age and reference frequency." (Schwartz and Weissman, 1967, p. 267).
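A 'percolation' pass of the kind contemplated here might score each file from its age and reference frequency and reassign it to a storage tier. The scoring rule and tier thresholds below are illustrative assumptions only:

```python
def percolate(files, now):
    """Assign each file to a storage tier as a function of age and
    reference frequency.  `files` maps name -> (last_ref_time, ref_count).
    Hot files move 'up' to fast memory, cold ones 'down' to bulk store."""
    placement = {}
    for name, (last_ref, refs) in files.items():
        age = now - last_ref
        score = refs / (1 + age)          # busy, recently-used files score high
        if score >= 1.0:
            placement[name] = "core"      # lower-volume, higher-speed memory
        elif score >= 0.1:
            placement[name] = "disc"
        else:
            placement[name] = "tape"      # higher-volume, slower-speed bulk store
    return placement
```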

2.16 "Some computer-oriented statistics are provided to assist in monitoring the system with minimum cost or time. Such statistics are tape length and length of record, checks on dictionary code number assignment, frequency of additions or deletions to the dictionary, and checks to see that the correct inverted file was updated." (Smith and Jones, 1966, p. 190).

"Usage statistics as obsolescence criteria are commonly employed in scientific and technical information systems and reference data systems . . .

"Usage statistics are also used in the deactivation process to organize file data in terms of its reference frequency. The Russian-to-English automated translation system at the Foreign Technology Division, Wright-Patterson AFB had its file system organized on this basis by IBM in the early 1960's. It was found from surveys of manual translators that the majority of vocabulary references were to less than one thousand words. These were isolated and located in the fastest-access memory; the rest of the dictionary was then relegated to lower priority locations . . ." (Davis, 1967, pp. 18-19).
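The dictionary layout Davis describes, the most frequently referenced entries in fastest-access memory and the remainder in lower-priority locations, amounts to a split on observed reference counts. A sketch, not IBM's actual program:

```python
from collections import Counter

def tier_dictionary(reference_log, fast_capacity=1000):
    """Split a vocabulary into a fast-access tier holding the most
    frequently referenced words and a slow tier holding the rest."""
    freq = Counter(reference_log)
    ranked = [word for word, _ in freq.most_common()]
    return ranked[:fast_capacity], ranked[fast_capacity:]
```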

"The network might show publications being permanently retained at a particular location. This would allow others in the network to dispose of little-used materials and still have access to a copy if the unexpected need arose . . .

"Such an 'archival' copy could, of course, be relocated to a relatively low-cost warehouse area for the mutual benefit of those agencies in the network. Statistics on frequency of usage might be very helpful in identifying inactive materials, and the network could also fill this need." (Brown et al., 1967, p. 66).

"Periodic reports to users on file activity may reveal possible misuse or tampering." (Petersen and Turn, 1967, p. 293).

2.17 "Accessibility. For a system output, a measure of how readily the proper information was made available to the requesting user on the desired medium." (Davis, 1964, p. 469).

2.18 Consider also the following:

"The system study will consider that the document-retrieval problem lies primarily within the parameters of file integrity; activity and activity distribution; man-file interaction; the size, nature and organization of the file; its location and workplace layout; whether it is centralized or decentralized; access cycle time; and cost. Contributing factors are purging and update; archival considerations; indexing; type of response; peak-hour, peak-minute activity; permissible-error rates; and publishing urgency." (Tauber, 1966, p. 274).

Then there are questions of sequential decision-making and of time considerations generally. "Time consideration is explicitly, although informally, introduced by van Wijngaarden as 'the value of a text so far read'. Apart from other merits of van Wijngaarden's approach, and his stressing the interaction between syntax and semantics, we would like to draw attention to the concept of 'value at time t', which seems to be a really basic concept in programming theory." (Caracciolo di Forino, 1965, p. 226). We note further that "T as the time a fact assertion is reported must be distinguished from the time of the fact history referred to by the assertion." (Travis, 1963, p. 334).

Avram et al. point more prosaically to practical problems in mechanized bibliographic reference data handling, as in the case of different types of searches on date: the ease of requesting all works on, say, genetics, written since 1960 as against that of all works on genetics published since 1960, with respect to post-1960 reprints of pre-1960 original texts.

For the future, moreover, "In some instances, the search request would have to take into account which data has been used in the fixed field. For example, should one want a display of all the books in Hebrew published during a specific time frame, an adjustment would have to be made to the data in the search request to compensate for the adjustment made to the data at input time." (Avram et al., 1965, p. 42).

2.19 "Here you run into the phenomenon of the 'elastic ruler'. At the time when certain data were accumulated, the measurements were made with a standard inch or standard meter . . . whether researchers were using an inch standardized before a certain date, or one adopted later." (Birch, 1966, p. 165).

2.20 "Large libraries face the problem of converting records that exist in many languages. The most complete discussion of this problem to date is by Cain & Jolliffe of the British Museum. They suggest methods for encoding different languages and speculate on the extent to which certain transliterations could be done by machine. The possibility of storing certain exotic languages on videotapes is suggested as a way of handling the printing problem. At the Brasenose Conference at which this paper was presented, the authors analyzed the difficulties in bibliographic searching caused by transliteration of languages (this is the scheme most generally suggested by those in the data processing field)." (Markuson, 1967, p. 268).

2.21 "The question of integrity of information within an automated system is infrequently addressed." (Davis, 1967, p. 13).

"No adequate reference service exists that would allow users to determine easily whether or not records have the characteristics of quality and compatibility that are appropriate to their analytical requirements." (Dunn, 1967, p. 22).

2.22 "Controls through 'common sense' or logical checks . . . include the use of allowable numerical bounds such as checking bearings by assuming them to be bounded by 0° as a minimum and 360° as a maximum. They include consistency checks using redundant information fields such as social security number matched against aircraft type and aircraft speed. They also include current awareness checks such as matches of diplomat by name against reported location by city against known itinerary against known political views." (Davis, 1967, p. 36).
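Checks of this kind reduce to simple predicates over a record: a bounds check on a single field, plus cross-field consistency rules. A minimal sketch; the rule set shown is hypothetical:

```python
def check_bearing(deg):
    """Allowable-bounds check: a bearing must lie between 0° and 360°."""
    return 0.0 <= deg <= 360.0

def run_checks(record, rules):
    """Apply 'common sense' cross-field rules; each rule is a
    (label, predicate) pair.  Returns the labels of failed rules."""
    return [label for label, pred in rules if not pred(record)]
```

For instance, a consistency rule might assert that a reported speed is plausible for the reported aircraft type.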

"A quite different kind of work is involved in examining for internal consistency the reports from the more than 3 million establishments covered in the 1954 Censuses of Manufactures and Business. If these reports were all complete and self-consistent, and if we were smart enough to foresee all the problems involved in classifying them, and if we made no errors in our office work, the job of getting out the Census reports would be laborious but straightforward. Unfortunately, some of the reports do contain omissions, errors, and evidence of misunderstanding. By checking for such inconsistencies we eliminate, for example, the large errors that would result when something has been improperly reported in pounds instead of in thousands of pounds. Perhaps one-third to one-half of the time our UNIVACs devote to processing these Censuses will be spent checking for such inconsistencies and eliminating them. . . .

"Similar checking procedures are applied to the approximately 7,000 product lines for which we have reports. In a like manner we check to see whether such relationships as annual man hours and number of production workers, or value of shipments and cost of labor and materials, are within reasonable limits for the industry and area involved. . . .

"For example, the computer might determine for an establishment classified as a jewelry repair shop that employees' salaries amounted to less than 10 percent of total receipts. For this kind of service trade, expenditures for labor usually represent the major item of expenses and less than 10 percent for salaries is uncommonly low. Our computer would list this case for inspection, and a review of the report might result in a change in classification from 'jewelry repair shop' to 'retail jewelry store', for example." (Hansen and McPherson, 1956, pp. 59-60).

2.23 "The use of logical systems for error control is in beginning primitive stages. Question-answering systems and inference-derivation programs may find their most value as error control procedures rather than as query programs or problem-solving programs." (Davis, 1967, p. 47).

"A theoretically significant result of introducing source indicators and reliability indicators to be carried along with fact assertions in an SFQA [question-answering] system is that they provide a basis for applying purifying programs to the fact assertions stored in the system, i.e., for resolving contradictions among different assertions, for culling out unreliable assertions, etc. . . .

"Reliability information might indicate such things as: S's degree of confidence in his own report if S is a person; S's probable error if S is a measuring instrument; S's dependability as determined by whether later experience confirmed S's earlier reports; conditions under which S made its report, etc." (Travis, 1963, p. 333).

2.24 "Another interesting distinction can be made between files on the basis of their accuracy. A clean file is a collection of entries, each of which was precisely correct at the time of its inclusion in the file. On the other hand, a dirty file is a file that contains a significant portion of errors. A recirculating file is purged and cleansed as it cycles; a utility-company billing file is of this nature. After the file 'settles down,' the proportion of errors imbedded in the file is a function of the new activity applied to the file. The error rate is normalized with respect to the business cycle." (Patrick and Black, 1964, p. 39).


"When messages are a major source of the information entering the system, corrections to a previously transmitted original message can be received before the original message itself. If entered on an earlier update cycle, the correction data can actually be 'corrected' during a later update cycle by the original incorrect message." (Davis, 1967, p. 24).

2.25 "Errors will occur in every data collection system, so it is important to detect and correct as many of the errors as possible." (Hillegass and Melick, 1967, p. 56).

"The primary purpose of a data communications system is to transmit useful information from one location to another. To be useful, the received copy of the transmitted data must constitute an accurate representation of the original input data, within the accuracy limits dictated by the application requirements and the necessary economic tradeoffs. Errors will occur in every data communications system. This basic truth must be kept in mind throughout the design of every system. Important criteria for evaluating the performance of any communications system are its degree of freedom from data errors, its probability of detecting the errors that do occur, and its efficiency in overcoming the effects of these errors." (Reagan, 1966, p. 26).

"The form of the control established, as a result of the investigation, should be decided only after considering each situation in the light of the three control concepts mentioned earlier. Procedures, such as key verification, batch totals, sight verification, or printed listings should be used only when they meet the criteria of reasonableness, in light of the degree of control required and the cost of providing control in relation to the importance and volume of data involved. The objective is to establish appropriate control procedures. The manner in which this is done, i.e., the particular combination of control techniques used in a given set of circumstances, will be up to the ingenuity of the individual systems designer." (Baker and Kane, 1966, pp. 99-100).

2.26 "Two basic types of codes are found suitable for the burst type errors. The first is the forward-acting Hagelbarger code which allows fairly simple data encoding and decoding with provisions for various degrees of error size correction and error size detection. These codes, however, involve up to 50 percent redundancy in the transmitted information. The second code type is the cyclic code of the Bose-Chaudhuri type which again is fairly simple to encode and can detect various error burst sizes with relatively low redundancy. This code type is relatively simple to decode for error detection but is too expensive to decode for error correction, and makes retransmission the only alternative." (Hickey, 1966, p. 182).

2.27 "Research devoted to finding ways to further reduce the possibility of errors is progressing on many fronts. Bell Telephone Laboratories is approaching the problem from three angles: error detection only, error detection and correction with a non-constant speed of end-to-end data transfer (during the correction cycle transmission stops), and error detection and correction with a constant speed of end-to-end data transfer (during the correction cycle transmission continues)." (Menkhaus, 1967, p. 35).

"There are two other potential 'error injectors' which should be given close attention, since more control can be exercised over these areas. They are: the data collection, conversion and input devices, and the human being, or beings, who collect the data (or program a machine to do it) at the source. Bell estimates that the human will commit an average of 1,000 errors per million characters handled, the mechanical device will commit 100 per million, and the electronic component, 10 per million. . .

"Error detection and correction capability is a 'must' in the Met Life system and this is provided in several ways. The input documents have Honeywell's Orthocode format, which uses five rows of bar codes and several columns of correction codes that make defacement or incorrect reading virtually impossible; the control codes also help regenerate partially obliterated data. . . .

"Transmission errors are detected by using a dual pulse code that, in effect, transmits the signals for a message and also the components of those signals, providing a double check on accuracy. The paper tape reader, used to transmit data, is bi-directional; if a message contains a large number of errors, due possibly to transmission noise, the equipment in the head office detects those errors and automatically tells the transmitting machine to 'back up and start over'." (Menkhaus, 1967, p. 35).

2.28 "Input interlocks: checks which verify that the correct types and amounts of data have been inserted, in the correct sequence, for each transaction. Such checks can detect many procedural errors committed by persons entering input data into the system." (Hillegass and Melick, 1967, p. 56).

2.29 "Parity: addition of either a 'zero' or 'one' bit to each character code so that the total number of 'one' bits in every transmitted character code will be either odd or even. Character parity checking can detect most single-bit transmission errors, but it will not detect the loss of two bits or of an entire character." (Hillegass and Melick, 1967, p. 56).
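Odd character parity as described can be sketched directly: append one bit so every transmitted word has an odd count of one-bits. A double-bit error leaves the parity valid, which is exactly the limitation noted above:

```python
def add_parity(code, odd=True):
    """Append a parity bit to a character code so the total number
    of 'one' bits in the transmitted word is odd (or even)."""
    ones = bin(code).count("1")
    parity_bit = int((ones % 2 == 0) if odd else (ones % 2 == 1))
    return (code << 1) | parity_bit

def parity_ok(word, odd=True):
    """Check a received word: the count of one-bits must be odd (or even)."""
    ones = bin(word).count("1")
    return (ones % 2 == 1) if odd else (ones % 2 == 0)
```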

"Two of the most popular error detection and correction devices on the market, Tally's System 311 and Digitronics' D500 Series, use retransmission as a correction device. Both transmit blocks of characters and make appropriate checks for valid parity. If the parity generated at the transmitter checks with that which has been created from the received message by the receiver, the transmission continues. If the parity check fails, the last block is retransmitted and checked again for parity. This method avoids the disadvantages of transmitting the entire message twice and of having to compare the second message with the first for validity." (Davenport, 1966, p. 31).

"Full error detection and correction is provided. The telephone line can be severed and reattached hours later without loss of data . . . Error detection is accomplished by a horizontal and vertical parity bit scheme similar to that employed on magnetic tape." (Lynch, 1966, p. 119).

"A technique that has proven highly successful is to group the eight-level characters into blocks of eighty-four characters. One of the eighty-four characters represents a parity character, assuring that the summation of each of the 84 bits at each of eight levels is either always odd or always even. For the block, there is now a vertical parity check (the character parity) and a horizontal parity check (the block parity character). This dual parity check will be invalidated only when an even number of characters within the block have an even number of bits, each at the same level. The probability of such an occurrence is so minute that we can state that the probability of an undetected error is negligible. In an 84-character block, constituting 672 bits, 83 + 8 = 91 bits are redundant. Thus, at the expense of adding redundancy of 13.5 per cent, we have assured error-free transmission. At least we know that we can detect errors with certainty. Now, let us see how we can utilize this knowledge to provide error-free data at high transmission rates. One of the most straightforward techniques is to transmit data in blocks, automatically checking each block for horizontal and vertical parity at the receiving terminal. If the block parities check, the receiving terminal delivers the block and an acknowledgment character (ACK) is automatically transmitted back to the sending terminal. This releases the next block and the procedure is repeated. If the block parities do not check, the receiving terminal discards the block and a nonacknowledgment character (NACK) is returned to the sender. Then, the same block is retransmitted. This procedure requires that storage capacity for a minimum of one data block be provided at both sending and receiving terminals." (Rider, 1967, p. 134).
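Rider's block-parity-plus-ACK/NACK procedure can be sketched as follows. The horizontal (block) parity character is the bitwise XOR of the data characters, giving even parity at each of the eight levels; the per-character vertical parity is omitted here for brevity:

```python
def block_parity(chars):
    """Longitudinal parity character: bit i is the XOR of bit i across
    all characters, so each of the eight levels sums to even parity."""
    lrc = 0
    for c in chars:
        lrc ^= c
    return lrc

def send_block(chars):
    """Append the block parity character to the data."""
    return list(chars) + [block_parity(chars)]

def receive_block(block):
    """Return ('ACK', data) if the block parity checks; otherwise
    ('NACK', None), upon which the sender retransmits the same block."""
    *data, lrc = block
    if block_parity(data) == lrc:
        return "ACK", data
    return "NACK", None
```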

2.30 "What then can we say that will summarize the position of the check digit? We can say that it is useful for control fields, that is, those fields we access by and sort on: customer number, employee number, etc. We can go further and say that it really matters only with certain control fields, not all. With control fields, the keys by which we find and access records, it is essential that they be correct if we are to find the correct record. If they are incorrect through juxtaposition or other errors in transcription, we will 1) not find the record, and 2) find and process the wrong record. . . .

"One of the most novel uses of the check digit can be seen in the IBM 1287 optical scanner. The writer enters his control field followed by the check digit. If one of his characters is not clear, the machine looks at the check digit, carries out its arithmetic on the legible characters, and subtracts the result from the result that would give the check digit to establish the character in doubt. It then rebuilds this character." (Rothery, 1967, p. 59.)
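The 1287's trick of rebuilding an illegible character from the check digit works with any arithmetic check scheme. A sketch using a plain mod-10 sum, an illustrative scheme only; real check digits usually weight positions so that transpositions are also caught:

```python
def check_digit(digits):
    """Illustrative mod-10 sum check digit."""
    return sum(digits) % 10

def rebuild(digits, cd):
    """Rebuild one illegible digit (marked None): do the arithmetic on
    the legible characters and subtract from what would yield the
    check digit, as the scanner does."""
    known = sum(d for d in digits if d is not None)
    missing = (cd - known) % 10
    return [missing if d is None else d for d in digits]
```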

2.31 "A hash total works in the following way. Most of our larger computers can consider alphabetic information as data. These data are added up, just as if they were numeric information, and a meaningless total produced. Since the high-speed electronics are very reliable, they should produce the same meaningless number every time the same data fields are summed. The transfer of information within the computer and to and from the various input/output units can be checked by recomputing this sum after every transmission and checking against the previous total. . . .

"Some computers have special instructions built into them to facilitate this check, whereas others accomplish it through programming. The file designer considers the hash total as a form of built-in audit. Whenever the file is updated, the hash totals are also updated. Whenever a tape is read, the totals are reconstituted as an error check. Whenever an error is found, the operation is repeated to determine if a random error has occurred. If the information is erroneous, an alarm is sounded and machine repair is scheduled. If information has been actually lost, then human assistance is usually required to reconstitute the file to its correct content. Through a combination of hardware and programming the validity of large reference files can be maintained even though the file is subject to repeated usage." (Patrick and Black, 1964, pp. 46-47).
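A hash total in this sense simply sums the fields' byte values as if they were numbers, and recomputing it after a transfer detects most corruption. One limitation of the plain sum, illustrated in this sketch, is that reordering whole records leaves it unchanged:

```python
def hash_total(records):
    """Sum alphabetic fields as if they were numbers: the 'meaningless
    total' is reproducible across error-free transfers."""
    return sum(int.from_bytes(r.encode("ascii"), "big") for r in records)

def transfer_ok(sent, received):
    """Recompute the hash total after transmission and compare."""
    return hash_total(sent) == hash_total(received)
```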

2.32 "Message length checks which involve a comparison of the number of characters as specified for that particular type of transaction. Message length checks can detect many errors arising from both improper data entry and equipment or line malfunctions." (Hillegass and Melick, 1967, p. 56).

2.33 "In general, many standard techniques such as check digits, hash totals, and format checks can be used to verify correct input and transmission. These checks are performed at the computer site. The nature and extent of the checks will depend on the capabilities of the computer associated with the response unit. One effective technique is to have the unit respond with a verbal repetition of the input data." (Melick, 1966, p. 60).

2.34 "Philco has a contract for building what is called a Spelling-Corrector . . . It reads text and matches it against the dictionary to find out whether the words are spelled correctly." (Gibbs and MacPhail, 1964, p. 102).

"Following keypunching, the information retrieval technician processes the data using a 1401 computer. The computer performs sequence checking, editing, autoproofing (each word of input is checked against a master list of correctly spelled words to determine accuracy; a mismatch is printed out for human analysis since it is either a misspelled or a new word), and checking for illegitimate characters. The data is now on tape; any necessary correction changes or updating can be made directly." (Magnino, 1965, p. 204).

"Prior to constructing the name file, a 'legitimate name' list and a 'common error' name list are tabulated . . . The latter list is formed by taking character error information compiled by the instrumentation system and thresholding it so only errors with significant probabilities remain; i.e., 'e' for 'a'. These are then substituted one character at a time in the names of the 'legitimate name' list to create a 'common error' name list. Knowing the probability of error and the frequency of occurrence of the 'legitimate name' permits the frequency of occurrence for the 'common error' name to be calculated." (Hennis, 1967, pp. 12-13).
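Hennis's construction, substituting each significant scanner confusion one character at a time into every legitimate name, can be sketched as follows; the confusion-list format is an assumption:

```python
def common_error_names(legit_names, confusions, threshold=0.01):
    """Build a 'common error' name list: for each confusion (wrong
    character read in place of the right one) whose probability passes
    the threshold, substitute it one position at a time into each
    legitimate name.  Maps each error variant to its true name."""
    errors = {}
    for name in legit_names:
        for wrong, right, prob in confusions:
            if prob < threshold:
                continue                  # keep only significant confusions
            for i, ch in enumerate(name):
                if ch == right:
                    variant = name[:i] + wrong + name[i + 1:]
                    errors[variant] = name
    return errors
```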

2.35 "When a character recognition device errs in the course of reading meaningful English words it will usually result in a letter sequence that is itself not a valid word; i.e., a 'misspelling'." (Cornew, 1968, p. 79).

2.36 "Several possibilities exist for using the information the additional constraints provide. A particularly obvious one is to use special purpose dictionaries, one for physics texts, one for chemistry, one for novels, etc., with appropriate word lists and probabilities in each. . . ."

"Because of the tremendous amount of storage which would be required by such a 'word digram' method, an alternative might be to associate with each word its one or more parts of speech, and make use of conditional probabilities for the transition from one part of speech to another." (Vossler and Branston, 1964, p. D2.4-7).

2.37 "In determining whether or not to adopt an EDC system, the costliness and consequences of any error must be weighed against the cost of installing the error detection system. For example, in a simple telegram or teleprinter message, in which all the information appears in word form, an error in one or two letters usually does not prevent a reader from understanding the message. With training, the human mind can become an effective error detection and correction system; it can readily identify the letter in error and make corrections. Of course, the more unrelated the content of the message, the more difficult it is to detect a random mistake. In a list of unrelated numbers, for example, it is almost impossible to tell if one is incorrect." (Gentle, 1965, p. 70).

2.38 In addition to examples cited in a previousreport in this series, we note the following:

"In the scheme used by McElwain and Evens, undisturbed digrams or trigrams in the garbled message were used to locate a list of candidate words each containing the digram or trigram. These were then matched against the garbled sequence taking into account various possible errors, such as a missing or extra dash, which might have occurred in Morse Code transmission." (Vossler and Branston, 1964, p. D2.4-1).

"Harmon, in addition to using digram frequencies to detect errors, made use of a confusion matrix to determine the probabilities of various letter substitutions as an aid to correcting these errors." (Vossler and Branston, 1964, pp. D2.4-1 to D2.4-2).

"An interesting program written by McElwain and Evens was able to correct about 70% of the garbles in a message transmitted by Morse Code, when the received message contained garbling in 0-10% of the characters." (Vossler and Branston, 1964, p. D2.4-1).

"The design of the spoken speech output modality for the reading machine of the Cognitive Information Processing Group already calls for a large, disc-stored dictionary . . . The possibility of a dual use of this dictionary for both correct spelling and correct pronunciation prompted this study." (Cornew, 1968, p. 79).

"Our technique was first evaluated by a test performed on the 1000 most frequent words of English which, by usage, comprise 78% of the written language . . . For this, a computer program was written which first introduced into each of these words one randomly-selected, randomly-placed letter substitution error, then applied this technique to correct it. This resulted in the following overall statistics: 739 correct recoveries of the original word prior to any other; 241 incorrect recoveries in which another word appeared sooner; 20 cases where the misspelling created another valid word." (Cornew, 1968, p. 83).

"In operation, the word consisting of all first choice characters is looked up. If found, it is assumed correct; if not, the second choice characters are substituted one at a time until a matching word is found in the dictionary or until all second choice substitutions have been tried. In the latter case a multiple error has occurred (or the word read correctly is not in the dictionary)." (Andrews, 1962, p. 302).
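The Andrews procedure maps directly to code: look up the all-first-choice word, then try each position's second-choice character in turn. A sketch; the reader output format shown is assumed:

```python
def correct_word(first_choices, second_choices, dictionary):
    """Look up the word built from first-choice characters; on failure,
    substitute second-choice characters one position at a time until
    a dictionary word is found.  None signals a multiple error (or a
    correctly read word that is simply not in the dictionary)."""
    word = "".join(first_choices)
    if word in dictionary:
        return word
    for i, alt in enumerate(second_choices):
        trial = word[:i] + alt + word[i + 1:]
        if trial in dictionary:
            return trial
    return None
```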

2.39 "There are a number of different techniques for handling spelling problems having to do with names in general and names that are homonyms. Present solutions to the handling of name files are far from perfect." (Rothman, 1966, p. 13).

2.40 "The chief problem associated with . . .

large name files rests with the misspelling or misunderstanding of names at time of input and with possible variations in spelling at the time of search. In order to overcome such difficulties, various coding systems have been devised to permit filing and searching of large groups of names phonetically as well as alphabetically . . . A Remington Rand Univac computer program capable of performing the phonetic coding of input names has been prepared." (Becker and Hayes, 1963, p. 143).

"A particular technique used in the MGH [Massachusetts General Hospital] system is probably worth mentioning; this is the technique for phonetic indexing reported by Bolt et al. The use described involves recognition of drug names that have been typed in, more or less phonetically, by doctors or nurses; in the longer view this is one aspect of a large effort that must be expended to free the man-machine interface from the need for letter-perfect information representation by the man. People just don't work that way, and systems must be developed that can tolerate normal human imprecision without disaster." (Mills, 1967, p. 243).
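One classic phonetic coding scheme of the kind these systems use is Soundex; the source does not specify which code the Univac or MGH programs actually implemented, so Soundex stands in here only as the best-known example:

```python
def soundex(name):
    """Classic Soundex: keep the first letter, map the rest to digit
    classes, collapse runs of the same class, drop vowels, pad to 4.
    'h' and 'w' are transparent: they do not break a run of like codes."""
    codes = {}
    for letters, digit in (("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")):
        for ch in letters:
            codes[ch] = digit
    name = name.lower()
    out = name[0].upper()
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "hw":
            prev = code
    return (out + "000")[:4]
```

Names that sound alike but are spelled differently, such as Robert and Rupert, map to the same code, so a phonetic search finds both.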

2.41 ". . . The object of the study is to determine if we can replace garbled characters in names. The basic plan was to develop the empirical frequency of occurrence of sets of characters in names and use these statistics to replace a missing character." (Carlson, 1966, p. 189).

"The specific effect on error reduction is impressive. If a scanner gives a 5% character error rate, the trigram replacement technique can correct approximately 95% of these errors. The remaining error is thus . . . 0.25% overall. . . .

"A technique like this may, indeed, reduce the cost of verifying the mass of data input coming from scanners . . . [and] reduce the cost of verifying massive data conversion coming from conventional data input devices like keyboards, remote terminals, etc." (Carlson, 1966, p. 191.)
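Carlson's trigram replacement can be sketched by tabulating trigram counts over a name file and choosing, for a garbled position, the letter whose three covering trigrams are jointly most frequent. The product score is an illustrative stand-in for his statistics:

```python
from collections import Counter
import string

def train_trigrams(names):
    """Tabulate empirical trigram frequencies over a name file,
    padding each name so edge characters also appear in trigrams."""
    counts = Counter()
    for n in names:
        padded = "  " + n.lower() + " "
        for i in range(len(padded) - 2):
            counts[padded[i:i + 3]] += 1
    return counts

def replace_garbled(name, pos, counts):
    """Replace the garbled character at `pos` with the letter whose
    three covering trigrams are jointly most frequent."""
    padded = "  " + name.lower() + " "
    p = pos + 2                      # index of the garbled character in padded
    best, best_score = "?", -1
    for c in string.ascii_lowercase:
        trial = padded[:p] + c + padded[p + 1:]
        score = 1
        for i in (p - 2, p - 1, p):  # the three trigrams containing position p
            score *= counts.get(trial[i:i + 3], 0)
        if score > best_score:
            best, best_score = c, score
    return name[:pos] + best + name[pos + 1:]
```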

2.42 "The rules established for coding structures are integrated in the program so that the computer is able to take a fairly sophisticated look at the chemist's coding and the keypunch operator's work. It will not allow any atom to have too many or too few bonds, nor is a '7' bond code permissible with atoms for which ionic bonds are not 'legal'. Improper atom and bond codes and misplaced characters are recognized by the computer, as are various other types of errors." (Waldo and De Backer, 1959, p. 720).

2.43 "Extensive automatic verification of the file data was achieved by a variety of techniques. As an example, extracts were made of principal lines plus the sequence number of the record: specifically, all corporate name lines were extracted and sorted; any variations on a given name were altered to conform to the standard. Similarly, all law firm citations were checked against each other. All city-and-state fields are uniform. A zip-code-and-place-name abstract was made, with the resultant file being sorted by zip code: errors were easy to sort and correct, as with Des Moines appearing in the Philadelphia listing." (North, 1968, p. 110).

Then there is the even more sophisticated case where ". . . An important input characteristic is that the data is not entirely developed for processing or retrieval purposes. It is thus necessary first to standardize and develop the data before manipulating it. Thus, to mention one descriptor, 'location', the desired machine input might be 'coordinate', 'city', and 'state', if a city is mentioned; and 'state' alone when no city is noted. However, inputs to the system might contain a coordinate and city without mention of a state.


It is therefore necessary to develop the data and standardize before further processing commences.

"It is then possible to process the data against the existing file information . . . The objective of the processing is to categorize the information with respect to all other information within the files . . . To categorize the information, a substantial amount of retrieval and association of data is often required . . . Many [data] contradictions are resolvable by the system." (Gurk and Minker, 1961, pp. 263-264).

2.44 "A number of new developments are based on the need for serving clustered environments. A cluster is defined as a geographic area of about three miles in diameter. The basic concept is that within a cluster of stations and computers, it is possible to provide communication capabilities at low cost. Further, it is possible to provide communication paths between clusters, as well as inputs to and outputs from other arrangements as optional features, and still maintain economies within each cluster. This leads to a very adaptable system. It is expected to find wide application on university campuses, in hospitals, within industrial complexes, etc." (Simms, 1968, p. 23).

2.45 "Among the key findings are the following:

- Relative cost-effectiveness between time-sharing and batch processing is very sensitive to and varies widely with the precise man-machine conditions under which experimental comparisons are made.
- Time-sharing shows a tendency toward fewer man-hours and more computer time for experimental tasks than batch processing.
- The controversy is showing signs of narrowing down to a competition between conversationally interactive time-sharing versus fast-turnaround batch systems.
- Individual differences in user performance are generally much larger and are probably more economically important than time-sharing/batch-processing system differences.
- Users consistently and increasingly prefer interactive time-sharing or fast turnaround batch over conventional batch systems.
- Very little is known about individual performance differences, user learning, and human decision-making, the key elements underlying the general behavioral dynamics of man-computer communication.
- Virtually no normative data are available on data-processing problems and tasks, nor on empirical use of computer languages and system support facilities: the kind of data necessary to permit representative sampling of problems, facilities and subjects for crucial experiments that warrant generalizable results." (Sackman, 1968, p. 350).

However, on at least some occasions, some clients of a multiple-access, time-shared system may be satisfied with, or actually prefer, operation in a batch or job-shop mode to extensive use of the conversational mode.

"Critics (see Patrick 1963, Emerson 1965, and MacDonald 1965) claim that the efficiency of time-sharing systems is questionable when compared to modern closed-shop methods, or with economical small computers." (Sackman et al., 1968, p. 4).

Schatzoff et al. (1967) report on experimental comparisons of time-sharing operations (specifically, MIT's CTSS system) with batch processing as employed on IBM's IBSYS system.

". . . One must consider the total spectrum of tasks to which a system will be applied, and their relative importance to the total computing load." (Orchard-Hays, 1965, p. 239).

". . . A major factor to be considered in the design of an operating system is the expected job mix." (Morris et al., 1967, p. 74).

"In practice, a multiple system may contain both types of operation: a group of processors fed from a single queue, and many queues differentiated by the type of request being serviced by the attached processor group . . ." (Scherr, 1965, p. 17).

2.46 "Normalization is a necessary preface to the merge or integration of our data. By merge, or integration, as I use the term here to represent the last stage in our processes, I am referring to a complex interfiling of segments of our data, the entries. In this 'interfiling,' we produce, for each article or book in our file, an entry which is a composite of information from our various sources. If one of our sources omits the name of the publisher of a book, but another gives it, the final entry will contain the publisher's name. If one source gives the volume of a journal in which an article appears, but not the month, and another gives the month, but not the volume, our final entry will contain both volume and month. And so on." (Sawin, 1965, p. 95).
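Sawin's composite-entry "interfiling" amounts to a field-wise merge that takes each element from the first source that supplies it. A minimal sketch, with field names assumed for illustration:

```python
def merge_entries(*sources):
    """Compose one bibliographic entry from several partial records,
    taking each field from the first source that supplies a value."""
    merged = {}
    for record in sources:
        for field, value in record.items():
            if value and not merged.get(field):
                merged[field] = value
    return merged

a = {"title": "On Indexing", "volume": "12", "month": None, "publisher": None}
b = {"title": "On Indexing", "volume": None, "month": "June", "publisher": "MLA"}
print(merge_entries(a, b))
# {'title': 'On Indexing', 'volume': '12', 'month': 'June', 'publisher': 'MLA'}
```

A real interfiling pass would first normalize each source (as Crosby describes next) so that entries for the same work actually collate together.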

"Normalize. Each individual printed source, which has been copied letter by letter, has features of typographical format and style, some of which are of no significance, others of which are the means by which a person consulting the work distinguishes the several 'elements' of the item. The family of programs for normalizing the several files of data will insert appropriate information separators to distinguish and identify the elements of each item and rearrange it according to a selected canonical style, which for the Pilot Study is one which conforms generally to that of the Modern Language Association." (Crosby, 1965, p. 43).

2.47 "Some degree of standardized processing and communication is at the heart of any information system, whether the system is the basis for mounting a major military effort, retrieving documents from a central library, updating the clerical and accounting records in a bank, assigning airline reservations, or maintaining a logistic inventory. There are two reasons for this. First, all information systems are formal schemes for handling the informational aspects of a formally specified venture.

Second, the job to be done always lies embedded within some formal organizational structure." (Bennett, 1964, p. 98).

"Formal organizing protocol exists relatively independently of an organization's purposes, origins, or methods. These established operating procedures of an organization impose constraints upon the available range of alternatives for individual behavior. In addition to such constraints upon the degrees of freedom within an organization as restrictions upon mode of dress, conduct, range of mobility, and style of performance, there are protocol constraints upon the format, mode, pattern, and sequence of information processing and information flow. It is this orderly constraint upon information processing and information flow that we call, for simplicity, the information system of an organization. The term 'system' implies little more than procedural restriction and orderliness. By 'information processing' we mean some actual change in the nature of data or documents. By 'information flow' we indicate a similar change in the location of these data or documents. Thus we may define an information system as simply that set of constraining specifications for the collection, storage, reduction, alteration, transfer, and display of organizational facts, opinions, and associated documentation which is established in order to manage, command if you will, and control the ultimate performance of an organization. . . .

"With this in mind, it is possible to recognize the dangers associated with prematurely standardizing the information-processing tools, the forms, the data codes, the message layouts, the procedures for message sequencing, the file structures, the calculations, and especially the data-summary forms essential for automation. Standardization of these details of a system is relatively simple and can be accomplished by almost anyone familiar with the design of automatic procedures. However, if the precise nature of the job and its organizational implications are not understood in detail, it is not possible to know the exact influence that these standards will have on the performance of the system." (Bennett, 1964, pp. 99, 103).

2.48 "There is a need for design verification. That is, it is necessary to have some method for ensuring that the design is under control and that the nature of the resulting system can be predicted before the end of the design process. In command-and-control systems, the design cycle lasts from two to five years, the design evolving from a simple idea into complex organizations of hardware, software, computer programs, displays, human operations, training, and so forth. At all times during this cycle the design controller must be able to specify the status of the design, the impact that changes in the design will have on the command, and the probability that certain components of the system will work. Design verification is the process that gives the designer this control. The methods that make up the design-verification process range from analysis and simulation on paper to full-scale system testing." (Jacobs, 1964, p. 44).

2.49 "Measurement of the system was a major area which was not initially recognized. It was necessary to develop the tools to gather data and introduce program changes to generate counts and parameters of importance. Future systems designers should give this area more attention in the design phase to permit more efficient data collection." (Evans, 1967, p. 83.)

2.50 "[The user] is given several control statistics which tell him the amount of dispersion in each category, the amount of overlap of each category with every other category, and the discriminating power of the variables . . . These statistics are based on the sample of documents that he assigns to each category . . . Various users of an identical set of documents can thus derive their own structure of subjects from their individual points of view." (Williams, 1965, p. 219).

2.51 "We will probably see a trend toward the concept of a computer as a collection of memories, buses and processors with distributed control of their assignments on a dynamic basis." (Clippinger, 1965, p. 209).

"Both Dr. Gilbert C. McCann of Cal Tech and Dr. Edward E. David, Jr., of Bell Telephone Laboratories stressed the need for hierarchies of computers interconnected in large systems to perform the many tasks of a time-sharing system." (Commun. ACM 9, 645 (Aug. 1966).)

2.52 "Every part of the system should consist of a pool of functionally identical units (memories, processors and so on) that can operate independently and can be used interchangeably or simultaneously at all times . . .

"Moreover, the availability of duplicate units would simplify the problem of queuing and the allocation of time and space to users." (Fano and Corbató, 1966, pp. 134-135).

"Time-sharing demands high system reliability and maintainability, encourages redundant, modular, system design, and emphasizes high-volume storage (both core and auxiliary) with highly parallel system operation." (Gallenson and Weissman, 1965, p. 14).

"A properly organized multiple processor system provides great reliability (and the prospect of continuous operation) since a processor may be trivially added to or removed from the system. A processor undergoing repair or preventive maintenance merely lowers the capacity of the system, rather than rendering the system useless." (Saltzer, 1966, p. 2).

"Greater modularity of the systems will mean easier, quicker diagnosis and replacement of faulty parts." (Pyke, 1967, p. 162).

"To meet the requirements of flexibility of capacity and of reliability, the most natural form . . . is as a modular multiprocessor system arranged so that processors, memory modules and file storage units may be added, removed or replaced in accordance with changing requirements." (Dennis and Van Horn, 1965, p. 4). See also notes 5.83, 5.84.

2.53 "The actual execution of data movement commands should be asynchronous with the main processing operation. It should be an excellent use of parallel processing capability." (Opler, 1965, p. 276).

2.54 "Work currently in progress [at Western Data Processing Center, UCLA] includes: investigations of intra-job parallel processing which will attempt to produce quantitative evaluations of component utilization; the increase in complexity of the task of programming; and the feasibility of compilers which perform the analysis necessary to convert sequential programs into parallel-path programs." (Dig. Computer Newsletter 16, No. 4, 21 (1964).)

2.55 "The motivation for encouraging the use of parallelism in a computation is not so much to make a particular computation run more efficiently as it is to relax constraints on the order in which parts of a computation are carried out. A multi-program scheduling algorithm should then be able to take advantage of this extra freedom to allocate system resources with greater efficiency." (Dennis and Van Horn, 1965, pp. 19-20).

2.56 Amdahl remarks that "the principal motivations for multiplicity of components functioning in an on-line system are to provide increased capacity or increased availability or both." (1965, p. 38). He notes further that "by pooling, the number of components provided need not be large enough to accommodate peak requirements occurring concurrently in each computer, but may instead accommodate a peak in one occurring at the same time as an average requirement in the other." (Amdahl, 1965, pp. 38-39).

2.57 "No large system is a static entity; it must be capable of expansion of capacity and alteration of function to meet new and unforeseen requirements." (Dennis and Glaser, 1965, p. 5).

"Changing objectives, increased demands for use, added functions, improved algorithms and new technologies all call for flexible evolution of the system, both as a configuration of equipment and as a collection of programs." (Dennis and Van Horn, 1965, p. 4).

"A design problem of a slightly different character, but one that deserves considerable emphasis, is the development of a system that is 'open-ended'; i.e., one that is capable of expansion to handle new plants or offices, higher volumes of traffic, new applications, and other difficult-to-foresee developments associated with the growth of the business. The design and implementation of a data communications system is a major investment; proper planning at design time to provide for future growth will safeguard this investment." (Reagan, 1966, p. 24).

2.58 "Reconfiguration is used for two prime purposes: to remove a unit from the system for service or because of malfunction, or to reconfigure the system either because of the malfunction of one of the units or to 'partition' the system so as to have two or more independent systems. In this last case, partitioning would be used either to debug a new system supervisor or perhaps to aid in the diagnostic analysis of a hardware malfunction where more than a single system component were needed." (Glaser et al., 1965, p. 202.)

"Often, failure of a portion of the system to provide services can entail serious consequences to the system users. Thus severe reliability standards are placed on the system hardware. Many of these systems must be capable of providing service to a range in the number of users and must be able to grow as the system finds more users. Thus, one finds the need for modularity to meet these demands. Finally, as these systems are used, they must be capable of change so that they can be adapted to the ever changing and wide variety of requirements, problems, formats, codes and other characteristics of their users. As a result general-purpose stored program computers should be used wherever possible." (Cohler and Rubenstein, 1964, p. 175).

2.59 "On-line systems are still in their early development stage, but now that systems are beginning to work, I think that it is obvious that more attention should be paid to the fail-safe aspects of the problem." (Huskey, 1965, p. 141).

"From our experience we have concluded that system reliability . . . must provide for several levels of failure leading to the term 'fail-soft' rather than 'fail-safe'." (Baruch, 1967, p. 147).

Related terms are "graceful degradation" and "high availability," as follows:

"The military is becoming increasingly interested in multiprocessors organized to exhibit the property of graceful degradation. This means that when one of them fails, the others can recognize this and pick up the work load of the one that failed, continuing this process until all of them have failed." (Clippinger, 1965, p. 210).

"The term 'high availability' (like its synonym 'fail safe') has now become a cliche, and lacks any precise meaning. It connotes a system characteristic which permits recovery from all hardware errors. Specifically, it appears to promise that critical system and user data will not be destroyed, that system and job restarts will be minimized and that critical jobs can most surely be executed, despite failing hardware. If this is so, then multiprocessing per se aids in only one of the three characteristics of high availability." (Witt, 1968, p. 699).

"The structure of a multi-computer system planned for high availability is principally determined by the permissible reconfiguration time and the ability to fail safely or softly. The multiplicity and modularity of system components should be chosen to provide the most economical realization of these requirements . . .

"A multi-computer system which can perform the full set of tasks in the presence of a single malfunction is fail-safe. Such a system requires at least one more unit of each type of system component, with the interconnection circuitry to permit it to replace any of its type in any configuration . . .

"A multi-computer system which can perform a satisfactory subset of its tasks in the presence of a malfunction is fail-soft. The set of tasks which must still be performed to provide a satisfactory though degraded level of operation determines the minimum number of each component required after a failure of one of its type." (Amdahl, 1965, p. 39).

"Systems are designed to provide either full service or graceful degradation in the face of failures that would normally cause operations to cease. A standby computer, extra mass storage devices, auxiliary power sources to protect against public utility failure, and extra peripherals and communication lines are sometimes used. Manual or automatic switching of spare peripherals between processors may also be provided." (Bonn, 1966, p. 1865).

2.60 "A third main feature of the communication system being described is high reliability. The emphasis here is not just on dependable hardware but on techniques to preserve the integrity of the data as it moves from entry device, through the temporary storage and data modes, over the transmission lines and eventually to computer tape or hard copy printer." (Hickey, 1966, p. 181.)

2.61 In addition to the examples cited in the discussion of client and system protection in the previous report in this series (on processing, storage, and output requirements, Section 2.2.4), we note the following:

"The primary objective of an evolving special-purpose time-sharing system is to provide a real service for people who are generally not computer programmers and furthermore depend on the system to perform their duties. Therefore the biggest operational problem is reliability. Because the data attached to a special-purpose system are important and also must be maintained for a long time, reliability is doubly crucial, since errors affecting the data base can not only interrupt users' current procedures but also jeopardize past work." (Castleman, 1967, p. 17).

"If the system is designed to handle both special-purpose functions and programming development, then why is reliability a problem? It is a problem because in a real operating environment some new 'dangerous' programs cannot be tested on the system at the same time that service is in effect. As a result, new software must be checked out during off hours, with two consequences. First, the system is not subjected to its usual daytime load during checkout time. It is a characteristic of time-shared programs that different 'bugs' may appear depending on the conditions of the overall system activity. For example, the 'time-sharing bug' of a program manipulating data incorrectly because another program processes the same data at virtually the same


time would be unlikely on a lightly loaded system. Second, programmers must simulate at night their counterparts of laymen users. Unfortunately, these two types of people tend to use application programs differently and to make different types of errors; so program debugging is again limited. Therefore, because the same system is used for both service and development, programs checked as rigorously as possible can still cause system failures when they are installed during actual service hours." (Castleman, 1967, p. 17).

"Protection of a disk system requires that no user be able to modify the system, purposely or inadvertently, thus preserving the integrity of the software. Also, a user must not be able to gain access to, or modify any other user's program or data. Protection in tape systems is accomplished: (1) by making the tape units holding the system records inaccessible to the user, (2) by making the input and output streams one-way (e.g., the input file cannot be backspaced), and (3) by placing a mark in the input stream which only the system can cross. In order to accomplish this, rather elaborate schemes have been devised both in hardware and software to prevent the user from accomplishing certain input-output manipulations. For example, in some hardware, unauthorized attempts at I/O manipulation will interrupt the computer.

"In disk-based systems, comparable protection devices must be employed. Since many different kinds of records (e.g., system input, user scratch area, translators, etc.) can exist in the same physical disk file, integrity protection requires that certain tracks, and not tape units, must be removed from the realm of user access and control. This is usually accomplished by partitioning schemes and central I/O software systems similar to those used in tape-based systems. The designer must be careful to preserve flexibility while guaranteeing protection." (Rosin, 1966, p. 242).

2.62 "Duplex computers are specified with the spare and active computers sharing I/O devices and key data in storage, so that the spare computer can take over the job on demand." (Aron, 1967, p. 54).

"The second channel operates in parallel with the main channel, and the results of the two channels are compared. Both channels must independently arrive at the same answer or operation cannot proceed. The duplication philosophy provides for two independent access arms on the Disk Storage Unit, two core buffers and redundant power supplies." (Bowers et al., 1962, p. 109).

"Considerable effort has been continuously directed toward practical use of massive triple modular redundancy (TMR) in which logic signals are handled in three identical channels and faults are masked by vote-taking elements distributed throughout the system." (Avižienis, 1967, p. 735).

"He must give consideration to 1) back-up power supplies that include the communications gear, 2) dual or split communication cables into his data center, 3) protection of the center and its gear from fire and other hazards, 4) insistence that separate facilities and separate routes be used to connect locations on the MIS network, and 5) building extra capacity into the MIS hardware system." (Dantine, 1966, p. 409).

"It is far better to have the system running at half speed 5% of the time with no 100% failures than to have the system down 2 1/2% of the time." (Dantine, 1966, p. 409).

"Whenever possible, the two systems run in parallel under the supervision of the automatic recovery program. The operational system performs all required functions and monitors the back-up system. The back-up system constantly repeats a series of diagnostic tests on the computer, memory and other modules available to it and monitors the operational system. These tests are designed to maintain a high level of confidence in these modules so that should a respective counterpart in the operational system fail, the back-up unit can be safely substituted. The back-up system also has the capability of receiving instructions to perform tests on any of its elements and to execute these tests while continuing to monitor the operational system to confirm that the operational system has not hung up." (Armstrong et al., 1967, p. 409).

2.63 "The large number of papers on vote-taking redundancy can be traced back to the fundamental paper of von Neumann where multiple-line redundancy was first established as a mathematical reality for the provision of arbitrarily reliable systems." (Short, 1968, p. 4).

2.64 "A computer system contains protective redundancy if faults can be tolerated because of the use of additional components or programs, or the use of more time for the computational tasks. . . .

"In the massive (masking) redundancy approach the effect of a faulty component, circuit, signal, subsystem, or system is masked instantaneously by permanently connected and concurrently operating replicas of the faulty element. The level at which replication occurs ranges from individual circuit components to entire self-contained systems." (Avižienis, 1967, pp. 733-734).

2.65 "An increase in the reliability of systems is frequently obtained in the conventional manner by replicating the important parts several (usually three) times, and a majority vote . . . A technique of diagnosis performed by nonbinary matrices . . . requires, for the same effect, only one duplicated part. This effect is achieved by connecting the described circuit in a periodically changing way to the duplicated part. If one part is disturbed the circuit gives an alarm, localizes the failure and simultaneously switches to the remaining part, so that a fast repair under operating conditions (and without additional measuring instruments) is possible." (Steinbuch and Piske, 1963, p. 859).


2.66 "Parameters of the model are as follows:

n = total number of modules in the system
m = number of unfailed modules needed for system survival
P_f = probability of failure of each module some time during the mission. This parameter thus includes both the mission duration and the module MTBF
P_nd = probability of not detecting an occurred module failure
P_s = probability of system survival throughout the mission
P_F = 1 - P_s = probability of system failure during the mission
n/m = redundancy factor in initial system . . .

"Depending upon the attainable P_f and P_nd, the theoretical reliability of a multi-module computing system may be degraded by adding more than a minimal amount of redundancy. For example, P_f = 0.025 . . . it is more reliable to have only one spare module rather than two or four, for a typical current-day P_nd such as 0.075. Even for a P_nd as low as 0.03 (a very difficult P_nd to achieve in a computer), the improvement obtained in system reliability by adding a second spare unit to the system is minor." (Wyle and Burnett, 1967, pp. 746, 748).

"The probability of system failure . . . is

\[ P_F = \sum_{k=n-m+1}^{n} \frac{n!}{(n-k)!\,k!} P_f^{k} (1-P_f)^{n-k} + \sum_{k=1}^{n-m} \frac{n!}{(n-k)!\,k!} P_f^{k} (1-P_f)^{n-k} \left[ 1 - (1-P_{nd})^{k} \right] \]

(Wyle and Burnett, 1967, p. 746).

2.67 "One of the system design considerations is the determination of the optimum number of redundant units by means of which the required system reliability is to be reached. It will be seen that P_nd as well as P_f must be considered in determining the most economical design." (Wyle and Burnett, 1967, p. 748).
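The Wyle-Burnett failure probability can be sketched numerically. The model treats the system as failed either when more than n - m modules fail, or when a tolerable number of failures occurs but at least one goes undetected; the summation limits used here are reconstructed from the garbled original text and should be read as an assumption.

```python
from math import comb

def p_system_failure(n, m, pf, pnd):
    """P_F for an n-module system needing m survivors, with per-module
    failure probability pf and non-detection probability pnd."""
    # More than n - m failures: the system fails outright.
    too_many = sum(comb(n, k) * pf**k * (1 - pf)**(n - k)
                   for k in range(n - m + 1, n + 1))
    # k tolerable failures, but at least one of them goes undetected.
    undetected = sum(comb(n, k) * pf**k * (1 - pf)**(n - k)
                     * (1 - (1 - pnd)**k)
                     for k in range(1, n - m + 1))
    return too_many + undetected

# Four modules needed, one spare, with the quoted parameter values:
print(p_system_failure(5, 4, 0.025, 0.075))
```

Two sanity checks follow from the model: with pnd = 0 a spare always helps, and with pnd = 1 every failure is fatal, so P_F collapses to 1 - (1 - pf)^n regardless of spares.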

"One of the prime requisites for a reliable, dependable communications data processing system is that it employ features for insuring message protection and for knowing the disposition of every message in the system (message accountability) in case of equipment failures. The degree of message protection and accountability will vary from application to application." (Probst, 1968, p. 21).

"Elaborate measures are called for to guarantee message protection. At any given moment, a switching center may be in the middle of processing many different messages in both directions. If a malfunction occurs in any storage or processing device, there must be enough information stored elsewhere in the center to analyze the situation, and to repeat whatever steps are necessary. This means that any item of information must be stored in at least two independent places, and that the updating of queue tables and other auxiliary data must be carefully synchronized so that operation can continue smoothly after correction of a malfunction. If it cannot be determined exactly where a transmission was interrupted, procedures should lean toward pessimism. Repetition of a part of a message is less grievous than a loss of part of it." (Shafritz, 1964, p. N2.3-3).

"Reference copies are kept on magnetic tapes for protective accountability of each message. Random requests for retransmission are met by a computer search of the tape, withdrawal of the required messages and automatic reintroduction of the message into the communications system." (Jacobellis, 1964, p. N2.1-2).

"Every evening, the complete disc file inventory is pruned and saved on tape to be reloaded the following day. This gives a 24-hour 'rollback' capability for catastrophic disc failures." (Schwartz and Weissman, 1967, p. 267).

"It is necessary to provide means whereby the contents of the disc can be reinstated after they have been damaged by system failure. The most straightforward way of doing this is for the disc to be copied on to magnetic tape once or twice a day; re-writing the disc then puts the clock back, but users at least know where they are. Unfortunately, the copying of a large disc consumes a lot of computer time, and it seems essential to develop methods whereby files are copied on to magnetic tape only when they are created or modified. It would be nice to be able to consider the archive and recovery problems as independent, but reasons of efficiency demand that an attempt should be made to develop a satisfactory common system. We have, unfortunately, little experience in this area as yet, and are still groping our way." (Wilkes, 1967, p. 7).
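Wilkes's suggestion, copying files only when they are created or modified, is essentially an incremental backup pass. A minimal sketch under the assumption that a modification timestamp is the change signal; the directory layout and bookkeeping are illustrative:

```python
import os
import shutil

def incremental_backup(src_dir, dst_dir, last_run):
    """Copy to dst_dir only the files under src_dir created or modified
    since `last_run` (a Unix timestamp); return the relative paths copied."""
    copied = []
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) > last_run:
                rel = os.path.relpath(path, src_dir)
                target = os.path.join(dst_dir, rel)
                os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
                shutil.copy2(path, target)  # preserves the mtime on the copy
                copied.append(rel)
    return sorted(copied)
```

A full system, as Wilkes notes, would still need periodic complete dumps, since the incremental copies alone do not give a single consistent restore point.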

"Our requirements, therefore, were threefold: security, retrieval, and storage. We investigated various means by which we could meet these requirements; and we decided on the use of microfilm, for two reasons. First, photographic copies of records, including those on microfilm, are acceptable as legal representations of documents. We could photograph our notebooks, store the film in a safe place, and destroy the books or, at least, move them to a larger storage area. Second, we found on the market equipment with which we could film the books and then, with a suitable indexing system, obtain quick retrieval of information from that film." (Murrill, 1966, p. 52).

"The file system is designed with the presump-tion that there will be mishaps, so that an auto-matic file backup mechanism is provided. Thebackup procedures must be prepared for con-tingencies ranging from a dropped bit on a magnetictape to a fire in the computer room.

Page 57: DOCUMENT RESUME - ERICNational Standard Reference Data. Series. NSRDS provides quantitive data on the physical. and chemical properties of materials, compiled from the world's literature


"Specifically, the following contingencies areprovided for:

"1. A user may discover that he has accidentallydeleted a recent file and may wish to recoverit.

"2. There may be a specific system mishapwhich causes a particular file to be no longerreadable for some 'inexplicable' reason.

"3. There may be a total mishap. For example,the disk-memory read heads may irreversiblyscore the magnetic surfaces so that all disk-stored information is destroyed.

"The general backup mechanism is provided bythe system rather than the individual user, for themore reliable the system becomes, the more theuser is unable to justify the overhead (or bother)of trying to arrange for the unlikely contingencyof a mishap. Thus an individual user needs insur-ance, and, in fact, this is what is provided." (Corbatoand Vyssotsky, 1965, p. 193).

"Program roll-back for corrective action must beroutine or function oriented since it is impracticalfrom a storage requirement point of view to providecorrective action for each instruction. The roll-back must be to a point where initial conditionsare available from sensors, prestored, or recon-stitutable. Even an intermittent memory mal-function during access becomes a persistent errorsince it is immediately rewritten in error. Thus,critical routines or high iteration rate real-timeroutines (for example, those which perform inte-gration with respect to time) should be storedredundantly so that in the event of malfunctionthe redundantly stored routine is used to precluderoutine malfunction or error buildup with time."(Bujnoski, 1968, p. 33).

2.68 "Restart procedures should be designedinto the system from the beginning, and the neces-sity for the system to spend time in copying vitalinformation from one place to another shouldbe cheerfully accepted. . . .

"Redundant information can be included insupervisor communication or data areas in orderto enable errors caused by system failure to becorrected. Even a partial application of this ideacould lead to important improvements in restartcapability. A system will be judged as much asby the efficiencies of its restart procedures as bythe facilities that it provides. . . .

"Making it possible for the system to be restartedafter a failure with as little loss as possible shouldbe the constant preoccupation of the softwaredesigner." (Wilkes and Needham, 1968, p. 320).

"Procedures must also be presCribed for workwith the archive collection to prevent loss orcontamination of the master records by tapeerasure, statistical adjustment, aggregation orreclassification." (Glaser et al., 1967, p. 19).

2.69 "Standby equipment costs should receivesome consideration, particularly in a cold warsituation: duplicate tapes, raw data or semi-

processed data. Also consider the possible costsof transporting classified data elsewhere forcomputation: express, courier, messenger, Brink'sservice." (Bush, 1956, p. 110).

"For companies in the middle range, the com-mercial underground vaults offer excellent facilitiesat low cost. Installations of this type are availablein a number of states, including New York, Pennsyl-vania, Kansas, Missouri and California. In additionto maximum security, they provide pre-attackclerical services and post-attack conversionfacilities. The usual storage charge ranges from$2 to $5 a cubic foot annually, depending on whethercommunity or private storage is desired. . .

"The instructions should detail procedure forconverting each vital record to useable form,as well as for utilizing the converted data to per-form the desired emergency functions. The lan-guage should be as simple as possible and freeof 'shop' terms, since inexperienced personnelwill probably use the instructions in the post-attack." (Butler, 1962, pp. 65, 67.)

2.70 "The trend away from supporting recordsis a recent development that has not yet gainedwidespread acceptance. There is ample evidence,however, that their use will decline rapidly, ifthe cold war gets uncomfortably hot. Exceptfor isolated areas in their operations, an increasingnumber of companies are electing to take a cal-culated risk in safeguarding basic records butnot the supporting changes. For example, someof the insurance companies microfilm the basicin-force policy records annually and forego thechanges that occur between duplicating cycles.This is a good business risk for two reasons: (1)supporting records are impractical for most emergency operations, and (2) a maximum one-year lag inthe microfilm record would not seriously hamperemergency operations." (Butler, 1962, p. 62.)

"Mass storage devices hold valuable records,and backup is needed in the event of destructionor nonreadability of a record(s). Usually the entirefile is copied periodically, and a journal of trans-actions is kept. If necessary, the file can be recon-structed from an earlier copy plus the journal todate." (Bonn, 1966, p. 1865).

2.71 "The life and stability of the [storage] medium under environmental conditions are other considerations to which a great deal of attention must be paid. How long will the medium last? How stable will it be under heat and humidity changes?" (Becker and Hayes, 1963, p. 284).

It must be noted that, in the present state of magnetic tape technology, the average accurate life of tape records is a matter of a few months only. The active master files are typically rewritten on new tapes regularly, as a part of normal updating and maintenance procedures. Special precautions must be undertaken, however, to assure the same for duplicate master tapes, wherever located.

"Security should also be considered in another

51

Page 58: DOCUMENT RESUME - ERICNational Standard Reference Data. Series. NSRDS provides quantitive data on the physical. and chemical properties of materials, compiled from the world's literature

sense. Paper must be protected against fire andflooding, magnetic tapes against exposure to electro-magnetic fields and related hazards. No specialprecartion is necessary for microfilm, provided

the reels are replaced periodically in updatingcycles. Long-term storage of microfilm, however,will require proper temperature and humiditycontrol in the storage area." (Butler, 1962, p. 64.)

3. Problems of System Networking

3.1 As noted in a previous report in this series: "Information processing systems are but one facet of an evolving field of intellectual activity called communication sciences. This is a generic term which is applied to those areas of study in which the interest centers on the properties of a system or the properties of arrays of symbols which come from their organization or structure rather than from their physical properties; that is, the study of what one M.I.T. colleague calls 'the problems of organized complexity'." (Wiesner, 1958, p. 268).

The terminology apparently originated with Warren Weaver. Weaver (1948) noted first that the areas typically tackled in scientific research and development efforts up to the twentieth century were largely concerned with two-variable problems of simplicity; then from about 1900 on, powerful techniques such as those of probability theory and statistical mechanics were developed to deal with problems of disorganized complexity (that is, those in which the number of variables is very large, the individual behavior of each of the many variables is erratic or unknown, but the system as a whole has analyzable average properties). Finally, he points to an intermediate region "which science has as yet little explored or conquered" (1948, p. 539), where by contrast to those disorganized or random situations with which the statistical techniques can cope, the problems of organized complexity require dealing simultaneously with a considerable number of variables that are interrelated in accordance with organizational factors.

is3.2 "Organizational generality s an attributeof underrated importance. The correct functioningof on-line systems imposes requirements that havebeen met ad hoc by current designs. Future systemdesigns must acknowledge the basic nature of theproblems and provide general approaches to theirresolution." (Dennis and Glaser, 1965, p. 5).

"Diversity of needs and divisibility of computerresources demand a much more sophisticated multi-plexing strategy than the simple communicationcase where all users are treated alike." (David, 1966,p. 40).

"As we turn toward stage three, the stage char-acterized by the netting of geographically distrib-uted computers, we find ourselves with a significantbase of experience with special-purpose computernetworks, but with essentially no experience withgeneral-purpose computer networks of the kind thatwill come into being when multiple- access systems

such as those at M.I.T. and the Systems Develop-ment Corporation are linked by digital transmissionchannels. The difficulties involved in establishingcomputer networks appear not to be difficulties ofhardware design or even of hardware availability.Rather, they appear to be difficulties of social andsoftware organization, of conventions, formats,and standards, of programming, interaction, andcommunication languages. It is a situation in whichthere now exist all the component hardware facilitiesthat can be said to be required, yet in which theredo not now exist any general-purpose networkscapable of supporting stage-three interaction."(Licklider, 1967, pp. 5-6).

"The state of affairs at the end of 1966 can besummarized as follows. Multiaccess-system applica-tion techniques and user-oriented subsystems havebeen developed to only a relatively primitive level.Far too many people, particularly those with ascientific-application bent, still hold the short-sighted view that the real value of large, multiaccesssystems lies in their ability to simultaneouslypresent a powerful 'private computer' to each ofseveral tens or hundreds of programmers. Nearlyall of the systems actually in operation are used nthis way. Application-oriented systems that free theuser of most, or all, of his concern with details ofcomputer-system characteristics and conventionalprogramming are coming much more slowly. Theirdevelopment has been inhibited by among otherthings, the temporary plateau that has been reachedin basic multiaccess system technology." (Mills,1967, p. 247).

3.3 "The analytical tools are simply not avail-able . . ." (Baran, 1964, p. 27).

"The essence of rational benefit-cost analysis isthe tracing of indirect as well as direct effects ofprograms and the evaluation and summing of theseeffects. Typically, the methodology for tracing allbut the most obvious linkages is entirely lacking orfails to use the relevant information." (Glaser et al.,1967, p. 15),

"The problem associated with providing the inter-connection of a network of processors is a majorone." (Estrin and Kleinrock, 1967, p. 92).

"Solving the data base management problems of adistributed network has been beyond the state ofthe art." (Dennis, 1968, p. 373).

"Although techniques for multiplex communica-tions are well developed, we are only beginning tolearn how to multiplex computers." (David, 1966,p. 40).

"The formalism of hardward /software system

52

Page 59: DOCUMENT RESUME - ERICNational Standard Reference Data. Series. NSRDS provides quantitive data on the physical. and chemical properties of materials, compiled from the world's literature

management is just beginning to take shape. SAGEis a landmark because it worked in spite of itsimmense size and complexity." (Aron, 1967, p. 50).

"The system engineer presently lacks sufficienttools to efficiently design, modify or evaluate com-plex information systems." (Blunt, 1965, p. 69).

3.4 "The design and analysis problems associ-ated with large communications networks are fre-quently not solvable by analytic means and it istherefore necessary to turn to simulation techniques.Even with networks which are not particularly largethe computational difficulties encountered whenother than very restrictive and simple models areto be considered preclude analysis. It has becomeclear that the study of network characteristics andtraffic handling procedures must progress beyondthe half-dozen switching center problem to considernetworks of dozens of nodes with hundreds or eventhousands of trunks so that those features unique tothese large networks can be detern,ned and usedin the design of communications systems. Here it isevident that simulation is the major study tool."(Weber and Gimpelson, 1964, p. 233).

"The time and costs involved make it almostmandatory to 'prove' the 'workability' and feasi-bility of the potential solutions via pilot systems orby implementation in organizations or associationswhich have some of the characteristics of thenational system and which would therefore serve asa model or microcosm of the National Macrocosm."(Ebersole, 1966, p. 34).

"A report by Churchill et al., specifically recog-nizes the need for theoretical research in order tobuild an adequate foundation on which to basesystems analysis procedures. They point out thatrecent computer developments and particularlylarge computer systems have increased the need forresearch and the body of data that research canprovide in such areas as data coding and fileorganization." (Borko, 1967, p. 37).

3.5 "The coming importance of networks ofcomputers creates another source of applicationsfor . . . multiple-queue disciplines. Computer net-work disciplines will also have to be dependent ontransmission delays of service requests and jobsor parts of jobssfrom one computer to another as wellas on the possible incompatibilities of various typesbetween different computers. The synthesisand analysis of multiprocessor and multiple proc-essor network priority disciplines remains afertile area of research whose development awaitsbroad multiprocessor application and an enlighten-ing experience with the characteristics of thesedisciplines." (Coffman and Kleinrock, 1968, p. 20).

"We still are plagued by our inability to pro-gram for simultaneous action, even for the sched-uling of large units in a computing system." (Gorn,1966, p. 232).

"As computer time-sharing systems have evolvedfrom a research activity to an operational activity,and have incre..,sed in size and complexity, it hasbecome clear that significant problems occur in

53

controlling the use of such systems. These prob-lems have evidenced themselves in computerscheduling, program capability constraints, andthe allotment of auxiliary storage." (Linde andChaney, 1966, p. 149).

"A network has to consider with great carethe many possibilities of user access which approachmore and more the vast possibilities and intri-cacies of direct human communication." (Cainand Pizer, 1967, p. 262).

"Increased attention needs to be placed on theproblem of techniques for scheduling the manyusers with their different priorities." (Bauer,1965, p. 23).

3.6 "Much of the design effort in a message-switching type communications system goesinto the network which links the terminals andnodal points together. The distribution of terminalscan be shown, the current message density isknown, and programs exist to help lay out thenetwork. With most interactive systems this isnot the case." (Stephenson, 1968, p. 56).

3.7 "In looking toward computer-based, com-puter-linked library systems, that have beenproposed as a national technical information net-work, studies of perceived needs among usersare likely to be of very little use. Instead it wouldseem to be more appropriate to initiate small-scale experiments designed to produce, on alimited basis, the effects of a larger -scale systemin order to determine whether such, experimentsproduce the expected benefits." (Schon, 1965,p. 34).

3.8 "Transmitting data collection systemscan assume a wide variety of equipment con-figurations, ranging from a single input unit withcable-connected recorded to a farflung networkwith multiple input units transmitting data tomultiple recorders or computers by means ofboth common-carrier facilities and direct cableconnections. Probably the most important parameterin planning the equipment configuration of asystem is the maximum number of input stationsthat can be connected to a single central recordingunit." (Hillegass and Melick, 1967, pp. 50-51).

3.9 Licklider stresses the importance of "coherence through networking" and emphasizes: "On the average, each of n cooperative users can draw n-1 programs from the files for each one he puts into the public files. That fact becomes so obviously significant as n increases that I can conclude by concluding that the most important factors in software economics are n, the number of netted users, and c, the coefficient of contributive cooperativeness that measures the value to his colleagues of each user's creative effort." (Licklider, 1967, p. 13).

"The circumstances which appear to call for theestablishment of physical networks (as opposed tological networks) are generally:

"1. The existence of special data banks orspecial collections of information located at a

Page 60: DOCUMENT RESUME - ERICNational Standard Reference Data. Series. NSRDS provides quantitive data on the physical. and chemical properties of materials, compiled from the world's literature

single institution but useful to an audiencegeographically dispersed.

"2. The inadequacy of general data banks orgeneral collections of information to meetlocal needs where remote resources can beused in a complementary fashion to fulfill theneed.The centralization of programming services,processing capabilities or scientific resourceswith a geographically dispersed need.The need for interpersonal (including inter-group) direct communication. This includesteleconferencing and educational activities.A justification on economic, security or socialgrounds for distribution of responsibilityfor load sharing among organizations or geo-graphiCal regions." (Davis, 1968, pp. 1-2).

"In certain areas, such as law enforcement,medicine, social security, and education, there is aneed for joint Federal-State computer communica-tions networks which can apply new technology toimproving the management of major national pro-grams in these areas." (Johnson, 1967, p. 5),

"It has been suggested that a principal advantageto be gained from computer networks is the abilityto distribute work evenly over the available installa-tions or to perform certain computations at installa-tions particularly suited to the nature of the job,"(Dennis, 1968, p. 374).

"The time-sharing computer system can unite agroup of investigators in a cooperative search forthe solution to a common problem, or it can serveas a community pool of knowledge and skill on whichanyone can draw according to his needs." (Fano andCorbatO, 1966, p. 129).

"Within a computer network, a user of anycooperating installation would have access toprograms running at other cooperating installations,even though the programs were written in differentlanguages for different computers. This forms theprincipal motivation for considering the implementa-tion of a network." (Marill and Roberts, 1966, p.426).

3.10 "The establishment of a network may leadto a certain amount of specialization among thecooperating installations. If a given installation, X,by reason of special software or hardware, isparticularly adept at matrix inversion, for example,one may expect that users at other installations inthe network will exploit this capability by invertingtheir matrices at X in preference to doing so ontheir own computers." (Marill and Roberts, 1966,p. 426).

"An interconnected network would make itpossible for the top specialists in any field to in-struct anyone within the reach of a TV receiver."(Brown et al., 1967, p. 74).

3.11 "For initial network trials the advantagesof an open system are:

443.

4.

5.

a. ease of programming,b. services for all users,

54

c, all operations may be publicized." (Brownet al., 1967, p. 209).

3.12 "The operator's charges to clients must bein fair proportion to the usage made of installationresources (processing, time, storage occupancy,etc.). Therefore adequate records must be kept ofresource use." (Dennis, 1968, p. 375).

"A principal [individual or group of individuals]is charged for resources consumed by computationsrunning on his behalf. A principal is also chargedfor retention in the system of a set of computingentities called retained objects, which may beprogram and data segments . . ." (Dennis andVan Horn, 1965, p. 8).

"The equitable allocation of space and time byadministrative fiat. This is probably an overwhelm-ing problem in a network since the predicting ofcommunication paths and computing facilitiesrequired by any user would be quite unwieldy. . . .

"Thus a more elaborate scheme seems to benecessary one whose rates are proportional tothe value of the service. This would be modifiedin a network because the rate for the same kind ofservice may vary among the installations." (Brownet al., 1967, p. 212).

"Built-in accounting and analysis of system logsare used to provide a history of system performanceas well as establish a basis for charging users."(Estrin et al., 1967, k). 645).

3.13 "Questions of technical feasibility andeconomic value are not the sole determinants of thecomputer utility. The development of the computerutility may be influenced by norms, or lack of norms,about the confidentiality of data. At the momentthere do not seem to be any clear standards ofgood practice; perhaps there was less need beforetechnology greatly increased capabilities forhandling data." (Jones, 1967, p. 555).

"It should be easy and convenient for a user toallow controlled access to any of his segments, withdifferent access privileges for different users."(Graham, 1968, p. 367).

3.14 "The designer must decide whether he willprovide the high-speed service to all users, toprovide the service that the majority request, andleave the minority to fend for themselves, or toprovide the degree of speed needed in each case,but no more. Ideally, he should know the entiredistribution of response time requirements. It iseven desirable to know how the arrival of thesequeries will be distributed in time throughout theday. In attempting to meet requirements, he mustconsider what is actually being retrieved in anystated response interval. Does the user want hardcopy or will he be satisfied with citations or indexrecords? The engineering problems associated withhigh-speed retrieval of hard copy from files can beformidable. If a conversational, or browsing, modeof search is used in which the searcher uses asuccession of queries, do we aim to minimize histotal search time, or only to give him. immediate

Page 61: DOCUMENT RESUME - ERICNational Standard Reference Data. Series. NSRDS provides quantitive data on the physical. and chemical properties of materials, compiled from the world's literature

response to each single query?" (Meadow, 1967,p. 191).

"Precedence is computed as a composite functionof:

1) the ability of the network to accept additional traffic;

2) the 'importance' of each user and the 'utility' of his traffic;

3) the data rate of each input transmission medium or the transducer used;

4) the tolerable delay time for delivery of the traffic." (Baran, 1964, p. v).

"Many separate low-data-rate devices time-sharedor concentrated into a single high-data-rate linkpermit better averaging, as compared to a fewcorrespondingly-higher-data-rate users. But, asmany of the high-data-rate lasers 'get in' and 'getout' fast, they have a short holding time. This helpsthe averaging process. To be precise in this compu-tation, a better understanding of the number ofusers, their use statistics, and the network charac-teristics appears mandatory.

. The mixed requirement that, while wewish to give priority treatment to the higher-precedence traffic of equal network loading, wemust also satisfy the goal that we preserve a mini-mum transmission capability for the lower-prece-dence traffic...Thus, instead of a blanket rule thatall traffic of a given precedence grade will be trans-mitted before handling the next lower precedencegrade, we choose to use the time ratios of theseprecedence categories to act as a preferenceweighting factor." (Baran, 1964, pp. 30, 33).

"An added complication may be introducedin the form of a hierarchy of precedence classifica-tions. This can be a very useful feature of the com-munications system, allowing important messages toavoid delay by by-passing a string of messages ofrelatively low urgency. But it adds an extra dimen-sion to the message queue, requiring separate list-ings for each precedence. This system can go beyondgoverning of the order of transmission, and canallow high-priority messages to interrupt othersduring their transmission. In such a system,message switching has an advantage over circuitswitching, in that an interrupted message can beautomatically retransmitted as soon as possible,with no further action by the sender. But the pos-sibility of interruption necessitates that the entirecontents of a message be retained in storageuntil its last transmi &sion is completed." (Shaf-ritz, 1964, p. N2.3-3).

"The difference between control strength andpriority is that control strength is used for defininginterrupt classes (an interrupt class is the setof all requests with the same control strength),while priority is used for ordering requests withinthe same interrupt class." (Dahm et al., 1967,p. 774).

3.15 "In real time data communications

55

oriented problems, four major system equipmentperformance factors must be evaluated:

1. Real time processing capability of central processor(s)

2. Core memory size provided in central processor(s)

3. Bulk storage size provided

4. Limitations on real time access to bulk storage." (Birmingham, 1964, p. 38.)

3.16 "It seems imperative that EDUCOM . . . establish certain technical standards and operation procedures which each state or regional group must meet before they can be interconnected. These standards should apply to digital transmission, telephonic communications, and television . . ." (Brown et al., 1967, p. 54).

"One final thought about integration. Integrationis facilitated by the standardization of equipment,processes, and languages. However, standardiza-tion in command-and-control systems must beconsidered in the light of the evolutionary natureof these syste.tns. First, standardization shouldbe based upon those elements which are mission-independent; that is, the elements standardizedshould be general-purpose in nature. Secondly,the standardization of system elements shouldbe modular; that is, it should be possible to addother elements to them in order to modify orincrease the capabilities of the system. If systemelements are standardized at too low a level ofaggregation, the system's speed of response isincreased, but its flexibility is reduced. If, on theother hand, elements are standardized at thehigher levels of aggregation, provided these arenot higher than the level of the designer's prob-lem, flexibility is increased, but there is an accom-panying reduction in the system's speed of re-sponse. It is this trade-off between flexibilityand speed of response that makes the standardiza- .

tion problem such a difficult one." (Jacobs, 1964,pp. 41-42).

"In general the standards of distributed-controlsystems are standards built around each class ofjob for each level of job for each unique function ofthe system. Procedures and languages need notbe standardized across job levels or across functions.Minimum standardization does not, however, implythe complete freedom of each functional unit toselect idiosyncratic communication codes orbizarre formats. Such matters as codes, formats,file structures, vocabularies, and message syntaxare all aspects of perforMance programs, and thelibrary of these program building blocks, from whichany information-processing job can be built, isbounded from above. Executive control over thelimits of the library establishes the boundaries ofthe range of alternatives available at any organiza-tional level. This is standardization of a sort, but itallows considerably more flexibility than the stand-ardization generated by a rigid set of specificationsto be applied across functions and up and down the

Page 62: DOCUMENT RESUME - ERICNational Standard Reference Data. Series. NSRDS provides quantitive data on the physical. and chemical properties of materials, compiled from the world's literature

hierarchy of information-processing jobs." (Bennett,1964, p. 107),

3.17 "A built-in system for user feedback wouldbe essential in determining near-future needs andcurrent inadequacies of the network." (Brown et al.,1967, p. 216).

"It was considered that it might be useful tohave all users of materials feed back their evalua-tions, which could be analyzed statistically forconsideration of the next user." (Brown et al., 1967,p. 63).

"Provisionallywe characterize a network by:A. Remote and rapid services regarding selec-

tion, acquisition, organization, storage, re-trieval, and processing of information andprocedures in current files . . .

B. Feedback to the1. Originator or organizer of the informa-

tion (hence there would be a Communityof users improving a common store ofmaterials and procedures).

2. Supervisor of the network services(hence the systeM would be adaptive tothe needs of the users)." (Brown et al.,1967, pp. 49-50).

"In designing a priority handling system, weshould never permit ourselves to believe that wehave more (or less) usable communications capa-bility than we really have. This implies networkstatus control feedback loops." (Baran, 1964, pp.17-18).

3.18 "To facilitate system scaling, reliability,and modularity, many multi-processor operatingsystems are designed to treat the processors ashomogeneous system resources. Hence, there is no`supervisor' processor, each schedules and controlsitself. To prevent critical races and inconsistentresults, only one processor at a time is permittedto alter or examine certain shared system databases; all other processors attempting simultaneousaccess are locked-out. This phenomenon is notstrictly limited to homogeneous processor systems,similar requirements apply to any multi-processorscheme utilizing shared data bases." (Madnick,1968, p. 19).

3.19 "Some general observations may be ofinterest. There are indications that the cost ofoperating an information system network, organizedalong subject lines, varies little with change ofprocess allocation within the system. Whether allacquisition and input processes are carried on in acenter clearing house or distributed in some logicalmanner among the service centers does not appearto make a significant difference in cost. On the otherhand, centralization in the regionally organizedsystem becomes imperative if excess operating costsare to be avoided. In a system organized to serveusers on a project basis, there is an indication ofsome economy of operation being achieved bycomplete decentralization." (Sayer, 1965, pp.141-142.)

"The least expensive method of organizing ascience information system network appears to beon a regional basis with the centralization of acqui-sition input processes being undertaken in a centralclearing house." (Sayer, 1965, p. 142.)

320 "In the network concept, then, the tech-nical information centers would be linked by thetraffic routing centers. Each would become de-pendent upon the other with both responding to thelaw of supply and demand, service and customersatisfaction, and continued viability based uponjustification of existence through performance."(Vlannes, 1965, p. 5).

"Other choices in the spectrum may include that of a network of information centers in which each community performs and contributes to the advancement of knowledge in accordance with its capabilities. Of course, a network must impose a series of constraints in order to operate, but it also allows for the flexibility that a rigidly structured system cannot accommodate. A network also fosters a sense of competition in which each community must ever strive to re-orient itself in order to survive and progress in its changing environment. In addition, each must become sensitive to the changes in the other communities in order that it may react, re-evaluate and adapt to the new set of goals that are inevitable." (Vlannes, 1965, p.

"In order to gain control over the accountability data, a telephone switchboard was added to the system . . . With the formalization of the terminal network, the concept of operation changed from a central computer with satellite terminals to the concept of a central terminal network with satellite computers." (O'Sullivan, 1967, p. 169).

3.21 "Many of the larger systems must also take into account the requirements for providing machine-readable output for use in a decentralized network of search centers. The designer must remember that other users will place constraints on the parent system. It must be remembered that a change to the central system has multiple effects on the various members of the decentralized network. Good system documentation will be essential in providing programs to the local search centers. A constant training requirement will also be imposed upon the central system, and technical liaison must be maintained with all users in the network. Effective file maintenance procedures must be developed well in advance of implementation of the decentralized system. Changes and updatings to the central file will occur frequently, and an adequate mechanism must be available for insuring that these same changes are made to all files in the field." (Austin, 1966, p. 245).

"Locating the point of minimum sufficient centralization for a system may call for a somewhat atypical philosophy of system design, a philosophy not commonly held by theoreticians on the subject but often implicit in the daily design practices of the engineers and logicians involved in the actual specification of system details. That is, a system should have standardized procedures for only the smallest job units that can be formally specified. These fixed subroutines can then be combined to form larger routines suitable for performing larger segments of the overall activity. At any point where two jobs are dissimilar, this dissimilarity can be reflected not only in different flow diagrams but also in different formats, codes, sequencing procedures, indexing methods, displays, and so forth. To this extent, the system is neither tailor-made nor ready-made. It is not uneconomically designed so that every job is unique, nor is it standard but ill-fitting because dissimilar jobs are forced into standardization. Rather, like a made-to-measure suit, the system is built around small standardized parts, each designed to fit a small part of the overall job. The larger portions of the system are not standardized as total units, but are unique configurations of standardized smaller parts." (Bennett, 1964, p. 108).

3.22 Baran (1964) considers, for example, "four separate techniques that can be used singly or in combination to achieve, through automation, 'best' use of a seriously degraded and overloaded communications plant, within the framework of a rapidly changing organizational structure." (p. v).

3.23 "Calculation of the average daily volume and the peak volume of information to be handled in the system consists of four steps:

1. Calculate the average daily volume of messages presently flowing in the system.

2. Calculate the average number of characters in each message.

3. Calculate the average daily total transmission time.

4. Calculate the peak volumes.

The communications designer must plan the system to handle the peak traffic loads with acceptable delay as well as the total traffic load." (Gentle, 1965, p. 58.)
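Gentle's four steps amount to simple arithmetic over a traffic sample. A hedged sketch with hypothetical figures (the daily counts, message length, and transmission speed below are invented for illustration):

```python
# Hypothetical traffic sample: messages counted per day at one location.
daily_counts = [410, 395, 450, 430, 415]

# Step 1: average daily message volume.
avg_daily_messages = sum(daily_counts) / len(daily_counts)

# Step 2: average characters per message (assumed from a message sample).
avg_chars_per_message = 2500

# Step 3: average daily total transmission time, at an assumed speed.
chars_per_second = 10
avg_daily_transmission_seconds = (
    avg_daily_messages * avg_chars_per_message / chars_per_second
)

# Step 4: peak volume, here taken as the largest day in the sample.
peak_daily_messages = max(daily_counts)

print(avg_daily_messages, avg_daily_transmission_seconds, peak_daily_messages)
```

The designer would then size the system for `peak_daily_messages`, not the average, per the closing sentence of the quotation.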

"Calculate Call Volume. The first step in calculating the volume of information that must be handled by the data communications system is to determine the number of messages (called 'traffic') handled in an average day. This is done for traffic to and from every point in the system. The volume is calculated by taking a sample of several days' traffic and actually counting the number of messages handled each day at each location. The number of days to be included in the study is based upon the estimated number of messages that are handled in a month. An estimate of the monthly volume should be made, and the following table may be used as a guide in determining the number of days to be studied.

Estimated monthly message volume     Number of days to be studied
Under 1,000                          20
1,000 to 2,000                       10
2,000 to 5,000                        5
5,000 to 10,000                       3
10,000 and over                       2

Ideally the working days to be studied should be chosen at random, but if for any reason a series of consecutive days must be selected, care should be taken to avoid days immediately preceding or following holidays. In addition, the count must be made at each location from which information is sent and at which information is received." (Gentle, 1965, pp. 58-59.)
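Gentle's guide table can be read as a threshold lookup. In the sketch below, the handling of exact boundary values (e.g. a volume of exactly 2,000) is an assumption, since the table leaves them ambiguous:

```python
def days_to_study(estimated_monthly_volume):
    # Thresholds follow the guide table above; boundary handling assumed.
    for limit, days in ((1_000, 20), (2_000, 10), (5_000, 5), (10_000, 3)):
        if estimated_monthly_volume < limit:
            return days
    return 2  # 10,000 and over

print(days_to_study(500), days_to_study(7_500))  # 20 3
```

The inverse relationship (higher volume, fewer sample days) reflects that a large traffic stream averages out day-to-day variation faster than a small one.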

"Calculate the Total Transmission Time. The third step, after calculating the average number of characters per message, is to determine the average daily total transmission time. At this point a transmission speed must be assumed. This transmission time can be calculated by dividing the average number of characters per message by the assumed speed of the system. If the average message has 2,500 characters, for example, and the assumed transmission speed is 10 characters per second, the average transmission time per message will be 250 seconds. To this figure, however, must be added some operating time for dialing the call, waiting for the connection to be established and, in some cases, coordinating the forthcoming transaction with the personnel at the receiving end. Operating time should be calculated from a study of a sample of calls, but if this is impracticable, the system designer may use 100 seconds as an overall average for the operating time on each dialed-up data communications call. . . .
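The worked example in this passage (2,500 characters at 10 characters per second, plus the suggested 100-second operating allowance) can be captured in one small function:

```python
def call_seconds(chars_per_message, chars_per_second, operating_time=100):
    # Transmission time plus Gentle's suggested 100-second operating
    # allowance (dialing, connection, coordination) per dialed-up call.
    return chars_per_message / chars_per_second + operating_time

print(call_seconds(2500, 10))  # 250 s transmission + 100 s operating = 350.0
```

Multiplying the result by the average daily message count gives the average daily total transmission time of the previous step.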

"The amount of delay to be expected during the busy hour depends upon the holding time of the circuit at the receiving location and the total number of minutes in the busy hour during which information will be received. Data communication planners refer to a series of charts which indicate the expected delay in transmissions when holding time, circuit use, and number of circuits in the group are known factors. The number of incoming circuits affects the probability that a calling party will receive a busy signal." (Gentle, 1965, pp. 63, 65).

"The intervals at which messages are transmitted. Are these intervals fixed or random? What are the peak rates, and at what times of day will they occur?" (Reagan, 1966, p. 23).

"To determine the proper size of a dial system required, it is necessary to study the company's busy hour, calculate the average message length, determine the total number of call seconds involved, and then consult the hundred-call-second (CCS) tables developed for telephone trunk loading. On the CCS tables is a listing for the number of trunk lines required for a given loading and grade of service desired. If you want only one lost (busy) call in every hundred, the tables will show how many trunk lines are required. If you can tolerate 10 lost calls per hundred, the tables show that you can get by with fewer lines. In this manner, you choose the grade of service you require to handle your particular data communications problem." (Birmingham, 1964, p. 37).
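The trunk-loading tables referred to in these passages tabulate blocking probability against offered load and trunk count, which is now usually computed directly with the Erlang-B formula. A sketch (the offered load of 1 erlang is illustrative; in practice one would convert from CCS at 36 CCS per erlang):

```python
def erlang_b(offered_erlangs, trunks):
    # Iterative form of the Erlang-B recursion:
    # B(0) = 1; B(n) = A*B(n-1) / (n + A*B(n-1)).
    b = 1.0
    for n in range(1, trunks + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

def trunks_needed(offered_erlangs, grade_of_service):
    # Smallest trunk group whose blocking probability meets the grade.
    n = 1
    while erlang_b(offered_erlangs, n) > grade_of_service:
        n += 1
    return n

# One lost call per hundred needs more trunks than ten lost per hundred.
print(trunks_needed(1.0, 0.01), trunks_needed(1.0, 0.10))  # 5 3
```

This reproduces Birmingham's point: relaxing the grade of service from 1 to 10 lost calls per hundred lets the same load get by with fewer lines.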

"Whenever it is necessary to have a large number of stations communicate among a large number of potential addresses, it is a practical necessity to use some form of switching. There is always a very wide variety of potential groupings and possible network configurations. The shape and complexity of the resulting network is very much dependent upon the economies one wishes to make in circuit groupings. The choice of these groupings in turn depends upon the statistics of the expected traffic. If the traffic statistics are known very accurately, large savings in cost of selection of routes and assignment of channels can be realized." (Baran, 1964, p. 15).

"Complex data communications systems that terminate many lines in a central facility usually use either a multi-line communications controller in conjunction with a general-purpose computer or a specialized, stored-program communications processor. These units are capable of buffering and controlling simultaneous input/output transmissions on many different lines. Again, a wide variety of equipment is now available to perform these functions. The available devices differ in the number and speed of lines they can terminate and in their potential for performing auxiliary or independent data processing. Examples include the three multi-line communications controllers available for use with the general-purpose IBM System/360 computers and the Collins Data Central system, a computer system designed especially for message switching applications." (Reagan, 1966, p. 26).

"Data rate alone, however, does not provide a complete measure of network loading; some devices have a short duty cycle, such as one computer sending the contents of its core to a remote computer. While such devices place a heavy peak demand for service, they are highly intermittent. On the other hand, a pulse-coded telephone call places a lower peak demand load, but ties up network capacity for a longer period and results in heavier average loading. Therefore, we should include an expected message-duration or holding-time factor in the network-load weighting table." (Baran, 1964, p. 30).
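Baran's point can be made concrete with a toy weighting table in which each entry carries a peak rate and an expected holding time; all figures below are invented for illustration:

```python
def weighted_load(entries):
    # Each entry: (peak_rate_bits_per_s, holding_time_s, messages_per_hour).
    # Weighting peak rate by expected holding time gives average loading
    # in bit-seconds per hour, rather than peak demand alone.
    return sum(rate * hold * count for rate, hold, count in entries)

core_dump = (1_000_000, 2, 1)      # heavy peak demand, highly intermittent
pcm_phone_call = (64_000, 180, 1)  # lower peak, long holding time

# The slower but long-held call dominates average loading.
print(weighted_load([core_dump]) < weighted_load([pcm_phone_call]))  # True
```

Despite a peak rate over fifteen times lower, the long-held call contributes several times the average load of the brief core transfer, which is exactly why the holding-time factor belongs in the weighting table.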

"It is necessary to have some idea of the types of messages that the system will be handling so that estimates of transmission rate requirements can be made. This calculation is intimately tied in with the distribution of terminals from the central computer system. It may be economically justifiable, even a necessity, to multiplex several of the terminals together onto one high-speed line." (Stephenson, 1968, p. 55).

"It is vital to have some knowledge of the average mix of message types and message lengths in the system." (Stephenson, 1968, p. 55).

"The results obtained from the numerical solution of a model give a precise and comprehensive description of the statistical effects of high traffic. The value of such precise and extensive data for computer systems may not be obvious, especially since 'worst case' examples and physical reasoning can establish much of the qualitative behavior of the system. The two prominent facts which warrant such analysis are the large scale of most multi-console systems, and the critical nature of machine response in their conception. Because multi-console systems are of such large scale, it can be worthwhile economically to thoroughly evaluate proposed designs, seeking to achieve maximum capacity. Such design evaluation requires a rather accurate knowledge of the traffic in the various parts of the system. Because much of the effectiveness of a multi-console system can be rapidly dissipated by poor response characteristics, an accurate statistical description of response is also needed. The existence of a capability for rapidly solving general queueing models makes this approach a much needed alternative to Monte Carlo simulations or experiments in traffic studies." (Fife and Rosenberg, 1964, pp. H1-6).

3.24 "The unequal and intermittent loading of net-type channels in present communications results in inefficient utilization of radio frequencies. If channels were made available to other users during periods of idleness, more communications could be handled within a given frequency band. A solution to the problem of frequency congestion can be found in giving each user access to a group of channels through a system which selects an idle channel for each call and releases the channel as soon as the call is terminated. Such a system may be called a random access system, since an idle channel is selected at random from the group each time a user wishes to place a call. After a channel is selected, a means is needed to direct the call to the intended group or individual without disturbing users to whom the call is not directed. This process is called discrete addressing and is applied in the form of tone signalling to many systems in use today. The combination of the terms Random Access Discrete Address describes a class of communication systems employing these principles and is frequently referred to by the acronym 'RADA'." (Home et al., 1967, pp. 115-116).

"Adaptive channel communication systems provide efficient bandwidth utilization by allowing time sharing of a small number of channels by a large group of users with low duty rates. Unlike fixed frequency netted systems, where a call to a non-busy subscriber cannot be made if his assigned frequency is in use by another of the net members, the adaptive system will allow completion of all calls provided the number of simultaneous calls is less than or equal to the number of system channels. The adaptive system is thus similar to the telephone system where each user has a private line, but the number of trunks connecting the lines is less than the number of lines." (Home et al., 1967, p. 120).

3.25 "In the distributed network routing scheme . . . if the preferred path is busy, the in-transit Message Block is not stored, but rather sent out over a less efficient, but non-busy link. This rapid passing around of messages without delay, even if a secondary route must be chosen, is called a 'hot-potato' routing doctrine. . . .

"With such a doctrine, only enough storage at each node need be provided to permit retransmitting messages if an acknowledgement of correct receipt is not received from the adjacent station within a prescribed time interval. Message storage capacity is modest. . . .

"A dynamically-updated routing table stored at each node indicates the best route for each Message Block to take to reach its directed end terminal station . . . When two messages seek the same preferred line, a random choice is used to select which message is sent out over the best path . . .

"Simulation has shown that this use of secondary paths in lieu of storage is a surprisingly effective doctrine, enabling transmission of about 30-50 percent of the peak theoretical capacity possible in a store-and-forward system having infinite storage and infinite delays." (Baran, 1964, pp. 9, 12).
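The hot-potato choice at a single node can be sketched in a few lines, assuming a routing table that lists outgoing links best-first (the table structure and link names are hypothetical, not Baran's actual format):

```python
def choose_link(routing_table, destination, busy_links):
    # Hot-potato doctrine: never store-and-wait; take the best non-busy
    # link toward the destination, even if it is a less efficient route.
    for link in routing_table[destination]:  # ordered best-first
        if link not in busy_links:
            return link
    # All links busy: the Message Block is retransmitted later if no
    # acknowledgement arrives within the prescribed interval.
    return None

routes = {"node_B": ["link_1", "link_2", "link_3"]}
print(choose_link(routes, "node_B", busy_links={"link_1"}))  # link_2
```

Because a message only waits when every outgoing link is busy, per-node storage stays modest, at the cost of sometimes taking longer secondary paths.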

3.26 "If empirical data were available in the information field we could use an input-output matrix. On the input side the various international and national information services should be included. Many of these services act also as users of the input from other services, and they process it for their own purpose. Thus the output side will also contain most of the same services. The content of the matrix should be information just as an economic matrix will contain employment, goods or services. The economist expresses his flows in dollar values, since money is the conventional substitute. A convention measuring the information flow of documents has to be developed." (Tell, 1966, p. 119).

3.27 "Based upon information taken in these [Lockheed] studies, it is quite clear that long term information exchange requirements will include wide band data and image transmission and that data entry points may ultimately involve more than 10,000 terminals throughout the nation." (Johnson, 1967, p. 7).

"The issues of communication in the sense of the electrical transmission of data have come to the forefront during 1966. Most computer-system implementers and users are encountering the problems of communication engineering for the first time. Many have found disquieting the fact that the element of system cost arising from the necessary data-communication support of large, multiaccess systems is surprisingly large, often of the same order as that of the central computer facility." (Mills, 1967, p. 244).

4. Input-Output, Terminal Design, and Character Sets

4.1 "The display-computer interface is a generalized requirement of all displays and provides computer buffering for the display system. In addition, some systems may require logic level and/or word length changes from computer to display. These operations are also performed by the interface." (Mahan, 1968, p. 9).

4.2 Plans for implementation of an experimental network for the Advanced Research Projects Agency (ARPA) have been reported as follows: "The ARPA contractors' network is . . . in the planning stage. As one of the nodes of this network, SDC would receive a small computer (of the PDP-8 class) as an interface message processor (IMP). All other nodes in the network would likewise have a similar IMP. The IMP would be two-faced: one view facing the local contractor's time-sharing system; the other facing other node IMPs. In this fashion, network protocol would be standardized at the IMP, while still maintaining the flexibility of permitting dissimilar local time-sharing systems to be included as nodes. An IMP will support up to two local time-sharing systems, thereby permitting local networking via the IMP." (System Development Corp., 1968, p. 1-13).

4.3 "A method must be devised to develop the storage address of a record from a key in the record itself. This is usually referred to as a randomizing formula. What is implied is an arithmetical operation on a key in the record to develop from this key an actual storage address for the record. Study is required to ascertain which technique or formula provides a good file utilization, with the least number of common addresses for different keys to keep overflow records down." (DePais, 1965, pp. 30-31).

"The logical address of a data item defines the relative position of the item within the structure of the data base. The logical address is coded so that a unique code may be created for each item in the data base. The logical code is a numerical representation of the nodes in the multi-list tree structure of the data base, and is called the Item Position Code." (Barnum, 1965, p. 50).

"The technique of hash addressing by randomizing the input word was used to generate an address for the dictionary look-up. This method results in the address of the first element of a chain of words in storage, each of which yielded the same random address. An examination of the chain would proceed in sequence until the word was found or until the last element of the chain was compared." (Baker and Triest, 1966, p. 3-13).

"Every word encountered in the scan of an input text, i.e., during the actual operation of ELIZA, is randomized by the same hashing algorithm as was originally applied to the incoming keywords, hence yields an integer which points to the only possible list structure which could potentially contain that word as a keyword." (Weizenbaum, 1966, p. 38).
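The randomize-then-chain scheme described in these passages can be sketched as follows; the folding formula and table size are illustrative, not the algorithms DePais, Baker and Triest, or Weizenbaum actually used:

```python
def randomize(key, table_size):
    # Hypothetical randomizing formula: fold the key's characters into
    # a storage address. Real systems tuned the formula to minimize the
    # number of keys sharing a common address (i.e., overflow chains).
    address = 0
    for ch in key:
        address = (address * 31 + ord(ch)) % table_size
    return address

def chain_lookup(table, key):
    # Walk the chain of words that randomized to the same address, in
    # sequence, until the word is found or the chain is exhausted.
    for word in table[randomize(key, len(table))]:
        if word == key:
            return word
    return None

table = [[] for _ in range(8)]
for keyword in ("ELIZA", "keyword", "scan"):
    table[randomize(keyword, 8)].append(keyword)
print(chain_lookup(table, "ELIZA"))
```

Applying the same randomizing formula on lookup as on insertion is what guarantees, as Weizenbaum notes, that a scanned word points to the only chain that could possibly contain it.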

4.4 "The concept of on-line information control implies the ability of such users of the system to change the performance of the system to meet their own changing needs or wishes. With adequate control, they can experiment with the display of alternative data formats or configurations, with alternative sequences of data retrieval, with alternative formulae for summarizing, processing, or analyzing data." (Bennett et al., 1965, p. 436). "Other suggestions indicate the need to reconcile data formats of considerable variety." (Brown et al., 1967, p. 54).

4.5 "Special provisions (perhaps in the software) may be required to prevent overlap of symbology, as might result from adjacent aircraft tracks." (Israel, 1967, p. 208).

4.6 "Time-shared computers need an accurate, versatile clock to schedule programs, subroutines or problems, and to synchronize the computers with a desired time base. Large computer systems usually have their own built-in clock, but smaller units used in a time-sharing mode must be modified by the addition of an external clock." (Electronics 38, No. 23, 194 (1965).)

4.7 The question of color on output is reflected first in the more conventional documentation or library situation. Thus it is to be noted that "the use of color in printing is of increasing importance", but that there are questions of whether copies in color, such as the color film versions of valuable manuscripts supplied by the Bodleian Library or the French Bibliotheque Nationale (Gunther, 1962, p. 8), can be preserved over extended periods of time. (Applebaum, 1965, p. 493).

4.8 "The term 'graphical communication' presupposes a graphical language in which pictorial information is transmitted between the designer and the computer." (Lang et al., 1965, p. 1).

4.9 "M. Stafford of Westboro, Mass. . . . discussed the ways in which graphic communications systems are currently used, with emphasis on their interface between computers and information storage centers. Some examples of their use, he noted, are to send signature samples and information on accounts between a main bank and its branches, to send weather maps and technical drawings across the country, and to send pages of newspaper copy between cities." (LC Info. Bull. 25, App., 288 (1966).)

"[A] CRT display console . . . [should meet at least] three on-line capability criteria. First, it is directly tieable to a data processing system. Second, it has ability to initiate messages or control signals from a data entry keyboard or switches for transmission to the computer. Finally, it has ability to receive digital messages or control signals." (Frank, 1965, p. 50).

"Desire to display data rapidly . . . places a premium on the efficiency of the graphical language used at the display level." (Dertouzos, 1967, p. 203).

"The CAFE system permits definition of, or selection from, a library of pictorial elements (static and dynamic), formation of complex pictures from simpler ones, and parameter control of their individual display characteristics as well as their synchronization into a sequence of composite displays. Once a pictorial element is defined by the user-editor, it is readily available by reference to a name supplied at the time of its definition." (Nolan and Yarbrough, 1968, p. C103).

"High-level methods for expressing scope output and console input operations have produced a great deal of display programming activity. The obvious advantages over assembly coding of clarity, brevity, and fewer mistakes are a strong incentive for users. The compiler-compiler system on TX-2 has made relatively easy the implementation and evolution of an extended high-level language based on ALGOL. The language, called LEAP (Language for Expressing Associative Procedures), has associative data structuring operations, reserved procedure forms for display or input manipulation, and real time variables such as the clock time and tablet stylus coordinates. Direct means for invoking the symbol recognizer's services are even incorporated. A 'Recognize' statement gets a symbol from the tablet just as a 'Read' statement gets a symbol from the console keyboard. Writing interactive programs which use the display is straightforward, and experimentation and modification can be rapid.

"Having LEAP available as a programming tool has facilitated the evolutionary development of application programs for graphical programming, data analysis, logic diagram input, and integrated circuit mask layout. The largest effort has been on circuit mask programs. A circuit designer controls the mask layout program with freehand figures sketched on the Sylvania tablet. The computer recognizes his rough marks as commands to create, move, group, and delete various integrated circuit components. Once a circuit design is complete, output tapes for each of the mask levels required can be punched for later use by a precision pattern-making machine. Individual variations among designers in drawing style are accommodated easily by the trainable recognizer." (Sutherland et al., 1969, p. 632).

"First of all, language must be concise and easily learned. It should permit the user to specify the various features of a drawing in the natural order in which they occur to him and in a continuous stream rather than in segmented form in separate statements. For publication purposes, it must give the user direct and ultimate control of every line drawn if he so desires. Yet, where applicable, the user should be able to cause a particular version of a whole superstructure to be generated by the system merely by specifying a few simple options. Toward this end, the language should include the facility to construct higher level statements from the basic language statements. It is envisioned that a set of such 'user defined' statements could be developed by an experienced programmer for a particular application. Once defined, such statements could then be used by non-programmers without knowledge of their genesis. Preferably, the language should meet the needs of users of widely varying computer experience. At one end of the scale it should appeal to a user essentially untrained in computer programming for the simple transcription of drawings from a rough draft. At the other end of the scale it should satisfy a user desiring to generate pictures controlled by algorithm at execution time. Drawing on a conditional basis is particularly attractive for applications such as circuit drawings and the production of musical scores. Finally, the implementation of this language should readily accommodate minor changes in syntax dictated by user experience. In addition, it should be designed to run easily on a variety of computers, and hopefully on a variety of terminal CRT systems, such as the Stromberg Carlson 4060 or the RCA Videocomp." (Frank, 1968, p. 179).

"For the past five years, hardware has existed that allows a computer user to enter drawn, printed or written information directly into a computer as easily as writing with pencil on paper. The missing element that would make such devices viable entities in computer systems has been the appropriate software support. At SDC, we are developing the necessary programs to allow a user scientist to hand-print, on-line, two-dimensional structures required in the statement and solution of his problem. These programs include an on-line, real-time character recognition program that is independent of both position and size of the input; editing programs that can deal with two-dimensional entities (as opposed to linear strings); and contextual parsing programs that re-structure the recognized, edited input for subsequent processing. This work is being done within the constraints of the SDC Q-32 Time-Sharing System . . .

"The editing facilities are simple and straightforward, requiring a minimum of effort on the part of the user while providing repositioning, erasure, replacement and insertion for such diverse notations as mathematics and organic chemical structures." (Bernstein and Williams, 1968, p. C84).

4.10 "The display information is stored in a ring-type list structure, which reflects not only the order in which parts of the display are to be produced, but also the associations which may exist between parts of the picture. A display routine within the executive threads its way through this ring structure transmitting the data it finds in the structure to the display generator." (Forgie, 1965, p. 606).

4.11 "A three dimensional windowing subsystem is available for the AGT [Adage Graphics Terminal] in which upper and lower bounds can be placed (in digital registers) on x, y, and z. The vector generator then blanks whenever the beam goes beyond one of the bounds, and it also tells the program which bound was exceeded. This device finds use in a number of applications including uncluttering pictures, testing the dimensions and intersections of solids, and splitting the CRT screen up into rectangles allocated to different pictures, which can then move beyond the 'edge' without encroaching upon its neighbor's display space." (Hagan et al., 1968, p. 753).
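The bound-checking behavior described for the AGT can be sketched in a few lines; the return convention below is an assumption for illustration, not the actual hardware register interface:

```python
def exceeded_bounds(point, lower, upper):
    # Returns the axes on which the point lies outside the window. The
    # vector generator blanks whenever this list is non-empty, and the
    # list itself stands in for the "which bound was exceeded" feedback.
    return [axis
            for axis, value, lo, hi in zip("xyz", point, lower, upper)
            if not lo <= value <= hi]

print(exceeded_bounds((5, 12, 3), (0, 0, 0), (10, 10, 10)))  # ['y']
```

A program can use the reported axis to decide, for instance, which neighboring rectangle a moving picture has drifted toward, supporting the split-screen application the quotation mentions.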

4.12 "Early graphics systems, such as the GM DAC system and Sketchpad, were little better than automated drafting boards. This statement is not intended in any way to belittle their efforts, but merely to underline the fact that there was very little that could be done with a picture once it had been generated. Certain of Sutherland's illustrations are quite startling in their apparent sophistication, but generally return to the use of constraints (which were satisfied using least squares fit, which is an energy constraint in engineering). In endeavoring to ascribe meaning to pictures, later investigators were forced to use data structures in a more sophisticated manner, and it became obvious that associations should be much more complex than the original ring structures, etc. The CORAL language, APL, and AL, the language described by Feldman, are all outgrowths of the need to ascribe extra associations and meanings to a picture. Many are now working on this problem but information in technical literature is relatively sparse. To illustrate the techniques being developed at the University of Michigan, and to show the power of the associative language, a detailed example will now be given." (Sibley et al., 1968, p. 553).

4.13 Some other examples of experimental capabilities in this area are as follows: "GRIN [(GRaphical INput) language] is particularly suitable for use in problems requiring the extensive real-time manipulation of graphical information at the console. It takes full advantage of the incremental display structure of the scope . . . Thus, if a display part is composed only of a sequence of incremental words, its position on the display scope can easily be changed by changing only the initial absolute entry point. Also if a part is represented only incrementally, it can be called up using the display part subroutine linkage at many places on the scope face. The part has to exist in storage only once, however." (Ninke, 1965, p. 845).

"The GRIN-2 (GRaphical INteraction) language is a high-level graphical programming language that permits the generation and manipulation of the graphical data structure, and provides statements for controlling real-time man-machine interaction. The interaction portion of the language is used with the GRAPHIC-2 graphical terminal. The rest of the language pertains to the common data structure used by all graphical devices and terminals." (Christenson and Pinson, 1967, p. 705).

"The PLOT operator is used to create a picture on the oscilloscope corresponding to the graphical language statements . . . It builds a list of 'console commands' (plot a point, or a line, reposition the beam, etc.) known as the display file, which are interpreted by the display console hardware, so producing the desired picture." (Lang et al., 1965, p. 39).

"VITAL (Variably Initialized Translator for Algorithmic Languages), a general purpose translator for the Lincoln Laboratory TX-2 computer, is currently being adapted for use as a graphical control language translator." (Roberts, 1966, p. 173).

"The MAD language offers a powerful facility to programmers to tailor their compiler to fit their application, viz., the MAD operator-definition facility. Using this part of the MAD compiler, the programmer (or programming staff) can extend the compiler by introducing new operators and new operand mode definitions into the language. In the formulation of any graphical language, the MAD operator-definition facility should play a large role. The syntax extensions we are describing in this paper are those which cannot be encompassed within the operator-definition facility. The list-processing facilities described in the last section resulted from the examination of the L6 syntax, and the identification of those elements which were beyond the scope of the operator-definition ability. The rest we leave for development at the MAD programming level.

"In a similar fashion, we have tried to identify those elements of graphical language which were beyond the scope of the MAD syntax for their possible incorporation into the compiler. The one example of a graphic language which encompasses most of the desirable elements is the LEAP language of Feldman and Rovner. This language is an ALGOL-type language which includes elements of set operations as well as Feldman's own method of representing graphical relations. LEAP is predicated on a highly elaborate but efficient method of data storage involving hash coding, but the details of the implementation do not concern us here. What does concern us, however, is the language syntax, insofar as it is incompatible with a MAD representation." (Laurance, 1968, p. 392).

"The subject of data structures has received a great deal of attention in the past few years, especially in relation to computer-aided design. Programming systems used for creating data structures (sometimes dignified by the name 'graphical languages') vary greatly in the rigidity of their representation and the types of facilities offered to the programmer. As an example of a high-level system, we can mention the formal language LEAP, in which the programmer can easily manipulate the logical elements of his model, and the structuring of the information (in the form of hash-coded tables) is performed automatically by the language system. At the other extreme we have a language like L6 which is a macro language useful in creating arbitrary list structures. The difference between these two 'graphical languages' is so great that one could easily conceive of implementing the LEAP language using the L6 language. An excellent review of this subject is given by Gray." (Laurance, 1968, p. 387).

"Only two other compiler-compiler systems which cater for graphics with Computer Aided Design in mind are known to the author. The first of these, AED, due to Douglas T. Ross is a very long-standing and general system of great interest. GULP only attempts a small part of this generality; as AED has been justly called 'a system of systems for building systems'. AED processes graphic language using a macro-processor in a different way from the character definition of GULP. AED is also able to deal with context-dependent languages, and with more general types of precedence besides including an ALGOL-like compiler and many special-purpose packages for design; e.g., POLYFACE." (Pankhurst, 1968, p. 416).

"It is of vital importance that the language facility for the Computer-Aided Design System include not only flexible descriptive and programming languages in word form, but a generalized capability for graphical communication as well. There are many aspects of design in almost any field, for which the natural means of expression is in terms of pictures or diagrams, and any attempt to convey equivalent information in verbal form would be extremely unnatural and awkward, and would defeat the basic principle that the designer-user be able to operate in a manner which is natural to him." (Ross and Feldman, 1964, p. 15).

4.14 "Even within equipment classes there is a wide variation in keyboard arrangements. Though the alphanumerics generally correspond, the availability and location of special characters is by no means standard. Functional controls are even more varied, and in the case of devices using complex editing features (the alphanumeric display device) the number, type, function, and placement of functional controls is completely dissimilar between manufacturers." (Auerbach Corp., SDA, 1967, p. 2-10).

"For the inquiry/display console, the major problem is that of determining the proper functions to implement for the user." (Hobbs, 1966, p. 44).

"Other operator input devices are available on various consoles. Alphanumeric keyboards and function keys are used. Some function keys use plastic overlays for additional coding. Track balls and joy sticks are preferred by some users. The Rand Tablet, which provides an easy method for graphic input, is available as an accessory in several systems." (Machover, 1967, p. 158).

"About a half of the available buttons are left unprogrammed, so that every user can tailor the system to his own particular needs by constructing just those operators which are useful to him at a given moment. A paper overlay is used to mark the labels under the buttons . . . The system provides operators which allow each user to keep his own private programmed pushbuttons and functions in a deck of cards. The same programmable buttons may then be utilized by different users for completely different purposes simply by reading in a small deck of cards at the start and punching out a new deck when someone else wants to use the system." (Clem, 1966, p. 136).

"Registration . . . is important in systems using multiple display projectors or sources. Display overlays are a prime example where static information is projected over the dynamic display. It is very important that the images are properly registered with respect to each other. In general, registration accuracy should meet or exceed the resolution of the display in order that misregistration not be detected." (Mahan, 1968, p. 5).

". . . Better determination of the proper functions to mechanize in the display to facilitate the human interface, including the appropriate human factors determination of the appropriate trade-offs between manual and automatic functions." (Hobbs, 1966, p. 1882).

"In addition to the standard typewriter keyboard, the CRT set will contain 40 special function keys as well as several cursor control keys. Through the use of plastic overlays, the meaning and purpose of the special function keys can be changed when the sets are used for different application modes." (Porter and Johnson, 1966, p. 81).

"Manual input to the [IBM 2250] display unit is effected by one of three devices: an alphanumeric keyboard, a program function keyboard and a light pen . . . The program function keyboard contains 32 keys designed to allow the operator to indicate program interpretative functions to the computer by means of a single key depression . . . Any significance can be ascribed to . . . [the keys] depending upon the requirements of the operator and the program." ("IBM System/360", 1965, pp. 298-299).

4.15 "A system is described in this paper for developing graphical problem-oriented languages. This topic is of great importance in computer-aided design, but has hitherto received only sketchy documentation, with few attempts at a comparative study. Meanwhile displays are beginning to be used for design, and the results of such a study are badly needed. What has held back experimentation with computer graphics has been the difficulty of specifying new graphic techniques using the available programming languages." (Newman, 1968, p. 47).

"The major problem is the development of sufficiently versatile programming languages directed at graphic, interactive devices." (Flynn, 1966, p. 99).

"Unlike conventional computer languages, graphical languages have received little study, and their formal properties have not been examined in depth. The lack of precise ways to formulate and represent graphical language fundamentals impedes the use of graphical techniques in many problem areas." (Sutherland, 1967, p. 29).

4.16 "The types of phosphors used in current alphanumeric display terminals are of relatively low persistence. To present a display that is suitable for viewing and free from annoying flickering, the display must be continually regenerated. Buffer storage is provided within the central controller or within each display unit to store data entered locally or received from the remote computer. Logic circuitry within the controller or display unit utilizes the buffer storage to regenerate the display, usually 30 to 40 times per second." (Reagan, 1967, p. 33).

"Continual regeneration of a short persistence CRT for flicker-free viewing demands high data transfer rates, or considerable buffer storage; but in return for this outlay comes ready communication by single photosensor pens and a medium for highly dynamic displays. However, the attendant costs are related to regeneration rates established by persistence of vision criteria and are disproportionate to the dynamic content of the majority of displays where static text and diagrams prevail for seconds." (Rose, 1965, p. 637).

"While the maximum required number of flicker-free lines varies greatly among applications, a vector construction rate of 10 microseconds per inch for the larger component, plus a memory access and decode time of not more than three microseconds per vector, should satisfy the requirements for most applications. A refresh rate of 30 frames per second would display about 2,600 connected one-inch lines." (Chase, 1968, p. 26).
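The arithmetic behind Chase's figure can be checked directly; the constants below are taken from the quotation itself, and the calculation simply divides the per-frame time budget by the per-line drawing cost.

```python
# Refresh-budget check, using the figures quoted above:
# 10 us per inch of vector, 3 us memory access/decode per vector,
# and a refresh rate of 30 frames per second.

US_PER_INCH = 10      # vector construction rate, microseconds per inch
US_OVERHEAD = 3       # memory access + decode time per vector, microseconds
FRAME_RATE = 30       # refresh rate, frames per second

frame_budget_us = 1_000_000 / FRAME_RATE           # ~33,333 us per frame
us_per_one_inch_line = US_PER_INCH + US_OVERHEAD   # 13 us per one-inch line
lines_per_frame = int(frame_budget_us / us_per_one_inch_line)
# lines_per_frame comes out near 2,560 -- "about 2,600" as quoted
```

The same division also shows why the 1/50-second repetition figure quoted later tightens the budget: a 50 Hz refresh leaves only 20,000 µs per frame, roughly 1,500 one-inch lines under the same per-vector costs.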

4.17 "By modulating the electron beam with several differentially selectable frequencies, one could 'color' or 'classify' various displayed points on the CRT screen." (Haring, 1965, p. 854).

"The light pen allows good feedback in pointing since programming can brighten what unit is being seen, or display only the total unit being seen. The importance of such feedback cannot be appreciated until one works with a graphical system." (Ninke, 1965, p. 844).

"In addition to selecting point locations by means of the tracking cross, the light pen can be used in another interesting mode of operation known as the pick function. After a figure has been drawn by the operator, or is generated on the basis of computed data, the operator may desire to select a particular curve, component, line of text, or other distinct element which is a part of the picture. He may do this merely by pointing the light pen at the element and pressing an 'accept' button. The light pen photomultiplier produces a pulse at the instant the selected element is painted, which signals the computer. Since the address of the element being painted is known by the computer, its identification can be used as directed by the program. For example, the element may be erased, moved, duplicated, or rotated. By proper programming, symbols or labels on the screen can be made light-pen sensitive and can be selected by pick function. This operation may be used to select a single option from a list or 'menu' of alternatives presented on the display." (Prince, 1966, p. 1699).
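The pick mechanism Prince describes can be sketched as a loop over the display list: the computer always knows which element it is painting, so a photomultiplier pulse arriving during that element's paint time identifies it. The function name, event model, and menu contents below are invented for illustration; real hardware delivers the pulse asynchronously during the refresh sweep.

```python
# Hedged sketch of the light-pen pick function: a pen pulse during the
# painting of element i selects element i. Names are hypothetical.

def run_refresh_cycle(display_list, pen_pulse_at=None):
    """Paint each element in turn; if the photomultiplier pulses while
    element `pen_pulse_at` is being painted, return that element (the
    'pick'); otherwise return None."""
    for index, element in enumerate(display_list):
        # ... beam paints `element` on the screen here ...
        if pen_pulse_at == index:   # pulse arrives during this element
            return element
    return None

# A light-pen-sensitive 'menu' of alternatives, as in the quotation:
menu = ["erase", "move", "duplicate", "rotate"]
picked = run_refresh_cycle(menu, pen_pulse_at=2)
# picked is "duplicate"; the program may now act on the selection
```

The key design point is that no coordinate comparison is needed: selection falls out of the timing of the refresh itself.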

"After investigating several possibilities, we concentrated on the use of intensification for highlighting parts of a display. For example, the four boundary lines of a surface may be brightened while the interior lines are dimmed. This technique of varying intensification has proven to be an important piece of feedback to the console user. We call the preceding example 'static' intensification because an item remains intensified throughout the current display. Static intensification can be used meaningfully in other ways, besides highlighting. For example, user attention can be focused on suggested choices of control words by brightening them, while dimming the rest of the display. Shading to give a three-dimensional effect is also useful in optimizing the user's understanding of a display. This technique, however, requires enough intensity levels so that the transition between them is continuous and imperceptible . . .

"This 'selective disabling' aids the operator in recognizing his displays as a composition of various sets of items. In addition, he will be less confused when trying to make a selection. Only a small subset of items can be selected, even though other information is still displayed. Thus, the selectable items are kept in context with the rest of the display. The combination of selective disabling and dynamic intensification provides the means of conveying the syntax of a graphical language to the console user." (Joyce and Cianciolo, 1967, p. 715).

4.18 "Overlays can be quickly accomplished on standard maps by reading the maps with the computer-controlled CRT reader and combining them in memory with elements to be displayed. The results can be put out on film." (Fulton, 1963, p. 40).

4.19 Motivational factors obviously include the assurance (or lack of assurance) to the client that he is in fact effectively online to the processing system; that he has, in effect, direct access to his own prior program and data stores and to such other programs or data as can be shared; and that his own data banks and programs are adequately protected against unauthorized access, piracy, or inadvertent destruction.

"Evaluating the 'cost effectiveness' of graphical I/O of various types is a particularly difficult task. The literature is noticeably lacking in any reference to the subject. Given a particular function to implement, such as to reduce a graph to hard copy, to monitor a given equipment or program parameter, or any other straightforward operation, a cost/performance comparison of alternative implementations can be made. However, to assign a widely agreed upon numerical scale of values to human productivity or 'intellectual enhancement' is difficult, if not impossible." (Wigington, 1966, p. 88).

"The art of designing man-machine systems is still in its infancy. List selection terminals, by placing the output burden on the data system, are able to increase the input rate that an untrained user can achieve. By so doing, terminal operation is made feasible for a much broader class of users. So far this approach has proved useful in applications in which the vocabulary is limited to several hundred words. We are just beginning to develop the automatic formatting procedures that will expedite the design of the next generation of matrices. The potentials of information systems that adapt to the user's response patterns are yet to be realized. To the retriever, this approach offers the ability to control the quality of the data at the time that they are entered without, we hope, placing an undue burden on the enterers . . . Although we have been heartened by our limited successes in facilitating man-machine communication, we have at the same time been humbled and challenged by our ignorance of how a dialogue should be structured, how we should mold the machine to fit the man. It is perhaps in this area that the next advances will be made." (Uber et al., 1968, p. 225).

4.20 "In order for computer-aided design to achieve its full potential in the coming years, significant hardware advances are called for in several areas. More natural means for communicating with the computer are desirable. Although many clever techniques have been developed for using the light pen, it is still not completely satisfactory. Also, the need is sometimes expressed for large, high-resolution 'drawing board' displays." (Prince, 1966, p. 1706).

"A variety of light pen configurations are available ranging from a simple penholder type to a gun type. Some pens are relatively heavy while others are lightweight. Some use a very flexible cable and others use a rather stiff cable or coil cord. Aiming circles are provided with some light pens so that the operator knows where the sensitive area of the light pen is pointed. Activating switches for the light pen include mechanical shutters on the pen, electrical switches on the pen, knee switches and foot pedal switches." (Machover, 1967, p. 158).

4.21 "Another difficulty of the lite pen . . . is that its broad end which contacts the scope face obliterates that portion of the screen where the lite pen is acting. One means of overcoming this difficulty might be to display the points drawn by the pen to one side of the area actually sensed." (Loomis, 1960, p. 9).


4.22 "Display maintenance has been provided by the large computer or by a data channel off the large computer. The high transfer rates needed for such an organization have dictated that the consoles be very close, if not physically adjacent, to the supporting computer or channel." (Ninke, 1965, p. 839).

". . . Display maintenance is independent of control computer intervention. Thus, once started by the control computer, the display continuously refreshes itself with a direct jump word at the end of the display picture providing the link back to the start." (Ninke, 1965, p. 842).

"To avoid flicker and to update picture information, each point of the display must be repeated at least every 1/50 second. Although high-speed repetition is within the capability of light-deflection methods, rapid repetitive presentation is unnecessary if the display screen has good persistence." (Soref and McMahon, 1965, p. 60).

"Memory for a vector display file requires about 4K words of storage per user." (Roberts, 1965, p. 217).

4.23 "Operable flat oscilloscopes have been constructed and have proved to afford excellent resolution." (Licklider, 1965, p. 96).

4.24 For example, "the most difficult aspects of the keyboard to experiment with are the key pressure required and the reaction and travel of the keys at the moment the signal is transmitted." (Boyd, 1965, p. 157).

"The keyboard design would be based on principles of motion study. Each key would be shaped and positioned for maximum accessibility by the shortest route, with compensations made for distances traveled, varying strength in different fingers, and other factors." ("The (R)evolution in Book Composition . . . IV", 1964, p. 69).

"The keyboard must be kept fairly small, at least within the span of an operator's hands. In effect, this does mean a single alphabet keyboard whose condition is controlled by function buttons." (Boyd, 1965, p. 157).

". . . Various criteria such as weight of key depression, key travel, key spacing, layout, and whether or not you want hard copy." (Boyd, 1965, p. 153).

"One possible result of this type of analysis could be a bowl-shaped keyboard, with keys on the sides and rear banked up from the horizontal . . . This would enable the fingers to reach outlying keys by moving in a straight line, rather than in an arc or in two moves, over and down. Another feature would be larger keys on the outskirts, to reduce the need for accuracy in moving fingers long distance." ("The (R)evolution in Book Composition . . . IV", 1964, p. 69).

"Kroemer (1965) . . . arranged the keys in two separate but symmetrical groups, one for each hand. Each group consists of three curved rows of five keys. The form of the curve corresponds to that of the fingers and the possible movements of the individual fingers. The two space bars for the thumbs are curved as well, which enables the thumbs to strike them from any hand position. The keyboard position is no longer horizontal since tests disclosed that an angle of 30°-45° to the horizontal gives the most comfortable position to the hands and arms." (Van Geffen, 1967, p. 6).

Hobbs points out further that "new types of keyboards are being developed that do not involve mechanically moving parts and that may permit more design freedom from the human factors standpoint. These include pneumatic, optic, and piezoelectric techniques." (Hobbs, 1966, p. 38).

4.25 Mills, in a review of 1966 developments, concludes that "work directed toward solving this interface problem has appeared to be poorly focused. Only a few attempts to specify an improved general-purpose terminal were reported, and even fewer reports of actual hardware development and prototype testing were discovered." (Mills, 1967, p. 245).

Further, "it will require major research and engineering efforts to implement the several functions with the required degrees of convenience, legibility, reliability, and economy. Industry has not devoted as much effort to development of devices and techniques for on-line man-computer interaction as it has to development of other classes of computer hardware and software." (Licklider, 1965, p. 66).

4.26 "Specific to the time-sharing field there is a need for the development of adequate consoles, especially graphical input-output consoles. There are conflicting requirements for low cost and for many built-in features that minimize the load on the central computer. An adequate graphical console may require built-in hardware equivalent to that required in a fairly sophisticated computer. This is an area in which analogue as well as digital techniques may be important. It is an area in which the new component technologies may make significant contributions." (Rosen, 1968, p. 1447).

"Design efforts are directed towards realizing a console that uses a cathode-ray tube (CRT) display with approximately 1,800 alphanumeric-character capacity. The data-communication between central computer and console will initially be 200 characters per second with provisions for higher data rates. Several character sets should be possible in addition to the English alphabet. User communications are entered by means of a typewriter keyboard, and special function buttons which designate frequently encountered commands. The user's message is displayed on the CRT prior to the transmission to the time-shared computer, and editing of displayed commands is possible. As the user's conversation with the catalog system progresses, certain data supplied by the computer may be stored locally for future reference, edited as required, and eventually printed in hard-copy form." (Haring, 1968, p. 37).

4.27 "On the hardware side the picture seems to be much brighter, at least in terms of providing color-display systems of greater color fidelity and reliability." (Rosa, 1965, p. 413).

4.28 "Color is deemed a necessary feature because it can reduce errors and increase the assimilation of displayed information." (Mahan, 1968, p. 34).

4.29 "To produce a stereographic display the computer calculates the projected video images of an object, viewed from two separate points. The resulting video maps are stored on separate refresh bands of the rotating memory. The two output signals are connected to separate color guns of a color television monitor, thus creating a superimposed image on the screen. Optical separation is achieved by viewing the image through color filters.

"The display is interactive and can be viewed by a large group of people at the same time." (Ophir et al., 1969, abstract, p. 309).
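The two-viewpoint calculation Ophir et al. describe can be illustrated with a simple pinhole projection: project the same 3-D point from two slightly separated eye positions, and drive one color gun with each image. The geometry below is an assumed stand-in for illustration, not the authors' actual method, and all names are invented.

```python
# Sketch of a stereographic pair: two perspective projections of one
# 3-D point, one per eye/color gun. Pinhole geometry is an assumption.

def project(point, eye_x, screen_z=1.0):
    """Perspective-project a 3-D point onto the screen plane z = screen_z
    as seen from an eye at (eye_x, 0, 0), looking down +z."""
    x, y, z = point
    scale = screen_z / z                # z assumed > 0 (in front of the eye)
    return ((x - eye_x) * scale + eye_x, y * scale)

point = (0.0, 0.5, 2.0)                 # a point half a unit up, 2 units deep
left_image = project(point, eye_x=-0.03)   # drives one color gun
right_image = project(point, eye_x=+0.03)  # drives the other color gun
# The horizontal disparity between the two images carries the depth cue;
# color filters in front of each eye separate the superimposed images.
```

Points at the screen plane (z equal to `screen_z`) project identically for both eyes, so disparity grows with distance from that plane — which is what makes the superimposed two-color image read as depth.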

4.30 "Unsolved problems . . . that derive from the graphics themselves include flicker, ease of use, coupling of programs to do substantive computations, the language of discourse, and the desirability of halftone capability." (Swanson, 1967, pp. 38-39).

"Although the display field has progressed rapidly during the past five years, significant future progress is still required to provide the types of displays necessary to achieve the essential close man-machine interaction at a price that will permit their widespread use. Several major needs and problems facing the display field have been discussed here, including the need for:

(1) better methods of implementing large-screen displays which can provide dynamic real-time operation with both alphanumeric and graphical data

(2) flat panel visual transducers for both large-screen and console displays that:

(a) can be addressed digitally
(b) provide storage inherent to the display panel
(c) are compatible with batch-fabricated electronics and magnetics

(3) better determination of the proper functions to mechanize in the display to facilitate the human interface, including the appropriate human factors determination of the appropriate trade-offs between manual and automatic functions

(4) more effective software both to facilitate the programming of display functions and to provide for the efficient computer generation and control of operations such as the rotation and translation of drawings and the interrogation of large data bases

(5) lower cost for all categories of displays, but particularly for low performance remote display consoles.

Developments or improvements needed in specific display technologies (e.g., the need for higher power or ultra-violet lasers) have also been cited." (Hobbs, 1966, p. 1802).

4.31 "There are . . . areas which I believe deserve priority attention. The first is the development of a small, cheap, convenient desk-top terminal set such as the duffer unit." (Mooers, 1959, p. 38).

"Whereas fast time sharing divides the cost of the computer itself among many users, there is no comparable way to distribute the cost of users' input and output equipment. Therefore the problems of console design and development are critical. Teletype equipment is inexpensive and reliable, but the character ensemble is small, and this seems likely to limit the applicability of teletype seriously. Electric typewriters are more expensive and less reliable, but they provide a reasonably large set of characters (enough, for example, to handle ALGOL with only a few two-stroke characters), and they may be adequate for the input and output requirements of the majority of users. If they are not, then almost surely the next requirement is a drawing board or 'doodle pad' onto which both the operator and the computer can write and make graphs and diagrams. These functions are now instrumented with oscilloscopes; what is needed is a less expensive means." (Licklider, 1964, pp. 124-125).

"Displays are good, but, in the generalized form needed, they tend to be expensive. This means a reduction in physical accessibility, since one does not put a $50,000 or $100,000 box in every office. Displays which are both low-cost and adequate do not exist." (Mills, 1966, p. 197).

4.32 "William N. Locke (MIT) reported briefly on INTREX . . . an MIT Lab is trying to develop a better console because the small consoles are not now very satisfactory." (LC Info. Bull. 25, 90 (2/10/66).)

"Effective testing of user interaction with the augmented catalog requires a remote computer console optimally suited to the task. Currently available consoles, however, exhibit serious shortcomings as regards catalog experimentation. Impact-printing teletypewriters operating at ten to fifteen characters per second, for example, are clearly too slow for rapid scanning of large quantities of bibliographic data. The cathode-ray-tube (CRT) alphanumeric display terminals now offered by several manufacturers do allow for more rapid, quiet display of computer-stored data. However, they, too, lack features essential to effective user interaction with the augmented catalog. For instance, there is generally a lack of flexibility in operating modes, in formats (e.g., no superscripts and subscripts) and a severe limitation on the size of the character set. On the other hand, the CRT graphic display terminals that are currently available can be programmed to circumvent these deficiencies but are very expensive as regards original cost, communications requirements, and utilization of computer time." (Haring, 1968, pp. 35-36).


4.33 "How much is the end user willing to pay for his terminal? What does he expect for it? What could he do without? The terminal's requirements are obviously going to vary with the application program." (Stotz, 1968, p. 16).

"The need for graphical displays at a large number of terminals remote to the utility places a severe constraint on the allowable cost per display unit." (Dertouzos, 1967, p. 203).

Typically, the R & D requirement is noted for ". . . lower cost for all categories of displays, but particularly for low performance remote display consoles." (Hobbs, 1966, p. 1882).

4.34 "In the multiple-access system environment, point-by-point generation of the display by the main computer results in the imposition of intolerable loads on the processor. The task of simply regenerating a fixed picture, involving no new information, can occupy a major part of processor capacity." (Mills, 1965, p. 239).

"Also, if such high speed consoles are to be used remotely, local display maintenance is essential. Thus, a picture need be transmitted only once, maybe over low-cost low-speed lines." (Ninke, 1965, p. 840).

"There is an increasing trend toward using small satellite computers for local servicing of the immediate demands of the display and the human operators while communicating with a large time-shared computer for complex computations." (Wigington, 1966, p. 88).

"Many modern computers have a memory configuration which can be used to refresh the display without interrupting other computations. Where this is not possible, a buffer memory is available within a display. Commercial terminals use core memories, delay lines, and drums. With the availability of low cost, high-speed, general purpose, digital computers, it becomes feasible to consider including a digital computer in the CRT graphic terminal. BR, DEC, and IDI offer terminals in which the digital computer is an integral part of the display and provides functions of storage, plus some of the hardware mode control features." (Machover, 1967, p. 153).

"There is growing awareness that display buffers should, in fact, be small general purpose computers, which opens up a whole new spectrum of possibilities in properly assigning tasks within the overall system." (Ward, 1967, p. 49).

"The concepts of list processing have proven to be desirable in the manipulation of display data. Second, the hardware and software design of MAGIC . . . provides sufficient local processing abilities to significantly remove the burden of display data processing from the C.P.U. to which it is to be interfaced." (Rippy et al., 1965, p. 829).

"The display material memory is an Ampex RVQ core unit with 4096 36-bit words and a 5 microsecond cycle time. This storage capacity represents about eight large pictures, i.e., those which take about 1/30-th of a second to display." (Ninke, 1965, p. 841).

"The remote operation of displays is facilitated if local storage is provided at the display for refreshing the picture. Thus only changes need be transmitted from the central computer, and the data transmission requirements are greatly reduced. Also, some computing capability can be placed at the display so that change of scale, translation, rotation, and other modest computations can be done locally. In this regard, M.I.T. and others are experimenting with satellite computers which can continuously refresh several displays and perform simple calculations for them, but which call upon a larger computer for extended computation." (Prince, 1966, p. 1706).
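The division of labor in Prince's passage — modest scale, translation, and rotation computations done at the satellite so the central computer need not retransmit the picture — can be sketched as follows. The function name and picture representation are invented for illustration; the period's satellite computers applied such transformations to display lists held in local buffer memory.

```python
# Sketch of local display transformation at a satellite computer:
# scale, rotate, then translate a locally stored picture, with no
# traffic to the central computer. All names are hypothetical.
import math

def transform_locally(points, scale=1.0, dx=0.0, dy=0.0, angle=0.0):
    """Scale, rotate (radians, about the origin), then translate every
    point of the locally held picture."""
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for x, y in points:
        x, y = x * scale, y * scale            # scale
        x, y = x * c - y * s, x * s + y * c    # rotate
        out.append((x + dx, y + dy))           # translate
    return out

square = [(0, 0), (1, 0), (1, 1), (0, 1)]      # locally stored picture
moved = transform_locally(square, scale=2.0, dx=5.0, dy=5.0)
```

Only a short command ("scale by 2, translate by +5, +5") would need to cross the line from the central computer; the satellite regenerates the transformed picture from its own buffer on every refresh.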

4.35 "The Control Edit Console consists oftwo dark trace storage tubes and associatedequipment for control and operation. The darktrace storage tube, and its associated equipment,is a device which will store and display a singleframe of video scan for a relatively long period oftime (up to several days). It has excellent resolutionof up to 500 or more line pairs per inch . .

Through manual control it is possible to erase andedit portions of the display. By using two DTStubes, it will be possible to duplicate the graphicportions of the original document on the secondtube. The display on the first tube will, of course,retain the original document for reference pur-poses. The selective transfer is accomplished byXY coordinate sliding markers, or by tracing areaboundaries with electrostatic or infrared pencildirectly on the screen . . ." (Buck et al., 1961,p. VI-21).

4.36 "Bunker-Ramo communications deviceslend themselves particularly well to engineeringapplications. The results of the computationsrequested by the engineer may be displayed ina meaningful format he is accustomed to seeingsuch as a Nyquist plot, or families of plots, for anynumber of variables." (Dig. Comp. Newsletter 16,No. 4, 37 (1964).)

4.37 "No matter what logic is applied to themanagement of the data base inside the machine,the surface appearance of the data base must beunder the user's on-line control. He should be ableto set up the file display, the object and propertynames, the formats of these files, including sequenceand arrangement of data as he desires. He shouldbe able to generate, store and revise the structureof his displays, on-line . ." (Bennett et al.,1965, p. 436).

For these reasons, it is suggested that "display equipment manufacturers [should] improve and provide software packages that allow:

a. One, two, or three variable-width column page formats with justification routines and masking provisions for inserting graphics.

b. Variable character spacing to improve text density,


c. Variable line spacing to allow text with or without superscripting and subscripting, and

d. Utilization of the software packages on additional general purpose digital computer systems." (Burger, 1964, p. 12).

4.38 For example: "To qualify as standard, a terminal device must operate on-line, use a voice grade channel, provide a visual display, at a rate taxing the reading and scanning speed of the proficient user." (DeParis, 1965, p. 49).

"A reasonable goal is that the console should sell for under $15,000, that it should have a cathode ray tube or something equivalent, that it should allow for some buffering of information, and that it should have a flexible and adaptable keyboard structure and status information display." (Bauer, 1965, p. 23).

"Certainly the design of the user's terminal is important; but far too much emphasis seems to have been placed on high style and novelties to the detriment of dependability and economy." (Adams, 1965, p. 486).

"This terminal equipment should be small. Itshould be easy to use, since this is to be the mainchannel of intercommunication between thehuman worker and the machine system. . .

"The terminal equipment should provide atypewriter-style keyboard for input, and it shouldhave an alpha-numeric printing device for printingits output. Its output should be printed on paper,either on a paper sheet or on a paper 'ticker tapewith a gummed back. Thus the output can beassembled and edited by scissors and pasting."(Mooers, 1959, p. 10).

"If any of these qualities [good resolution, noflicker, upper and lower case] are badly deficient,the reading becomes so difficult that the type ofperson required to understand and edit complexinformation will refuse to use the display." (Buck-land, 1963, p. 179).

"To allow a user, however, to do real-time com-posing, editing, or other manipulation of graphicalinformation with a light pen or other graphicaldevice, the sum of access time and transmissiondelay must be only a few milliseconds." (Ninke,1965, p. 840).

"It would seem that an automated system, tobe completely satisfactory, has to respond withina few seconds and should present output resultsat roughly a normal reading rate." (Drew et al.,1966, p. 6).

"The display should have controllable per-sistence . . . and should be free of flicker."(Licklider, 1965, p. 94).

"Upper and lower case alphabetics should beprovided; two symbols should be able to be plottedat the same position on the screen (for underline,circumflex, etc.); and superscripts and subscriptsshould be possible. Both the computer and theuser should be able to set Tabs and their settingsshould show at a glance." (Stotz, 1968, p. 17).

68

4.39 "We should like to have: a color dis-play . . . if possible, or, if not, a black-on-whitedisplay . . . with at least eight gradations ofbrightness . . . and a resolution exceeding400 . . , or 200 . . . or, at any rate, 100. . .

lines per inch." (Licklider, 1965, p. 94).While eight levels of geay scale may be adequate

for many user-oriented remote terminal input-output units (and more is sometimes available),far more is required for sophisticated pictorialdata processing.

Ledley's FIDAC (Film Input to Digital AutomaticComputer) currently provides only eight graylevels for a 700 X 500 input spot raster. (Ledleyet al., 1966, p. 79).

"Digital Electronics Inc. L SCAN rapidly con-verts visual data, such as drawings and objects,into digital format for computer analysis andrecord storage. It uses a vidicon tube pick-up andrecords on IBM compatible magnetic tape. Theunit can discriminate 64 shades of gray and hasa field of view divided into 40,000 segments."(Data Proc. Mag. 7, 50 (Feb. 1965).)

"Interest is developing in extensions of the dis-play technique to include color and grey scale.It seemed likely from the earliest experiments thatthe use of phosphors inside the cells could be usedin multi-color displays. The later observation ofultra-violet radiation in the cells led to the ideathat this radiation could excite phosphors depositedon the outside of the outer glass panels, and thatthe effect should be enhanced if the panels weremade of quartz or some other material with goodU V transmission characteristics." (Arora et al.,1967, p. 11).

"The practical application of such displays aschromatron photochromic dyes, coupled with suchreproduction means as thermoplastic recording orsmoke printing will usher in an era of true electronicprinting in all the flexibility we have come to expectin printed communication." (Herbert Ohlman,panel discussion of Markus, 1962, p. 25).

Further, "the National Aeronautics and SpaceAdministration has awarded the Philco Corp.,two 'contracts to develop an experimental colortelevision display system for possible use in themission control center of the Manned SpacecraftCenter in Houston." (Electronics 38, No. 18,40 (1965).)

4.40 "Most experienced users want 'terse'modes of expression to the computer and full,unabbreviated (and fast) responses from thecomputer." (Licklider, 1965, p. 182).

"It should be possible, in a 'debreviation' mode,to type 'ch.' on the keyboard and have 'The Councilon Library Resources, Inc,' appear on the display."(Licklider, 1965, p. 100).

"Several types of keyboarding shortcuts havebeen developed; for example, computer programsautomatically provide italicized and capitalizedcharacters in chemical names, and the computer

Page 75: DOCUMENT RESUME - ERICNational Standard Reference Data. Series. NSRDS provides quantitive data on the physical. and chemical properties of materials, compiled from the world's literature

also expands useful abbreviations." (Davenport,1968, p. 37).

4.41 "Each element of the display should beselectively erasable by the computer program,and also directly or indirectly by the operator."(Licklider, 1965, p. 94).

4.42 "DOCUS, developed by Informatics forthe multicomputer complex at Rome Air Develop-ment Center, provides a display-oriented man-machine system with some built-in datamanagement functions and a capability to assignfunctions to control keys at the display and definean execute compound functions based on earlierdefined tasks." (Minker and Sable, 1967, p. 137),

Other examples viewed generally with favor in the literature include the following:

(1) "Placing control functions on the scope face has two advantages. First, only those controls which should be present at a particular stage of a problem are displayed. If a light button labeled 'MOVE' is one of those present, a user knows that he can move picture parts around. Similarly, if only light buttons in the form of PNP and NPN transistor symbols are displayed, a user knows he must select a particular type of transistor at that time. Thus, in effect, a user is steered through a problem. Second, during most operations there is only one center of attention, the scope face, on which a user need concentrate. This allows faster and smoother work on a problem." (Ninke, 1965, p. 845).

(2) "STATP AC, a program for scientific dataanalysis, allows an experimenter to applyvarious statistical operations and mathe-matical transformations to data simply bylight-penning a desired operation chosenfrom a list of operations displayed on thescope." (Baker, 1965, p. 431)."This [SDC Variable Display] program isintended to assist in the design of tabulardisplays. It permits a user to sort, delete,insert and exchange rows and columns ofinformation presented in matrix form on adisplay console. Actions can be taken byboth light-pen and teletype input." (Schwartzet al., 1965, p. 30).

(4) "These programs . . . [may lead the operator]by providing information on the next step,by informing him which next console stepsare permissible, or by signaling him whena console step has been initiated which isnot permissible." (Bauer, 1965, p. 22).

4.43 "The major limiting factor in further im-provement in light-pen speed is in the phosphorscreen itself, which must be of a persistent type inorder to reduce display flicker. . . .

"We have developed a system to detect theelectron beam causing the screen light rather thanthe light itself . . . [as a] new approach to in-

(3)

69

creasing the speed of the CRT display system forman-machine communication. . .

"From experimental evidence we conclude thatit is possible to make a system to detect when andwhere the electron beam of a CRT strikes thescreen, thus essentially eliminating the bandwidth-limiting effects of the CRT phosphor and makinga high-speed man-machine communicationssystem possible." (Haring, 1965, pp. 848, 854).

"Current development work on CRT's is con-centrated heavily in the search for new phosphorsto provide increased brightness and longer life."(Mahan, 1968, p. 19).

"Both electroflors [liquid phase materials thatfluoresce or change color when small electriccurrents are passed] and piezoelectric displaysshow promise, but are hardly beyond the feasibilitystate of development. Current research spendingis not very heavy for these displays, so few resultsare expected in the near future." (Mahan, 1968,p. 28).

4.44 "The use of analog predictive circuitryshould be explored as a possible means of improvingtracking performance. The simple noise filter inthe present experimental circuit serves to providevelocity prediction in the sense that the voltageon the filter capacitor is the average of past errorvoltages. If an error signal should fail to be de-veloped over several cycles, the voltage on thiscapacitor would provide tracking continuity."(Stratton, 1966, p. 61).

"The merit of the analog technique lies in thesmall processing time required to determine theposition of the moving pen. Decreasing the track-ing interval allows more time to be utilized for dis-play a particularly important consideration whenmany consoles share the same display channel."(Stratton, 1966, p. 58).

4.45 ". . . The computer may be operated in amultisequenced fashion . . One sequence maybe used to calculate a file of point coordinates fordisplay and a higher priority sequence may be usedto display these points. In this manner, the highpriority display sequence insures that points aredisplayed as often as possible so that the picturedoes not flicker objectionably as a result of thecomputation of new points usurping display time."(Loomis, 1960, p. 2).

4.46 "Research and development work onseveral display technologies offer promise forimproved real-time large screen displays by theearly 1970's. These include.

Photochromic displays with cathode-ray-tube or laser image generation

Thermoplastic and photoplastic light valves with cathode-ray-tube or laser image generation

Crossed-grid electroluminescent displays with integrated storage

Laser inscribing systems

Solid-state light valves

Opto-magnetic displays

Electro-chemical displays

Injection electroluminescence matrix displays."

(Hobbs, 1966, p. 42).

4.47 "Large-screen displays are used where it is necessary for a group of people to view the same information simultaneously. Because of the cost involved in implementing large-screen displays, their use has been confined largely to military systems. The high cost plus the lack of really adequate technologies for presenting dynamic information on a large screen have seriously restricted the use of group displays. Significant technological improvements are necessary to permit satisfactory group displays at a reasonable price before their utilization will become widespread in the commercial world. Most group displays that have been installed to date involved projection systems of one type or another with the display generated in one device and then projected onto a screen. However, work is underway on several technologies that will permit the generation of the visual image in the screen itself. Some of these are considered later in the discussion of display technologies. Most of the research and development on the visual transducer portion of display systems is devoted to techniques applicable to large-screen displays because of the lack of suitable means for implementing displays of this type at present. The CRT enjoys a dominant position in console displays, but there is no equivalent dominant technology for large-screen displays." (Hobbs, 1966, pp. 1871-1872).

"While large stele display techniques have ad-vanced considerably in the past few years, there isstill much room for improvement. Their capabilityto handle dynamic data needs considerable expan-sion. Cost which is now high must be lowered andreliability needs improvements. . . .

"Toward this end, considerable research is nowunderway to improve existing techniques. Newfilms which do not need wet chemicals are being ex-plored along with novel methods of processingconventional films. Considerable effort is under wayto improve the performance of the light valvetechnology. Also, new and better techniques arebeing developed to convert digital data to the analogform necessary for the exploitation of TV typedevices." (Kesselman, 1967, p. 167).

4.48 "When data phones are used in the com-munication link, experience with existing displaysystems indicates that several data phones arerequired to handle many user consoles from asingle buffer/controller. Data phones now availablefor switched telephone networks provide approxi-mately 2000 bits per second of data, but within thenear future this rate may be approximately doubled,according to information received from the tele-phone company. Since one CRT frame containsapproximately 20,000 bits, at least ten seconds isrequired to transmit a complete single frame.Fortunately, the majority of the messages between

the user and the time-shared computer will be onthe order of one line of text requiring only one-thirdof a second, However, since a single user mightwish to request up to five frames at a time, a maxi-mum of 50 seconds might be required to serve asingle user. In order to avoid prolonged delays inservice to other users who may be awaiting serviceat that instant, one 2600-bps data phone must bededicated to a small number of consoles, or theservicing of the consoles must be interlaced, orcombinations of these two must be employed. Atthe present time, our thinking runs toward use ofone data phone for every three consoles, and inter-lacing the service to provide response time of lessthan 30 seconds under worst-case conditions. Since,in actual practice, users most frequently will besending and receiving much less than a completeframe and furthermore, since the probability thatseveral users will be communicating data simul-taneously is very small, as regards the communica-tions delays, the service would typically be lessthan a second." (Haring, 1968, pp. 38-39).
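Haring's timing figures follow from simple division. The sketch below reproduces them; the roughly 667-bit size of a line of text is our inference from his one-third-second figure, not a number he states:

```python
RATE_BPS = 2000           # switched-network data phone, ~2000 bits/second
FRAME_BITS = 20_000       # one CRT frame, ~20,000 bits

frame_seconds = FRAME_BITS / RATE_BPS
print(frame_seconds)          # 10.0 -> "at least ten seconds" per frame

five_frames = 5 * frame_seconds
print(five_frames)            # 50.0 -> worst case for a five-frame request

# A line of text takes about one-third of a second, i.e. roughly 667 bits.
line_bits = RATE_BPS / 3
print(round(line_bits))       # 667
```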

"The video buffer contains the necessary elec-tronics for head selection, writing, and sync mixing.Each head is in a read (or display) mode exceptwhen writing, and a single write driver serves allthe heads." (Terlet, 1967, p. 172).

Roberts suggests that "it is possible . . totime-share one vector-curve generator for severaldisplay units if the generator is fast enough toeliminate excessive flicker. Each display thenreceives the common deflection signals, but isonly intensified when the segments being drawn areintended for that unit. This technique allows thegenerator cost to be shared by up to four displaysit present speeds." (1965, p. 217).

"In order to reduce the costofindividuarconsoles,it is advantageous to-cluster consoles around a localstation which includes data storage and processingthat is common to all clustered consoles. Initialinvestigations indicate that it should be possibleto design economical console systems which clusterabout ten consoles at distances of a thousand feetfrom a local station. Thus, the consoles could beplaced in several different rooms of a single building.Interconnections between the consoles and thestation are made with coaxial cables, while theconnection between the station and the time-sharedcomputer utility are made by common carrier. In thefuture, a high-speed photographic printer will belocated in the vicinity of the console to produce hardcopy on command from any of the consoles."(Haring, 1968, p. 258).

4.49 "Human factors, habit patterns, etc., haveto be considered with respect to both operation ofthe terminal and designing new procedures."(Duffy and Timberlake, 1966, p. 273).

"We have planned a survey of man-machinerelationships, both at present and with contemplatedsystems, from two points of view: (1) man-machinerelationships from the standpoint of human engi-neering principles, and (2) the relationships from

70

Page 77: DOCUMENT RESUME - ERICNational Standard Reference Data. Series. NSRDS provides quantitive data on the physical. and chemical properties of materials, compiled from the world's literature

the standpoint of the attitudes of the people whowill be operating the system." (Sharp and McNulty,1964, p. 2).

"Heretofore not much has been said about themechanical design features of the consoles. A greatdeal of design effort is being applied to the humanengineering aspects of the consoles since it is im-perative that the user's initial contact with theconsoles, the only part of the Intrex experimentallibrary which the average user sees, be a pleasantone. The objectives here are to retain sufficientflexibility in the initial consoles to permit effectiveuser evaluation of various features and optionswhile at the same time to maintain a finished lookto the consoles." (Haring, 1968, pp. 263-264).

"Human errors and the education of humanbeings are, therefore, two systems design factorswhich must receive significant attention. The sys-tems designer must understand the chief motiva-tions of the terminal operators. Wherever possible,he must design into the system factors which willlead to operator satisfaction as a result of successfuloperation of a terminal. Operator education, bothat the start and on a continuing basis, must also becarefully planned. This is particularly true for sys-tems which may expect a significant turn-over interminal operators within a relatively short period(e.g., 6 months to a year) after the initial implemen-tation of the system. Operator education must berecognized as a continuing system function.Wherever possible, means for providing educationor guidance for operators should. be designed intothe terminal device or the system as a whole.

"The efficiency of the terminals and the system,and the degree of intrusion introduced by the systeminto the regular jobs or occupations of the terminaloperators, also must be considered. Many peoplein the organization will assume a second role as aresult of the information system. In addition totheir regular job, they will now also he expected toact as a recorder of data. As a result, human factorswill be extremely important in the design of ter-minal devices. This will insure that the data record-ing operation is as simple and straightforward aspossible, and requires a minimum effort from theterminal operator." (Pedler, 1969, p. 30).

4.50 " 'Interface', with its connotation of amere surface, a plane of separation between theman and the machine, focuses attention on the capsof the typewriter keys, the screen of the cathode-raytube, and the console light that wink and flicker,but not on the human operator's repertory of skilledreactions and not on the input-output programs ofthe computer. The crucial regions for research anddevelopment seem to lie on both sides of the literalinterface." (Licklider, 1965, p. 92).

4.51 "The 'process' Keys would be used toinstitute queries to specific portions of the auto-mated catalog and to respond to the next stepssuggested by the automated system. At least a120-character or symbol set would be desirable, incontrast with the 64 characters typically available

71

now. This means that effort will have to be placedon new or improved and certainly more economicalmeans of character generation." (King, 1963, p. 19),

"Several basic hypotheses resulted from thestudy program and have been used in the formula-tion of the initial design concepts. First, it is ad-vantageous to handle many routine operations atthe console in order to minimize communicationbetween the console and the time-shared computer.This approach reduces the demands on the centralcomputer and should result in more rapid accessto the central machine when required. It furtherreduces the cost of transmitting information fromthe time-shared computer to the console, an impor-tant consideration in any large operational system.Second, careful attention should be Oven to thesize and content of the console alphabet, the abilityto produce superscripts and subscripts, and to thehuman engineering aspects of the console in orderto ensure favorable user reaction to the console andto the overall system. Third, it must be possible forthe uninitated user to become familiar with theoperation of the console and the catalog systemrapidly and easily. Finally, the design of the consoleshould be such that it can be economically repro-duced. This feature is a necessary prerequisite tothe wide-scale user of computer-based librarysystems." (Haring, 1968, pp. 257-258).

4.52 "Careful planning of systems outputs maypermit the complete specification of all files to bemaintained and the input entering such files. Letus examine input first. The systems designer mustbe concerned with content. All needed data to beprinted in reports must first be entered into thecomputer. In addition to this substantive data, thedesigner must also consider any necessary controldata which must be part of the input to the com-puter. For example, if the system is to use sophisti-cated photocomposition equipment with an ex-panded character set, then typographic symbolsmust be included in the computer input. For ex-ample, upper and lower case indicators must beentered, variations in sizes of type to be used on thecomposer, indications of which characters are to beprinted in bold face, which are to be printed initalics, etc. . . .

"The character set to be employed in a systemis an obvious important factor in systems design.The designer must consider how sophisticated acharacter set is required. He must decide whetheron the basis of the user's requirements upper andlower case characters are needed, whether differentsizes of type would be useful in preparing publica-tions, whether the Latin alphabet is adequate orwhether Cyrillic characters must be provided. Hemust also determine the requirements for specialcharacters such as diacritical marks required forcertain foreign languages or subscripts and super-scripts needed for chemical notations." (Austin,1966, pp. 243-244, 245).

4.53 "In a comprehensive chemical dataprocessing system there is a real need to input

Page 78: DOCUMENT RESUME - ERICNational Standard Reference Data. Series. NSRDS provides quantitive data on the physical. and chemical properties of materials, compiled from the world's literature

and output conventional molecular or empiricalformulae, various nomenclatures, structure formuladiagrams, and lonsiderable generic informationand data. To accomplish the output function,the system analyst was, until recently, limitedin his choice of output hardware. This restraintled to the use of numerous and often cumbersomecoding techniques and printing conventionswhich must very often be learned by the chemiststhemselves." (Burger, 1964, p. 1).

"Examples of elaborate code conventions adoptedfor keypunching of text include reports by Nugent(1959) and by Ray (1962), both of which point outsome of the requirements and some of the difficultiesencountered with respect to providing a manualfor the keypunching of natural language texts.An even more sophisticated encoding-transliterationscheme is detailed by Newman, Knowlton and Swan-son (1960) as necessary for the keypuching of patentdisclosures in which a very wide variety of boldface,normal, italic faces and of special symbol insertionsmust be provided for. . . .

"Flexible selection from amongst a large .varietyof available characters and symbols may thusbe provided for, but at the high expense of multipleinitial keystroking, multiple code-sequence process-ing and redundant storage requirements for each`actual' character eventually to be reproduced.For example, with respect to the early Barnettexperiments using Photon, it is claimed that theproportion of control and precedence codingto actual character-selection identifications couldrun as high as 75 percent of the total input. Whilepossibilities for re-design of keyboards and formultiple-output from a single key depressionmay alleviate this problem somewhat, the difficultiesof precedence coding to achieve larger and moreversatile character sets remain a major problemarea." (Stevens and Little, 1967, p. 75).

"As an output device . . . the painfully slowprinting rate [of the teletypewriter] impedesall but the dullest of men. Its limited symbolset, and its rigid format for placing characterson a page, makes its display of information veryminimal. Graphs come out as x's on a page, givingonly the coarsest possible feel for the information.Mathematical expressions must be coded insuch a complicated shorthand that only the pro-gram's author can interpret it without a guide-book." (Stotz, 1968, p. 13).

4.54 ". . . Special provision for 64 programkeyboard overlays, each of which, when inserted,changes both the labels and the functions associatedwith the keyboard buttons. A newly inserted overlaylinks the buttons with different sets of computerprograms and so completely reorients the use ofthe console." (Dig. Comp. Newsletter 16, No. 4,35-36 (1964).)

4.55 Thus: "The greatest promise for futuredevelopment seems to be in increasing input flexi-bility of user terminals . . . We (ire beginning to

72

see the use of parallel-key devices, based on thestenotype principle . . ." (Ohlman, 1963, p. 194).

"If a data file is extremely redundant (such as anatural-language text) then the stenotyping tech-nique cuts the number of keystrokes required by asmuch as two-thirds." (Partick and Black, 1964,p. 43).

"The automatic conversion of stenotyping isbeing actively pursued with the aim of reducing thecost of file conversion." (King, 1963, p. 9).

. . . The central problem of keyboard designis to get a large selection of codes from a limited,properly spaced, simple-to-operate arrangement ofkeys." ("The (R)evolutior in Book Composition . . .IV," 1964, p. 68).

"It may be that we shall have to wait for morefundamental research to clear the way for a chordtypewriter, played more like a piano." (Duncan,1966, p. 264).

"If it turns out, as seems likely, that very largeensembles of characters are desirable in man-computer interaction with the body of knowledge,then it will become much more important that it isnow to be able to specify the desired character bypressing a pattern of keys on a small keyboard. Thatis a much better solution than pressing a single keyon a keyboard with several thousand keys." (Lick-lider, 1965, p. 100).

4.56 The case of chemical information is even more demanding: "It is difficult to devise an algorithm that allows the sorting process to ignore characters that should be ignored in traditional alphabetization of chemicals. For instance, 1,3-dimethyl is alphabetized under M, dimethylaminopropyl under D, and 1,3-bis(dimethylaminopropyl) under B. Using a flexible listing order that can be specified for the index solves the ordering problem in this system; editorial work is, of course, a consequence. The advantage of flexibility in the face of a difficult problem was considered sufficient justification for this effort." (Tinker, 1968, p. 324).
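Tinker's "flexible listing order" amounts to letting an editor attach an explicit sort key to an index entry, falling back to the name itself otherwise. The sketch below is our construction of that idea; Tinker describes the approach but gives no code, and the entry tuples are hypothetical:

```python
# Hypothetical index entries: (display name, optional editor-supplied sort key).
# The explicit key captures decisions no simple algorithm makes correctly,
# e.g. that "1,3-dimethyl" files under M while "1,3-bis(...)" files under B.
entries = [
    ("1,3-dimethyl", "methyl"),
    ("dimethylaminopropyl", None),
    ("1,3-bis(dimethylaminopropyl)", "bis(dimethylaminopropyl)"),
]

def listing_key(entry):
    """Sort by the editor-specified key when present, else by the raw name."""
    name, explicit = entry
    return (explicit or name).lower()

for name, _ in sorted(entries, key=listing_key):
    print(name)
# 1,3-bis(dimethylaminopropyl)   (under B)
# dimethylaminopropyl            (under D)
# 1,3-dimethyl                   (under M)
```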

4.57 Problems of character-set repertoire are particularly acute in the areas of primary publication of scientific and technical texts, bibliographies (e.g., in the Besterman World Bibliography of Bibliographies, 1950 different pieces of Monotype characters were used) (Shaw, 1962, p. 268) and in library card catalogs.

Ohringer comments: "There will be many formulas and special mathematical symbols interspersed with the text. The type fonts used tend to vary greatly, including many Greek letters in addition to many styles and sizes of Roman alphabet. The page arrangement also contains too much variation for present-day optical character readers." (1964, p. 311).

4.58 "A character set for instruction must ofteninclude a far larger number of characters andsymbols than is needed for other applications. Forexample, teaching a foreign language may requirethat two different alphabets be displayed at once.In mathematics, a display must often include ex-

Page 79: DOCUMENT RESUME - ERICNational Standard Reference Data. Series. NSRDS provides quantitive data on the physical. and chemical properties of materials, compiled from the world's literature

ponents, subscripts, and fraction lines, as well asalphanumeric characters and mathematicalsymbols." (Terlet, 1967, pp. 169-170).

"Primarily because of the complex chemicalnames, chemical information publishing demandsthe use of the Roman and Greek alphabets, upperand lower case letters, several type fonts (italics,boldface, small capital letters), and superior andinferior positions. In all, nearly 1500 symbols areused in CA issues." (Davenport, 1968, p. 37).

4.59 "The data contained in the augmentedcatalog are basically alphanumeric. Of particularconsequence to both the display console design andthe entire computer-based library is the largenumber of alphanumeric symbols that is required.The set of 128 USASCII standard symbols is in-sufficient, and thus must be augmented. Further-more, provisions for superscripts, subscripts andunderlines must be included to properly display tousers in forms with which they are familiar suchitems as chemical formulas and mathematicalequations." (Haring, 1968, p. 36).

4.60 "Filing rules pose a particularly complex problem to the designer of an information system. Unfortunately, the machine has not been built which will sort according to such filing rules as those laid down by the American Library Association. The system designer must choose between an attempt to program these rules into the computer or to employ special control codes on input to permit the citations to be sorted according to the filing rules rather than the normal collating sequence of the computer." (Austin, 1966, p. 244).
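
Austin's first option, programming the rules into the computer, amounts to computing a sort key that differs from the raw character codes. The sketch below applies three illustrative simplifications of library filing practice (drop leading articles, ignore case, interfile "Mc" with "Mac"); it is a toy approximation, not the full ALA rule set.

```python
# A sort key that approximates a few library filing rules, so that
# sorting by key rather than by raw character codes yields filing order.
LEADING_ARTICLES = ("the ", "a ", "an ")

def filing_key(title: str) -> str:
    key = title.lower()                    # file without regard to case
    for article in LEADING_ARTICLES:
        if key.startswith(article):        # ignore a leading article
            key = key[len(article):]
            break
    if key.startswith("mc"):               # interfile Mc... with Mac...
        key = "mac" + key[2:]
    return key

titles = ["The Art of Computer Programming", "McGraw-Hill Encyclopedia",
          "MacLeish, Archibald", "A Programming Language"]
for t in sorted(titles, key=filing_key):
    print(t)
```

Note that a plain machine collating sequence would file "A Programming Language" first and separate "Mc" from "Mac"; the key function is what carries the filing rules.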

4.61 "The earlier work of Opler and Baird in drawing chemical structure diagrams on a computer-controlled CRT gave much promise for the future applicability of such devices. . . .

"Use of 2-case print mechanisms on high-speed computer printers for the display of extended chemical character sets has been under joint development by IBM and Chemical Abstracts . . ." (Burger, 1964, p. 2).

"The use of photocomposition equipment to reproduce structural chemical formulas from coded tape is of special interest in the publication of the output of chemists and chemical engineers, but the proposed work would have application to almost all other branches of science where publication of material requiring spatial delineation is necessary." (Kuney, 1963, p. 249).

"Using as a starting point the characters developed by workers at American Cyanamid for the typing of chemical structures, we proposed a font of 50 lines and angles with which we could construct the needed bonds and rings.

"Machine setting of chemical structures was suggested as one of the potentials of the photocomposition work we started in 1958. The development of the chemical typewriter by Jacobus and Feldman at Walter Reed and computerized typesetting techniques by Barnett and others stimulated the development of a test matrix disc for the Photon." (Kuney and Lazorchak, 1964, p. 303).

"With the cooperation of the workers at Walter Reed, we are planning to encode structures using the Army chemical typewriter and process this input via computer to get a tape which will operate the Photon for the setting of chemical structures." (Kuney and Lazorchak, 1964, p. 305).

4.62 "The ultimate challenge to calligraphy for computers is the imitation of brush strokes in Chinese and Japanese characters. An investigation has been made to determine the feasibility of digitalization of the Japanese characters. The results . . . have been used for the preparation of an abstract of a Naval Weapons Laboratory report in Japanese as well as in French and German." (Hershey, 1967, p. 15).

4.63 "An application of the GDP, dubbed 'Type-A-Circuit', has been developed by L. Mailloux and his colleagues. A contrived alphabet or character set may be assembled into a pattern to create a mask for printed circuit etching. Horizontal, vertical, and diagonal segments, bends, mounting pads, and sockets for transistors and several types of integrated circuits have been made a part of this character set . . .

"The processing program also contains plugboard prongs which are superimposed over the image generated by the Type-A-Circuit specifications. In use, the designer prepares the string of 'letters' which specify the elements of the Type-A-Circuit character set in juxtaposition with one another, thus forming leads, bends, pads, and sockets. This specification is prepared with the aid of a rough sketch drawn on a specially prepared form. The graphical output consists of a finished negative which may be used directly for photo-etching of a copper-clad board. The time required to prepare artwork for etched circuit boards has been greatly reduced with the Type-A-Circuit system." (Potter and Axelrod, 1968, p. 4).

5. Programming Problems and Languages and Processor Design Considerations

5.1 "The most important need is for a design philosophy that aims at the design of total information processing systems, and that will eliminate the mostly artificial distinction between hardware systems and software systems. We need a continuing development of the trend toward combined hardware-software programming." (Rosen, 1968, p. 1446).

"Since, ultimately, the only computer system of any significance to the outside world is the system composed of hardware plus software, one cannot claim to have truly designed a computer system without having designed both . . . 'Integrated Hardware/Software Design' requires not only a knowledge of hardware design and program design, but a thorough understanding of the total relationships between hardware and software in a working system. Effective hardware/software design means consideration of the many potential tradeoffs between capability resident in the machine configuration and capability resident in the supporting software." (Constantine, 1968, p. 50).

5.2 For example, at the [ACM SUNY Conf.], "Dr. McCann talked about the need for 'Language Escalation'" (Commun. ACM 9, 645 (1966)), while Perlis (1965), Clippinger (1965), and Burkhardt (1965), among others, discuss the general problems of hierarchies of language.

5.3 "Very large programs, roughly defined, are those that (1) demand many times the available primary storage for retention of code and immanent data, and (2) are sufficiently complex structurally to require more than ten coders for implementation." (Steel, 1965, p. 231).

"For the hypothetical air traffic control system used as an example, approximately forty volumes of several hundred pages each would be necessary to specify the program subsystems fully. In addition to detailing the many functions that must be performed by the several programs, limits on environmental behavior and transfer functions between the environment and the communication network must be described . . . The amount of interrelated, precise and not easily obtainable data required is staggering." (Steel, 1965, p. 233).

"The point upon which the mind should focus in the foregoing description of current large-scale programming procedures is that many people work a long time to prepare a computer to do quickly what it would take many people a long time to do. The number of people involved in a large programming task is very great, and the time it takes to prepare the program successfully is very long. For example, the programming of the entire Semi-Automatic Ground Environment system took roughly a thousand man-years, and the average output for the entire staff was between one and ten 'debugged' and final machine instructions per day. Despite the fact that almost any programmer can write a valid subroutine of ten instructions in ten minutes, the figure for the Semi-Automatic Ground Environment system does not indicate that anyone was lazy or inept. It represents the extreme difficulty of very-large-scale programming and the essential incoherence, insofar as complex and highly interactive procedures are concerned, of large, hierarchical staffs of people." (Licklider, 1964, p. 121).

5.4 How can very large, very complex problems be effectively segmented for programmer attack?

"For the practical application of programming languages to large problems, efficient segmentation features are desirable so that parts of problems can be handled independently." (Burkhardt, 1965, p. 5).

"Automatic segmenting of code is a critical and difficult problem. To be efficient, a program should not loop from segment to segment and further, all code not normally executed should be separate from the main flow of the program. Since program segments will be retrieved on demand, this reduces the program accesses." (Perry, 1965, p. 247).

"Since running programs will have access to only a portion of the memory, frequent 'page turning' will be necessary as the program goes through its major operating pieces. Segmenting of the program can be very difficult if it must be done manually by the programmer. Assemblers or compilers with automatic segmentation or semi-automatic segmentation are needed." (Bauer, 1965, p. 23).

5.5 "We must now master, organize, systematize, and document a whole new body of technical experience: that pertaining to programming systems." (Brooks, 1965, p. 88).

"When a programmer is asked to change a record format, for example, in an unfamiliar program, or when a systems analyst must estimate the time required for such a change, difficulties may arise which are out of proportion to the routine nature of the problem . . .

"Inadequate documentation may be regarded as one of the most serious problems confronting computer users." (Fisher, 1966, p. 26).

"One major concern is the program library. There are many needs here. One is an indexing system to permit users to retrieve a program from among the many. Another is adequate documentation of library programs. Documentation must specify what the program will do and under what conditions. Still another need is a linking mechanism: that is, a means for passing data from one library program to another without manual intervention so that library programs can become segments of composite, larger programs. These requirements make it difficult today to provide a convenient library of computer processes even in a single computer, not to mention the problems raised by machine-to-machine incompatibility. Today it is difficult to stand on the shoulders of previous programmers." (David, 1966, p. 2).

"The importance of a library to provide a repository for programs was recognized early, but full exploitation has been impeded by poor program documentation, lack of interest on the part of programmers, and language problems. Many program libraries now consist of programs in languages either dead or destined for an early demise. Limitations result from the dearth of program 'readers' and the serious practical difficulties in translation between machine languages." (Barton, 1963, p. 170).

5.6 "One of the most significant elements in the orderly, rapid assimilation of multiaccess system technology is adequate and appropriate documentation. 'Appropriate' is the operative word here, since the historical norms for system and program documentation are probably inappropriate for the future. The time is rapidly approaching when 'professional' programmers will be among the least numerous and least significant system users." (Mills, 1967, p. 227).

5.7 "However, there is another means of reducing programming cost: making use of program development already done by others. At present this is difficult to do because of poor documentation and maintenance of programs by their authors. The primary reason for this sad state of affairs is the absence of any clear incentive for program authors to provide good documentation and to tailor their programs to the demands of prospective users rather than their own private whims." (Dennis, 1968, p. 376).

5.8 "The importance of documentation in the management of large programming developments is generally accepted. A number of groups have found a formal system of documentation the most effective management tool at their disposal. In its most advanced implementation, such a system of documentation is on-line to a time-sharing system available to all participating members of the system programming project.

"Difficulties often are related to the programmer's resistance to documentation, which may be due to several reasons:

Lack of tangible evidence of benefit to his own activity.
The inaccessibility of his colleagues' documentation because of sheer quantity, lack of organization and common format, and out-of-date status.
Rejection of standards, imposed for reasons he does not appreciate.
Belief (often confirmed) that he can get along without, and in fact feels at his creative 'best' when free to improvise.

"Putting a documentation system on-line appears to have overcome this resistance in a manner acceptable to the programmer.

The system itself can help him by rejecting certain types of inconsistencies.
He has instant access to the latest version of his colleagues' work.
Standards have been translated into formatting conventions with which he is familiar.
He understands that the system must safeguard itself and his programs from unauthorized change. Thus, he more readily accepts the need for authorization to change and implement." (Kay, 1969, p. 431).

5.9 "The greatest difficulty in programming still concerns the language to be used and the fact that any given program is relatively non-interchangeable on another machine unless it has been rather wastefully written in, say, ALGOL or FORTRAN." (Duncan, 1967, p. x).

Machine language translation programs are therefore of interest, since they allow programs written for one machine to be run on another and they provide a bootstrap for changeovers from one equipment system to another. Examples of such programs are Control Data's computer-aided translation system to transfer 7090 programs into their own 3600 Compass language (Wilson and Moss, 1965) and a system developed to reprogram Philco 2000 codes into IBM 7094 language (Olsen, 1965).

5.10 For example, "Any software system would become increasingly useful if it could be adapted to a variety of I/O configurations." (Salton, 1966, p. 209).

"If the performance of input/output functions requires specialized coding in the master control program of a system, then altering the set of peripherals or changing its i/o functions requires modifications of the master control programs, leading to the . . . problem of coping with evolution." (Dennis and Glaser, 1965, p. 9).

"If systems design can be automated, i.e., programmed for a computer, ultimately the configuration can be selected by systematically designing the systems for a variety of configurations and selecting the configuration which will run these applications at lowest cost." (Greenberger, 1965, p. 278).

5.11 "Several apparently mutually exclusive features of programming languages all have their advantages . . . How are we to resolve these issues?" (Raphael, 1966, p. 71).

5.12 "Walter F. Bauer, president of Informatics Incorporated, . . . predicted that in ten years all computer systems will be online systems and that 90 percent of all work on computers will involve online interaction." (Commun. ACM 9, No. 8, 645 (Aug. 1966).)

5.13 "Multiprogramming: That operation of a (serial) processor which permits the execution of a number of programs in such a way that none of the programs need be completed before another is started or continued." (Collila, 1966, p. 51.)

"A subfield of multiprogramming is concerned with the problems of computer system organization which arise specifically because of the multiplicity of input-output devices which interface with the system. The problems in this area are referred to as problems of multiaccessing." (Wegner, 1967, p. 135.)

"The requirements for console languages will pose a formidable problem for facility designers of the future." (Wagner and Granholm, 1965, p. 287).

"The whole system is multi-programmed, there being a number of object programs in core at once. Undoubtedly, we shall see such systems in operation and undoubtedly they will work. In the present state of knowledge, however, the construction of a supervisor for such a system is an immense task, and when constructed it has severe run-time overheads." (Wilkes and Needham, 1968, p. 315).

5.14 Brooks says further, "the new systems concepts of today and tomorrow are most keenly programming systems concepts: efficient time-sharing, fail-softly multiprocessing, effective mass information retrieval, algorithms for storage allocation, nationwide real-time Teleprocessing systems." (1965, p. 90).

5.15 "Most individuals accustomed to scientific computation or commercial data processing fail to appreciate the magnitude of the programming effort required for real-time control system implementation." (Steel, 1965, p. 231).

"Dr. Saul Rosen of Purdue University mentioned several fallacies of current time-sharing systems, of which the most important is the belief that manufacturers who have great difficulty producing relatively simple software systems will somehow be better able to produce the very complex systems required for time sharing." (Commun. ACM 9, No. 8, 645 (Aug. 1966).)

5.16 "Typical functions of such executive systems include: priority scheduling, interrupt handling, error recovery, communications switching and the important and relevant area of cataloging, accessing and manipulating information and program files." (Weissman, 1967, p. 30).

5.17 "Today's fastest machine cannot be loaded down and will be idle most of the time unless it is coupled to a large number of high speed channels and peripheral units . . . In order to distribute input-output requests in a balanced flow, it must also be controlled by a complex monitor that chooses wisely among the jobs in its job queue." (Clippinger, 1965, p. 207).

"In some systems, more than one single program is processed with simultaneity . . . Inefficiency often results because the mix of individual programs, each written for sole occupancy of a computer, is unlikely to demand equal loading of each parallel element." (Opler, 1965, p. 306).

"System overhead includes scheduling and the continuous processing of console input. These functions are almost uniformly distributed, degrading the processor's execution rate by almost a constant." (Scherr, 1965, p. 14).

"The executive is usually multipurpose. It must be designed with a balance between the conflicting requirements of (1) continuous flow or batch processing, and (2) control for a demand processor in case time-sharing consoles should be attached. In addition, it usually has facilities for on-line control, in particular for communications switching." (Wagner and Granholm, 1965, p. 287).

"The utility programs provide three basic functions: the movement of data within the system required by time sharing or pooled procedure, the controlling of the printout of information on a pooled basis, and the controlling of accesses to auxiliary memory." (Bauer, 1965, p. 22).

"From the system designers' point of view, in time-sharing systems the most important thing is the supervisory program. Gallenson & Weissman pursue this subject in considerable detail and highlight other features such as 'memory protection, error checking circuitry for hardware, software and operator error checks, an interrupt system . . . able logical modules [and] dynamic relocation mechanism' as being essential for time-sharing." (Davis, 1966, p. 225).

"From a programmer's point of view, one of the most important features of the second generation of computers is the way it is possible to exploit their automatic interruption facilities to provide control programs and operating systems. A typical computer will have stored in it, more or less permanently, a control program (which may be called the 'Director', 'Master', 'Supervisor', 'Executive', etc.) whose functions are usually to arrange the loading and unloading of independent 'object' programs (the programs which actually do the work) and keep a record of the sequence of jobs they perform, to allocate input-output devices to these programs, and to enable the computer's operators to exercise the necessary control over its operation. It may also provide facilities for performing various kinds of input-output operation. The control program may be able to arrange for several object programs to be stored in the computer at once, and to 'time-share' the use of the instruction-sequencing unit of the computer between all these programs. . . .

"The relationship between a control program and the object programs it controls in many ways resembles that between a deity and mere mortals; the analogy extends to the permanence, privileges, independence and 'infallibility' of control programs. Perhaps because of this, a misconception seems to have grown up about the extent of their activities. Although in a computer equipped with one instruction-sequencing unit the control program only 'comes to life' following an interruption of the object program, and effectively expires when the latter is resumed, it seems to be half-believed that, all the time the object program is active, the control program is leading some kind of independent existence; like an all-seeing presence, keeping a close watch on all the activities of the object program. This myth probably springs from experience of the behaviour of the control program when the object program is caught obeying an illegal instruction: but in fact this occurrence is detected by hardware, not by the control program itself." (Wetherfield, 1966, p. 161).

"Built-in accounting and analysis of system logs are used to provide a history of system performance as well as establish a basis for charging users." (Estrin et al., 1967, p. 645).

5.18 "A very significant development in software, and one which must be given serious attention by the facility system designer, is the relatively new concept of Data Base Management." (Wagner and Granholm, 1965, p. 287).

"The layout and structuring of files to facilitate the efficient use of a common data base for a wide range of purposes requires careful analyses of the applications, the devices which store the files, and the file organization and processing. The criteria for efficiency are, as usual, maximum throughput and minimum requirement for storage space. Ease of programming, program size, and running time are also important considerations." (Bonn, 1966, p. 1866).

"The most widespread information retrieval systems are those for data base file management, which process records organized into fields, each containing a type of data in the record." (Hayes, 1968, p. 23).

"Data Management Problems. It is interesting to review a few of the problems raised by G. H. Dobbs at the summary session of the first symposium on data management systems mentioned above. One of these was the diverse terminology and points of view, which make it difficult to extract any basic principles. Another was lack of concern for input quality control. Still another was lack of appreciation for the real-life data base problems as the user sees them. At the second symposium, two years later, Galantine described the relative lack of progress as 'appalling.' Dobbs, at this second session, identified several specific technical areas needing further development, among these: the ability to allow an unsophisticated user to describe data structures, capability to change data and file organization, ability to share files among simultaneous users with adequate file security 'lockout' procedures, and the need for more flexible report formatting." (Climenson, 1966, p. 128).

5.19 One example of a developmental system claiming to incorporate these features is the Catalog Input/Output System at RAND Corporation. More specifically: "Computer applications in linguistics, library science, and social science are creating a need for very large, intricately structured, and in some cases tentatively organized files of data. The catalog, a generalized format for data structures, is designed to meet that need . . . The computer programs will:

a) Facilitate partitioning, rearranging, and converting data from any source in preparation for writing the catalog.

b) Format and convert data for printing on one of a variety of printers.

c) Sort the data elements within a catalog and merge data from two or more separate catalogs.

d) Restructure a file by rearranging the order of classes of data (catalog transformations).

e) Address nodes in the structure, retrieve data from the structure, and add to or delete from the structure (file maintenance)." (Kay et al., 1966, pp. 1-2).

5.20 "In recent years there has been a rapid growth in the use of so-called 'formatted file systems'. These systems are general-purpose data storage, maintenance and retrieval systems designed to provide the user with a maximum amount of flexibility. They feature the use of a single set of programs to handle a variety of demands on a group of large files. Each file may possess a different format, but all records within a file must be identical in format. New files may be created or old files changed to meet new requirements. Data can be added to files, or changes can be made to correct errors in existing files." (Baker and Triest, 1966, p. 5-1).

"The Formatted File System (FFS) developed for the Defense Intelligence Agency is a general-purpose data management system for the IBM 1410 which is coupled to the 1410/7010 Operating System. It is oriented to a set of users (technicians) who can maintain an intimate knowledge of the structure of their files and the query language to access them. It employs both tapes and disc to define, maintain, and query a set of independent files. A table of contents and cross index can be defined and maintained on tape or disc. An FFS file must have a unique key field group in each record. A single level of embedded files (periodic sets) is permitted in the record. Except for the last field of a record, all fields are fixed in length. The query language permits general logical conditions and relations and provides several geographical and statistical operators. FFS is one of the few general-purpose data management systems which are operational." (Minker and Sable, 1967, p. 148).

"The users of non-numeric systems had requirements for very long alphanumeric records. Some of the records were formatted as were unit records, but the fields were not all of predetermined length. To cope with this, the formatted file concept was developed. It had the ability to handle records of variable length by referring to a data definition which described the permissible record contents, context, and internal structure. The data definition could be carried within each record but was more normally separated into a data definition table to eliminate redundant entries. The formatted file could handle variable length records but could not interpret completely free form text. Special techniques were developed to handle free text which, in general, relied on the usual delimiters in the text, such as periods and commas, to identify the end of each structural unit. Free text then could be interpreted by scanning it as though the computer were reading it from left to right." (Aron, 1968, p. 7).
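
The separated data definition table Aron describes can be pictured as a small schema interpreted by one generic routine. The field names, widths, and sample record below are invented for illustration; only the shape of the idea (all fields fixed in length except the last, and the definition kept apart from the data) follows the quoted descriptions.

```python
# A generic record interpreter driven by a data definition table that
# is stored separately from the records themselves.
from typing import NamedTuple, Optional

class FieldDef(NamedTuple):
    name: str
    width: Optional[int]   # None = variable length, runs to end of record

DATA_DEFINITION = [        # kept apart from the data, as in a definition table
    FieldDef("key", 6),
    FieldDef("country", 2),
    FieldDef("remarks", None),   # the one variable-length (last) field
]

def parse_record(record: str, definition=DATA_DEFINITION) -> dict:
    """Slice a flat record into named fields per the definition."""
    fields, pos = {}, 0
    for f in definition:
        if f.width is None:
            fields[f.name] = record[pos:]   # take the remainder
        else:
            fields[f.name] = record[pos:pos + f.width]
            pos += f.width
    return fields

rec = parse_record("A00017USsupply depot, rail served")
print(rec["key"], rec["country"], rec["remarks"])
```

Because the definition is data rather than code, the same `parse_record` routine serves every file; changing a file's layout means editing its definition table, not reprogramming.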

5.21 "Univac's B-0 or Flow-Matic, which was running in 1956, was probably the first true Data-Processing compiler. It introduced the idea of file descriptions, consisting of detailed record and item descriptions, separate from the description of program procedures. It also introduced the idea of using the English language as a programming language." (Rosen, 1964, p. 8).

5.22 "General Electric has announced GECOS III (General Comprehensive Operating Supervisor III), an advanced operating system for large-scale computers. GECOS III integrates requirements for on-line batch, remote batch, and time-sharing into one system using a common data base. The 'heart' of the GECOS III is a centralized file system of hierarchical, tree-structured design which provides multiprocessor access to a common data base, full file protection, and access control." (Commun. ACM 11, 71 (Jan. 1968).)

5.23 "The Integrated Data Store (IDS), developed by Bachman of the General Electric Company, is a data processing programming system that relies on linkage of all types for its retrieval and maintenance strategies. Through extensions to the COBOL language and compiler, IDS permits the programmer to use mass random-access storage as an extension of memory." (Minker and Sable, 1967, p. 126).

"The IDS file structure allows a linked-list structure in which the last item on every list is linked back to the parent item that started the list. Thus, it is possible to return to the parent item without a recursive list of return points. In IDS, each record is an element in a linked list. A file of records may be subordinated to a master record by linking it to the first member of the subordinate file and chaining from that point, through each record in the subordinate file, through the last one, and back to the master record. There is no inherent limit in IDS to the number of records that may exist in a chain or to the number of detailed chains that may be linked to a given record with a single master record. There is also no inherent limit to the depth of nesting that is permitted; i.e., a record in a chain that is subordinate to a given record may, in turn, have subordinate record chains." (Minker and Sable, 1967, pp. 126-127).

"The G.E. Integrated Data Store is an example of a linked file organization. Master and detail items are organized in a series of linked chains to form records. Each chain contains at least one master item and one or more detail items. Each item contains linking or chaining information which contains the addresses of the next item and the previous item in the chain. An item may belong to several chains, and linking information to all chains is included in the record." (Bonn, 1966, p. 1867).
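
The chain structure both quotations describe can be sketched in a few lines: each member points to the next, and the last member points back to the master, so the parent is always reachable without keeping a stack of return points. The class and field names here are illustrative only, not IDS's actual record layout, and for brevity the sketch carries just the forward link (real IDS chains also hold the backward links Bonn mentions).

```python
# A minimal sketch of an IDS-style chain: detail records linked in a
# ring whose last member points back to the master record.
class Record:
    def __init__(self, data):
        self.data = data
        self.next = None          # address of the next record in the chain

def build_chain(master, details):
    """Link the detail records after the master and close the ring."""
    prev = master
    for d in details:
        prev.next = d
        prev = d
    prev.next = master            # last detail links back to the master
    return master

master = Record("DEPT-42")
build_chain(master, [Record("EMP-1"), Record("EMP-2"), Record("EMP-3")])

# From any member, following next-links eventually reaches the master,
# so the parent is found without any saved return points.
node = master.next
while node is not master:
    print(node.data)
    node = node.next
```

Nesting, as the quote notes, falls out naturally: any detail record can itself serve as the master of a further chain.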

5.24 "Franks describes the SDC Time-Shared Data Management System (TDMS), whose design draws upon ADAM and the earlier LUCID. TDMS employs an interesting data structure involving only a single appearance of each item of data with appropriately organized pointers to represent order, multiple instances, etc. This is a much-discussed idea that has needed exploration in a large system. TDMS, like too many similar systems, lacks means by which the system can 'learn' frequently traversed paths through the data, a mechanism that would permit subsequent identical or similar searches to be handled more efficiently." (Mills, 1967, p. 240).

"Williams & Bartram have developed a report generator as part of the TDMS. The object of this program is to give a nonprogrammer the ability, while he is on-line with the system, to access a large file of data for the purpose of describing and generating a report. Another work in this area is by Roberts." (Minker and Sable, 1967, p. 137).

"The file organization of TDMS is an inverted tree structure with self-defining entries. This organization has made it possible for TDMS to meet its goal of providing rapid responses to unpredictable queries in a time-shared environment. Although this organization requires more on-line, random-access storage than most other file organizations, the benefits obtained far outweigh this storage cost." (Bleier and Vorhaus, 1968, p. F97).

5.25 "IBM has developed a Generalized Information System based on experience gained with military file applications. Because of IBM's intention to provide this system as part of its applications library for the System/360 series, this system undoubtedly will be examined quite closely by a variety of potential users. The system has two basic modules: one for defining, maintaining, and retrieving files of 'formatted' data, and one for text processing and concordance-type retrieval." (Climenson, 1966, p. 126).

"The text-processing module of GIS includes three basic files: (1) a dictionary ordered on key word, each record containing pointers to synonyms and equivalents, key word frequency data, and a pointer to (2) the inverted file, which can contain a variable number of document numbers indexed by the given key word. Finally, (3) the master file can contain bibliographic data and all words stored for that document. Given the document number, the bibliographic data, and key words from the document, the system can automatically generate the above files." (Climenson, 1966, p. 127).
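
The automatic generation Climenson describes amounts to a single pass over the master data. In the sketch below, in-memory dictionaries stand in for the tape/disc files, and the sample documents are invented; only the file structure (dictionary with frequencies, inverted file of document numbers, master file of documents and their key words) follows the quoted description.

```python
# Building a key-word dictionary and an inverted file from a master
# file of documents, in one pass over the data.
from collections import defaultdict

documents = {                      # master file: document number -> key words
    101: ["retrieval", "indexing"],
    102: ["indexing", "cobol"],
    103: ["retrieval", "display"],
}

dictionary = defaultdict(int)      # key word -> frequency across documents
inverted = defaultdict(list)       # key word -> list of document numbers

for doc_no, words in sorted(documents.items()):
    for w in words:
        dictionary[w] += 1
        inverted[w].append(doc_no)

print(inverted["retrieval"])   # [101, 103]
print(dictionary["indexing"])  # 2
```

A query on a key word then needs only one probe of the inverted file to reach the matching document numbers, rather than a scan of the master file; this is what buys the "rapid responses to unpredictable queries" claimed for inverted organizations.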

5.26 "GRED, the Generalized Random Extract Device developed at Thiokol Chemical Corporation and described by Heiner & Leishman, is written in COBOL. Files within the system follow the COBOL restrictions of fixed-record size and fixed-field length size for each field; the files are also restricted to tape. File definitions are provided at run time by the user, who specifies the file and record description. A file definition library option is provided and can be maintained by input request. The system, developed for the IBM 7010 computer, has the ability to sort and output data, and is useful for small files." (Minker and Sable, 1967, p. 147).

5.27 "At a second symposium held in September 1965, a benchmark problem was used to organize the discussion of specific systems. The problem involved a management data base, that is, an organization table and personnel files. Five systems were given the same file data and asked to create the file(s) and perform several kinds of operations on it. The five systems were: COLINGO of MITRE Corporation, the Mark III File Management System of Informatics, Inc., the on-line data management system by Bolt, Beranek, and Newman, Inc., the BEST System of National Cash Register, and the 'Integrated Data Store' of General Electric." (Climenson, 1966, p. 125).

5.28 "This work has led to the definition of FILER (File Information Language Executive Routine), the formalization of a calculus of operations with specific relevance to problems of file organization." (Hughes Dynamics, 1964, p. 1.3).

5.29 "Apart from languages at the implementation or procedural level, a number of user-oriented systems employ list-structured data as a basic mechanism. Bernstein & Slojkowski describe a system called Program Management System (PMS), which stores data in a two-level file structure. Associated with each file is a file name, a list of pointers to its subfiles, and a list of pointers to other files." (Minker and Sable, 1967, p. 126).

5.30 "The long-range objectives are:

"1. To define some of the characteristics of large files, and to develop the structure for a large file of heterogeneous scientific and technical information including chemical structure representations, with particular attention to the necessity for:

a. Manipulating information which is in some cases formatted and in others completely amorphous.

b. Making provision for inclusion of certain kinds of information when it exists, and for later filling of gaps when the information is not available at the time of file initiation.

c. Creating a multi-level file of information so that provision may be made for the inclusion of general information, as well as the addition of various levels of specificity as required by the user, either simply because the additional levels of specificity exist and might be useful at a later date or because there is a user requirement for that degree of specificity.

d. Examining the inter-structuring of files that are part of the larger files but which may be geographically separated.

"2. To provide list-processing capability in the file structure in order to:

a. Maintain flexibility in the file.

b. Provide for an efficient means of updating.

c. Permit additions to files, both in classes existing already in the system and in the entry of new classes of information.

d. Free the system from the constraints of fixed-length and formatted files.

e. Permit aggregations of data from files that are geographically separated.

"3. To investigate the techniques of file manipulation in order to provide systems of sub-files, special files, desirable redundancy in exchange for multiple access to information, and the necessary keying or cross-referencing facility required for such a system of multi-level, multi-subject files. This work must take into account also the planning and development of several kinds of computer programs which are required for file maintenance . . .

"4. To determine the kind of organization of information which will most readily permit questioning of the file by different groups of questioners who have varied (and varying) requirements for the kinds of information contained in the file, and who have, furthermore, requirements for different degrees of specificity in the information they are seeking." (Anderson et al., 1966, pp. 2-4).

5.31 "Fossum & Kaskey examine different kinds of file organization and investigate the potential of a 'list ordered file' for economizing on the work-load on a computer when carrying out a search. The data base used is the terms assigned to a number of DDC documents, using the DDC thesaurus. The relevance of this study for this chapter is the analysis of word associations, and the objective of the analysis is, in effect, to permit a certain amount of precoordination of terms, in order to reduce the amount of processing necessary in the retrieval operation. The means to achieving this objective is the determination of the mutual exclusiveness of terms. Unlike the rigorous intellectual marshalling of terms in facet analysis, the procedure here is to determine mutual exclusiveness on the basis of whether or not terms appear together in the indexing of individual documents. The attempt to develop an economic list-organized file, the specific purpose of the work reported here, is abortive, but the technique of handling terms in this way remains an intriguing possibility as a machine aid to the generation of schedules for faceted classification schemes." (Sharp, 1967, p. 102).

5.32 "The fundamental importance of data structures may be illustrated by considering the problem of designing a single language that would be the preferred language either for a purely arithmetic job or for a job in symbol manipulation. Attempts to produce such a language have been disappointing. The difficulty is that the data structures required for efficient implementation in the two cases are entirely different. Perhaps we should recognize this difficulty as a fundamental one, and abandon the quest for an omnibus language which will be all things to all men." (Wilkes, 1968, p. 5).

5.33 "Historically, primary preoccupation with three classes of data structures (real-complex scalars and arrays, alphanumeric strings and files, and symbolic lists) has led to three major language developments, exemplified but not exhaustively defined by ALGOL, COBOL, and LISP, respectively. A major concern of procedural language designers is the reconciliation of these diverse data types and their transformations with one language." (Perlis, 1965, p. 189).

"Studies in artificial intelligence call for powerful list-processing techniques, and involve such operations as the placing of items on lists, the searching of lists for items according to specified keys, and so on. Languages in which such operations can be easily specified are indicated. The pioneer language in this regard was IPL, developed by Newell, Simon and Shaw." (Wilkes, 1964, p. F 1-3).

5.34 "Early list processing languages were invented in order to carry out specific projects of a non-numerical nature. The sequence of languages called IPL . . . started out in order to provide satisfactory methods of organizing information for work in theorem proving and problem solving, a field in which the amount of storage needed is very variable and unpredictable, and in which the structure of the lists carries important information. LISP . . . was developed partly for the use of a project called the 'Advice Taker' which was intended to operate in a complex way on English statements about situations. A language called FLPL . . . which was embedded in FORTRAN was produced to write programs for proving theorems in geometry. COMIT . . . was devised for language research." (Foster, 1967, p. 3).

5.35 "Furthermore, since a list is itself an ordered set of items which may themselves be lists, there is no limit to the complexity of the structures that can be built up except the total memory space available. Also, there is no restriction to the number of lists on which an item can appear." (Hormann, 1960, p. 14).

"List structures have, as their fundamental property, the substitution rule that allows any symbol in a list to be substituted by a list structure. The natural control procedure for such rules is recursion: the use of rules whose definition contains calls on itself." (Perlis, 1965, p. 189).

5.36 "List structures allow dynamic storage allocation for units of data larger than can be stored in a single computer word. Normally blocks of consecutive storage cells are allocated for data sets prior to program execution based on estimates of the maximum size of these sets. If the sizes of data sets are variable or unknown, wasteful amounts of storage must be assigned to guard against program failure caused by overflow at some block of assigned storage. Faced with this problem, Newell, Shaw, and Simon organized information in memory into an associative list structure format." (Fuller, 1963, p. 5).

"Since the order relation among items is determined only by links, this relation can be changed simply by changing the links without moving the items physically, thus allowing simple and quick changes of organization of memory content. Processes such as insertion and deletion of items in a list become very simple." (Hormann, 1960, p. 13).
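Hormann's point, that order lives only in the links, can be illustrated with a minimal linked-cell sketch (Python with illustrative names, not Hormann's IPL code). Insertion and deletion change a single link each; no item is moved.

```python
# Minimal linked-cell sketch: order is carried entirely by the links.

class Cell:
    def __init__(self, item, link=None):
        self.item, self.link = item, link

def insert_after(cell, item):
    cell.link = Cell(item, cell.link)   # one link changes; nothing moves

def delete_after(cell):
    cell.link = cell.link.link          # unlink; the deleted item is untouched

def to_list(head):
    out = []
    while head:
        out.append(head.item)
        head = head.link
    return out
```

Reordering a contiguous array would shift every following element; here both operations are constant-time link changes, which is exactly the flexibility the quotation claims.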

5.37 For example, "list processing is relatively new; none of the forms for the components of list languages has as yet been established as 'standard' even in an informal sense." (Raphael, 1966, p. 67).

"The concept of list processing, or chaining, has been used as a technique for the manipulation of logical data strings for many years and has been formalized as a language for handling data in computer storage. List processing has also been utilized with limited success for the control of data in direct-access storage devices. In general, when list structures are used for external data control, only a subset of the possible data structures is implemented, and the logical and physical relationships are approached as a single entity. Thus, the many ventures into this area are highly individualized, resulting in duplication and incompatibility." (Henry, 1969, p. 2).

5.38 "List processing is a convenient mode of description for many of the more interesting and provocative areas of computer application, but the available languages such as IPL-V, COMIT, LISP, SIMSCRIPT require a level of programmer sophistication that makes them almost prohibitive for normal classroom use." (Conway et al., 1965, p. 215).

5.39 "While the list processing languages . . . make it possible to handle reasonably complicated data structures, the programmer is nevertheless faced with a number of difficult problems as soon as he attempts to use such languages in practice . . . it becomes necessary to fit the given data structures into what often turns out to be an unnatural format . . . At the very least, this type of data organization wastes a great deal of storage space . . . Furthermore, the inefficient space allocation also results in extremely slow execution times for the object programs." (Salton, 1966, p. 205).

"The user of list-processing languages is frequently plagued by the lack of memory space. Sophisticated means of 'garbage collection' have become important to circumvent memory space limitations; however, they constitute mostly a palliative that seldom satisfies the user's appetite for extra space." (Cohen, 1967, p. 82).

"In their relation to machines the list processing languages are burdensome, first because the data structures involved may be represented only through explicit links, and second because the free growth and cancellation of structures requires a troublesome administration of the storage." (Naur, 1965, p. 197).

"Powerful software schemes (e.g., the list processing languages) have been developed to deal with the problem of treating scattered data as a contiguous string, but they pay a very heavy price in memory overhead (in some schemes over three-fourths of the available memory is required to handle the addressing mechanisms) and in the processing time required to perform the address arithmetic.

"An alternative solution is proposed here which involves the addition of a small Associative Memory (AM) to the addressing machinery of the computer (or peripheral direct-access storage device). As will be shown, this hardware modification will permit scattered data to appear contiguous, with only a token overhead cost in memory and processing time." (Fischler and Reiter, 1969, p. 381).

5.40 "The process of item deletion involves the elimination of an existing item in the file. This task is complicated by the fact that after deletion, linkage information must be modified if the system is to operate properly. This can be accomplished in two ways. In the first case the item can be physically deleted and each list associated with the item can be searched and the associated link deleted. Alternatively, the specified item can be flagged in the control section. Then during retrieval flagged items are ignored." (Prywes, 1965, p. 20).
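The two deletion strategies Prywes contrasts can be sketched as follows (a Python sketch with hypothetical record and list layouts, not Prywes' system): flagging is cheap at deletion time but leaves work for retrieval, while physical deletion must visit every list that references the item.

```python
# Sketch of the two deletion strategies described above.
# 'file' maps item id -> record; 'lists' maps list name -> chain of item ids.

def delete_by_flag(file, item_id):
    file[item_id]["deleted"] = True     # cheap: linkage is left untouched

def retrieve(file):
    # retrieval simply ignores flagged items
    return [rec for rec in file.values() if not rec.get("deleted")]

def delete_physically(file, lists, item_id):
    # expensive: every list referencing the item must be searched and relinked
    del file[item_id]
    for chain in lists.values():
        if item_id in chain:
            chain.remove(item_id)
```

The trade-off is the usual one: flagged ("tombstoned") items keep occupying storage until some later compaction pass, whereas physical deletion pays the search cost immediately.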

"There are several penalties paid for obtaining the flexibility of a memory organized into a list structure. One of these is the requirement for additional memory bits to store appropriate linking addresses. A far more serious penalty is the time required to retrieve symbols from lists and for operations required to add symbols to lists, reorder lists, delete symbols from lists, erase lists, transfer lists to and from bulk storage, and manipulate push-down registers. These tasks are taken out of the hands of the programmer by various list-processing languages, but they still must be performed by the machine." (Fuller, 1963, p. 12).

5.41 "Most list-processing languages have suffered from their inability to deal directly with complex data structure and/or from their inability to perform the complete range of programming language operations upon the data list structures." (Lawson, 1967, p. 358).

5.42 ". . . In many cases, especially among the list-oriented languages, they simply have not geared themselves to the large amounts of data and data processing required in this special field." (Simmons, 1966, p. 214).

"The reason list processing has yet to acquire universal popularity in non-numeric data processing is that, while these mechanics are easy in the list processing language, they can be very time-consuming when performed by a computer, so much so that the economics of use of present-day list processing languages in most information retrieval applications is questionable. Because of the saving in programming time that is possible, however, we may anticipate future hardware and software developments that will enable these methods to be made practical." (Meadow, 1967, pp. 200-201.)

"This suggests that a conventional linked list structure is inadequate for representing map (or picture) structure. Alternative forms might be an associative memory or a more general linked element structure." (Pfaltz et al., 1968, p. 369).

5.43 "It must be said also that numeric information is often awkward to store in terms of list structures, and arithmetic procedures may be correspondingly difficult to perform." (Salton, 1966, p. 208).

"The well-known list structure languages such as LISP were not designed for graphics and are not very efficient or easy to use for such multidimensional problems. They are well suited for processing strings of text but break down when two-way associations between list elements are the rule rather than the exception." (Roberts, 1965, p. 212).

5.44 "The list processing section of the compiler should make it possible to handle variable length data structures, in such a way that each data item may be associated with a variable number of list pointers, as specified by a special parameter." (Salton, 1966, p. 209).

5.45 "It might . . . be useful to think about a general graph and tree manipulator, capable of performing most of the common transformations on abstract graphs and trees." (Salton, 1966, p. 209).

"The value of a generalized file processing system is directly related to the variety of file structures it can accommodate. With most existing systems, the user is limited to structures having the form of hierarchies or trees. For many problems, such structures are either not adequate or not efficient; instead, these problems require structures of the form of graphs, of which the tree is a special case. Examples of problems of this type are network and scheduling problems, computer graphic problems, and information retrieval problems." (McGee, 1968, p. F68).

"Most complex data structures can be adequately represented as directed graphs. A directed graph is defined informally as a set of labeled points or nodes, and a set of directed line segments or arcs connecting pairs of nodes. An arc running from node x to node y denotes that the entities represented by x and y are related in some way, e.g., x is superior to y, or x precedes y. If x and y bear multiple relationships to each other, they may be connected by multiple arcs, each arc being labeled with the relation being represented." (McGee, 1968, p. F68).
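McGee's informal definition translates directly into a small structure. The sketch below (Python; the class and method names are assumptions for illustration, not McGee's notation) stores arcs as labeled triples, so multiple relations between the same pair of nodes are simply multiple arcs.

```python
# Sketch of the informal directed-graph definition quoted above:
# a set of nodes plus labeled arcs, with multiple arcs allowed per node pair.

class Digraph:
    def __init__(self):
        self.arcs = []                      # (source, label, target) triples

    def add_arc(self, x, label, y):
        self.arcs.append((x, label, y))

    def relations(self, x, y):
        """All labels on arcs running from node x to node y."""
        return [lab for (a, lab, b) in self.arcs if a == x and b == y]

g = Digraph()
g.add_arc("x", "superior to", "y")
g.add_arc("x", "precedes", "y")             # multiple arcs, one per relation
```

Because a tree is just a restricted graph (one unlabeled arc from each node to its parent), this representation subsumes the hierarchies most file systems of the period offered.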

5.46 "Since Rover has more features that are common to IPLs than any other language, it may be said to be a member of the IPL family. Some additional features are present in Rover which attempt to provide a more powerful tool for synthesizing complex processes. . . .

"The original design of IPL memory structure has only DLs [down links] which, by themselves, can form lists and achieve a great flexibility. We are introducing ULs [up links] here in order to give greater flexibility and to overcome some of the difficulties encountered by having only DLs. . . .

"Since the order relation among items is determined only by links, this relation can be changed simply by changing the links without moving the items physically, thus allowing simple and quick changes of organization of memory content. Processes such as insertion and deletion of items in a list become very simple. . . .

"Another disadvantage is the loss of ability to compute addresses. Since the order relation is introduced to a set of items only by extending 'adjacency' defined by links, the 'table look-up' feature is lost.

"A part of the seemingly lost feature is regained, and furthermore, an added feature is gained by introducing 'side links' which are extensions of ULs and DLs. . . .

"The first item on a list . . . uses its UL to store the name of the list [the location of the cell that contains the name of the first item on the list]. The last item of a list contains the name of the first item as its DL as well as the 'end bit' description.

"This internal convention gives a list the effect of being a ring; having reached the end item, the processor has access to the first item as well as the name of the list without backtracking." (Hormann, 1960, pp. 2, 13, 15-16, 21).
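The ring effect Hormann describes, where the last item leads back to the head so that the processor regains the start of the list without backtracking, can be sketched with a circular linked cell (a simplified Python stand-in for IPL cells and links, not the original implementation; the detection of the head plays the role of the 'end bit').

```python
# Circular-list sketch of the ring convention described above.

class RingCell:
    def __init__(self, item):
        self.item, self.link = item, self   # a one-cell list closes on itself

def ring_append(head, item):
    # walk to the cell whose link closes the ring, then splice the new cell in
    cell = head
    while cell.link is not head:
        cell = cell.link
    new = RingCell(item)
    cell.link, new.link = new, head         # last cell now leads back to head

def ring_items(head):
    out, cell = [head.item], head.link
    while cell is not head:                 # reaching the head marks the end
        out.append(cell.item)
        cell = cell.link
    return out
```

Note that a scan that runs off the end arrives back at the head automatically, which is precisely the "access to the first item . . . without backtracking" claimed in the quotation.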

5.47 "The Association-Storing Processor (ASP) consists of a language designed to simplify the programming of non-arithmetic problems, together with a number of radically new machine organizations designed to implement this language. These machine organizations are capable of high-speed parallel processing, and take advantage of the low cost of memory and logic offered by large scale integration (LSI).

"The ASP concept has been developed specifically for applications having one or more of the following characteristics:

(1) The data bases are complex in organization and may vary dynamically in both organization and content.

(2) The associated processes involve complex combinations of simple retrieval operations.

(3) The problem definitions themselves may change, often dramatically, during the life of the system." (Savitt et al., 1967, p. 87).

5.48 "Sketchpad's topological file uses a special ring structure that permits storage structure rearrangement with a minimum of searching . . . The use of redundant interconnecting blocks in its list structure gives it the appearance of being ordered more like a tree. This allows fast forward and backward searches for subgroups. The structure used insures that at most two steps must be taken to find either the header block or the previous element." (Coggan, 1967, p. 6).

5.49 "If some connection is deleted, it is not sufficient to just remove the line from the display file. That line represents an association between two elements and may constrain the movement of the elements, the distance between them and their deletion, as well as their external properties (e.g., electrical, program flow, etc.). Thus, each line or other picture element must be attached to an undetermined number of graphical elements and constraints in such a way as to facilitate the processing of these associations." (Roberts, 1965, p. 211).

"A block of elements collects many ties together and thus allows the multi-dimensional associations required for graphical data structures . . . A block is formed from a sequence of registers of any length and contains a blockhead identifier at the top, a group of ring elements and any number of data registers . . . Blocks are used to represent items or entities and the rings form associations between blocks." (Roberts, 1965, pp. 212-213).

"There are two operators for moving around classes, one to go through all members of a class (arrows leaving this item), and one to find all the classes an item belongs to (arrows pointing at this item)." (Roberts, 1965, p. 214).

"Before a new free block is added to the free lists, the block just after it in memory is examined and if it is free also, then the two blocks are merged into one larger free block. This merging technique makes the allocation of variable length blocks almost as efficient as the allocation of fixed length blocks." (Roberts, 1965, p. 213).
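Roberts' merging rule can be sketched in a few lines (Python; the free-list representation as sorted (start, length) pairs is an assumption for illustration, not Roberts' storage layout): before a freed block is recorded, the block that begins exactly where it ends is checked, and adjacent free blocks coalesce into one.

```python
# Sketch of the free-block merging rule described above.
# free_list: sorted list of (start, length) pairs describing free blocks.

def free_block(free_list, start, length):
    for i, (s, n) in enumerate(free_list):
        if start + length == s:             # the block just after is free: merge
            free_list[i] = (start, length + n)
            return
    free_list.append((start, length))       # no neighbor to merge with
    free_list.sort()
```

Following the quotation, only the block immediately after the freed one is examined; a fuller allocator would also coalesce with the preceding block, but that step is not part of the rule described here.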

5.50 "In the current DEACON work, data is organized into ring structures. These structures are similar in many respects to the plex structures defined by Ross and used by Sutherland in Sketchpad, and are an extension of the notion of list structure." (Thompson, 1966, pp. 354-355).

5.51 "SLIP was the first embedding of a list-processing capability within a higher level language and has a formative ring structure. The idea of rings was crystallized by Sutherland and Roberts and used with data systems designed primarily for graphics and computer-aided design. Roberts has also developed a language to refer to rings (Class Oriented Ring Associative Language: CORAL). In such languages, the associations are built into the structure by allowing blocks of information to be threaded by rings which carry the associations between the blocks of data. This is illustrated . . . where JOHN's 'parent of' ring goes through 'NUB C' and 'NUB A', which in turn reference 'Edith' and 'Arnold' respectively. Thus John is the parent of both Edith and Arnold. A similar structure has been implemented with PL/1 by Dodd. The duality of certain relationships, such as: 'defined by' and 'defines' or 'to the left of' and 'to the right of', etc., led to the need for a connector block, here illustrated by the three NUBs. In essence, the NUB represents a two-way switch for transferring out of one ring and into another. The subroutines or macros pass along the ring until they arrive at a NUB. They 'switch' it, and pass into the other ring, passing along the second ring (and others as found) picking up information until they return to the original NUB and re-enter the first ring. This allows answers to questions such as 'Who is the mother of Arnold?' as well as 'Who is the son of Mary?' One of the major disadvantages of these structures occurs on adding a new, not previously anticipated, association. The operation either is impossible, requiring a complete re-compilation, or else clumsy, patching on additional blocks . . . and requiring sophisticated garbage collectors. A recent survey by Gray describes these and similar structures." (Ash and Sibley, 1968, p. 145).

5.52 "An interesting variation on the basic list processing technique was described by Cheydleur in 1963. In this technique, a datum is stored just once in memory, but provision is made for many pointers both to and away from a given element. Data redundancy of normal list processing is eliminated at the expense of addressing redundancy. The address bookkeeping becomes quite complex, but it is interesting to note that Cheydleur's concept is applicable to cells of varying sizes. He suggests methods for partitioning strings of symbols so that there is a reasonable distribution of pointers throughout the store. That is, if it can be predicted that the co-occurrence of individual letters or words will be of high frequency, these co-occurrences would be stored in the same cell. Since the storage space taken by pointers competes with data storage requirements, methods for balancing cell size and pointer distribution are quite important." (Climenson, 1966, p. 114).

5.53 "Ring structures are adequate for storing a wide range of richly interrelated data that is pertinent to such functions as intelligence analysis, management planning, and decision making. Typical of these functions are resource allocation problems, in which the pertinent data is an inventory of the resources, their characteristics, and their interrelations. This type of data is specifiable in ring structures." (Craig et al., 1966, p. 366).

5.54 "APL was conceived at the General Motors Research Laboratories to satisfy the need for convenient data association and data handling techniques in a high-level language. Standing for ASSOCIATIVE PROGRAMMING LANGUAGE, it is designed to be embedded in PL/1 as an aid to the user dealing with data structures in which associations are expressed." (Dodd, 1966, p. 677).

5.55 "The principal tool used by this system is the Associative Structure Package (ASP7) implemented by W. A. Newman for the PDP7. It is used to build data structures whose associative properties are expressed using rings, where a ring is a series of addresses in words so that the first points to the next, and so on until the last points back to the first. The first pointer is specially marked and is called a ringstart. The ringstart may point to itself; i.e., the ring may be null." (Pankhurst, 1968, p. 410).
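The ringstart convention Pankhurst describes, including the null ring whose start points to itself, can be sketched as follows (a Python stand-in with illustrative names, not the ASP7 PDP7 implementation).

```python
# Sketch of the ringstart convention described above: a ring is a chain of
# pointers whose last element points back to the specially marked ringstart;
# a ringstart pointing to itself is a null (empty) ring.

class RingStart:
    def __init__(self):
        self.next = self            # points to itself: the ring is null

    def is_null(self):
        return self.next is self

    def insert(self, element):
        element.next = self.next    # splice the element in just after the start
        self.next = element

class Element:
    def __init__(self, value):
        self.value, self.next = value, None

def members(start):
    out, node = [], start.next
    while node is not start:        # the marked ringstart terminates the scan
        out.append(node.value)
        node = node.next
    return out
```

The special marking of the ringstart is what lets a traversal know where the ring ends; here that role is played by an identity test against the start object.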

5.56 "On-line interaction introduces into the language picture the possibility of 'conversation'. This possibility, together with the need to bring on-line languages abreast of conventional programming languages, opens an inviting field to research and development." (Licklider, 1965, p. 124).

We note also the following: "On-line Programming Systems put the raw power of a computer at the immediate disposal of a human user. Evidence of today's great interest in on-line programming systems is that more and more of them are being used." (Sutherland, 1965, p. 9).

"The on-line nature of time-sharing permits direct man-machine communication in languages that are beginning to approach natural language, at a pace approaching normal human conversation, and in some applications, at graded difficulty levels appropriate to the skill and experience of the user." (Sackman, 1968, p. 1).

"To improve all our on-line systems, we need more and better languages of communication between the man and the machine which are 'natural' in the sense that they are easy to use and fit the task. Why can't I write mathematical equations which look like mathematical equations and have the machine accept, compile and perform them? Why can't I describe network problems to the computer by means of the picture showing the network? Why can't I, in filter design, place poles and zeros on the complex plane? The answer in each case is: I can in principle, but not in practice. As yet, the techniques which let me do these things are not widely used." (Sutherland, 1965, pp. 11-12).

5.57 "Overhead computation can be thought of as degrading the effective operating rate of the processor as seen by the user." (Scherr, 1965, p. 28).

5.58 Examples include, but are not limited to, the following: "With each computation there is associated a set of information to which it requires a high density (in time) of effective reference. The membership of this working set of information varies dynamically during the course of the computation." (Dennis and Van Horn, 1965, p. 11).

"Various problem areas make different demands for data structures and for syntax. The argument is not to restrict specialization to transliteration. It is to avoid unnecessary diversity and to achieve necessary diversity by specializing in the various directions at as high a point on the scale as possible and thus by handling the bulk of the language-implementation process in a uniform way with common facilities." (Licklider, 1965, p. 181).

"Most writers of papers on on-line applications of time-shared systems are universal in their agreement on the importance of interactive languages. They also seem to concur that such languages will differ substantially from the programming languages now in existence, such as FORTRAN, ALGOL, and IPL-V. The difference is primarily due to the new kinds of operations made possible by the remote console through which communication between man and computer takes place." (Davis, 1966, p. 229).

"In programming as well as in hardware design and system organization, time sharing calls for new departures. Perhaps the most significant is the one referred to by the term 're-entrant programs'. When many computer programs are currently active, it is likely that several of them are doing essentially the same thing at the same time. Whenever that is the case, efficiency in use of memory space would be gained if the several programs shared a common subprogram." (Licklider, 1965, p. 26).

5.59 "In artificial intelligence problems, this process [code, run & see, code] must often be prolonged beyond the debugging phase. It is important for the programmer to experiment with the working program, making alterations and seeing the effects of the changes. Only in this way can he evaluate it or extend it to cover more general cases." (Teitelman, 1966, p. 2).

"This appears to be the best way to use a truly interactive man-machine facility, i.e., not as a device for rapidly debugging a code representing a fully thought out solution to a problem, but rather as an aid for the exploration of problem solving strategies." (Weizenbaum, 1966, p. 42).

"If computers are to render frequent and intensive service to many people engaged in creative endeavors (i.e., working on new problems, not merely resolving old ones), an effective compromise between programming and ad hoc programming is required." (Licklider, 1965, p. 181).

5.60 "Programs very likely to contain errors must be run but must not be permitted to interfere with the execution of other concurrent computations. Moreover, it is an extremely difficult task to determine when a program is completely free of errors. Thus, in a large operational computer system in which evolution of function is required, it is unlikely that the large amount of programming involved is at any time completely free from errors, and the possibility of system collapse through software failure is perpetually present. It is becoming clear that protection mechanisms are essential to any multiprogrammed computer system to reduce the chance of such failure producing catastrophic shutdown of a system." (Dennis, 1965, p. 590).

5.61 "The functional language makes no reference to the specific subject matter of the problem . . . The program must be organized to separate its general problem-solving procedures from the application of these to a specific task." (Newell and Simon, 1959, p. 22).

"There has been a shift away from a concern with difficulty and toward a concern with generality. This means both a concern that the problem solver accept a general language for the problem statement, and that the internal representation be very general." (Newell, 1965, p. 17).

5.62 For example, "Multilang is a problem-oriented language that translates the user's statement of the problem into requests for relevant programs and data in the system's memory. The language was designed specifically to assist in problem-solving and, in so doing, to 'accumulate knowledge'. For example, it may not recognize the term 'eligible voter', but it can be told that an eligible voter is a thing that is 'human', 'age over 21' and either 'born in the U.S.' or 'naturalized'. If these terms have been previously defined, the computer can find an answer to the question; additionally, the next time it is asked about eligible voters, it will know what is meant." (Carr and Prywes, 1965, pp. 88-89).
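The "accumulated knowledge" idea in the Carr and Prywes quote can be sketched in modern terms as a dictionary of stored definitions, each a predicate that may refer to earlier definitions. This is only an illustrative reconstruction, not Multilang itself; all names (`define`, `holds`, the record fields) are hypothetical.

```python
# Hypothetical sketch of Multilang-style "accumulated knowledge": a term,
# once defined as a Boolean combination of other terms, is reusable in
# later queries. All names and the record format are illustrative only.

definitions = {}

def define(term, predicate):
    """Teach the system a new term as a predicate over a record."""
    definitions[term] = predicate

def holds(term, record):
    """Test a term against a record, using previously stored definitions."""
    return definitions[term](record)

# "An eligible voter is a thing that is 'human', 'age over 21' and
#  either 'born in the U.S.' or 'naturalized'."
define("human",       lambda r: r.get("species") == "human")
define("over-21",     lambda r: r.get("age", 0) > 21)
define("born-in-US",  lambda r: r.get("birthplace") == "U.S.")
define("naturalized", lambda r: r.get("naturalized") is True)
define("eligible-voter",
       lambda r: holds("human", r) and holds("over-21", r)
                 and (holds("born-in-US", r) or holds("naturalized", r)))

record = {"species": "human", "age": 34, "birthplace": "U.S."}
print(holds("eligible-voter", record))   # True
```

Once "eligible-voter" is defined, later questions can use the term directly, which is the sense in which the system "will know what is meant" the next time.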

5.63 "Each user, and each user's program, must be restricted so that he and it can never access (read, write, or execute) unauthorized portions of the high-speed store, or of the auxiliary store. This is necessary (1) for privacy reasons, (2) to prevent a defective program from damaging the supervisor or another user's program, and (3) to make the operation of a defective program independent of the state of the rest in store." (Samuel, 1965, p. 10).
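The restriction Samuel describes is classically realized with base/limit bounds checked on every access. A minimal sketch, assuming a simple relocation scheme (the `ProtectedStore` and `MemoryFault` names are illustrative, not from any quoted system):

```python
# Minimal sketch of base/limit memory protection: every access by a user
# program is checked against the region the supervisor assigned it, so a
# defective program cannot touch the supervisor or another user's store.

class MemoryFault(Exception):
    pass

class ProtectedStore:
    def __init__(self, size):
        self.cells = [0] * size

    def access(self, base, limit, address, value=None):
        """Read (value is None) or write, only within [base, base+limit)."""
        if not (0 <= address < limit):
            raise MemoryFault(f"address {address} outside allotted region")
        if value is None:
            return self.cells[base + address]
        self.cells[base + address] = value

store = ProtectedStore(1024)
store.access(base=100, limit=50, address=10, value=7)   # legal write
print(store.access(base=100, limit=50, address=10))      # 7
try:
    store.access(base=100, limit=50, address=60)         # out of bounds
except MemoryFault as e:
    print("trapped:", e)
```

The trap, rather than the wild store, is what keeps a defective program's behavior "independent of the state of the rest in store."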

5.64 "The TRAC (Text Reckoning And Compiling) language system is a user language for control of the computer and storage parts of a reactive typewriter system." (Mooers, 1966, p. 215).

"A solution to this problem is to use a machine-independent computer language, designed to operate with a reactive typewriter, to operate the local computer. With this method, the computer acts in place of the human controller to gain access to remote computer systems. This approach is possible only with an extremely versatile language, such as the TRAC language. . . . It is relatively easy to describe in TRAC the set of actions which must be taken in order to make the remote computer perform and bring forth the desired files." (Fox et al., 1966, p. 161).

5.65 "The basic property of symbolic languages is that they can make use in a text of a set of local symbols, whose meaning and form must be declared within the text (as in ALGOL) or is to be deduced by the context (as simple variables in FORTRAN)." (Caracciolo di Forino, 1965, p. 227). However, "it is . . . regrettable from the standpoint of the emerging real-time systems that languages like COBOL are so heavily oriented toward processing of sequential tape file data." (Head, 1963, p. 40).

5.66 Some other recent examples include LECOM, L6, LISP II, CORAL, and TREET, characterized briefly as follows:

"The compiler language, called LECOM, is a version of COMIT, and is especially designed for small (8K) computers. The microcategorization program was written in LECOM, and assigns an appropriate syntactic category to each word of an input sentence." (Reed and Hillman, 1966, p. 1).

"Bell Telephone Laboratories' Low-Level Linked List Language (L6, pronounced 'L-six') contains many of the facilities which underlie such list processors as IPL, LISP, COMIT and SNOBOL, but it permits the user to get much closer to machine code in order to write faster-running programs, to use storage more efficiently and to build a wider variety of linked data structures." (Knowlton, 1966, p. 616).

"L6 . . . is presently being used for a variety of purposes, including information retrieval, simulation of digital circuits, and automatic drawing of flowcharts and other graphical output." (Knowlton, 1966, p. 617).

"The most important features of L6 which distinguish it from other list processors such as IPL, LISP, COMIT and SNOBOL are the availability of several sizes of storage blocks and a flexible means of specifying within them fields containing data or pointers to other blocks. Data structures are built by appropriating blocks of various sizes, defining fields (simultaneously in all blocks) and filling these fields with data and pointers to other blocks. Available blocks are of lengths 2^n machine words where n is an integer in the range 0-7. The user may define up to 36 fields in blocks, which have as names single letters or digits. Thus the D field may be defined as bits 5 through 17 of the first word of any block. Any field which is long enough to store an address may contain a pointer to another block. The contents of a field are interpreted according to the context in which they are used." (Housden, 1969, p. 15).
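Housden's example of the D field can be made concrete with shift-and-mask arithmetic. The sketch below is an illustration of the field idea only, not L6 syntax; it assumes 36-bit words with bits numbered from the most significant end, which is a guess about the original machine's convention.

```python
# Illustrative sketch of L6-style fields: a field is a named bit span
# within a given word, and one definition applies to every block.
# 36-bit words and MSB-first bit numbering are assumptions.

WORD_BITS = 36
fields = {}

def define_field(name, word, first_bit, last_bit):
    fields[name] = (word, first_bit, last_bit)

def get_field(block, name):
    word, first, last = fields[name]
    width = last - first + 1
    shift = WORD_BITS - 1 - last          # distance from LSB to field end
    return (block[word] >> shift) & ((1 << width) - 1)

def set_field(block, name, value):
    word, first, last = fields[name]
    width = last - first + 1
    shift = WORD_BITS - 1 - last
    mask = ((1 << width) - 1) << shift
    block[word] = (block[word] & ~mask) | ((value << shift) & mask)

# "The D field may be defined as bits 5 through 17 of the first word."
define_field("D", word=0, first_bit=5, last_bit=17)
block = [0, 0]                   # a two-word block
set_field(block, "D", 4095)
print(get_field(block, "D"))     # 4095
```

A 13-bit field such as D is wide enough to hold a small address, which is how a field "long enough to store an address may contain a pointer to another block."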

"LISP 2 is a new programming language designed for use in problems that require manipulation of highly complex data structures as well as lengthy arithmetic operations. Presently implemented on the AN/FSQ-32V computer at the System Development Corporation . . . A particularly important part of the program library is a group of programs for bootstrapping LISP 2 onto a new machine. (Bootstrapping is the standard method for creating a LISP 2 system on a new machine). The bootstrapping capability is sufficiently powerful so that the new machine requires no resident programs other than the standard monitor system and a binary loader." (Abrahams et al., 1966, pp. 661-662).

"This list structure processing system and language being developed at Lincoln is called CORAL (Class Oriented Ring Association Language). The language consists of a set of operators for building, modifying, and manipulating a list structure as well as a set of algebraic and conditional forms." (Roberts, 1965, p. 212).

"TREET is a general-purpose list processing system written for the IBM 7030 computer at the MITRE Corporation. All programs in TREET are coded as functions. A function normally has a unique value (which may be an arbitrarily complex list structure), a unique name, and operates with zero or more arguments." (Bennett et al., 1965, pp. 452-453).

5.67 "The growing importance of the family concept accentuates the need for levels of software. These levels of software will be geared to configuration size instead of family member. In other words, the determining factor will be the amount of memory and the number of peripheral units associated . . ." (Clippinger, 1965, p. 210).

"The advantages of high-level programming languages . . . [include] higher machine independence for transition to other computers, and otherwise for compatibility with hardware . . . [and] better documentation (compatibility among programs and different programmers)." (Burkhardt, 1965, p. 4).

"The user needs to employ data structures and processes that he defined in the past, or that were defined by colleagues, and he needs to refresh his understanding of those objects. The language must therefore have associated with it a metalanguage and a retrieval system. If there is more than one working language, the metalanguage should be common to all the languages of the system." (Licklider, 1965, p. 185).

"The over-all language will be a system because all the sublanguages will fall within the scope of one metalanguage. Knowing one sublanguage will make it easier to learn another. Some sublanguages will be subsets of others." (Licklider, 1965, p. 126).

5.68 "The most immediate need is for a general compiling system capable of implementing a variety of higher-level languages, including in particular, string manipulations, list processing facilities, and complete arithmetic capabilities." (Salton, 1966, p. 208).

5.69 Licklider, 1965, p. 119. "It will be absolutely necessary, if an effective procognitive system is ever to be achieved, to have excellent languages with which to control processing and application of the body of knowledge. There must be at least one (and preferably there should be only one) general, procedure-oriented language for use by specialists. There must be a large number of convenient, compatible field-oriented languages for the substantive users." (Licklider, 1965, p. 67).

5.70 "There is, in fact, an applied scientific lag in the study of computer programmers and computer programming: a widening and critical lag that threatens the industry and the profession with the great waste that inevitably accompanies the absence of systematic and established methods and findings and their substitution by anecdotal opinion, vested interests, and provincialism." (Sackman et al., 1968, p. 3).

"Work on programming languages will continue to provide a basis for studies on languages in general, on the concept of grammar, on the relation between actions, objects and words, on the essence of imperative and declarative sentences, etc. Unfortunately we do not know yet how to achieve a definition of programming languages that covers both their syntactic and pragmatic aspects. To this goal a first step may be the thorough study of special languages, such as programming languages for machine tools, and simulation languages." (Caracciolo di Forino, 1965, p.

5.71 "As Levien & Maron point out, and Bobrow analyzes in detail, natural language is much too syntactically complex and semantically ambiguous to be efficient for man-machine communication. An alternative is to develop formalized languages with a simplified syntax and vocabulary. Examination of several query languages, for example, COLINGO and GIS, reveals a general (and natural) dependence on, and adaptation of, the rules of formal logic. However, even with English words for logical operations, relations, and object names, formal query languages have been a less-than-ideal solution to the man-machine communication problem. Except for the simplest queries, punctuating a nested Boolean logical statement can be tricky and can lead to errors. Furthermore, syntactic problems aside, a common difficulty arises when the user does not know the legal names of the data for which he is searching or the structural relationships among the data items in the data base, which may make one formulation of his query very difficult and expensive to answer whereas a slightly altered one may be simple to answer." (Minker and Sable, 1967, p. 136).
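The nested Boolean statements Minker and Sable describe can be sketched as a tiny recursive evaluator. This is an illustration of the flavor of such query languages, not COLINGO or GIS themselves; the operator names, leaf format, and record fields are all assumptions.

```python
# Sketch of a tiny formal query language of the kind the quote describes:
# English words for the logical operators, but the nesting must be
# punctuated exactly. Queries are nested tuples; a leaf (field, value)
# is an equality test. All names are illustrative.

def evaluate(query, record):
    op = query[0]
    if op == "AND":
        return all(evaluate(q, record) for q in query[1:])
    if op == "OR":
        return any(evaluate(q, record) for q in query[1:])
    if op == "NOT":
        return not evaluate(query[1], record)
    field, value = query              # leaf: equality test on one field
    return record.get(field) == value

record = {"dept": "physics", "rank": "professor", "tenure": "yes"}

# ((dept = physics OR dept = chemistry) AND NOT rank = student)
query = ("AND",
         ("OR", ("dept", "physics"), ("dept", "chemistry")),
         ("NOT", ("rank", "student")))
print(evaluate(query, record))    # True
```

Note that the user must still know the legal field names ("dept", "rank") and the nesting structure, which is exactly the difficulty the quote identifies.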

"The possibility of user-guided natural-language programming offers a promise of bridging the man-machine communication gap that is today's greatest obstacle to wider enjoyment of the services of the computer." (Halpern, 1966, p. 649).

"Such a language would be largely built by the users themselves, the processor being designed to facilitate the admission of new functions and notation at any time. The user of such a system would begin by studying not a manual of a programming language, but a comparatively few pages outlining what the computer must be told about the location and format of data, the options it offers in output media and format, the functions already available in the system, and the way in which further functions and notation may be introduced. He would then describe the procedure he desired in terms natural to himself." (Halpern, 1967, p. 143).

5.72 "Further investigation is required in searching and maintaining relationships represented by graph structures, as in 'fact retrieval' systems. Problems in which parts of the graph exist in one store while other parts are in another store must be investigated, particularly when one breaks a link in the graphs. The coding of data and of relations also needs much work." (Minker and Sable, 1967, p. 151).

5.73 "This program package has been used in the analysis of several multivariate data bases, including sociological questionnaires, projective test responses, and a sociopolitical study of Colombia. It is anticipated that the program will also prove useful in pattern recognition, concept learning, medical diagnosis, and so on." (Press and Rogers, 1967, p. 39).

5.74 "The execution of programs at different installations whose total auxiliary storage capacities are made up of different amounts of random access storage media with different access characteristics can be facilitated by the organization of the auxiliary storage devices into a multilevel storage hierarchy and the application of level changing." (Morenoff and McLean, 1967, p. 1).


5.75 "A systems problem that has received considerable attention is how to determine which data should be in computer memory and which should be in the various members of the mass storage hierarchy." (Bonn, 1966, p. 1865).

"The key requirement in multiprogramming systems is that information structures be represented in a hardware-independent form until the moment of execution, rather than being converted to a hardware-dependent form at load time. This requirement leads directly to the concept of hardware-independent virtual address spaces, and to the concept of virtual processors which are linked to physical computer resources through address mapping tables." (Wegner, 1967, p. 135).

5.76 "With respect to the central processing unit, the major compromise of future needs with present economy is the limitation on addressing capacity." (Brooks, 1965, p. 90).

"Other major problems of large capacity memories are related to the tremendous amount of electronic circuitry required for addressing and sensing." (Kohn, 1965, p. 132).

5.77 For example, "the problem of assigning locations in name space for procedures that may be referenced by several system functions and may perhaps share references to other procedures, is not widely recognized and leads to severe complications when implementation is attempted in the context of conventional memory addressing." (Dennis and Glaser, 1965, p. 6).

5.78 "A particularly troublesome phenomenon, thrashing, may seriously interfere with the performance of paged memory systems, reducing computing giants (Multics, IBM System 360, and others not necessarily excepted) to computing dwarfs. The term thrashing denotes excessive overhead and severe performance degradation or collapse caused by too much paging. Thrashing inevitably turns a shortage of memory space into a surplus of processor time." (Denning, 1968, p. 915).
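Denning's point shows up vividly in a toy paging simulation: with a least-recently-used policy, shrinking the resident frames by just one page below a program's working set can turn nearly every reference into a fault. The reference string and sizes below are illustrative only.

```python
# Toy LRU paging simulation of thrashing: when the number of resident
# frames drops below a program's working set, almost every reference
# faults, and time goes to paging instead of computing.

from collections import OrderedDict

def fault_rate(references, frames):
    resident = OrderedDict()          # page -> None, kept in LRU order
    faults = 0
    for page in references:
        if page in resident:
            resident.move_to_end(page)
        else:
            faults += 1
            if len(resident) >= frames:
                resident.popitem(last=False)   # evict least recently used
            resident[page] = None
    return faults / len(references)

# A loop that cycles through a working set of 8 pages.
refs = [i % 8 for i in range(10000)]
print(fault_rate(refs, frames=8))   # 0.0008 -- working set fits
print(fault_rate(refs, frames=7))   # 1.0    -- thrashing: every reference faults
```

One fewer frame than the working set is enough to collapse performance, which is why Denning argues memory allocation must track working sets rather than be squeezed opportunistically.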

5.79 ". . . Global weather prediction. Here a three-dimensional grid covering the entire world must be stepped along through relatively short periods of simulated time to produce a forecast in a reasonable amount of time. This type of problem with its demand for increased speed in processing large arrays of data illustrates the applicability of a computer designed specifically for array processing." (Senzig and Smith, 1965, p. 117).

"Most engineering data is best represented in the computer in array form. To achieve optimum capability and remove the restrictions presently associated with normal FORTRAN DIMENSIONed array storage, arrays should be dynamically allocated. Dynamic allocation of data achieves the following:

"1. Arrays are allocated space at execution time rather than at compilation time. They are only allocated the amount of space needed for the problem being solved. The size of the array (i.e., the amount of space used) may be changed at any time during program execution. If an array is not used during the execution of a particular problem, then no space will be allocated.

"2. Arrays are automatically shifted between primary and secondary storage to optimize the use of primary memory.

"Dynamic memory allocation is a necessary requirement for an engineering computer system capable of solving different problems with different data size requirements. A dynamic command structured language requires a dynamic internal data structure. The result of dynamic memory allocation is that the size of a problem that can be solved is virtually unlimited since secondary storage becomes a logical extension of primary storage." (Roos, 1965, p. 426).
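Roos's two points — binding array size at execution time, and overflowing transparently into secondary storage — can be sketched as a small container. The primary-store limit and the in-memory stand-in for a backing store are illustrative assumptions, not Roos's design.

```python
# Sketch of Roos's dynamic allocation: space is bound at execution time,
# and elements past a "primary storage" limit spill transparently into
# a secondary store. The limit and the list standing in for a file-backed
# store are illustrative assumptions.

class DynamicArray:
    PRIMARY_LIMIT = 4        # elements kept in "primary storage"

    def __init__(self):
        self.primary = []
        self.secondary = []  # stands in for secondary (backing) storage

    def append(self, x):
        if len(self.primary) < self.PRIMARY_LIMIT:
            self.primary.append(x)
        else:
            self.secondary.append(x)   # spill past the limit

    def __getitem__(self, i):
        if i < len(self.primary):
            return self.primary[i]
        return self.secondary[i - len(self.primary)]

    def __len__(self):
        return len(self.primary) + len(self.secondary)

a = DynamicArray()
for i in range(10):
    a.append(i * i)
print(len(a), a[2], a[9])   # 10 4 81
```

The caller indexes the array uniformly; the split between primary and secondary storage is invisible, which is the sense in which "secondary storage becomes a logical extension of primary storage."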

5.80 "Any language which lacks provision for performing necessary operations, such as bit editing for telemetered data, forces the user to write segments in assembly language. This destroys the machine independence of the program and complicates the checkout." (Clippinger, 1965, p. 207).

5.81 "Thus one must consider not only whether the logical possibilities of a new device are ignored when one is restricted to a binary logic, but also whether one is sufficiently using the signals when only one of the parameters characterizing that signal is used." (Ring et al., 1965, p. 33).

5.82 "For a variety of reasons, not the least of which is maturing of integrated circuits with their low cost and high density, central processors are becoming more complex in their organization." (Clippinger, 1965, p. 209).

5.83 "No large system is a static entity; it must be capable of expansion of capacity and alteration of function to meet new and unforeseen requirements." (Dennis and Glaser, 1965, p. 5).

"Changing objectives, increased demands for use, added functions, improved algorithms and new technologies all call for flexible evolution of the system, both as a configuration of equipment and as a collection of programs." (Dennis and Van Horn, 1965, p. 4).

"By pooling, the number of components provided need not be large enough to accommodate peak requirements occurring concurrently in each computer, but may instead accommodate a peak in one occurring at the same time as an average requirement in the other." (Amdahl, 1965, pp. 38-39).

5.84 "The use of modular configurations of components and the distributed executive principle . . . insures there are multiple components of each system resource." (Dennis and Glaser, 1965, p. 14).

"Computers must be designed which allow the incremental addition of modular components, the use by many processors of high speed random access memory, and the use by many processors of peripheral and input/output equipment. This implies that high speed switching devices not now incorporated in conventional computers be developed and integrated with systems." (Bauer, 1965, p. 23). See also note 2.52.

5.85 "The actual execution of data movement commands should be asynchronous with the main processing operation. It should be an excellent use of parallel processing capability." (Opler, 1965, p. 276).

5.86 "Work currently in progress [at Western Data Processing Center, UCLA] includes: investigations of intra-job parallel processing which will attempt to produce quantitative evaluations of component utilization; the increase in complexity of the task of programming; and the feasibility of compilers which perform the analysis necessary to convert sequential programs into parallel path programs." (Digital Computer Newsletter 16, No. 4, 21 (1964)).

5.87 "The motivation for encouraging the use of parallelism in a computation is not so much to make a particular computation run more efficiently as it is to relax constraints on the order in which parts of a computation are carried out. A multiprogram scheduling algorithm should then be able to take advantage of this extra freedom to allocate system resources with greater efficiency." (Dennis and Van Horn, 1965, pp. 19-20).

5.88 "The parallel processing capability of an associative processor is well suited to the tasks of abstracting pattern properties and of pattern classification by linear threshold techniques." (Fuller and Bird, 1965, p. 112).

5.89 "The idea of DO TOGETHER was first mentioned (1959) by Mme. Jeanne Poyen in discussing the AP 3 compiler for the BULL Gamma 60 computer." (Opler, 1965, p. 307).

5.90 "To date, there have been relatively few attempts made to program problems for parallel processing. It is not known how efficient, for example, one can make a compiler to handle the parallel processing of mathematical problems. Furthermore, it is not known how one breaks down problems, such as mathematical differential equations, such that parts can be processed independently and then recombined. These tasks are quite formidable, but they must be undertaken to establish whether the future development lies in the area of parallel processing or not." (Fernbach, 1965, p. 82).

5.91 For example, in machine-aided simulations of nonsense syllable learning processes, Daly et al. comment: "Presuming that, for the parallel logic machine, the nonsense syllables were presented on an optical retina in a fixed point fixed position set-up, there would be a requirement for recognizing (26)³ or about 10⁴ different patterns. If three sequential classification decisions were performed on the three letters of the nonsense word only 3(26) or 78 different patterns would be involved.

"In the above simple example converting from purely parallel logic to partially sequential processing reduced the machine complexity by two orders of magnitude. The trend is typical and may involve much larger numbers in a more complicated problem. Using both parallel and sequential logic as design tools the designer is able to trade off time versus size and so has an extra degree of freedom in developing his system." (Daly et al., 1962, pp. 23-24).
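The arithmetic in Daly et al.'s example checks out directly: recognizing whole three-letter syllables in parallel needs one pattern class per syllable, while three sequential letter decisions need only one class per letter position.

```python
# Daly et al.'s pattern-count comparison, computed directly.
parallel_patterns   = 26 ** 3     # one class per whole syllable: "about 10^4"
sequential_patterns = 3 * 26      # one class per letter, three positions

print(parallel_patterns, sequential_patterns)          # 17576 78
print(round(parallel_patterns / sequential_patterns))  # 225 -- roughly two orders of magnitude
```

The factor of about 225 is the "two orders of magnitude" reduction the quote cites.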

5.92 ". . . The SOLOMON concept proposed by Slotnick at Westinghouse. Here it is planned that as many as a thousand independent simple processors be made to operate in parallel under an instruction from a network sequencer." (Fernbach, 1965, p. 82).

"Both the Solomon and Holland machines belong to a growing class of so-called 'iterative machines'. These machines are structured with many identical, and often interacting, elements.

"The Solomon machine resulted from the study of a number of problems whose solution procedures call for similar operations over many pieces of data. The Solomon system contains, essentially, a memory unit, an instruction unit, and an array of execution units. Each individual execution unit works on a small part of a large problem. All of the execution units are identical, so that all can operate simultaneously under control of the single instruction unit.

"Holland, on the other hand, has proposed a fully distributed network of processors. Each processor has its own local control, local storage, local processing ability, and local ability to control pathfinding to other processors in the network. Since all processors are capable of independent operation, the topology leads to the concept of 'programs floating in a sea of hardware'." (Hudson, 1968, p. 42).

"The SOLOMON (Simultaneous Operation Linked Ordinal MOdulator Network), a parallel network computer, is a new system involving the interconnections and programming, under the supervision of a central control unit, of many identical processing elements (as few or as many as a given problem requires), in an arrangement that can simulate directly the problem being solved." (Slotnick et al., 1962, p. 97).
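The SOLOMON organization described in the quotes above — one instruction unit broadcasting each operation to many identical execution units, each holding its own small piece of the data — is the essence of what is now called SIMD, and can be sketched in a few lines. The instruction format here is an illustrative assumption, not SOLOMON's actual order code.

```python
# Minimal sketch of the SOLOMON organization: a single instruction
# stream is broadcast, and every "processing element" (lane) applies
# each operation to its own local datum simultaneously. The two-field
# instruction format is an illustrative assumption.

def simd_run(program, lanes):
    """Each entry of 'lanes' is the local datum of one processing element."""
    for opcode, operand in program:            # single instruction stream
        if opcode == "ADD":
            lanes = [x + operand for x in lanes]   # all elements at once
        elif opcode == "MUL":
            lanes = [x * operand for x in lanes]
    return lanes

# Eight processing elements, each holding one grid point.
print(simd_run([("ADD", 1), ("MUL", 2)], lanes=[0, 1, 2, 3, 4, 5, 6, 7]))
# [2, 4, 6, 8, 10, 12, 14, 16]
```

Holland's proposal differs in exactly the dimension this sketch fixes: his processors each carry their own control, rather than sharing the single broadcast instruction stream.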

"Three features of the computer are: (1) The structure of the computer is a 2-dimensional modular (or iterative) network so that, if it were constructed, efficient use could be made of the high element density and 'template' techniques now being considered in research on microminiature elements.

(2) Sub-programs can be spatially organized and can act simultaneously, thus facilitating the simulation or direct control of 'highly-parallel' systems with many points or parts interacting simultaneously (e.g., magneto-hydrodynamic problems or pattern recognition).

(3) The computer's structure and behavior can, with simple generalizations, be formulated in a way that provides a formal basis for theoretical study of automata with changing structure (cf. the relation between Turing machines and computable numbers)." (Holland, 1959, p. 108).

5.93 ". . . The development of the Illiac III computer, which incorporates a 'pattern articulation unit' (PAU) specifically designed for performing local operations in parallel, on pictures or similar arrays." (Pfaltz et al., 1968, p. 354).

"One of the modules of the proposed ILLIAC III will be designed as a list processor for interpreting the list structure representation of bubble chamber photographs." (Wigington, 1963, p. 707).

5.94 "I use this term ['firmware'] to designate microprograms resident in the computer's control memory, which specializes the logical design for a special purpose, e.g., the emulation of another computer. I project a tremendous expansion of firmware obviously at the expense of hardware but also at the expense of software. . . .

"Once the production of microprogrammed computers was commenced, a further area of hardware-software interaction was opened via microprogramming. For example, more than one set of microprograms can be supplied with one computer. A second set might provide for execution of the order set of a different computer, perhaps one of the second generation. Additional microprogram sets might take over certain functions of software systems as simulators, compilers and control programs. Provided that the microsteps remain a small fraction of a main memory access cycle, microprogramming is certain to influence future software design." (Opler, 1966, p. 1759).

"Incompatibility between logic and memory speeds . . . has also led to the introduction of microprogramming, in which instruction execution is controlled by a read-only memory. The fast access time of this memory allows full use of the speed capabilities offered by the fast logic." (Pyke, 1967, p. 161).

"A microprogrammed control section utilizes a macroinstruction to address the first word of a series of microinstructions contained in an internal, comparatively fast, control memory. These microinstructions are then decoded much as normal instructions are in wired-in control machines . . ." (Briley, 1965, p. 93).

"The microprogrammed controller concept has been used to implement the IBM 2841 Storage Control Unit, by means of which random access storage devices may be connected to a System/360 central processor. Because of its microprogram implementation, the 2841 can accommodate an unusually wide variety of devices, including two kinds of disk storage drive, a data cell drive, and a drum." (McGee and Petersen, 1965, p. 78).

"In microprogram control, the functions of the controller [for source experimental data automation] are vested in a microprogram which is stored in a control memory. The microprogram is made up of microinstructions which are fetched in sequence from memory and executed. The microinstructions control a very general type of hardware configuration, so that merely by changing the microprogram, the functions available in the controller can be made to range between wide limits." (McGee and Petersen, 1965, p. 78).
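The mechanism Briley and McGee & Petersen describe — a macroinstruction selecting a short sequence of microinstructions in control memory, fetched and executed in order — can be sketched as a tiny interpreter. All opcode names and the machine state here are illustrative, not any real order code.

```python
# Sketch of microprogrammed control: a macroinstruction is an index into
# control memory, where a short microinstruction sequence implements it.
# Changing control memory changes the visible instruction set.
# All names are illustrative.

control_memory = {
    # macro-op "INCR": fetch accumulator, add one, store back.
    "INCR": [("LOAD_AC",), ("ADD_1",), ("STORE_AC",)],
    # macro-op "CLEAR": zero the accumulator.
    "CLEAR": [("ZERO_AC",)],
}

def execute(macro_op, state):
    for micro in control_memory[macro_op]:    # fetched in sequence
        op = micro[0]
        if op == "LOAD_AC":
            state["temp"] = state["ac"]
        elif op == "ADD_1":
            state["temp"] += 1
        elif op == "STORE_AC":
            state["ac"] = state["temp"]
        elif op == "ZERO_AC":
            state["ac"] = 0
    return state

state = {"ac": 41, "temp": 0}
execute("INCR", state)
print(state["ac"])    # 42
```

Supplying a different `control_memory` table, with the same micro-operations, is the sense in which a second microprogram set can make one machine "execute the order set of a different computer."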

5.95 "The computer science community has not recognized (let alone faced up to) the problem of anticipating and dealing with very large individual differences in performing tasks involving man-computer communications for the general public." (Sackman et al., 1968, pp. 9-10).

5.96 "The dynamic nature of multiprogram on-line computation should have a strong influence on memory organization." (Lock, 1965, p. 471).

"The tradeoffs in speed, cost, logic complexity, and technology are inherent to the design of systems and are not separable in spite of the good intentions of the semiconductor manufacturers or the abstract logicians." (Howe, 1965, p. 506).

5.97 "I cannot emphasize too strongly the interdependence of hardware and software (the statements of procedures, implementation of which in a given equipment configuration constitutes the processing capability)." (Schultz, 1967, p. 20).

6. Advanced Hardware Developments

6.1 "It will be necessary to scan the photochromic plane very quickly and accurately, with an extremely fine pinpoint of light. Lockheed Electronics has been exploring a method of rapidly deflecting a laser beam, nonmechanically, in two dimensions. The technique is based on refraction of the beam by acoustic energy." (Reich and Dorion, 1965, p. 573).

6.2 "Laser holography is finding some practical applications. Technical Operations, Inc., of Burlington, Mass., says it has delivered what is believed to be the first operational holography equipment to Otis Air Force Base in Massachusetts, where it will be used to photograph fog in three dimensions." (Electronics 38, No. 20, 25 (1965).)

6.3 "A work horse of unsuspected power was harnessed in 1960 when the first operating laser was demonstrated at Hughes Research Laboratories." ("The Lavish Laser", 1966, p. 15).

"The first device to be successfully operated was a pulsed ruby laser." (Baker and Rugari, 1966, p. 37). Further reference is to Maiman, 1960.

6.4 "Gas lasers are the most monochromatic, fluorescent crystal lasers are the most powerful, while semiconductor lasers are the smallest, the most efficient and can be directly modulated." (Gordon, 1965, p. 61).

6.5 This area of technological development has already received such a responsive interest, in general, that Lowry-Cocroft Abstracts, Evanston, Illinois, provides a punched card abstracts service in the field of laser developments.

6.6 "The development of the laser as a practical, continuous, coherent light source has created a new display technology, that of the laser-beam display. This type display can be considered to be analogous to well-known electron-beam type displays, e.g., the cathode ray tube and the liquid-light valve. The primary difference is that the electron beam is constrained to a vacuum environment and requires a special screen for the emission or control of light while a laser beam can operate in air and be the source of light directly. . . .

"The major significance of the laser in a display system is that all of the energy is usable since the apparent source of this light is a diffraction-limited point-dipole radiator. Conventional light sources such as tungsten filament or a mercury arc are quite wasteful since light is emitted into a 360-degree solid angle from a relatively large area. When these light sources are used to illuminate the limited aperture of a practical optical system, only a small fraction of the emitted light is used." (Baker and Rugari, 1966, p. 37).

6.7 "Since the polarization of the light can be electro-optically switched in nanoseconds, the inherent speed of the electro-optic effect does not limit the rate of data projection. However, in practice the rate is limited by dissipation in the deflection elements and by the stored-energy requirements of the associated circuitry. Therefore the voltage across the half-wave plate and the loss tangent of the dielectric are important parameters." (Soref and McMahon, 1965, p. 59).

6.8 "Coherent light from lasers will provide a revolutionary increase in the volume of communication that can be sent over a single pathway." (McMains, 1966, p. 28).

"In communications the laser can far surpass conventional facilities. Operating on frequencies many times higher than radio, it can carry many times as much information. In fact, one laser beam could carry thousands of TV signals at once. Experiments now under way with lasers enclosed in large pipes indicate their wide employment for mass communications for the future." ("The Lavish Laser", 1966, p. 16).

"As man goes farther away from the earth in space exploration, laser communication will become more important, because the problems of power supply and background noise besetting conventional microwaves at distances beyond the moon will be minimized. In an example of speed comparison, eight hours were required to transmit the pictures from Mars, but a laser beam could carry even television images across the same distance in a few minutes." ("The Lavish Laser", 1966, p. 16).

"The laser with its extremely narrow beam due to its short wavelength, notwithstanding its high quantum and background noise offers the possibility of surpassing RF techniques in its ability to satisfy deep-space requirements." (Brookner et al., 1967, p. 75).

In terms of immediate practicality, however, experiments in the use of laser techniques for data transmission have been limited to very short distances. For example, "television signals have been transmitted on laser beams for distances of the order of a mile in clear weather . . ." (Gordon, 1965, p. 60), and "the Lincoln Laboratory developed an optical communications system based on an array of multiple semiconductor lasers that propagates pulses through a 1.8-mile path in most weather conditions." (Swanson, 1967, p. 38).

Goettel therefore concludes that "years of continued research remain before an economical laser-transmission system becomes a reality." (Goettel, 1966, p. 193).

6.9 "Scientists from the Honeywell Research Center in Minneapolis say that optical techniques provide a means for increasing information storage density above the levels obtainable with current technology. A memory element under development will permit over two million bits of information to be stored on a surface the size of a dime. The information can be read at the rate of 100 million bits per second using a low power 1-milliwatt laser." (Bus. Automation 13, No. 12, 69 (Dec. 1967).)
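
The quoted figures can be rough-checked with simple arithmetic; the dime diameter used below is an assumption (the source says only "the size of a dime").

```python
import math

# Back-of-envelope check of the quoted Honeywell figures.
dime_diameter_m = 17.9e-3                      # assumed dime diameter
area_in2 = math.pi * (dime_diameter_m / 2) ** 2 / 0.0254 ** 2
bits = 2_000_000
density = bits / area_in2                      # implied bits per square inch
read_time_s = bits / 100e6                     # time to read the whole surface
print(f"~{density:,.0f} bits/in^2, full read in {read_time_s * 1e3:.0f} ms")
```

The implied areal density is on the order of five million bits per square inch, and the whole element could be read in a few tens of milliseconds at the quoted rate.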

6.10 "The laser will transform Raman spectroscopy from a time-consuming tool of limited usefulness to an important analytical technique; for example, the hour-long exposures of Raman spectra on photographic plates are eliminated. Raman spectroscopy with gas-laser beams should have widespread application in analytical chemistry and solid-state physics." (Bloembergen, 1967, p. 85).

"The microscopic electrified fluid streams studied occur at high speed, and are virtually impossible to record with a conventional optical microscope or imaging system. In order to overcome the working-distance and depth-of-field limitations of the classical microscope, a two-step imaging process (holographic photomicroscopy) of high resolution was developed and applied to the study of the electrostatic charging process. In this technique, one first records the optical interference pattern of the 'scene', and then uses this record to reconstruct the original scene. The reconstructed scene can be leisurely examined with conventional optical systems of limited object volume . . .

"The practical consequences of pulsed laser holographic photomicroscopy go beyond the requirements of the present application. Reasonable projection would indicate application to scientific studies that involve moving unpredictable phenomena of either uncertain or changing location. Physical applications include terminal and in-flight ballistics, aerosol-size distributions, cloud physics, studies of sprays, and combustion and rocket-exhaust studies, among others." (Stephens, 1967, p. 26).

"Recently at Boulder, Colo., Michael McClintock of the NBS Institute for Basic Standards used an argon laser as a source to obtain and analyze the Raman and Rayleigh spectra in several transparent liquids . . . His mathematical evaluation of the experimental data related scattered light spectra to viscosity, to molecular rotation and vibration, and to certain molecular concentrations in mixtures of two unassociated liquids. Analysis of the Raman spectrum also provided new data on molecular coupling . . .

"In general, the beam from an argon ion laser was first passed through a dispersing prism to eliminate all but the 4880-angstrom radiation. The light was then examined from various angles by a spectrometer. Photomultiplier tubes served to increase the intensity of the spectral lines so that they could be recorded." ("Laser Applied to Molecular Kinetics Studies", 1968, p. 242).

6.11 "A very special hologram, called a spatial filter, has the capability of comparing two patterns and producing a signal which is a function of the correlation or similarity of the patterns. Experimentally, it has been found that complicated, natural objects with irregular patterns can be recognized with greater confidence than can man-made objects which tend to be geometrically symmetrical. Fingerprints, because of their randomness, appear to be ideal objects for the spatial filtering method of recognition . . .

"The spatial filtering method of fingerprint recognition has several advantages over other methods of recognition.

"Recognition is instantaneous, limited only by the mechanical pattern input mechanism.

"Partial prints can be recognized. As long as the information which is available does correlate, recognition will take place even though one of the two patterns being compared is incomplete. This property is especially advantageous when you are attempting to correlate partial latent prints with complete recorded prints." (Horvath et al., 1967, pp. 485, 488).

6.12 "A laser image processing scanner (LIPS), able for the first time to quantize high resolution photographs for computer manipulation, has been developed by CBS Laboratories . . . and accepted by the Air Force . . . LIPS can simultaneously digitize a developed high-resolution photograph from a negative and produce a much more detailed negative from the computer image adjacent to the original on the same drum. This makes it much easier for a photo interpreter to recognize important details . . .


"The Air Force system uses a commercial helium-neon gas laser to produce black and white images . . . However, color photos could be produced by substituting an argon ion laser system . . .

"Elements of LIPS include a laser light source focused to a five-micron spot size on the negative being digitized, in turn feeding into a linearly scanning microdensitometer and a computer buffer storage.

"On reconstruction of the higher quality negative on the same drum, the same laser is employed in combination with an optical modulator in duplicate scanning . . ." (Electronic News 14, No. 714, 38 (June 23, 1969).)

"When the Air Force permitted CBS Labs to talk about its high-resolution laser-scanning system (part of Compass Link) used to transmit reconnaissance photos from Vietnam to the Pentagon in minutes [Electronics, April 14, p. 56], CBS officials were optimistic about the possibility of broader applications. A step in that direction has been taken with the modification of the laser scanner so that it can convert high-resolution photos for handling by a computer.

"Called LIPS (laser image processing scanner), the system digitizes the image, then feeds the signal through a buffer to an IBM 360/40 computer. The computer processes the picture to emphasize fine details or improve the contrast. The reconstructed image is then read out of the computer onto photographic film. Thus, LIPS enables the photo interpreter to manipulate his picture to bring out any desired detail with a high degree of resolution.

"Routine work. In operation, the interpreter tells the computer what areas he wants emphasized. For example, he could call for a routine that would bring out high-frequency detail. If the finished picture were unsatisfactory he could go to a routine that not only would emphasize high-frequency detail, but also would suppress or clean up large areas of black.

"LIPS uses a sequential scan to attain a resolution of 100 lines per millimeter. It can digitize, or record from digital data, a 1.8-centimeter-square area in 15 minutes; that's at least twice as fast as conventional scanners such as those used on the Ranger moon probes.
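
The throughput implied by these specifications follows directly from the quoted numbers (assuming one sample per resolution element in each direction; the source does not state the sample depth).

```python
# Scan arithmetic implied by the quoted LIPS specifications.
lines_per_mm = 100
side_mm = 18                        # a 1.8-centimeter-square area
samples = (lines_per_mm * side_mm) ** 2
rate = samples / (15 * 60)          # samples per second over 15 minutes
print(f"{samples:,} samples, about {rate:,.0f} samples/s")
```

That is roughly 3.2 million resolution elements digitized at a few thousand samples per second, which puts the quoted "twice as fast as conventional scanners" claim in perspective.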

"CBS says the advantages of LIPS (high resolution and geometric fidelity, high-speed read-write rates, and operation in standard room lighting) can be used by map makers, meteorologists, or news organizations." (Electronics 42, No. 13, 46 (June 23, 1969)).

6.13 "Laser photographs, called holograms, are true three-dimensional representations, and the process of holography not only provides a means for lensless microscopy but may make possible microscopic systems at wavelengths where lenses are not now available." ("The Lavish Laser", 1966, p. 15).

"The original idea of holography and specifically, spatial filtering dates back to 1886 when Ernst Abbe suggested their existence. However, it remained for Dennis Gabor to show in 1951 that a hologram, which has little recognizable information could be 'reconstructed' to a normal recognizable image. Various other workers showed his analysis to be correct. Spatial filtering was investigated at about the same time by Marechal and others primarily as a means of improving photographic images. These pioneers demonstrated that the concepts of holography and spatial filtering would work, but they were handicapped by the lack of a strong source of coherent light. The advent of the laser in 1960 as a source of essentially a single wavelength of light excited new interest in the field of holography. Scientists at General Electric demonstrated the feasibility of using a two-beam holographic spatial filter as a means for recognizing patterns. A. Vander Lugt, at the University of Michigan, also investigated methods by which two-beam spatial filters could be produced." (Horvath et al., 1967, p. 485).

"Laser beams will be used to print the catalogs and newspapers of the future using a new technique developed by Radio Corporation of America. Announcement of the development of the technique that can eliminate the need to print in signatures was made last month by the company. The method uses the intense light produced by the laser to fuse powdered ink spread over the paper to reproduce the original. Excess ink is removed by vacuum.

"The image comes from a photograph of the material to be printed (half-tones, line drawing or newspaper page) on a transparency. The image is transferred with the aid of a laser beam to a hologram or lensless photograph which serves as a permanent plate. A separate hologram is used for each page.

"Dr. Kenneth H. Fishbeck, a technical advisor at the David Sarnoff Research Center, Princeton, N.J., and holder of the patent, said publishers will be able to eliminate signatures since the new process reproduces pages in sequence, from title to index. He claims publishers could almost print on demand." (SPIE Glass 5, No. 1, 12 (June 1969)).

6.14 "Dillon et al. have proposed and operated a limited-population memory using a ferromagnetic garnet and driven by a laser beam." (Kump and Chang, 1966, p. 255; see also Dillon et al., 1964).

6.15 Reimann reports that: "The neuristor laser computer, conceived at RCA, is an 'all-optical' computer in which all information and control signals are in the form of optical energy . . .

"A theoretical study of the neuristor concept in form of Fiberglas lasers concluded that the fundamental requirements of a neuristor line could, at least in principle, be met with lasers . . .

"The main result of the laser neuristor feasibility study was the conclusion that lasers are capable of satisfying all the requirements for digital devices. It was shown that, in addition to the neuristor-type logic, lasers in form of resonators and amplifiers can have input-output characteristics that resemble those of conventional logic circuits such as gates or flipflops." (Reimann, 1965, pp. 250-251).

"Fiber-optic elements, with appropriate concentrations of active emissive ions and passive absorptive ions, are the basic components of this system. The computer is powered by being in a continuous light environment that provides a constant pump power for maintaining an inverted population of the emissive ions. Among the potentially attractive features of such a system are the freedom from power-supply connections for individual circuits, the possibility of transmission of signals without actual connections between certain locations, and a promise of high-speed operation." (Reimann and Kosonocky, 1965, p. 182).

6.16 "The feasibility of machining resistive and capacitive components directly on thin film metallized substrates with a laser has been demonstrated. Tantalum films can be shaped into resistor geometries and trimmed to tolerance by removing metal. These films also can be oxidized to value using the laser beam as the heat source. Resistors can be made with tolerances in value of less than 0.1 per cent. . . .

"Pattern generation by laser machining has been demonstrated on various thin films as well as on electroplated films. Vaporized lines as fine as 0.25 mil are readily attainable in thin films, as are 0.4 mil lines in plated films. Much narrower lines may be obtained under particularly well-controlled conditions. Uniform lines as fine as 1 micron have been scribed in thin films on sufficiently flat substrates. These films have been removed with minimum effect to the substrate surface." (Cohen et al., 1968, p. 403).

"Semiconductor laser digital devices offer an improvement in information processing rates of one to two orders of magnitude over that expected from high-speed integrated transistor circuits. Data processing rates of 10 to 100 gigabits per second may be possible using semiconductor lasers. However, the technology for fabricating low-power laser circuits is still undeveloped and low-temperature operation may be required." (Kosonocky and Comely, 1968, p. 1).

"Laser digital devices may be used for general-purpose logic circuits in very much the same way that transistors are now used, except that all of the processing is done with optical rather than electric signals." (Reimann and Kosonocky, 1965, p. 183).

"Semiconductor current-injection lasers are most attractive for digital devices because of their small size, high pumping efficiency, and high speed of operation." (Reimann and Kosonocky, 1965, p. 194).

"The utility of a laser as a tool for fabricating thin film circuits results primarily from the spectral purity and degree of collimation of the laser light. These characteristics allow the beam to be focused to a very fine and intense spot. The high heat flux which occurs when the light is absorbed by the target material, and the sharp definition and localized nature of the working region allow heating, melting, or vaporizing minute amounts of material, with minimum effect to adjacent material or components." (Cohen et al., 1968, p. 386).

6.17 See Hobbs, 1966, p. 42.

6.18 "A new laser data storage/retrieval system that provides a 1000-times increase in packing density over conventional mag tape, an error rate of 1 × 10⁻⁸ or better, permanent (nonerasable) storage, a transfer rate of 4 megabits/sec., and instantaneous read-while-write verification has been developed by Precision Instrument Co., Palo Alto, Calif.

"A working demonstrator of the 'Unicon' system uses a 1-watt argon gas laser, which makes a hole in the metallic coating of a mylar-base tape wrapped around a drum. The current system, using 5-micron holes, offers a packing density of 13 million bits/sq. in.

"Readout is accomplished by reducing the laser power; beam reflection or non-reflection indicates nonholes or holes. The tape being used on the current system offers storage equivalent to 10 2400-ft. reels of 800 bpi tape. The system can serve on- and off-line, and is capable of recording analog, FM or video data, all of which require high speed." (Datamation 14, No. 4, 17 (Apr. 1968)).
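
Two of the quoted figures can be cross-checked; the assumption of one 8-bit character per bpi position on conventional tape is ours, not the source's.

```python
import math

# Consistency checks on the quoted Unicon figures.
# 1) Capacity of the tape library it is said to replace (assuming one
#    8-bit character per bpi position, the usual 9-track convention).
chars = 10 * 2400 * 12 * 800            # 10 reels x 2400 ft x 12 in/ft x 800 chars/in
bits_replaced = chars * 8

# 2) Bit pitch implied by 13 million bits per square inch.
area_per_bit_m2 = 0.0254 ** 2 / 13e6
pitch_um = math.sqrt(area_per_bit_m2) * 1e6
print(f"~{bits_replaced / 1e9:.1f} Gbit replaced; ~{pitch_um:.1f} um bit pitch")
```

The implied pitch of about 7 microns is consistent with 5-micron holes plus guard space, and the replaced library works out to roughly 1.8 gigabits.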

6.19 "By early 1968, Precision Instrument Co. had developed a massive-scale laser recorder/reader storage system, but the first order for the device was not received until this year. Ed Gray, the chief engineer on the UNICON (Unidensity Coherent Light Recorder/Reproducer) Laser Mass Memory System, said that convincing the first potential customers that they should acquire a $500K to $1 million memory system was not easy, especially when you had to 'tell someone that you were not going to store data with magnetics like God intended.' Now that the first order has been placed, by Pan American Petroleum Corp. of Tulsa, Oklahoma, Mr. Gray feels that the systems will move a little faster in the marketplace.

"The $740K system placed with Pan American is to be installed with all requisite software about March of 1970. Four other potential customers, including some government agencies and a private credit-reporting firm, are also expected to place orders." (Datamation 15, No. 3, 116 (Mar. 1969).)

6.20 "The National Archives and Records Service has begun a cost-effectiveness study of archival storage systems in an effort to shrink its mag tape library, which contains one million plus reels. The study, due for completion next month, is using the capabilities of Precision Instruments' Unicon device as a model. The Unicon employs a laser-etched aluminum strip with a 30-year shelf life." (Datamation 14, No. 10, 171 (Oct. 1968)).

6.21 "Honeywell scientists are investigating a method that uses a laser for mass storage and retrieval of information in computer memory. Although emphasizing that development is still in the research stage and may be several years away from practical application, the researchers believe the discovery is a possible key to inexpensive mass storage of data for the enormous computer networks envisioned for the 1970's." (Commun. ACM 11, 66 (Jan. 1968)).

6.22 "The system . . . uses a modulated laser beam to inscribe data onto photosensitive discs . . .

Each disc contains 3,100 tracks with a capacity of 67,207 bits per track, including error correction bits. The storage unit holds 2,600 discs, stored on edge, in four [or eight] trays . . . two auxiliary disc banks can be added to achieve the maximum memory capacity . . . of 150 billion characters. The reader reaches any piece of information on the 3,100 tracks (per disc) within 15 milliseconds . . ." (Business Automation 12, No. 6, 84 (1965)).
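
The quoted track and disc counts multiply out as follows; treating the maximum configuration as base unit plus two auxiliary banks, and counting 8 raw bits per character, is our reading of the passage, not something the source states.

```python
# Capacity arithmetic from the quoted figures.
bits_per_disc = 3_100 * 67_207          # tracks x bits per track (incl. ECC bits)
base_unit_bits = bits_per_disc * 2_600  # 2,600 discs in the base unit
max_bits = base_unit_bits * 3           # assumed: base unit + two auxiliary banks
print(f"{bits_per_disc:,} bits/disc; "
      f"{base_unit_bits / 1e9:,.0f} billion bits in the base unit; "
      f"~{max_bits / 8 / 1e9:,.0f} billion raw 8-bit characters maximum")
```

The raw maximum of roughly 200 billion 8-bit characters sits plausibly above the quoted 150-billion-character figure once error-correction overhead is deducted.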

A CW helium-neon laser is used to "achieve real-time writing of information on the system's photosensitive memory discs." (Connolly, 1965, p. 4).

6.23 "A method for producing erasable holograms may enable an optical memory to store 100 million bits in a film one inch square.

"The memory could be read out, erased and reused repeatedly, according to Dr. William Webster, vice president in charge of RCA Laboratories.

"Information can be written into the magnetic film in 10 billionths of a second, and erased in 20 millionths of a second. Laser light split into two beams, one going directly to the film and the other going to the information bit pattern, interferes constructively to produce heat and consequently a realignment of atoms.

"Where the two beams interfere destructively, nothing happens." (Data Proc. Mag. p. 21 (Sept. 1969)).

6.24 In the IBM-laser system developed for Army Electronics Command and installed at Fort Monmouth, it is noted that: "Through employment of a deflection technique, the shaft of light can be focused on 131,072 distinct points within a space smaller than a match head. . . . To provide a readout in printed form, the laser beam can scan through a mask inscribed with the alphabet and other symbols and through the action of light-bending (deflection) crystals turn out the final product on photo-sensitive paper." (Commun. ACM 9, 467 (1966)).
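
The point count quoted above is a power of two, which is what a binary digital light deflector addresses; the stage interpretation below (each polarization-switch stage doubling the number of positions, e.g. a 256 × 512 grid) is our gloss, not detail from the source.

```python
import math

# 131,072 distinct points = 2**17, i.e. 17 binary deflection stages.
points = 131_072
stages = int(math.log2(points))
print(f"{points:,} points = 2**{stages}")
```

Each added electro-optic stage doubles the addressable positions, so the quoted figure corresponds to exactly 17 stages.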

"At International Business Machines Corp. . . . one method, devised for the Army Electronics Command, Fort Monmouth, N.J., makes use of a high-speed switching arrangement with electronically controlled crystals.


"Such a system could be used with a matrix containing alphabetical or other symbols. The laser would be used as a print-out device, projecting the various symbols onto a recording medium.

"The Air Force Systems Command at Wright-Patterson AFB is interested in IBM's work on a variable frequency laser which might be used in conjunction with a color-sensitive computer. This type of setup is said to have a potential capacity of a hundred million bits per square inch of photographic material." (Serchuk, 1967, p. 34).

6.25 "Instead of recording a bit as a hole in a card, it is recorded on the file as a grating pattern of a certain spacing . . . A number of different grating patterns with different spacings can be superposed and when light passes through, each grating bends the light its characteristic amount, with the result that the pattern decodes itself . . . The new system allows for larger areas on the film to be used and lessens dust sensitivity and the possibility of dirt and scratch hazards." (Commun. ACM 9, No. 6, 467 (June 1966).)

6.26 "Recently Longuet-Higgins modeled a temporal analogue of the property of holograms that allows a complete image to be constructed from only a portion of the hologram. In the present paper a more general analogue is discussed and two two-step transformations that imitate the recording-reconstruction sequence in holography are presented. The first transformation models the recall of an entire sequence from a fragment while the second is more like human memory in that it provides recall of only the part of the sequence that follows the keying fragment." (Gabor, 1969, abstract, p. 156).

6.27 "A new recording mechanism . . . consists of the switching of magnetization under the influence of a stress resulting from a heat gradient introduced by a very narrow light or electron beam. The mechanism is assumed to be magnetostriction with a rotation of the anisotropy. The model presented and the criteria for recording are supported, at least in part, by experimental observations." (Kump and Chang, 1966, p. 259).

6.28 "In attempts to provide computers with previously unavailable amounts of archival (read-only) storage, various techniques involving optical and film technology have been employed to utilize the high information capacity of film (approximately 10⁶ bits/in.²) and the high resolution and precision of lasers and electron beams. The trillion-bit IBM 1350 storage device, an offshoot of the 'Cypress' system, . . . uses 35 mm × 70 mm silver halide film 'chips.'

"A total of 4.5 million bits are prerecorded on each chip by an electron beam. For readout, a plastic cell containing 32 film chips is transported to a selector, which picks the proper chip from among the 32; average access time to any of the 10¹² bits is 6 seconds. After a chip is positioned, information is read using a flying-spot CRT scanner. Two IBM 1350 units are scheduled for mid-1967 delivery to the Atomic Energy Commission at Livermore and at Berkeley for use with bubble chamber data. Other techniques of reading and writing with electron beams are explained by Herbert." (Van Dam and Michener, 1967, p. 205).

6.29 "Results of basic theoretical studies conducted at the NCR research laboratories have indicated that CW lasers of relatively low power should be capable of permitting very high resolution real-time thermal recording on a variety of materials in the form of thin films on suitable substrates. Subsequent laboratory studies have shown that such thermal recording is indeed possible. This recording technique has been termed heat-mode recording." (Carlson and Ives, 1968, p. 1).

"The recording medium is coated on a 5- by 7-inch glass plate, a quarter of an inch thick. The plate carrier mechanism is capable of stepping in the horizontal and vertical directions to form matrices of up to 5,000 images at an overall reduction of 150 to 1." (Carlson and Ives, 1968, p. 5).

"The results of the studies described in this paper have established laser heat-mode recording as a very high resolution real-time recording process capable of using a wide variety of thin film recording media. The best results were obtained with images which are compatible with microscope-type optics. The signals are in electronic form prior to recording and can receive extensive processing before the recording process occurs. In fact, the recordings can be completely generated from electronic input. For example, Figure 6 shows a section of a heat-mode microimage with electronically generated characters, produced by the Engineering Department in our Division. The overall image is compatible with the 150-to-1 PCMI system (less than 3 mm field), and consists of 73 lines of characters, 128 characters per line. Although this image was recorded in 1.6 seconds, faster recordings are anticipated. A description of this work will be published in the near future." (Carlson and Ives, 1968, p. 7).

6.30 "Another scheme for storing digital information optically is the UNICON system, under development at Precision Instrument Company. This system uses a laser to write 0.7-micron-diameter holes in the pigment of a film. Information is organized in records of at most a million bits; each record is in a 4-micron track extending about a meter along the film. Individual tracks are slanted slightly so that they extend diagonally across the film. (The amount of slant and the width of the film determine the length of the records.) Each record is identified by information stored next to the beginning of that record, in an additional track at the edge of the film. Readout of a particular record involves scanning the identifier track for the proper code and then scanning the track with a laser weaker than that used for writing. It is predicted, on the basis of an experimental working model, that one UNICON device with 35 mm film could store a trillion bits on 528 feet of film, with an average access time to a record of 13 seconds." (Van Dam and Michener, 1967, p. 205). (See also note 6.14).

6.31 "Considerable experimentation in modulation and transmission is needed before optical communication by laser can be said to be really useful except in very specialized cases." (Bloom, 1966, p. 1274).

6.32 "At first sight a laser communication system with its extremely wide information carrying capacity would appear to be a natural choice for an interplanetary communication system. However, among other things, the acquisition and tracking problems are considered to be so severe that such a system is not thought to be realistic at the present time. This may be indicative of an information technology utilization gap." (Asendorf, 1968, p. 224).

6.33 "In general, earthbound laser-ranging systems are limited by local atmospheric conditions. A typical value of range routinely measured is 20 km or less." (Vollmer, 1967, p. 68).

"Earthbound applications of coherent optical radiation for communications appear to be severely limited for two reasons. The first, and most significant, is the effect of atmospheric turbulence on the coherence of the radiation. The second is the effect of small vibrations on the coherent detection efficiency and signal-to-noise ratio. This can be minimized by careful design, but the first factor is beyond the designer's control. Although coherent optical detection has been demonstrated over some useful paths, the vulnerability of the link to atmospheric variations makes practical application somewhat doubtful." (Cooper, 1966, p. 88).

6.34 "Is the enormous increase in bandwidth offered by light as a carrier frequency in communications needed? For transmission in space the acquisition and aiming of the light beams pose formidable problems. In the atmosphere, rain, smog, fog, haze, snow, etc., make light a poor competitor of microwaves. Can a system of enclosed tubes with controlled atmosphere and light repeater stations be built on a technologically sound and economically feasible basis?" (Bloembergen, 1967, p. 86).

6.35 See "Information Acquisition, Sensing, and Input", Sect. 3.1.1. Some additional references are as follows:

"Since the first laser was demonstrated in 1960, considerable interest has developed in its possibilities for use in communication systems. The basic sources of this interest are the coherent nature of the radiation obtained as compared with all previously known extended sources of optical radiation, and the laser's short wavelength. This latter characteristic provides the potential ability to achieve bandwidths, or information capacities, that are orders of magnitude greater than anything obtained heretofore. A more realistic advantage, in terms of presently available information sources, results from the combination of high coherence and short wavelength. It is the ability to generate a highly collimated beam (limited by diffraction phenomena), which leads to the ability to achieve communications over great distances. Of equal importance is the fact that with a coherent signal, coherent detection of the information can be obtained with greatly improved immunity to natural incoherent noise sources such as the sun." (Cooper, 1966, p. 83).

"The first enthusiastic suggestions that laser technology potentially provides many orders of magnitude more communication capability than RF technology, and that it might, therefore, offer the only solution to the problem of general wideband communications with deep-space probes, needs to be more carefully assessed." (Dimeff et al., 1967, p. 104).

"For deep-space, wide-band communication . . . another factor may be . . . important, namely, the size of the transmitting aperture. A very large aperture, as would undoubtedly be required by a microwave channel, is likely to prove an obstruction to the sensors of the aircraft and will, therefore, reduce the time available for collecting information or transmitting it. In this respect, the laser has an important advantage over microwave." (Brookner et al., 1967, p. 75).

"Several optical links which use GaAs injection lasers as transmitters have been constructed. One of them has been demonstrated to be capable of transmitting 24 voice channels over 13 km." (Nathan, 1966, p. 1287).

6.36 "If we classify our communication requirements on the basis of range, we find that lasers can be helpful at the range extremities, that is, for distances less than about 15 km and for those greater than 80 million km." (Vollmer, 1967, p. 66).

6.37 "An electronic system that transmits military reconnaissance pictures from Saigon to Washington in minutes via satellite may soon enable news media to dispatch extremely high-quality photographs and type around the world for instant reproduction.

"Potential benefits are also foreseen for medicine, earth resources surveys and industry.

"The high-performance system was developed for the U.S. Air Force Electronic Systems Division by CBS Laboratories, a division of Columbia Broadcasting System, Inc. It combines electro-optical and photographic techniques to relay high-resolution aerial photographs of ground activity in Vietnam to the President and Pentagon officials. Pictures seen by the President are many times sharper than the best pictures shown on home television sets.

"Within minutes after photographs have been taken in Vietnam, they are read out by the system's electronic scanning device and converted to video signals. The signals are then fed to a communication link, which relays them over the U.S. Defense Satellite Network to Washington. A similar receiving and recording station there reconstructs the photographs to their original form for immediate inspection." (Spie Glass 5, No. 2, 9 (Aug. 1969).)

"According to Air Force officials, pictures produced by the CBS Laboratories Image Scanning and Recording System contain the highest resolution ever reported in the transmission of aerial reconnaissance photographs. High-altitude photographs processed by the system show such detailed information as identification numbers of ships in port, planes on runways and troop movements . . .

"In operation, the lightweight system uses a precisely controlled laser beam to scan rapidly across photographic film. The laser converts each picture frame to an electronic video signal. The signal is then fed to a transmitting device for satellite relay, said John Manniello, CBS Laboratories Vice President for Government Operations, who conceived the system application. Once the signals contact the satellite, they are flashed to a receiving station in Washington within seconds, he added.

"The receiving station, which has related photo-scanning, recording and developing equipment, reconstructs the video signal to the original film image and produces high-quality photographic prints.

"Because of the laser-scanning technique involved, no photographic resolution is lost between recording and transmission from the original film taken in Vietnam." (Spie Glass 5, No. 2, 9 (Aug. 1969).)

6.38 "Superficially, it appears attractive to have fast switching, high storage density, direct visual display. Such developments would depend heavily on the availability of cheap, small, high-quality semiconductor lasers. If these were available, the entire organization of computers using them would probably be different." (Bloembergen, 1967, p. 86).

6.39 "The power and efficiency available from lasers at the desired wave lengths (particularly ultraviolet) must be improved, and adequate laser deflection techniques must be developed before laser displays will be feasible for widespread use." (Hobbs, 1966, p. 1882).

"Since lasers don't require vacuums, there is a significant convenience relative to electron beams. But there is a severe penalty compared to electron beams due to problems in deflecting, modulating, and focusing." (Gross, 1967, pp. 7-8).

"Lasers offer great promise for future implementation of display systems, particularly large-screen displays. The ability of a laser to deliver highly concentrated light energy in a coherent beam of very small spot size is well known. Several different approaches to laser displays are being investigated. Since they all require some means for deflecting and modulating the laser beam, considerable development efforts are being expended on deflection techniques. Digital deflection of lasers by crystals has been satisfactorily demonstrated for 256 positions in each direction, but at least 1024 positions in each direction are needed for a practical large-screen display system." (Hobbs, 1966, p. 1881).

"The laser is an efficient light source, and its output can be focussed to small sizes and high power densities. There is confidence that laboratory means for modulating lasers and deflecting their beams will be found practical." (Bonn, 1966, p. 1869).

"More rapid progress would be made in utilizing laser recording if better means of deflecting laser beams at the desirable speeds and resolutions existed or were clearly foreseeable." (Smith, 1966, p. 1297).

"An experimental device that can switch the position of a light beam more than a thousand times faster than the blink of an eye could become an important part of computer memories of the future. The device, a digital light deflector, was developed at the IBM Systems Development Div. laboratory in San Jose, Calif.

"The experimental deflector changes the location of a beam in 35 millionths of a second by a unique method of moving a glass plate in and out of contact with a prism.

"High-speed deflectors of this type are potentially useful in future optical memories to randomly position a laser beam for data recording and reading. Such beam-addressable memories are expected to be many times faster than present magnetic storage methods because of the relative speed of relocating a light beam in comparison to moving a bulky recording head." (Computers & Automation 18, No. 5, 68 (May 1969).)
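The "more than a thousand times faster than the blink of an eye" claim in the quotation above can be checked with a line of arithmetic. The 35-microsecond switching time is quoted; the blink duration used for comparison is an assumption on our part (a blink is commonly taken to last on the order of 100 ms or more), not a figure from the source.

```python
# Sanity check of the quoted speed comparison for the IBM digital light deflector.
# deflect_time_s is quoted in the source; blink_time_s is an assumed figure.

deflect_time_s = 35e-6   # "35 millionths of a second" (quoted)
blink_time_s = 0.1       # assumed lower bound for an eye blink (~100 ms)

speed_ratio = blink_time_s / deflect_time_s
print(f"deflector is roughly {speed_ratio:.0f} times faster than a 100 ms blink")
```

Even with this conservative blink duration the ratio comes out well above 1,000, consistent with the quoted claim.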

6.40 "Another attractive approach is the use of a laser beam to write directly on a large luminescent screen. This is somewhat equivalent to an 'outdoor' cathode-ray tube in which the laser beam replaces the electron beam and the luminescent screen replaces the phosphor face plate of the tube. It offers advantages over a CRT in that a vacuum is not required and a large-screen image can be generated directly. One feasibility system has been developed using a 50 milliwatt neon-helium gas laser, a KDP crystal modulator, a piezoelectric crystal driven horizontal deflecting mirror, and a galvanometer driven vertical deflecting mirror to provide a television raster scan image projected onto a 40 inch screen. Brightness of 50 foot-lamberts, contrast ratio of 100 to 1 (dark environment), resolution of 1,000 to 2,000 lines, and update time of 33 milliseconds are anticipated for direct view laser systems." (Hobbs, 1966, p. 1882).

6.41 "Electron-beam devices, including those which use photographic emulsions and thermoplastic films, operate in a vacuum, which is a nuisance." (Bonn, 1966, p. 1869).

6.42 "By definition, phototropy is the photochemical phenomenon of changing some physical property (color) on exposure to electromagnetic radiation (light) and returning to its original (colorless) state after removal of the activating source and under a de-activation condition and/or at a later time." ("Investigation of Inorganic Phototropic Materials . . .", 1962, p. 1).

"The property of certain dyes and other chemical compounds to exhibit a reversible change in their absorption spectrum upon irradiation with specific wavelengths of light has been termed phototropism, or photochromism. The emphasis in this definition is on reversibility, because, upon removal of the activating radiation, the systems must revert to their original states to be considered photochromic." (Reich and Dorion, 1965, p. 567).

"By definition, photochromic compounds exhibit reversible spectral absorption effects (color changes) resulting from exposure to radiant energy in the visible, or near visible, portions of the spectrum. For example, one class of photochromic materials consists of light-sensitive organic dyes. NCR photochromic coatings consist of a molecular dispersion of these dyes in a suitable coating material. Photochromic coatings are similar to photographic emulsions in appearance and with respect to certain other properties. Coatings can be made to retain two-dimensional patterns or images which are optically transferred to their surface." (Hanlon et al., 1965, p. 7).

"Photochromic film, a reusable UV-sensitive recording media, has progressed to the point where prototype equipment is being designed." (Kesselman, 1967, p. 167).

6.43 "Photochromic coatings exhibit excellent resolution capabilities. In addition, both positive-to-negative and direct-positive transfers are possible . . . The coatings are completely grain-free, have low gamma (excellent gray scale characteristics), and exhibit inherently high resolution . . . Further, because the coatings are reversible, the information stored can be optically erased and rewritten repeatedly." (Hanlon et al., 1965, p. 7).

"Photochromism may be defined as a change in color of a material with radiation (usually near ultraviolet) and the subsequent return to the original color after storage in the dark. Reversible photochromism is a special case of this phenomenon in which a material can be reversibly switched by radiation between two colored states. Photochromic compounds may be valuable for protection from radiation; reversibly photochromic materials are potentially valuable for data storage and display applications." (Abstract, talk on photochromic materials for data storage and display, by U. L. Hart and R. V. Andes, UNIVAC Defense Systems Division, at an ONR Data Processing Seminar, May 4, 1966. See also Hart, 1966).

6.44 "Most of the systems so far reported are only partially, or with difficulty, reversible, or are subject to fatigue, a change in behavior either with use or with time in storage." (Smith, 1966, p. 40).

"Organic photochromic materials fatigue with use." (Bonn, 1966, p. 1869).

6.45 "Photochromic films permit the storage of images containing a wide contrast of gray scale because they are inherently low gamma and grain-free." (Tauber and Myers, 1962, p. 409).

6.46 "It is recorded that Alexander the Great discovered a substance, whose composition has been lost in the obscurity of antiquity, that would darken when sunlight shone upon it. He dipped a narrow strip torn from the edge of his tunic into a solution of the material and wore this strip wrapped about his left wrist. Many of his soldiers did the same. By observing the changes of color during the day, they could tell the approximate hour. This became known as Alexander's ragtime band. (I am sorry that I cannot identify, and hence cannot give proper credit to, the author of this delightful footnote to history.)" (Smith, 1966, p. 39).

6.47 "1. Photochromic films provide very high resolution with no grain.

"2. Photochromic films permit the storage of images containing a wide contrast of gray scale because they are inherently low gamma and grain-free.

"3. Photochromic films provide immediate visibility of the image upon exposure. No development process is required.

"4. Photochromic films provide both erasing and rewriting functions. This permits the powerful processes of editing, updating, inspection, and error correction to be incorporated into systems.

"5. The PCMI process incorporates the ability to effect a bulk-transfer read-out of micro-images at the 200:1 reduction level by contact printing.

"6. Use of high-resolution silver halide films provides both permanency for the storage of micro-images and economical dissemination of duplicates.

"7. The very high density of 200:1 micro-images offers the possibility of using some form of manual retrieval techniques for many applications. This eliminates the normal requirement in systems of this size for expensive and complex random access hardware." (Tauber and Myers, 1962, p. 266).

In the photochromic micro-image (PCMI) microform process developed by the National Cash Register Company there have been achieved "linear reductions from 100-to-1 to greater than 200-to-1, representing area reductions from 10,000-to-1 to greater than 40,000-to-1, [which] have been successfully demonstrated by using a variety of image formats, such as printed materials, photographs, drawings, and even fingerprints." (Hanlon et al., 1965, p. 1).
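The relationship between the quoted linear and area reduction ratios is simple geometry: reducing both dimensions of an image by a factor n reduces its area by n squared. A one-line check of the quoted figures:

```python
# Area reduction is the square of the linear reduction:
# a 100-to-1 linear reduction shrinks both width and height by 100,
# so the area shrinks by 100 * 100 = 10,000.

for linear in (100, 200):
    area = linear ** 2
    print(f"{linear}-to-1 linear reduction -> {area:,}-to-1 area reduction")
```

This reproduces the 10,000-to-1 and 40,000-to-1 area figures quoted from Hanlon et al.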

"NCR has developed a number of research prototype readers for viewing PCMI transparencies . . . [including] a miniaturized microimage reader . . . which was designed specifically for possible use aboard a manned space vehicle. The reader would have a self-contained, fixed reference file of up to 50,000 pages of information, such as navigational charts, planetary and space data, and checkout, maintenance, and emergency procedures." (Hanlon et al., 1965, p. 13).

"Current design emphasis by NCR has been toward the development of low cost PCMI readers for commercial applications." (Hanlon et al., 1965, p. 19).

6.48 "The information glut threatening to swamp the engineer and the scientist is being eased by a British organization called Technical Information on Microfilm. The medium of the 'message' is the National Cash Register Company's PCMI process. This makes possible the storage of over 3000 printed pages of information on a single 4-by-6-inch transparency. The system used by TIM enables the engineer to locate the data he wants in a matter of seconds. He simply selects the proper transparency and immediately locates the appropriate page images with an NCR reader which displays the selected pages on an illuminated viewing screen. TIM points out that one of the most valuable sources of information to engineers and scientists is manufacturers' literature. The problem has been that this is produced in an extraordinary variety of forms. These are difficult to catalogue comprehensively and they also create an enormous bulk. The NCR-developed PCMI technology involves a photochromic coating which produces an image that is virtually grain-free. The process permits a microscopic-size reduction which is not practical with conventional microfilm processes. NCR is producing the transparencies for TIM in its Dayton, Ohio processing center from 35-millimeter microfilm supplied by the British firm. All data is updated every six months." (BEMA News Bull., Dec. 9, 1968, p. 8).

6.49 "Information stored on photochromic coatings is semipermanent . . . This is a result of the reversible nature of the photochromic coating. The life of the photochromic micro-image is dependent upon the ambient temperature of the coating. At room temperature, image life is measured in hours, but as the temperature is lowered, life can be extended very rapidly to months, and even years." (Hanlon et al., 1965, p. 8).

6.50 "The temperature-dependent decay of image life obviously prohibits the use of photochromic micro-images in their original form for archival storage. To overcome this problem, means have been developed for contact-printing the photochromic micro-images to a high-resolution photographic emulsion, thereby producing permanent micro-images." (Hanlon et al., 1965, pp. 8-9).

"The entire contents of the photochromic micro-image plate are then transferred (as micro-images) in one step, by contact-printing onto a high-resolution silver halide plate . . . Micro-image dissemination (duplicate) films are prepared in a similar manner, using the silver masters to contact-print onto high-resolution silver halide film." (Hanlon et al., 1965, p. 9).

6.51 Further, "a more realistic assessment . . . so that spillover, halation, and registration restrictions would not be impossibly severe, still results in a contiguous bit density of 10Vcm2." (Reich and Dorion, 1965, p. 572).

6.52 "Transparent silicate glass containing silver halide particles darkens when exposed to visible light, and is restored to its original transparency when the light source is removed. These glasses have been suggested for self-erasing memory displays, readout displays for air traffic controls, and optical transmission systems. . . .

"Photochromic glass appears to be unique among other similar materials because of its non-fatiguing characteristics. No significant changes in photochromic behavior have resulted from cycling samples with an artificial 3600 Å black light source up to 30,000 cycles. There were also no apparent solarization effects causing changes in darkening or fading rates after accelerated UV exposure equivalent to 20,000 hours of noon-day sunshine." (Justice and Leibold, 1965, p. 28).

"Another potentially important application of photochromic materials is in the display of information. Data can be recorded in photochromic glass in two ways: by darkening the glass with short-wavelength light in the desired pattern; or by uniformly darkening the glass and bleaching it, in the desired pattern, with longer wavelength light." (Smith, 1966, p. 45).

6.53 "To produce a display, patterns of varying optical density are written on photochromic film with a deflected ultraviolet light beam. The film so exposed forms the 'object' in a projection system. Visible light is projected through the film onto a screen. This display has mechanical simplicity, controllable persistence, and a brightness comparable to conventional film-projection displays." (Soref and McMahon, 1965, p. 62).

6.54 "For dynamic applications such as target tracking, this technique not only permits a real-time target track, but also provides target track history in the form of a trace with 'intensity' decreasing with time. The time period covered by the visible target track history is a function of the photochromic material. At the present time, the speed of photochromic materials limits the character generation rate to less than 100 characters per second. Successful development of faster photochromic materials will provide an attractive electro-optical dynamic large-screen display with no mechanically moving parts." (Hobbs, 1966, p. 1879).

6.55 "One of the technological trends which will give us mass memories at a viable price is photochromic microimagery. Photochromic techniques by which as many as 2,000,000 words can be stored on a film transparency only 4 inches by 6 inches can now be used to store a pattern of bits instead of images of pictorial or alphabetical information. Photochromic high-resolution films coupled with proper light sources and optical systems can provide the storage of millions of bits to the square inch. A micro-holographic indexing system used with such storage devices may revolutionise data storage and retrieval." ("R and D for Tomorrow's Computers," 1969, p. 53).
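The quoted figures of 2,000,000 words on a 4-by-6-inch transparency and "millions of bits to the square inch" are mutually consistent, as a rough calculation shows. The word length used here is an assumption (36 bits, typical of large machines of that era), not a figure from the source.

```python
# Rough density implied by the quoted 6.55 figures.
# bits_per_word is an assumed value; everything else is quoted.

words = 2_000_000
bits_per_word = 36           # assumed 36-bit word, not stated in the source
area_in2 = 4 * 6             # 4-by-6-inch transparency

bits_per_in2 = words * bits_per_word / area_in2
print(f"roughly {bits_per_in2:,.0f} bits per square inch")
```

Under this assumption the density comes out at about 3 million bits per square inch, in line with the quotation's "millions of bits to the square inch."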

6.56 "The breadth of the sensitivity characteristics of the photochromic films in conjunction with the width of the spectral characteristics of the available phosphors present a potential systems designer with a choice of a number of component parts . . . Future improvements in CRT-photochromic film display systems are dependent upon the capabilities of each of the components. The basic parameters which enter into the cathode ray tube efficiency are the fiber optic plate and phosphor. An increase in the fiber optic efficiency is doubtful except through the use of higher numerical aperture fibers. Increasing the numerical aperture has the disadvantage of requiring a higher degree of control on the film-CRT gap. An improvement in basic phosphor efficiency is difficult to foresee although several military agencies are now or will be sponsoring programs to achieve this goal.

"An advance of the state-of-the-art of phosphor technology should be possible by a factor of 4, but probably not beyond. By careful optimization of the phosphor deposition with respect to particular applications some improvement is possible. At the same time an increase in the efficiency of the photochromic film by a factor of 2 is theoretically possible. Of more importance to the system designer is the understanding and optimization of writing and rewriting rates as they affect phosphor efficiency and life and in the matching of the CRT with the photochromic film." (Dorion et al., 1966, p. 58).

6.57 "Wavefront reconstruction was invented by Gabor and expounded by him in a series of classic papers [1948-1951]." (Armstrong, 1965, p. 171).

"The wavefront reconstruction method of imageformation was first announced by Gabor in 1948."

6.58 Stroke gives a derivation of the term: "Hence, the name 'hologram' from the Greek roots for 'whole' and 'writing'." (Stroke, 1965, p. 54). And also defines it: "A hologram is therefore an interference pattern between a reference wave and the waves scattered by the object being recorded." (Stroke, 1965, p. 53).

6.59 See also the following: "Arbitrary objects . . . are illuminated by parallel laser light. In the general case, the light reflected by these objects will be diffuse and the reflected wavefronts will proceed to interfere in the photosensitive medium where the interference pattern can be recorded. After the photosensitive medium has been exposed and processed it is called the hologram, which may be defined as the recorded interference of two or more coherent wavefronts. When the hologram is illuminated by one of the original wavefronts used to form it, the remaining wavefronts are reconstructed . . . Observation of these reconstructed wavefronts is nearly equivalent to observing the objects from which they were originally derived." (Collier, 1966, p. 67).

"An optical hologram is a two-dimensional photographic plate, which preserves information about the wavefront of coherent light which is diffracted from an object and is incident upon the plate. A properly illuminated hologram yields a three-dimensional wavefront identical to that from the original object, and thus the observed image is an exact reconstruction of the object. The observed image has all of the usual optical properties associated with real three-dimensional objects; e.g., parallax and perspective." (Lesem et al., 1967, p. 41).

6.60 "Holography is the science of producing images by wavefront reconstruction. In general no lenses are involved. The reconstructed image may be either magnified or demagnified compared to the object. Three-dimensional objects can be reconstructed as three-dimensional images." (Armstrong, 1965, p. 171).

6.61 "Are Holograms Already Outdated? Holography is one of the most exciting developments of today's technology. Holograms make use of a high-energy laser beam to store or display three-dimensional images for such applications as read-only storage; packing densities and device speeds are extremely impressive. However, at today's pace of innovation, holography may be outmoded before it approaches being practical. One of the latest competitors for 3-D display, storage, and wave conversion applications is the kinoform, a new wavefront reconstruction device which also projects a 3-D image, but requires one-fourth of the computer time to generate and creates images roughly three times as bright.

"A computer program is used to produce a coded description of light being scattered from a particular object. The resultant computations are used to produce a 32-grey-level plot which is photoreduced and bleached. Then, when subjected to even a very small light source, such as the girl's earring in the photo above, the 3-D image is formed. A kinoform image can be produced of any object which can be computer-described. Examples might include proposed buildings, auto designs, relief maps, or two-dimensional alphanumeric data." (Datamation 15, No. 5, 131 (May 1969)).

6.62 "The kinoform is a new, computer-generated, wavefront reconstruction device which, like the hologram, provides the display of a three-dimensional image. In contrast, however, the illuminated kinoform yields a single diffraction order and, ideally, all the incident light is used to reconstruct this one image. Similarly, all the spatial frequency content or bandwidth of the device is available for the single image. Computationally, kinoform construction is faster than hologram construction because reference beam and image separation calculations are unnecessary.

"A kinoform operates only on the phase of an incident wave, being based on the assumption that only the phase information in a scattered wavefront is required for the construction of an image of the scattering object. The amplitude of the wavefront in the kinoform plane is assumed constant. The kinoform may therefore be thought of as a complex lens which transforms the known wavefront incident on it into the wavefront needed to form the desired image. Although it was first conceived as an optical focusing element, the kinoform can be used as a focusing element for any physical waveform, e.g., ultrasound or microwaves." (Lesem et al., 1969, p. 150).

6.63 "A new hologram made at Bell Telephone Laboratories now allows the viewer to see a 3D image rotate through a full 360 degrees as he moves his head from side to side . . . To make a flat hologram with a 360-degree view, vertical strips of the photographic plate are exposed sequentially from left to right across the plate. A narrow slit in a mask in front of the plate allows only one strip to be exposed at a time, each strip becoming a complete hologram of one view of the object." (Data Proc. Mag. 10, No. 4, 16 (Apr. 1968)).

6.64 "Holography provides an alternative description of pictures, which might be more amenable to bandwidth compression. To investigate this possibility, it is desirable to measure various statistics of the hologram, and to try various operations on it to see what their effects would be on the reconstructed pictures. . . . Holography and other coherent optical processing . . . techniques have made possible relatively simple ways of obtaining the Fourier transforms of two-dimensional functions and operating on them in the frequency domain." (Quarterly Progress Report No. 81, Research Laboratory for Electronics, M.I.T., 199 (1966)).

6.65 "Gabor and others have proposed the use of the wavefront reconstruction method to produce a highly magnified image, using either a change in wavelength between recording of the hologram and its reconstruction, or by using diverging light for one or both steps of the process. The two-beam process is readily amenable to such magnification . . ." (Leith et al., 1965, p. 155).

6.66 "Color reconstructions should be attainable from black and white holograms if suitable temporal coherence conditions are ensured." (Stroke, 1965, p. 60).

"Holography is another field for which the laser has opened many possibilities. Perhaps it will find useful applications in pattern recognition and in storage of three-dimensional information as a Fourier transform. . . .

"Three-dimensional displays of airfield approaches in the cockpit of a jet liner with the correct viewing angle from the position of the aircraft would be a more interesting application [of laser-holographic recordings]." (Bloembergen, 1967, p. 86).

"We read about images having three-dimensional properties, magnification obtained by reconstructing with a wavelength greater than that used in forming the hologram, diffuse holograms which, even when broken, produce whole images, multi-color images obtained from emulsions which normally produce only black and whites." (Collier, 1966, p. 67).

"The recording of surface deformations in engineering components demonstrated here shows how these techniques may be applied at low cost and in a short time. For teaching purposes it has been shown that interference holography of the distortions of a razor blade can be demonstrated adequately to a large group of people in only a few minutes." (Bennett and Gates, 1969, p. 1235).

"With practical applications for holograms still in the few-and-far-between stage, the Office of Naval Research and IBM believe they have a holographic application that is both practical and unique: in a head-up, all-weather landing system.

"The system, now at the laboratory model stage, employs a hologram of an aircraft carrier. The hologram is picked up by an infrared vidicon and projected on a crt cockpit display . . .

"The achievement is one of application in which a two-dimensional representation with the so-called six degrees of freedom encountered in a carrier landing, and full ranging capability, is produced without employing a computer. The demonstration model simulates an approach window two miles wide and a half-mile high and offers a 3.5-degree glide slope. The six degrees (glide-slope deviation, localizer deviation, depression angle, bearing angle, roll, and slant angle) are achieved mechanically, electronically, and optically. For example, roll is achieved as the vidicon itself is rolled; glide-slope deviation is simulated by manipulating the hologram. In the model, the generated image allows a view which includes magnification of the holographic image of the carrier up to 16-to-1 and permits views including one below the deck of the carrier." (Electronics 42, No. 13, 46 (June 23, 1969).)

"General Electric has also examined the hologram for potential in character recognition. One method suggested by GE is to create a spatial filter using a hologram. This filter can be used to detect, or recognize, specific shapes from among a random assortment.

"This general scheme is the basis for a personnel identification system being developed by National Cash Register Co., Dayton, Ohio.

"According to NCR, two of the most important aspects of identification are signatures and photographs. In the NCR system, a hologram containing signatures and numbers randomly located is placed in the optical path of a laser.

"If matching occurs when a signature card is inserted into a receiving device, the system locates the picture [which] is projected for comparison." (Serchuk, 1967, p. 34).

6.67 "The wavefront reconstruction method offers the possibility of extending the highly developed imagery methods of visible-light optics to regions of the electromagnetic spectrum where high-quality imagery has not yet been achieved . . ." (Leith et al., 1965, p. 157).

6.68 "A Megabit digital memory using an array of holograms has been investigated by Bell Laboratory scientists. The memory is semipermanent, with information being stored in the form of an array of holograms, each hologram containing a page of information. A page is read . . . by deflecting a laser beam to the desired element of the array, so as to obtain reconstruction of the image stored in the element, the digital information, on a read-out plane which is common to all elements of the array. Photosensitive semiconductors arrayed on the read-out plane then sense the stored information . . .

"In the Bell Labs experimental system, the light source is a continuous-wave helium-neon laser operating in the lowest order transverse mode. Two-dimensional deflection is accomplished by cascaded water-cell deflectors, using Bragg diffraction from ultrasonic waves in water, and capable of deflecting the beam to any of 300 addresses in less than 15 µsec . . .

"The present system comprises 6 k bits per page, and a 16 X 16 matrix of pages, for a total capacity of 1.5 M bits; access time is 20 µsec. Total optical insertion loss is 75 db, resulting in 70 k photons impinging on each bit detector . . . and Bell Labs scientists project that, by straightforward extensions of the present system, 25 M bits with an access time of 7 µsec is a feasible system. This system would have a 65 X 65 matrix of 6 k bit pages, a faster deflection system, and a reduced insertion loss of 65 db, resulting in 0.5 M photons per bit at the detectors.

"Ultimately, it is predicted that a memory can be built having greater than 100 million bits of storage, with an access time in the one microsecond range." (Modern Data Systems 1, No. 2, 66 (Apr. 1968).)
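The capacity figures quoted above can be cross-checked directly: pages per matrix times bits per page should give the stated totals. Reading the quoted "6 k bits" as 6,000 bits per page (an interpretation on our part, since the quote does not spell out the multiplier):

```python
# Cross-check of the quoted Bell Labs hologram-memory capacities,
# taking "6 k bits" per page to mean 6,000 bits (assumed reading).

bits_per_page = 6_000

present = 16 * 16 * bits_per_page     # present system: 16 x 16 matrix of pages
projected = 65 * 65 * bits_per_page   # projected system: 65 x 65 matrix

print(f"present system:   {present / 1e6:.2f} M bits")
print(f"projected system: {projected / 1e6:.1f} M bits")
```

The results, about 1.54 M bits and 25.35 M bits, round to the quoted "1.5 M bits" and "25 M bits" capacities.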

"Bell Labs has already constructed a 'breadboard' hologram memory system . . . that may eventually be able to display any one of 100 million units of information upon one millionth of a second's notice.

"It is based on using a number of closely spaced holograms on a single photographic plate. Bell Labs had in mind switching operations as one fundamental application . . .

"This memory system works by directing a laser beam to one 'page' (location of a hologram) in an array. Initial goals are to make each hologram about a millimeter in diameter and to space them rather closely in a pattern of 100 rows by 100 rows. Each hologram will store, encoded in the form of an interference pattern, another 100 by 100 matrix. This will be coded in dots or blanks to represent information. The reconstructed hologram will be aligned precisely with an array of phototransistors (also under development at Bell Labs), which will 'report' to the electronic device which of the dots are present and which are absent. This roll call is the message." (Photo Methods for Industry 12, No. 3, 61-62 (Mar. 1969)).

6.69 "Carson Laboratories, Bristol, Conn., for example, is working on the development of potassium bromide and similar crystals as holographic materials.

"The laser is used to bleach the crystal in accordance with the holographic interference patterns. Such a memory device is said to have a capacity of 1 million bits per square half-inch of material." (Serchuk, 1967, p. 34).

6.70 "An experimental optical memory system that could lead to computer storage devices a thousand times faster than today's disk and drum storage units was reported . . . by three International Business Machines Corporation engineers.

"In the experimental system, blocks of information are accessed by a laser beam in just ten-millionths of a second. More than 100 million bits of computer information could be stored on a nine-square-inch holographic plate . . .

"The experimental memory system uses a laser beam to project blocks of information contained on the hologram onto a light-sensitive detector. The detector then converts the projected hologram into electronic signals which can be processed by a computer.

"In a feasibility model, assembled at IBM's Poughkeepsie, N.Y., Systems Development Division Laboratory, size, direction, and focus of the laser beam are determined by a series of lenses. The beam is positioned on the hologram by a crystal digital light deflector. By controlling the polarization of the light from the laser, the deflector is used to select any block of information stored on a single plate.

"The hologram splits the laser beam into two separate rays: one non-functional and the other a first-order diffraction pattern which carries the holographic information. This first-order diffraction pattern is then focused on a light-sensitive detector array, which converts the optical information to electronic signals. The signals, representing data, are then sent to the computer's central processing unit at high speeds." (BEMA News Bull. 5, Nov. 18, 1968).

6.71 "An advantage of storing information in the form of a hologram rather than as a single real image is that the loss of data due to dust and film defects is minimized, since a single bit is stored not on a microscopic spot on the film but as part of an optical interference pattern which is contained in the entire hologram." (Modern Data Systems 1, No. 2, 66 (Apr. 1968).)

"A bad spot in a photographic image will not spoil all bits of information completely; the Fourier transform of such a plate will still give a good image." (Bloembergen, 1967, p. 86).

"Since information from any one bit of the object is spread out over the whole hologram, it is stored there in a redundant form, and scratches or tears of the hologram make only a minor deterioration in the overall reconstructed image. In particular, no single bit is greatly marred by such damage to the hologram." (Smith, 1966, p. 1298).

"Leith reports that diffused illumination holograms have an immunity to dust and scratches and that particles have little effect in producing erroneous signals as in previous photographic memories." (Chapman and Fisher, 1967, p. 372).

"Since light from the point source is spread over the entire hologram's surface (thus ensuring interference patterns over the entire film surface), any part of the hologram will reproduce the same image as any other part of the hologram. It can be seen that the only effect of dust and scratches is to reduce the active area of the hologram." (Vilkomerson et al., 1968, p. 1199).

"Generally, the light projected into an image by a hologram is not associated with any specific point of the hologram; thus, if the hologram becomes marred by dust or scratches there is little degradation of any one point in the image. Dust and film imperfections can be a severe problem in non-holographic storage, because errors arise from the degradation of specific bits." (Gamblin, 1968, pp. 1-2).

6.72 Further, "the results of this study have indicated that holographic techniques are particularly suited to satisfy the functional requirements of read-only memory . . . Holography offers solutions to two key problems associated with the requirement for a single removable media storing up to 160,000 bits. First, the unique redundancy inherent in holograms constructed with diffused illumination eliminates the loss of data due to such environmental effects as dust and scratches. Second, the potential freedom from registration effects which can be achieved by proper selection of construction techniques allows the manual insertion and removal of media with high bit packing densities and does not add a requirement for complicated mechanical positioning or complex electrical interconnection in the read unit." (Chapman and Fisher, 1967, p. 379).

6.73 "One can construct computer techniques which would take an acoustic hologram (the wavefront from a scattered sound wave) and transform it into an optical hologram, thereby allowing us to construct the three-dimensional image of the scatterer of the sound waves." (Lesem et al., 1967, p. 41).


6.74 "In a paper presented at the International Symposium on Modern Optics, researchers at the IBM Scientific Centre at Houston, Texas, described how they have programmed a computer (an IBM System/360 Model 50) to calculate the interference patterns that would be created if light waves were actually reflected from a real object. Neither the real object nor actual light waves are required to produce holograms with the computer technique. While the initial IBM computer hologram experiments have been restricted to two-dimensional objects for research simplicity, the authors said further work is expected to make possible digital holograms which can be reconstructed into 3-D pictures. An engineer could then get a 3-D view of a bridge or car body design without actually building the physical object or even drawing it by hand." (The Computer Bull. 11, No. 2, 159 (Sept. 1967)).

"More recently, firms have experimented with computer-generated holograms for unique data display. NASA's Electronics Research Center in Boston, Mass., is said to be investigating making real-time holograms for such applications as airport display to approaching aircraft.

"A team at IBM's Houston Scientific Research Center has programmed a System/360 Model 50 to create holograms by calculating the necessary interference patterns.

"Thus it may soon be possible to use the computer to create a mathematical model of a device and then translate equations into a three-dimensional hologram of the mathematical model." (Serchuk, 1967, p. 34).

6.75 "Holograms of three dimensional images have been constructed with a computer and reconstructed optically. Digital holograms have been generated by simulating, with a computer, the wavefronts emanating from optical elements, taking into account their geometrical relationship. We have studied in particular the effects of various types of diffuse illumination. Economical calculations of high resolution images have been accomplished using the fast finite Fourier transform algorithm to evaluate the integrals in Kirchhoff diffraction theory. We have obtained high resolution three dimensional images with all the holographic properties such as parallax, perspective and redundancy." (Hirsh et al., 1968, abstract, p. H 104).

"Kinoforms serve for all of the applications of computer-generated holograms, e.g., three-dimensional display, wave conversion, read-only storage, etc. However, kinoforms give a more practical, computationally faster display construction that yields more economical use of the reconstructing energy and that yields only the desired image.

"The principal computational advantage of kinoforms as compared with digital holograms is embodied in the fact that all of the spatial frequency content of the device is used in the formation of the real image; none is required for the separation of the real and conjugate images. There is then at least a factor of four reduction in the computer time needed to calculate the wavefront pattern necessary for equivalent image quality. Correspondingly there is a reduction in plotting time for the kinoform.

"A further economy is achieved in that no calculations involving a reference beam are necessary. Finally, in the cases of one- and two-dimensional objects only real-number additions are required, once the basic transform is calculated, to determine the wavefront phase for plotting. The corresponding quantity to be plotted for digital holograms is the wavefront intensity, which requires multiplication of complex numbers." (Lesem et al., 1969, p. 155).
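The computational contrast drawn by Lesem et al. can be made concrete: once the object's Fourier transform is in hand, a kinoform records only the phase of each sample, while a digital hologram records an intensity formed by interfering the field with a reference beam, which requires complex multiplication. A minimal modern sketch (the sample values and reference-beam tilt are illustrative, not taken from the source):

```python
import cmath

# Toy object wavefront: a few complex Fourier samples (illustrative values).
field = [complex(0.3, 0.1), complex(-0.2, 0.5), complex(0.0, -0.4)]

# Kinoform: keep only the phase of each sample; amplitude is discarded,
# and no reference beam enters the calculation.
kinoform = [cmath.phase(f) for f in field]

# Digital hologram: interfere the field with an off-axis reference beam
# and record the intensity |field + reference|^2 (complex arithmetic).
reference = [cmath.exp(1j * 0.7 * n) for n in range(len(field))]
hologram = [abs(f + r) ** 2 for f, r in zip(field, reference)]

print(kinoform)
print(hologram)
```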

6.76 "A[n] . . . important reason for synthesizing holograms is to create optical wavefronts from objects that do not physically exist. A need to form such a wavefront from a numerically described object occurs whenever the results of a three-dimensional investigation, for example, the analysis of an x-ray diffractogram, must be displayed in three dimensions." (Brown and Lohmann, 1969, p. 160).

"Scientists, stock brokers, architects, statisticians and many others who use computers may soon have a practical, fast, and inexpensive way of converting memory data into three-dimensional pictures and graphs.

"With a process devised at Bell Telephone Laboratories, it takes only a few seconds of computer time to turn equations, formulas, statistical data and other information into a form suitable for the making of holograms. Viewed under ordinary light, the holograms produce three-dimensional pictures that can display a full 360-degree view of the object shown.

"Holography, which has been called 'lensless photography,' records a subject through the interference of two laser beams on a photographic plate. One beam is aimed directly at the plate, and the other reaches the plate after being transmitted through, or reflected by, the subject being 'photographed.'

"In the BTL method, the original subject exists only as a group of numbers or coordinate points in three dimensions, for example, in the computer's memory. The hologram is made in two steps. First, the computer is programmed to construct a series of two-dimensional pictures, or projections, each showing the 3-D data from a precisely defined unique angle. A microfilm plotter, connected to the computer, produces a microfilm frame for each picture.

"In the second step, a holographic transparency is made. The frames of the microfilm are used as subjects to make very small holograms (1 to 3 mm across), which are positioned sequentially on a holographic medium."

"Thus, a composite hologram is made up of a series of small holograms, each of which is formed with a two-dimensional image. But the composite image appears three-dimensional, and shows a 360-degree view of the object. With this type of hologram, also invented at BTL, the viewer can see the object rotating through a full cycle by simply moving his head from side to side in front of the hologram." (Computer Design 8, No. 6, 28 (June 1969).)

6.77 "By the use of photographic recording techniques a very high information density can be achieved, to which rapid random access can be made by appropriate electronic and optical techniques. If, therefore, there are any classes of information which must be read frequently, but are not changed for at least a week, then such a storage technique would be appropriate. This is evidently the case for all system programmes including compilers and monitor programs . . ." (Scarrott, 1965, p. 141).

"Photographic media are quite inexpensive, are capable of extremely high bit densities, and exhibit an inherent write-once, read-only storage capability. The optical read-out techniques, which are used, are nondestructive." (Chapman and Fisher, 1967, p. 372).

6.78 "To get an order of magnitude idea of the memory capacity, we will consider a memory plane of 2 in. square . . . There will be approximately 645 subarrays [individually accessible]. Consider that only one-half the memory plane is composed of active film. The memory would then contain almost 13 million bits." (Reich and Dorion, 1965, p. 579).
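The quoted estimate can be unpacked by back-solving from the figures given: 645 subarrays with only half the plane active, reaching "almost 13 million bits," implies roughly 40,000 bits per subarray. The per-subarray figure is inferred here, not stated in the source:

```python
# Back-of-envelope unpacking of the Reich and Dorion capacity estimate.
subarrays = 645          # "approximately 645 subarrays"
active_fraction = 0.5    # "only one-half the memory plane is ... active film"
total_bits = 13_000_000  # "almost 13 million bits"

# Inferred quantity (not stated in the source): bits each subarray must hold.
bits_per_subarray = total_bits / (subarrays * active_fraction)
print(round(bits_per_subarray))  # ~40,000 bits per subarray
```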

6.79 "The inherent power of optical processing can be exploited without suffering the speed limitations usually associated with static spatial filters. The method consists of using an electron beam-addressed electro-optic light valve (EOLV) as the spatial filter. Thus the filter need no longer be a fixed transparency, but can instead be a dynamic device whose orientation is controlled electronically rather than mechanically. This opens the method of optical processing to the domain of real time and presents exciting possibilities for its use in a variety of applications." (Wieder et al., 1969, p. 169).

6.80 "In optical transmission lines, the wavelength of the signals will be shorter than any of the circuit dimensions; therefore, one could eliminate, for example, all the reactive effects in the interconnections." (Reimann, 1965, p. 247).

"High-speed electronic computer circuitry is becoming 'interconnection limited.' The reactance associated with the mounting and interconnections of the devices, rather than the response of the active components, is becoming the main factor limiting the speed of operation of the circuits.

"A possible approach to computer development that might circumvent interconnection limitations is the use of optical digital devices rather than electronic devices as active components." (Reimann, 1965, p. 247).

"One factor of growing significance, as circuit size is reduced, is the increasing amount of surface area consumed by areas devoted to interconnections and pads for interconnections. There have been marginal improvements over the past few years, but no startling improvements have been made in comparison to reductions in the basic device geometry.

"The consumption of real estate may be reduced by interconnecting the logic circuits with the narrow lines allowed by the masking technology, thus reducing to a minimum the area requirements for external lead pads. At this point, the semiconductor manufacturer relaxes and says in effect to the computer designer: Reduce your logic to a few standard configurations, and I will reduce costs by a large factor. Hence, we have a search for magic standard logic functions." (Howe, 1965, p. 507).

"Interconnections are already our problem for designing and building systems, and applying Large Scale Integration (LSI) to digital systems will inevitably force the realization that interconnections will be more important in determining performance than all other hardware factors. This is because the problems of physical size and bulk, DC shift over long cables, reflections and stub lengths, crosstalk and RFI, and skin effect degradation are making computer systems interconnection limited." (Shah and Konnerth, 1968, p. 1).

6.81 "One example of these more exploratory attempts is the optically addressed memory with microsecond nondestructive read cycle and much longer write cycles. Chang, Dillon and Gianola propose such a changeable memory employing gadolinium iron garnets as storing elements." (Kohn, 1965, p. 133).

6.82 "Maintaining low power supply and distribution impedances in the presence of nanosecond noise pulses is an increasingly difficult problem . . . As more circuits are placed on a chip, decoupling of power supply noise will be required on or in close proximity to the chip." (Henle and Hill, 1966, p. 1858).

6.83 "Integrated circuitry has been widely held to be the most significant advance in computer technology since the development of the transistor in the mid-fifties . . . Semiconductor integrated circuits are microminiature circuits with the active and passive microcomponents on or in active substrate terminals. In thin-film integrated circuitry, terminals, interconnections, resistors and capacitors are formed by depositing a thin film of various materials on an insulating substrate. Microsize active components are then inserted separately to complete the circuit. Micromodules are tiny ceramic wafers made from semiconductive and insulative materials. These then function either as transistors, resistors, capacitors, or other basic components." ("The Impact . . .", 1965, p. 9).

6.84 "Integrated circuit technology will bring revolutionary changes in the size, cost, and reliability of logical components. Lesser improvements will be realized in circuit speed. . . .

"Advances in integrated circuit logic components and memories . . . will provide significant reductions in cost since the implementation of flexible character recognition equipment involves complex logical functions." (Hobbs, 1966, p. 37).

"Of course, the cost of electronics associated with peripherals will be drastically reduced by LSI. But the promise of LSI is greater than that. Functions that are now handled by mechanical parts will be performed by electronics. More logic will be built into terminals, and I/O devices such as graphic displays, in which the major cost is circuits, will come into more general use." (Hudson, 1968, p. 47).

"This speed-power performance requires only modest advances from today's arrays. The board module size is convenient for small memory applications and indicates the method whereby LSI memories will establish the production volume and the impetus for main frame memory applications. The LSI memory being produced for the Air Force by Texas Instruments Incorporated falls into this category." (Dunn, 1967, p. 598).

"The Air Force contract [with Texas Instruments] has as a specific goal the achievement of at least a tenfold increase in reliability through LSI technology as compared with present-day integrated circuits." (Petritz, 1967, p. 85).

"Impetus for continued development in microelectronics has stemmed from changing motivations. Major emphasis was originally placed on size reduction. Later, reliability was a primary objective. Today, development of new materials and processes points toward effort to reduce cost as well as to further increase reliability and to decrease weight, cube, and power." (Walter et al., 1968, p. 49).

"This [LSI] technology promises major impact in many areas of electronics. A few of these are:

1. Lower cost data processing systems.
2. Higher reliability processing systems.
3. More powerful processing systems.
4. Incorporation of software into hardware, with subsequent simplification of hardware." (Petritz, 1966, p. 84).

"Computer systems built with integrated circuits have higher reliability than discrete-component machines. This improved reliability is due to two factors: (1) the silicon chip has a higher reliability than the sum of the discrete components it replaces, and (2) the high density packaging significantly reduces the number of pluggable contacts in the system." (Henle and Hill, 1966, p. 1854).

6.85 "One of the most interesting and significant paradoxes of the new technology is the apparent reconciliation of a desire to achieve high speed and low cost. The parameters which yield high speed, i.e., low parasitics, small device geometry, also yield lowest ultimate production cost in silicon integrated circuits." (Howe, 1965, p. 506).

6.86 "Some examples of functional expansion we would naturally consider are as follows. In the central processor, LSI might be used to carry out more micro-operations per instruction; address more operands per instruction; control more levels of look-ahead; and provide both repetition and more variety in the types of functions to be executed. In system control, LSI might provide greater system availability through error detection, error correction, instruction retry, reconfiguration to bypass faulty units, and fault diagnosis; more sophisticated interrupt facilities; more levels of memory protection; and concurrent access to independent memory units within more complex program constraints. In system memory, LSI might provide additional fast local memory for operands and addresses; improved address transformation capability; content-addressable memory; and special fast program status tables. In system input/output, LSI might provide more channels; improved interlacing of concurrent input-output operations with automatic memory protection features; and more sophisticated pre- and post-processing of data and instructions to relieve the central processor of these tasks." (Smith and Notz, 1967, p. 88).

"LSI has an inherent functional advantage over magnetics in associative applications, namely that fast bit-parallel searches can be achieved. The main drawback of magnetic associative memories, even in those applications which require relatively simple match-logic per word, is that imperfect cancellation of analog sense signals and other noise effects give rise to a low signal-to-noise ratio and thereby limit the technology to essentially bit-serial operation. Thus, the more nearly binary signals available from semiconductor associative devices seem to provide a unique advantage over magnetics which is not strongly evident in comparisons of the two technologies over other categories of memory." (Conway and Spandorfer, 1968, p. 842).

"Content Addressable Memories: As the semiconductor manufacturer learns to produce more and more components on a single silicon chip, reasonably sized content-addressable memories may become feasible. Memories of this type, available on a large scale, should permit significant changes in the machine language of the computer, and possibly provide simplification in the design of such software as operating systems and compilers." (Graham, 1969, p. 104).
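A content-addressable memory is queried by content rather than by address: a search key (with a mask selecting which bits matter) is compared against every stored word, and in a semiconductor implementation all of those comparisons happen in parallel, which is the bit-parallel advantage the quotations describe. A minimal software model of the behavior (the class and method names are illustrative, not from any of the cited sources):

```python
# Minimal behavioral model of a content-addressable memory. Hardware would
# evaluate every word's match line simultaneously; this loop is sequential.
class CAM:
    def __init__(self, width):
        self.width = width
        self.words = []

    def write(self, word):
        self.words.append(word & ((1 << self.width) - 1))

    def search(self, key, mask):
        """Return indices of words matching `key` on the bits set in `mask`."""
        return [i for i, w in enumerate(self.words) if (w ^ key) & mask == 0]

cam = CAM(width=8)
for w in (0b1010_0001, 0b1010_1111, 0b0001_0001):
    cam.write(w)

# Match any word whose high nibble is 1010, ignoring the low nibble.
print(cam.search(key=0b1010_0000, mask=0b1111_0000))  # [0, 1]
```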

6.87 "Revolutionary advances, if they come,must come by the exploitation of the high degreeof parallelism that the use of integrated circuitswill make possible." (Wilkes, 1968, p. 7).

"One area in which I feel that we must pin our hopes on a high degree of parallelism is that of pattern recognition in two dimensions. Present-day computers are woefully inefficient in this area." (Wilkes, 1968, p. 7).

6.88 "The recent advance from discrete transistor circuits to integrated circuits is about to be overshadowed by an even greater jump to LSI circuitry. This new jump will result in 100-gate and then 1000-gate circuit modules which are little larger in size or higher in cost than the present four-gate integrated circuit modules." (Savitt et al., 1967, p. 87).

6.89 "Discrete components have given way to integrated circuits based on conventional etched circuit boards. This fabrication technique is in turn giving way to large scale integration (LSI), in which sheets of logic elements are produced as a unit." (Pyke, 1967, p. 161).

"The initial results suggest the possibility of fabricating, in one step, a complete integrated circuit with all the passive elements. Such a process would start with a metallized substrate and would use a programmable laser and work stage. Complete laser fabrication of hybrid circuits will require a process in which a metal film is removed selectively, exposing a different film. For example, such a process may be necessary in order to remove the conductor and expose the resistor film.

"In the present tantalum-chrome-gold technology such a selective removal of the gold presents substantial difficulties because the reflectivity of the gold is much greater than that of the tantalum nitride. It is quite probable, however, that some combination of films will be found for which the upper film can be removed from the resistor without damaging it." (Cohen et al., 1968, p. 402).

6.90 "Integrated circuits (more importantly, large scale integration (LSI), which involves numerous integrated circuits tied together on the same chip) offer the best promise from the standpoint of size, reliability and cost [for scratch-pad memories]." (Gross, 1967, p. 5).

6.91 "Of all the potential applications of large-scale integration, new memory techniques are the most startling. Ferrite core memories have just about reached their limit in terms of access speeds required for internal scratchpads. Magnetic thin films, while fast enough, are too costly. Studies show that because of LSI, semiconductor memories are less costly than any other approach for speeds from 25 to 200 nanoseconds and for capacities up to 20,000 bits, just the range required by scratchpads." (Hudson, 1968, p. 41).

6.92 "In addition to being used for circuitry, LSI techniques apply to the construction of memories, since some of the new memory elements mentioned above can be fabricated using the micro-construction techniques. The possibility also comes to mind of fabricating both the comparison circuitry and the memory cells of a content-addressable memory into a single unit. Thus the development of large-scale integration holds considerable promise for improving computer hardware." (Van Dam and Michener, 1967, p. 210).

6.93 "In view of the economy that should accompany widespread use of LSI, it may become less expensive to use LSI logic elements as main memory elements, at least for some portion of primary storage. Even today some systems have scratchpad memories constructed of machine logic elements, so that the fast processor logic is not held back by the slower memory capability." (Pyke, 1967, p. 161).

6.94 "LSI memories show considerable potential in the range of several hundred nanoseconds down to several tens of nanoseconds. In contrast with logic, LSI memory is ideally suited to exploit the advantages and liabilities of large chips: partitioning is straightforward and flexible, a high circuit density can be obtained with a manageable number of input-output pads, and the major economic barriers of part numbers and volume which confront LSI logic are considerably lower. Small-scale memory cell chips have already superseded film memories in the fast scratchpad arena; the depth of penetration into the mainframe is the major unresolved question." (Conway and Spandorfer, 1968, p. 837).

6.95 "As technological advances are made, the planar technology permits us to pack more and more bits on a single substrate, thus reducing the interconnection problem and simplifying the total memory packaging job. This integration will reflect in the long run on product cost and product reliability." (Simkins, 1967, p. 594).

6.96 "The new technology has a number of problems whose solution can be facilitated by arranging the circuitry on these arrays in a 'cellular' form, that is, in a two-dimensional iterative pattern with mainly local intercell connections, that offers such advantages as extra-high packing density, ease of fabrication, simplified testing and diagnosis, ease of circuit and logical design, the possibility of bypassing faulty cells, and particularly an unusually high flexibility in function and performance." (Kautz et al., 1968, p. 443).

6.97 "LSI offers improvements in cost and reliability over discrete circuits and older integrated circuits. Improvement in reliability is due to the reduction of both the size and the number of necessary connections. Reductions in cost are due largely to lower-cost batch-fabrication techniques. One problem in fabrication is the increased repercussion of a single production defect, necessitating the discarding of an entire integrated component if defective instead of merely a single transistor or diode. This problem is attacked by a technique called discretionary wiring; a computer tests for defective cells in a redundantly constructed integrated array and selects, for the good cells, an interconnection pattern that yields the proper function." (Van Dam and Michener, 1967, p. 210).

"Since packaging and interconnections are major factors in the cost of an integrated circuit, the cost potentials . . . can be achieved only by batch fabricating large arrays of interconnected circuits in a single package. This raises many difficult and conflicting questions, such as packaging design, maintenance philosophy, flexibility and functional logic segmentation." (Hobbs, 1966, p. 39).

"The rapid and widespread use of integrated circuit logic devices by computer designers, coupled with further improvements in semiconductor technology, has raised the question of the impact of Large Scale Integration (LSI) on computer equipment. It is generally agreed that this is a very complex problem. The use of Large Scale Arrays for logic requires solutions to problems such as forming interconnections, debugging logic networks, specifying and testing multistate arrays, and attempting to standardize arrays so that reasonable production runs and low per-unit design costs can be obtained." (Petschauer, 1967, pp. 598-599).

"The advent of large-scale integration and its resultant economy has made it clear that a complete re-evaluation of what makes a good computer organization is imperative. Methods of machine organization that provide highly repetitive logical subsystems are needed. As noted previously, certain portions of present computers (such as successive stages in the adder of a parallel machine) are repetitive; but others (such as the control unit) tend to have unique nonrepetitive logical configurations." (Hudson, 1968, p. 42).

6.98 "Graceful performance degradation throughuse of majority voting logic.

"LSI will allow a single logical element to be replaced by several logical elements in a manner such that the elements can be used to determine the state or condition of a situation. The state or condition of the situation indicated by a majority of the elements can be accepted as valid; hence, majority voting logic." (Walter et al., 1968, p. 52.)
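The scheme Walter et al. describe replicates each logical element and accepts whatever value a majority of the replicas report, so a single faulty replica is simply outvoted. A minimal sketch of the voting rule (the function name is illustrative):

```python
from collections import Counter

def majority_vote(outputs):
    """Accept the value reported by a strict majority of replicated elements."""
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority: degradation is no longer graceful")
    return value

# Three replicas of one logic element; one has failed and reads wrong.
print(majority_vote([1, 1, 0]))  # 1 -> the failed replica is outvoted
```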

"Because of low cost modules, pennies and less per logic function, maintenance will be simplified and maintenance cost will be reduced by using throw-away modules. By 'wiring' the spares in, fabricated on the same LSI wafer that they are sparing, it becomes practical to self-repair a computer. This is accomplished by including diagnostic logic (coupled with programs) to effect the self-repair. Such a self-healing computer system, using electronic surgery, need only be manually maintained when its spare parts bank becomes exhausted. Some advantages of self-repair are:

Increased system reliability
Continuous operation (system always available)
Long term (years) remote system operation without manual repair
Considerable reduction in maintenance costs." (Joseph, 1968, p. 152).

6.99 See, for example, Rajchman (1965) and Van Dam and Michener as follows: "However, new memory elements, such as plated wires, planar thin films, monolithic ferrites, and integrated circuits, are becoming competitive. One advantage of these new elements is that a memory array made with them can be batch-fabricated in one step, leading to simpler, lower-cost production (the production of core memories requires making the magnetic cores and then stringing the cores together to make a memory). Another advantage appears to be in memory speed. The new memory elements have taken over the fields of high-speed registers and temporary 'scratch-pad' memories. Integrated circuitry . . . will probably replace the planar magnetic thin-film currently used for high-speed registers; however, the planar film will be extended to intermediate-sized stores (10^5-10^6 bits)." (Van Dam and Michener, 1967, p. 207).

6.100 "Today most common types [of core memories] have about a million bits and cycle times of about one microsecond, with bigger and faster types available. Capacity and speed have been constantly increasing and cost constantly decreasing." (Rajchman, 1965, p. 123).

"The ferrite core memory with 10⁶ bits and 1 μsec cycle time is the present standard for main memories on the computer market. Larger memories up to 20×10⁶ bits at 10 μsec cycle time have already been announced." (Kohn, 1965, p. 131).

"Ferrite cores dominated the main memory technology throughout the second generation. Most, although not all, of the third generation machines thus far announced have core memories." (Nisenoff, 1966, p. 1825).

6.101 "The NCR 315 RMC (Rod Memory Computer) has about the fastest main memory cycle time of any commercial computer yet delivered: 800 nanoseconds. The unique thin-film memory is fabricated from hairlike beryllium-copper wires plated with a magnetic coating. In the Rich's system the 315 RMC processes data from over 100 NCR optical print cash registers . . ." (Data Proc. Mag. 7, No. 11, 12 (1965)).

A later version of NCR's rod-memory computer, the 315-502, adds multiprogramming capability at an 800 nanosecond cycle time. (Datamation 12, No. 11, 95 (Nov. 1966)).

"NCR's new thin-film, short-rod memory represents one of the most significant technical innovations in the Century series . . . The rods are made by depositing a thin metallic film and then a protective coating on 5-mil copper wire. This process yields a plated wire 0.006 inch in diameter, which is then cut into 0.110-inch lengths to form the 'bit rods'. The basic memory plane is formed by inserting the bit rods into solenoid coils wound on a plastic frame. Then the entire plane is sealed between two sheets of plastic. Automated processes are used to plate the wire, cut the rods, wind the solenoid coils, insert the rods into the solenoids, and seal the planes. The result is a high-performance memory at an unusually low bit cost." (Hillegass, 1968, p. 47).

6.102 For example, "laminate-diode memories of millions of bits operating in less than one microsecond seem possible in the near future." (Rajchman, 1965, p. 125).

"The cryotrons, the memory structure, and all connections are constructed by a single integrated technique. Thin films of tin, lead and silicon monoxide are evaporated through appropriate masks to obtain the desired pattern of lines. The masks are made by photoforming techniques and permit simple fabrication of any desired intricate patterns. . . .

"The superconductive-continuous sheet-cryotron-addressed approach to large capacity memory offers all the qualities, in its principle of operation and its construction, to support ambitions of integration on a grand scale as yet not attempted by any other technology. No experimental or theoretical result negates the promise . . . There is, however, a serious difficulty: the variation of the thresholds of switching between elements in the memory plane." (Rajchman, 1965, p. 127).

6.103 "A planar magnetic thin film memory has been designed and built by Texas Instruments using all integrated circuits for electronics, achieving a cycle time faster than 500 ns, and an access time of 250 ns. The memory is organized as 1024 words by 72 bits in order to balance the costs of the word drive circuits against the sense-digit circuits. The inherent advantage of this particular organization is that the computer can achieve speed advantage not only because of a fast repetition rate, but also because four 18-bit words are accessed simultaneously. (Comparable core memory designs are ordinarily organized 4096 words of 18 bits each.) The outlook is for higher speed (faster than 150 ns) memories in similar organizations to be developed in planar magnetic films. The cost of these memories will be competitive with 2-1/2 D core memories of the same capacity, but the organization and speed can be considered to offer at least a 4:1 improvement in multiple word accessing and a 3:1 improvement in speed. As a result of this, more computers will be designed to take advantage of the long word either by extending the word length of the computer itself or by ordering instructions and data in such a manner that sequential addressing will be required a large percentage of the time." (Simpson, 1968, pp. 1222-1223).

6.104 "If the high-speed memory is to operate at a cycle time in the 100-nanosecond region, the class of storage elements that can be used is somewhat limited. Storage elements capable of switching speeds compatible with 100-nanosecond cycle times include (a) thin magnetic films of several types, (b) some forms of laminated ferrites, (c) tunnel diodes, and (d) semiconductor flip-flop type devices." (Shively, 1965, p. 637).

6.105 For example, "most forms of thin films and laminated ferrites exhibit adequately fast switching times, but the drive current requirements are large and the readout signals small." (Shively, 1965, p. 637).

"The thin film transistor is barely emerging from the laboratory and it may require several years before it becomes a serious contender for integrated-all-transistor-random-access-memories of large capacities." (Rajchman, 1965, p. 126).

"The development of higher-speed conventional memory devices, of cores and thin films, has slowed, and progress with such devices in breaking the hundred nanosecond barrier will probably take some time." (Pyke, 1967, p. 161).

"Thin films appeared to be more hopeful and are certainly an area where extensive research is being carried out. The main problems still appear to be those of production, especially the problem of achieving reproducibility from one film to another." (Fleet, 1965, p. 29).

"The development of cryogenic memories has reached the point where planes storing several thousand bits can be fabricated with high yield. However, there are still many problems, such as interconnections, cost, data rate, etc., to be solved before considering a mass store large enough to justify the overhead of cryostat and dewar." (Bonn, 1966, p. 1869).

"Key problems in the fabrication of large monolithic memories are reliability (what to do if a bit fails) and the volatility of the monolithic cell (if the power goes off, the information is lost)." (Henle and Hill, 1966, p. 1859).

6.106 Kohn points out, for example, that "at present, such optically addressed memories seem to be capable of storing 10⁵ . . . 10⁶ bits/sq in., to have about 0.1 μsec read access time, and one cell can be written in 100 μsec. Very high voltages for the light switches are required. This appears to be quite unfavorable from a technical point of view; however, an intensive materials research may overcome the weakness of electrooptic effects and lead to more realistic devices." (Kohn, 1965, p. 133).

6.107 "In the subsystems of a large computer, one serious problem is ground plane noise: spurious signals generated by large currents flowing in circuits which have a common ground . . . Another noise nuisance arises when signals have to be coupled from two subsystems which are operating at two widely different voltages. Lumped together, such difficulties are known as the 'interface problem'." (Merryman, 1965, p. 52).

6.108 "Superconductive cryogenic techniques, which were advocated for quick, on-line storage, will probably not become operational because of the high costs of refrigeration." (Van Dam and Michener, 1967, p. 207).

6.109 "The projected 'break-even' capacity, including refrigeration cost, for a cryoelectric memory is approximately 10⁷ bits." (Sass et al., 1967, p. 92).

"The cryoelectronic memory is made up of strip lines, which, though interconnected from plane to plane, display low characteristic impedance and high propagation velocity, and require modest peripheral electronics. Therefore, propagation velocity is the only real limit to memory cycle time. Typical cycle time for the 10⁸-bit AB system . . . is approximately 1 μs." (Sass et al., 1967, p. 97).

6.110 "The use of small special purpose memories has become prevalent in recent years. The Honeywell H-800 employed a small core memory to permit multiprocessing as far back as 1960." (Nisenoff, 1966, p. 1826).

6.111 "Much interest has recently been shown in the computer art in incorporating a low-cost, mechanically changeable, read-only store in the control section of a central processing unit. Flexibility of organization and compatibility with other systems can be built into a computer that has a readily changeable read-only store. The printed card capacitor read-only store is one of three technologies selected for the ROS function in System/360." (Haskell, 1966, p. 142).

"The Read Only Store (ROS) memory is a pre-wired set of micro-instructions generally set up for each specific application. This means that the specifications of the computer may be tailored to suit the specific application of the user. Thus in an application where square root or some other special function must be performed rapidly or repeatedly, such a sequence of operations may be hard wired into the ROS." (Dreyer, 1968, p. 40).

"Micro-steps, the basic instructions of a stored logic computer, permit the programmer to control the operation of all registers at a more basic level than is possible in the conventional computer. Sequences of micro-steps (each of which requires only 400 nsec to perform) are stored in the ROS as 'micro-routines' which are executed much as a conventional subroutine. However, unlike the unalterable commands of the conventional computer, stored micro-routines may be designed by the programmer to form the most efficient combination of basic computer logical operations for a given application. The speed increase available by use of a ROS does not come from faster computing circuits, but from operating instructions built into the hardware for more efficient ordering of gates, flipflops, registers et al. Thus at the outset of each application, tradeoff studies must be made to determine to what extent software may be replaced by hardware through use of the ROS." (Dreyer, 1968, p. 41).
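The stored-logic idea Dreyer describes, a macro operation expanded into a sequence of micro-steps held in the ROS, can be sketched as a tiny interpreter; the micro-step names, opcode, and routine below are illustrative assumptions, not drawn from any actual ROS:

```python
# Stored-logic sketch: a macroinstruction addresses a micro-routine in a
# read-only store; each micro-step manipulates the registers directly.
ACC = {"value": 0}                      # single accumulator register

def load(x):   ACC["value"] = x         # micro-step: load accumulator
def add(x):    ACC["value"] += x        # micro-step: add to accumulator
def double(_): ACC["value"] *= 2        # micro-step: shift left one place

# Read-only store: macro opcode -> fixed sequence of micro-steps.
ROS = {
    "ADD2X": [load, add, double],       # compute 2 * (a + b)
}

def execute(opcode, a, b):
    operands = [a, b, None]
    for micro_step, arg in zip(ROS[opcode], operands):
        micro_step(arg)
    return ACC["value"]

print(execute("ADD2X", 3, 4))  # -> 14
```

Changing the computer's instruction set here means editing the ROS table, which mirrors the tradeoff Dreyer notes between software and hard-wired micro-routines.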

"The implementation of read-only memories as the control element in a computer has significance for maintainability and emulation. Instruction decoders and controls present a difficult problem to the designer. These elements contain no repetitive patterns like those in data paths and arithmetic units. In addition, they have many external connections. A read-only memory can be used to provide these same control signals. It would contain a long list, hundreds or thousands, of microinstructions. Each program instruction from the main memory addresses a sequence of microinstructions in the read-only memory. Each microinstruction in the sequence describes the state of the entire machine during its next cycle. The read-only memory divides easily into segments, since its only external connections are the word address inputs and control signal outputs. LSI read-only memories are being offered by several manufacturers." (Hudson, 1968, p. 42).

"If a read-only memory (ROM) module were used to store subroutines, the relative efficiency of the code would be much less important. ROM modules cost less than one-fifth as much as comparable amounts of main core storage. Use of ROM to 'can' bread and butter subroutines in low-cost hardware provides an effective solution to a particular problem. The main or read/write memory, thus liberated, can be used to provide feasible flexibility for the main program and to incorporate inevitable, unforeseen, jobs that arise during the development and operating life of a computer system. . . .

"One of the most significant aspects of fourth generation computers will be the use of read-only memories. Tradeoffs of hardware for software and/or speed, and/or reliability, will significantly affect computer organization. Advantages to be gained through the use of ROM include (1) increased speed, output signal level and reliability, (2) decreased read-cycle time, operating power, size, weight, and cost, and (3) nonvolatility." (Walter et al., 1968, pp. 51, 54).

"The 'read-only' function includes the storage of indirect accessing schemes, the implementation of logic functions, the storage of microprogrammed instructions, and related applications." (Chapman and Fisher, 1967, p. 371).

"The attractions of a good read-only storage include not only extremely reliable nondestructive readout, but also lower cost." (Pick et al., 1964, p. 27).

"Special hardware functions implemented in the read-only memory of the 70/46 supplement the [address] translation memory. They are used as additional privileged instructions. Among the capabilities they provide are the ability to load or unload all or selected parts of translation memory and to scan translation memory in such a way that only the entries of those pages that have been accessed are stored into main memory." (Oppenheimer and Weizer, 1968, p. 313).

6.112 "A MYRA memory element is a MYRiAperture ferrite disk which, when accessed, produces sequential trains of logic-level pulses upon 64 or more otherwise isolated wires . . . A picoprogrammed system, then, consists essentially of an arithmetic section and a modified MYRA memory. A macroinstruction merely addresses an element in the MYRA memory; when the element is accessed, it produces the gating signals which cause the arithmetic unit to perform the desired functions. In addition, it provides gating pulses which fetch the operand (if needed), increment the control counter, and fetch the next instruction." (Briley, 1965, p. 94).

"Picoprogramming is realized by the use of the MYRiAperture (MYRA) element, a multiaperture ferrite device which is the basic building block of the instruction module. Each instruction module is a complete entity and is fabricated on a conventional printed wiring card that can be inserted in a conventional PC connector. Incorporation of a new instruction in the computer or alteration of an existing one is accomplished by the addition or substitution of the appropriate instruction module card." (Valassis, 1967, p. 611).

6.113 "With this GaAs diode array system a very fast, medium capacity, read-only memory with changeable contents becomes realizable. Since many of the existing third generation computers contain microinstructions in read-only stores of about the same capacity as that indicated for the diode-accessed holographic memory, it would seem that the existing read-only memories could be replaced by this type of holographic memory; such a system could be an order of magnitude faster and allow for increased flexibility of CPU configuration by easy change of the microinstruction repertoire." (Vilkomerson et al., 1968, p. 1198).

6.114 In general, these terms are interchangeable. Some typical definitions are as follows: "Basically an associative memory involves sufficient logical capability to permit all memory locations to be searched essentially simultaneously, i.e., within some specified memory cycle time . . . Searches may be made on the basis of equality, greater-than-or-equal-to, less-than-or-equal-to, between limits, and in some cases more complex criteria." (Hobbs, 1966, p. 41).

"An associative memory which permits the specification of any arbitrary bit pattern as the basis for the extraction of the record within which this pattern appears is called a fully associative memory." (McDermid and Petersen, 1961, p. 59).

"The content-addressable memory (CAM) was initially proposed by Slade as a cryogenic device . . . For this type of memory, word cells are accessed by the character of stored data rather than by physical location of the word cell. The character of data is evaluated in parallel throughout the memory. A common addressing characteristic is equality of stored data and some externally presented key. Memories of this type have also been called associative since a part of the cell contents may be used to call other cells in an 'associative' chain." (Fuller, 1963, p. 2).

"An associative memory is a storage device that permits retrieval of a record stored within by referring to the contents or description rather than the relative address within the memory." (Prywes, 1965, p. 3).

"The distinguishing feature of an associative memory is that it has no explicit addresses. Any reference to information stored in an associative memory is made by specifying the contents of a part of a cell. All cells in the memory which meet the specification are referred to by the statement." (Feldman, 1965, p. 1).

"We have described here an iterative cell which can be used as a content addressable memory from which an entry may be retrieved by knowing part of its contents." (Gaines and Lee, 1965, p. 74).

"Memory systems which retrieve information on the basis of a given characteristic rather than by physical location . . . are called 'content-addressed', 'catalog', or 'associative'. In these types of memory systems, an interrogation word . . . is presented to the memory and a parallel search of all words within the memory is conducted. Those stored words which have a prescribed relationship (e.g., equal to, nearest to, greater than, etc.) to the interrogation word are tagged. Subsequently, the multiple tagged words or responses are retrieved by some interrogation routine." (Miller, 1964, p. 614).
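The interrogation scheme Miller describes, tagging every stored word that holds a prescribed relationship to the interrogation word, can be mimicked in software; a minimal sketch, where the function name and the example data are illustrative assumptions and the "parallel" search is necessarily sequential in software:

```python
# Software mimic of a content-addressed search: tag (return the indices
# of) every stored word holding a given relationship to the key.
import operator

def cam_search(store, key, relation=operator.eq):
    """Return indices of all words satisfying relation(word, key)."""
    return [i for i, word in enumerate(store) if relation(word, key)]

store = [614, 12, 614, 900]
print(cam_search(store, 614))               # equality matches
print(cam_search(store, 100, operator.gt))  # greater-than matches
```
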

"Associative Memories. An associative memory is a collection of storage cells that are accessed simultaneously on the basis of content rather than location. The ability to associate with circuit logic those cells with similar contents achieves a hardware implementation of an associatively linked or indexed file. Sufficient quantitative results have not yet been developed to establish conclusively the merits of the hardware implementation as against software associative systems. A comprehensive study of hardware associative memory systems given by Hanlon should be read by those interested in this growing technology." (Minker and Sable, 1967, p. 129).

6.115 "It is extremely unlikely that large fast associative stores will become practicable in the near future . . . We cannot expect associative stores to contribute to a solution of our problems other than in very small sizes to carry out special tasks, e.g., the page register addresses in Atlas." (Scarrott, 1965, pp. 137-138).

"The concept of the content-addressable memory has been a popular one for study in recent years, but relatively few real systems have used content-addressable memories successfully. This has been partly for economic reasons (the cost of early designs of content-addressable memories has been very high) and partly because it is a difficult problem to embed a content-addressable memory into a processing system to increase system effectiveness for a large class of problems." (Stone, 1968, p. 949).

"Considerations of cost make it impossible for the associative memory to contain many registers, and the number that has been adopted in current designs is eight. Unless the associative memory has very recently been cleared, it will be necessary to suppress an item of information in order to make room for a new one; the obvious thing is to suppress the item that has been there for the longest period of time, but other algorithms slightly cheaper to implement have also been proposed. It is claimed on the basis of simulations that eight associative registers enable the full procedure of three memory cycles to be short-circuited on 90% of occasions." (Wilkes, 1967, p. 4).
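Wilkes's policy of suppressing "the item that has been there for the longest period of time" is oldest-first (first-in, first-out) replacement. A minimal sketch of eight associative registers under that policy follows; the register count comes from the quotation, while the class name, tags, and values are illustrative assumptions:

```python
from collections import deque

class AssociativeRegisters:
    """Eight registers; when full, suppress the entry resident longest."""
    def __init__(self, size=8):
        self.size = size
        self.entries = deque()          # (tag, value) pairs, oldest first

    def lookup(self, tag):
        for t, v in self.entries:       # hardware compares all at once
            if t == tag:
                return v
        return None

    def insert(self, tag, value):
        if len(self.entries) == self.size:
            self.entries.popleft()      # suppress the oldest entry
        self.entries.append((tag, value))

regs = AssociativeRegisters()
for page in range(9):                   # the ninth insert evicts page 0
    regs.insert(page, f"frame-{page}")
print(regs.lookup(0), regs.lookup(8))   # -> None frame-8
```

The "slightly cheaper" alternatives Wilkes alludes to would replace only the `popleft` choice; the lookup path is unchanged.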

"In the past, associative or content addressable memories of any significant size have been impractical for widespread use. Relatively small associative memories have been built with various technologies, such as multiaperture ferrite cores, cryotrons, and various thin-film techniques. The logical flexibility of microelectronics now makes at least scratchpad-size associative memories practical." (Hudson, 1968, p. 42).

6.116 "By a slave memory I mean one which automatically accumulates to itself words that come from a slower main memory and keeps them available for subsequent use without it being necessary for the penalty of main memory access to be incurred again. Since the slave memory can only be a fraction of the size of the main memory, words cannot be preserved in it indefinitely, and there must be wired into the system an algorithm by which they are progressively overwritten. In favorable circumstances, however, a good proportion of the words will survive long enough to be used on subsequent occasions and a distinct gain of speed results. The actual gain depends on the statistics of the particular situation.

"Slave memories have recently come into prominence as a way of reducing instruction access time in an otherwise conventional computer. A small, very-high-speed memory of, say, 32 words, accumulates instructions as they are taken out of the main memory. Since instructions often occur in small loops a quite appreciable speeding up can be obtained. . . .

"A number of base registers could be provided and the fast core memory divided into sections, each serving as a slave to a separate program block in the main memory. Such a provision would, in principle, enable short programs belonging to a number of users to remain in the fast memory while some other user was active, being displaced only when the space they occupied was required for some other purpose." (Wilkes, 1965, pp. 270-271).
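Wilkes's slave memory is the ancestor of what is now called a cache. A minimal sketch of the accumulate-and-overwrite behaviour he describes, where the 32-word size comes from the quotation's example but the direct-mapped ("wired-in") overwrite rule and all names are illustrative assumptions:

```python
class SlaveMemory:
    """Small fast store that accumulates words fetched from main memory."""
    def __init__(self, main_memory, size=32):
        self.main = main_memory
        self.size = size
        self.slots = {}                  # slot -> (address, word)

    def fetch(self, addr):
        slot = addr % self.size          # wired-in placement algorithm
        if slot in self.slots and self.slots[slot][0] == addr:
            return self.slots[slot][1]   # hit: no main-memory penalty
        word = self.main[addr]           # miss: pay the main-memory access
        self.slots[slot] = (addr, word)  # progressively overwrite old words
        return word

main = {a: a * 10 for a in range(1000)}
slave = SlaveMemory(main)
slave.fetch(5)          # first access fills the slave memory
print(slave.fetch(5))   # second access is served from it, -> 50
```

The "distinct gain of speed" Wilkes predicts is exactly the hit path here: how often it is taken depends, as he says, on the statistics of the reference stream.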

6.117 "The B8500 scratchpads are implemented by magnetic thin film techniques developed and organized into linear-select memory arrays . . . To realize the high speed access requirement of 45 nanoseconds, the reading function is nondestructive, eliminating the need for a restoring write cycle when data are to be retained unchanged. . . .

"Insertion of new data into the local memories (writing) can be accomplished within the 100-nanosecond clock period of the computer module." (Gluck, 1965, p. 663).

"4 52-bit words can be requested from a memory module and received at a computer module in a total of 1.0 microsecond, or an average of 250 nanoseconds per word." (Gluck, 1965, p. 662).

"The rationale behind the inclusion of local scratchpad memories in the B8500 computer module encompasses . . . the need for buffering of four-fetches of instructions and data in advance of their use, i.e., lookahead. Also important are its uses as storage for intermediate results, as an economical implementation for registers and counters, and for the extension of the push-down stack." (Gluck, 1965, p. 663).

6.118 "A specific application for a CAM is encountered when assembling or compiling programs where it is common to refer to variables, locations and other items in terms of a symbol. The value or information associated with each symbol must be stored somewhere in memory and a table must be made to relate each symbol to its value. As an example, the symbol ABLKR may be assigned the value 5076. The computer may take this information and store the value 5076 at location 1000 for example. Then the first entry in the symbol table will relate the symbol ABLKR to the location 1000 where the value of ABLKR is stored. As more symbols are defined, this symbol table will grow in length." (Rux, 1967, p. 10).
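Rux's assembler example, a symbol naming the location where its value is stored, can be sketched with an ordinary dictionary standing in for the CAM; the symbol ABLKR, value 5076, and location 1000 come from the quotation, the function names are illustrative:

```python
# Assembler-style symbol table: the symbol maps to the storage location,
# and the location holds the value, as in Rux's ABLKR example.
memory = {}
symbol_table = {}

def define_symbol(symbol, value, location):
    memory[location] = value          # store the value at the location
    symbol_table[symbol] = location   # relate symbol -> storage location

def value_of(symbol):
    return memory[symbol_table[symbol]]

define_symbol("ABLKR", 5076, 1000)
print(value_of("ABLKR"))  # -> 5076
```

A hardware CAM would make the symbol-to-location lookup a single parallel interrogation instead of a table walk, which is the point of Rux's example.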

6.119 "Tied in with scratchpad No. 2 is a small 28-word associative memory (19 bits per word) whose use enhances the utilization of the scratchpad memory by providing content addressing as well as the conventional binary coded word addressing capability." (Gluck, 1965, p. 663).

6.120 "Each cell of the memory receives signals from a set of pattern lines and command lines in parallel, and the commands are executed simultaneously in each cell. One of the commands orders each cell to match its contents against the pattern lines. Each cell in which a match occurs sets its match flip-flop and also generates an output signal . . ." (Gaines and Lee, 1965, p. 72).

These investigators describe some of the differences between their proposed system and others, in part as follows: "The memory we describe here is a logical and practical outgrowth of the content addressable distributed logic memory of Lee and Paull. However, there are several significant differences: the inclusion of a 'match' flip-flop and a 'control' flip-flop in each cell of the memory, the addition of a 'mark' line to activate many cells simultaneously, and the control of the propagation of the marking signal. As a consequence of these, the memory has some novel capabilities, among which are the ability to simultaneously shift the contents of a large group of cells, thus opening or closing a gap in the memory, and the ability to simultaneously mark strings of interest in separate parts of the memory.

"By properly manipulating the cell states, simple programs for correcting errors involving missing or extraneous letters, multiple misspellings, etc., can be devised. Furthermore, by using the marking capabilities of the memory, error correction during retrieval can be accomplished on a selected subset of strings which may be located at widely separated parts of the cell memory." (Gaines and Lee, 1965, p. 75).

6.121 This Sylvania development involves the use of automatically preprocessed plastic sheets to affect the performance and logic behavior of a solenoid-transformer array.

"The interrogation, which may consist of a number of descriptors, each containing many bits of information, causes an appropriate group of solenoids to be driven . . . The solenoids interact simultaneously with all enclosed loops on all the data planes, resulting in a simultaneous voltage on the output of each data plane that is the cross-correlation between the driven input solenoids and each individual data plane. The output of each plane is connected to its own detector-driver which tests the output in comparison with all the other data plane outputs to find that output with the best correlation. Alternatively, the detector-driver can be set to test for some pre-determined threshold." (Pick and Brick, 1963, p. 245).

Brick and Pick (1964) describe "the application of the solenoid array principle to the problem of word recognition, code recognition, and (in a limited sense), associative memory. The proposed device, based entirely on existing experience with a large character recognition cross correlator, is capable of recognizing one of 24,000 individual English words up to 16 letters long. The simultaneous correlation and selection is made in less than 3 μsec. The selection can be made either on a perfect-match or a best-match basis." (Brick and Pick, 1964, p. 57).

"This form of semipermanent memory offers many advantages to computer and memory users. Among these are: a) ease of contents preparation, involving automated punching of inexpensive standardized cards; b) reliability, as a result of few electrical connections, loose mechanical tolerances, and passive components; c) low cost, since the cards are not magnetic and need only a continuous conducting path; and d) high speed, with estimated cycle time below 1 μsec." (Pick et al., 1964, p. 35).

6.122 "A number of associative memory stacks of 120 resistor cards have been constructed, each stack storing 7200 bits, with each card storing one word of 60 bits length." (Lewin et al., 1965, p. 432).

"This paper describes a fixed memory consisting of one or more stacks of paper or plastic cards, each of which contains an interconnected array of printed or silk-screened film resistors. Each card is compatible with conventional key punches, and information is inserted by the punching of a pattern of holes, each of which breaks an appropriate electrical connection. All punched cards in a stack are cheaply and reliably interconnected using a batch interconnection technique which resembles an injection molding process, using molten low-temperature solder. The circuit which results is a resistor matrix where the information stored is in the form of a connection pattern. The matrix may be operated as a content-addressable or associative memory, so that the entire array can be searched in parallel, and any word or words stored answering a given description can be retrieved in microseconds." (Lewin et al., 1965, p. 428).

6.123 "The study by Dugan was restricted to considering an existing computer environment and the Goodyear Associative Processor (GAP), a 2048-word associative memory with related logic and instructions. A benchmark problem was studied in which the data base exceeded the size of GAP and was stored on disc. The disc-stored data required transfers to the associative processor or the conventional core for further operations. The study concluded that the effectiveness of a small associative processor, such as GAP, for formatted file problems depended upon the interface of the associative processor with the computer system, the logic of the associative processor, and the load/unload characteristics of the memory associated with the problem. The authors showed that embedding the associative processor within the core memory provided the best system. It also provided a facility for performing arithmetic operations on data, which is ordinarily difficult for an associative processor. The study did not show any major advantages in using a system with an associative processor similar to GAP over one without an associative processor. . . .

"Gall utilized the same computer environment as Dugan but investigated a dictionary lookup phase of an automatic abstracting problem. He concludes that incorporating associative memories that do not have the capacity to store the entire data base requires excessive data transfer and cannot compete with conventional systems that employ a pseudo-random mapping of a word onto a storage location and, therefore, can locate a word by content. Randomized addressing is another software simulation of but one of the facilities provided by an associative processor, namely, the so-called 'exact-match' function." (Minker and Sable, 1967, pp. 130-131).

6.124 The Librascope Associative Parallel Processor was developed for use in the extraction of pattern properties and for automatic classification of patterns. It is noted in particular that "the parallel search function of associative memories requires that comparison logic be provided at each memory word cell. The APP, by moderate additions to this logic, allows the contents of many cells, selected on the basis of their initial content, to be modified simultaneously through a 'multiwrite' operation." (Fuller and Bird, 1965, p. 108).


Swanson comments as follows: "Fuller, Bird, and Worthy recently described two machines: an associative parallel processor programmed to abstract properties from visual and other patterns and classify the patterns from the properties; and an associative file processor for rapid parallel search of large complex data bases." (1967, p. 38).

6.125 "The ASP machine organization . . . [has as its dominant element] the context-addressed memory . . . [which] stores both ASP data and programs, and . . . provides the capability to identify, in parallel, unknown items (and link labels) by specifying the context of relations in which the unknowns appear . . . . [It] consists of a square array of identical storage cells which are interconnected both globally and locally. Each cell contains both memory and logic circuitry. The memory circuitry stores either an item, link label, or a relation, plus tag bits. The main purpose of the logic circuitry is to perform the comparison operations which are required to implement global searches of the array and local inter-cell communication." (Savitt et al., 1967, p. 95). See also note 5.47.

6.126 "In the earliest associative memories all bits of all the words of the memory were simultaneously compared with a search word; this is called word-parallel search. For such word-parallel search, the memory has to be of the nondestructive-readout (NDRO) type." (Chu, 1965, p. 600).

"Instead of word-parallel search, bit-parallel search has been developed because of its simpler design and because word-parallel search is of less importance in more complex searches. Bit-parallel search (or bit-sequential search) searches one corresponding bit of all words at one time. For a word of 64 bits, a maximum of 64 bit-parallel searches is made in succession; thus, bit-parallel search pays a price in speed . . . [but] the price is a limited one. In a bit-parallel-search associative memory, nondestructive readout property of memory elements is not necessarily required. This paper describes the organization of a destructive-readout associative memory which can be implemented by a special, very high-speed, magnetic-core memory using conventional technology." (Chu, 1965, p. 600).
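The bit-sequential search Chu describes can be sketched in software (Python; the word length, memory contents, and early-exit detail below are illustrative assumptions, not Chu's design). One bit position of every word is examined per step, so a word of N bits needs at most N steps:

```python
# Bit-sequential (bit-slice) search: compare one bit position of all
# words per step, narrowing the set of candidate matches each time.

WORD_BITS = 8   # Chu's example uses 64 bits; 8 keeps the sketch small
memory = [0b1011_0010, 0b1011_0111, 0b0100_0010, 0b1011_0111]

def bit_sequential_search(key: int) -> list[int]:
    # Start with every word as a candidate match.
    candidates = set(range(len(memory)))
    for bit in range(WORD_BITS):          # one bit slice per step
        mask = 1 << bit
        key_bit = key & mask
        candidates = {i for i in candidates
                      if (memory[i] & mask) == key_bit}
        if not candidates:
            break                          # no word can still match
    return sorted(candidates)

print(bit_sequential_search(0b1011_0111))  # [1, 3]
```

This is why the price in speed "is a limited one": the cost grows with the word length, not with the number of words, since all words are still examined in parallel at each bit slice.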

"Because parallel-search logic is implemented for only one long-word, implementation of several varieties of search logic is practical. In addition to a bit-comparison logic, other logical operations (such as NAND, NOR, AND, OR) can be implemented relatively simply and less expensively." (Chu, 1965, p. 600).

"For these operations [bit count and bit count and store], each bit of the memory short-word may represent an attribute (a property or a characteristic), and the count of attributes is a useful argument for searching closeness in attributes." (Chu, 1965, p. 605).

6.127 "Circulating memories offer an enormous savings in quantities of logic necessary for a CAM since one set of comparison logic can be used to compare the key register with many memory locations. The comparison logic need only monitor the memory's contents as it passes through the circulating system. . . .

"The principal disadvantage of a circulating CAM is speed. At least one circulation time of the memory is required to interrogate the entire memory. In the case of a magnetic drum system, this time would be measured in milliseconds which is much too slow for many applications. However, with the use of glass delay lines, information can be stored at very high rates, 20 MHz and higher, and short circulation times can store large amounts of data. For example, a 100-microsecond delay line at 20 MHz can store 2,000 bits of information. Thus 32 delay lines could store 2,000 words of 32 bits each and this memory could be searched in 100 μsec." (Rux, 1967, p. 2).
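Rux's delay-line figures can be checked with a few lines of arithmetic (a sketch; the variable names are ours). The capacity of a circulating line is simply its bit rate times its circulation time, and the search time for the whole store is one circulation:

```python
# Bits stored in a circulating delay line = bit rate x circulation time.
bit_rate_hz = 20e6          # 20 MHz
delay_s = 100e-6            # 100-microsecond glass delay line

bits_per_line = bit_rate_hz * delay_s
print(int(bits_per_line))                 # 2000 bits per line

lines = 32
word_bits = 32
words = (lines * int(bits_per_line)) // word_bits
print(words)                              # 2000 words of 32 bits

# Search time is one circulation, regardless of how many lines run
# in parallel: 100 microseconds for the entire store.
```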

6.128 "A goal in designing and interfacing the associative mapping device into the System/360, Model 40, was to introduce no time degradation in the critical main memory address path. We have accomplished this goal by designing the hardware to perform this address translation function in 220 ns. The interrogate time through the associative memory is approximately 50 ns and the remaining time is spent in wire delay and conventional logic, such as the encode circuit which was designed using the 30-ns IBM SLT family. . . .

"The technology used to implement the associative mapping device is the IBM SLT technology. Four special circuits were designed for the associative memory array. They are the associative memory cell used for storing one bit of information, the bit driver, the word driver, and the common sense amplifier used for sensing a mismatch signal in the word direction or a binary 'one' signal in the bit direction." (Lindquist et al., 1966, p. 1777).

"The mapping device which provides the dynamic storage allocation function in the time-shared system is a 64-word, 16-bit-per-word, associative memory." (Lindquist et al., 1966, p. 1776).
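A rough software model of such an associative mapping device may clarify its role. In the sketch below (Python; we assume, purely for illustration, that each of the 64 words pairs a virtual page number with a physical page number and uses round-robin replacement; Lindquist et al. do not give the field layout), the hardware's single-cycle parallel compare is modeled by a scan:

```python
# Illustrative model of a 64-word associative mapping (address
# translation) memory: entries are matched by content, not by index.

ENTRIES = 64

class AssociativeMap:
    def __init__(self):
        self.words = [None] * ENTRIES   # each word: (virtual_page, physical_page)
        self.next = 0

    def load(self, virtual_page: int, physical_page: int) -> None:
        self.words[self.next] = (virtual_page, physical_page)
        self.next = (self.next + 1) % ENTRIES   # simple round-robin replacement

    def translate(self, virtual_page: int):
        # Hardware compares all 64 words at once (~50 ns interrogate time);
        # this serial scan stands in for that parallel compare.
        for entry in self.words:
            if entry is not None and entry[0] == virtual_page:
                return entry[1]
        return None   # miss: fall back to the slower allocation tables

amap = AssociativeMap()
amap.load(0x12, 0x3)
print(amap.translate(0x12))   # 3
print(amap.translate(0x99))   # None
```

The point of the associative organization is exactly that `translate` costs one interrogation regardless of how many entries are loaded.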

". . . The Univac 128-word by 36 bit-per-word, 600-nsec scratch pad memory." (Pugh et al., 1967, p. 169).

"The memory . . . utilizes a plated-wire (Rod) memory device operating in a 512-word, 36 bit-per-word memory system. The DRO mode is employed and operation at a 100-nanosecond read-write cycle time is achieved." (Kaufman et al., 1966, p. 293).

"The memory is word-organized with a capacity of 64 words each 24 bits long. Cycle time is approximately 250 nanoseconds. Such memories are suitable for use as 'scratchpads' operating within the central processor or input-output control systems of a computer." (Bialer et al., 1965, p. 109).

"All of the memory circuits (approximately 180 chips plus 1,536 bits of thin-film magnetic storage and the thin-film interconnection wiring) are on a glass substrate measuring 3 by 4-1/2 by 1/10 inches. The circuits occupy about half the substrate area. The extremely small physical size of the memory, the shorter signal paths, the elimination of redundant connections, which packaged circuits would have required, all contribute to an improvement in system speed. . . .

"The 64-word memory has a cycle time of about 250 nanoseconds. Plans are to build a 256-word memory that is equally fast and expectations are that eventually 50-nanosecond memories can be built with similar design and fabrication methods [i.e., ultrasonic face-down bonding for interconnection of integrated circuit chips with thin-films]." (Bialer et al., 1965, pp. 102-103).

6.129 "The sonic film memory represents a novel approach to the storage of digital information. Thin magnetic films and scanning strain waves are combined to realize a memory in which information is stored serially. The remanent property of magnetic films is used for nonvolatile storage. The effect of strain waves on magnetic films is used to obtain serial accessing. This effect is also used to derive a nondestructive read signal for interrogation." (Weinstein et al., 1966, p. 333).

6.130 "The new [tunnel diode] memory system contains 64 words of 48 bits each, and test results from a partially-populated cross-sectional model indicate a complete READ/RESTORE or a CLEAR/WRITE cycle time of less than 25 nanoseconds." (Crawford et al., 1965, p. 627).

6.131 "The basic cell employs a thick magnetic film as the high-speed sensing element to sense the information which is stored as a pattern of magnets on a card. Since the magnet card is separate from the array, the latter can be permanently laminated or sealed and the information can be changed very simply and reliably. The advantages of this system stem from a combination of several important features, namely card changeability, high speed, wide mechanical and electrical tolerances, and a linear drive-sense relationship which results in a wide range of operating levels.

"Circuit costs can be minimized by using low-level drivers, giving an additional increase in speed with only a minor increase in sense circuitry . . . For a memory containing four arrays of 256 words and 288 bits per word, an access and cycle time of 19 and 45 ns respectively was achieved . . ." (Matick et al., 1966, p. 341).

6.132 These investigators suggest further that "the number of bits of storage can be increased in several ways. A modular approach can be used by connecting 64 X 8 memories in parallel or the memory boards can be redesigned to accept the larger number of bits. The modular approach is particularly applicable to the distribution of small memories of various sizes throughout a large computer. It is possible to construct a 64 X 32 memory using either of the above approaches with a cycle time of approximately 20 nanoseconds." (Catt et al., 1966, p. 330).

6.133 "It has been demonstrated that 1000-bit NiFe film DRO memories with cycle times of 60 nsec and access times of about 30 nsec can be built using existing components. Experience with this model indicates that the design can be extended to allow a significant increase of capacity in a memory having this same cycle time and access time; however, it is felt that to achieve a marked increase in speed will require radical departures from the conventional circuit and array techniques that were employed in the model described here." (Anacker et al., 1966, p. 50).

"IBM has developed a bipolar monolithic IC buffer memory for use on the 360/85 that is faster than any they have previously introduced. Access time to the entire contents of the 2K by 72-bit memory is 40 nsec. The buffer memory is constructed of half-inch square building blocks composed of two silicon chips and their leads and insulation. Each of the chips measures less than an eighth of a square inch and contains 664 components (transistors, diodes, and resistors). Each chip provides 64 distinct but interconnected storage cells. The components involved are so minute that 53,000 can fit into a one square inch area.

"The significance of the microminiaturization is of course little related to 'how many of what fit where.' What IBM gains from this construction is a circuit speed, demonstrated on some experimental chips, that is as fast as 750 picoseconds (trillionths of a second).

"The speed of the buffer memory (which at one time was to be called a 'cache', but that term has apparently been dropped) is not down to the 750-psec figure, but a 7 nsec/chip read and a 12 nsec/chip write speed isn't bad." (Datamation 15, No. 4, 193 (Apr. 1969).)

6.134 "Electronic Memories, Inc., demonstrated its NANOMEMORY 650 . . . capacity of 16,384 words of up to 84 bits, and an access time of 300 nsec." (Commun. ACM 9, No. 6, 468 (June 1966).)

6.135 "The ICM-40, a one-μsec cycle time, 500-nsec access time, core memory, available with capacities from 4K X 6 bits to 16K X 84 bits has been announced by Computer Control Company, Inc." (Commun. ACM 9, 316 (1966).)

6.136 "International Business Machines Corp. has developed an experimental thin-film computer memory that has a 120-nanosecond cycle time, a 589,824-bit capacity and fits in a frame 68 by 42 by 7 inches, including the electronic circuits for driving and sensing." (Electronics 39, No. 3, 41 (1966).)

6.137 "The memory has a capacity of 8192 words, 72 bits per word, and has a cycle time of 110 nanoseconds and an access time of 67 nanoseconds. The storage devices are miniature ferrite cores, 0.0075 by 0.0123 by 0.0029 inches, and are operated in a two-core-per-bit destructive readout mode. A planar array geometry with cores resting on a single ground plane is used to control drive line parameters. Device switching speed and bit line recovery are treated as special problems." (Werner et al., 1967, abstract, p. 153).

6.138 "It seems a certainty that plated wire memories will become a very important member in the hierarchy of storage systems to be used in the computers of tomorrow." (McCallister and Chong, 1966, p. 313).

"Both UNIVAC 9200/9300 Systems utilize a new plated-wire memory for internal storage featuring a non-destructive read-out mode and monolithic circuitry." (Commun. ACM 9, 650 (1966).)

6.139 "Capacity of this memory is 4096 68-bit words (278,528 bits, to be exact) and it operates with a cycle time of 200 nanoseconds and an access time of 160 nanoseconds. It is a word-organized, random-access memory. The memory element is composed of a pair of planar thin films coupled together and read out destructively." (Meddaugh and Pearson, 1966, p. 281).

6.140 "The operation of this half-microsecond-cycle memory module represents a significant achievement in a program of magnetic thin-film development for computer storage which was begun at these laboratories in 1955. Large numbers of substrates were processed and tested, and memory plane assembly and test are now routine operations.

"Memory frames which contain 20 substrates (15,360 bits) can be assembled without great difficulty . . .

"A shorter memory cycle can be made possible by reducing the total sense delay, and by the elimination of the bit recover pulse. The pulse transformers will be replaced by active solid-state devices. A reduction of 150 nsec (50 nsec from a shorter sense delay and 100 nsec from elimination of the bit recover pulse) makes a cycle time of 350 nsec, 3-Mc operation, possible." (Bittman, 1964, p. 105).

"Fabrication, assembly, and operation of these half-microsecond memories has proven that large numbers of reliable film substrates are producible and that the completed memories can compete in both speed and price with the high-speed 2-1/2 D-type core memories. The future for planar films looks very bright; both larger and faster memories are in the design stage. These memories will combine the economic advantages of batch fabrication with the fast switching properties of thin-films." (Jones and Bittmann, 1967, p. 352).

6.141 "Extensive memory research aimed at implementing the inherent 1-ns switching capabilities of thin magnetic films within a system environment has resulted in a cross-sectional 147,000-bit capacity film memory model with a nondestructive-READ cycle time of 20 ns, an access time of 30 ns, and a WRITE-READ time interval of 65 ns. The shortest time interval between addressing of two different word lines is 20 ns." (Seitzer, 1967, p. 172).

6.142 "A single layer composite magnetic film is operated in a rotational destructive-read-out mode with two access wires. Each bit is composed of two 2 X 6 mil intersections of the word and digit lines with a density of 12,500 bits/in². Magnetic film structures which provide flux closure in the hard, easy, or both directions were considered but rejected when adequate margins were obtained with the single layer. Although the open structure has fabrication advantages, the closed structures remain of interest for future work." (Raffel et al., 1968, pp. 259-260).

"The access time of the memory from change of address to information output from the buffer flip-flops is about 450 nsec. The largest contribution to this delay is the transient on the sense-line due to group-switch voltage transitions. The circuit-limited cycle time for read-write or clear-write is 600 nsec. Recovery from the digit-pulse transient limits the total cycle time to 1 μsec with the digit transient overlapping the group-switch transient." (Raffel et al., 1968, p. 261).

6.143 "The chains are made from copper strips which have been plated with a NiFe film and are used to carry word current. The bit/sense signals are carried in wires which pass through the holes in the 'links' of the chain. The memory element thus formed will operate in a rotational switching mode and can be used for a word-organized memory." (Geldermans et al., 1967, abstract, p. 291).

6.144 "It has been shown that high-speed chain memories can be built in very high-density arrays with minimum electromagnetic interactions. The bit/sense wires can be treated as homogeneous transmission lines with relatively high characteristic impedance (100 Ω) and good signal-to-noise ratios. The word lines are high-impedance strip lines whose inductance is mainly determined by the nonlinear magnetic film. This makes evaluation more difficult, but implies favorable properties for the design of very long lines.

"Based on the analysis of recently plated chains with smaller dimensions and better films, the characteristics of various possible chain memories have been extrapolated. Straightforward design philosophy, using transistor selection, can be applied for a 0.3 X 10^6-bit NDRO memory, a 10^6-bit, 100-nsec DRO memory, and a 38 X 10^6-bit, 500-nsec DRO memory.

"These performance predictions reflect the merits of a film device with complete flux closure and high-quality oriented films as exhibited in the chain device; they appear quite attractive for their size, speed, and circuitry requirements. Chains imply a simple semi-batch process and combine fast rotational switching properties of oriented films with the larger signal capability of cores." (Abbas et al., 1967, p. 311).

6.145 Further, "the memory under development has a capacity of 10^8 bits. This capacity is achieved by stacking 10^7-bit modules into one unit . . . Each module has its own set of driving circuits and sense amplifiers. This arrangement leads to a fast, random-access memory, readily realizable mechanically; it is justified from the viewpoint of modularity and cost because the electronic circuits are shared by a large number of bits. All modules share one set of auxiliary circuits, which include the address decoders, timing circuits, information registers, and power supply . . .

"The plated wire used is a nondestructive read-out (NDRO) element with equal word currents for reading and writing. This property makes it unnecessary to have rewrite circuitry for each stored bit." (Chong et al., 1967, p. 363).

6.146 "Much attention is currently focused on the development of block-oriented random-access memories. One prospect is the magnetosonic delay-line memory which employs magnetic storage and block-access by semiconductor electronics (to cause the propagation of a sonic wave in the selected line). Nondestructive read-out is derived on the digit lines in sequence by the propagating sonic wave, and write-in is carried out by the coincidence of digit currents and the propagating wave. Another prospect is the opto-electric read-only memory, where the stored information on a high resolution photographic plate is block-selected by optical means, employing light-beam deflection or an array of light-emitting elements. The optical readout (of all the bits in the block in parallel) is converted to electric signal by an array of photosensitive elements. Holographic techniques are proposed for the implementation of the high-density photographic processing. The practicality of these block-oriented systems is too early to be realistically appraised." (Lo, 1968, p. 1465).

6.147 For example: "A new mass core memory which offers data access time of 1.5 microseconds and capacity of up to 20 million bits has been placed on the market as a standard product by Ampex Corporation. The RM, which is suitable for use with most large scale computers and data processing systems now in production or use, will be available for delivery early in 1968." (Data Proc. Mag. 10, No. 1, 58 (Jan. 1968).)

"A randomly-addressable, low cost magnetic mass core memory system with a storage capacity of 0.5 megabytes at a cost of 1 to 2 cents per bit is now available from Ferroxcube's Systems Division. The new memory offers the optimal compromise between cost, bit transfer rate and capacity. It has a full cycle time of 2.5 μs and is capable of operation in ambients to 105° F. The memory system can be organized in word capacities of from 9 to 144 bits (in multiples of 9) per 524-K byte module. Any number of modules can be connected for series or parallel operation to build systems of almost infinite storage capacities. A total of 4.7 million cores are used in the unique 2-1/2 D selection organization, which incorporates an extra wire for sensing the interrogated bits. The total package with all electronics and power supplies measures 72" X 25" X 28"." (Computer Design 7, No. 6, 70 (June 1968).)

"A new, duplex version of the Potter RAM, a magnetic tape random access memory. The new unit has the same performance characteristics as its predecessor, 50.2 million bits of information packed at 1,000 bpi, and an average access speed of less than 90 milliseconds." (Commun. ACM 8, 343-344 (1965).)

"Available in storage capacities up to 32 million bits (1,024,000 words of 32 bits), a new magnetic core memory has a cycle time of less than 4 microseconds. Dependent upon quantity and capacity, the price will be as low as 1-1/2 cents per bit. The Model CM-300 is said to offer true random access at speeds and capacities not previously available in static storage devices. As such, this memory is expected to forge a place in the hierarchy of bulk storage peripherals permitting greater programming flexibility and increased computer throughput. A 2-wire, 2-1/2D magnetics organization and field-proven circuitry are utilized to assure high reliability and wide operating margins. All circuits in the system have been subjected to 'verifiable worst case' design. Modular design permits exceptional flexibility in selecting memory interface, ease of maintenance and low logistic support cost. Lockheed Electronics Company, Los Angeles, Cal." (Modern Data Systems 1, No. 2, 74 (Apr. 1968).)

"The LIBRAFILE 4800 is one of a series of large capacity, high-speed head-per-track disc file memories developed by the Systems Division of Librascope Group. It has a capacity in excess of 400 million data bits, with an average access time of 35 milliseconds. Additional memory modules may be added to increase storage and the head-per-track design permits bit parallel data transfers to meet interface and speed requirements." (Computer Design 7, No. 6, 22 (June 1968).)

"Laboratory developments completed prior to the initiation of the program described here demonstrated that tape speeds well in excess of 1000 inches per second (ips) and packing densities of one million bits per square inch, with high data reliability, were feasible. Using these developments as a basis, a large memory system could therefore be designed. A prototype 'small' system with a total storage capacity of 10^11 bits has been built and tested. The following is a description of such a system and its major components.

TBM system description: "The TBM (terabit memory, i.e., 10^12-bit memory) random access memory uses magnetic tape as a basic storage medium. Random access is provided by using tape search speeds of 1000 ips (compared to approximately 300 ips used on conventional transports) and by using packing densities of 700,000 bits per square inch (compared to approximately 14,000 bits/in.² for standard computer tape transports). Data recording is done in the transverse mode (to the direction of tape motion) using rotating heads, the technique used for video recording. A redundant recording scheme permits the achievement of error rates of two errors in 10^10 information bits. The salient features of the tape transport, permitting this high search speed, and of the recording mode and associated data channels, permitting this high packing density, will be considered in detail following the system description." (Damron et al., 1968, pp. 1381-1382).

"The NCR 353-5 Card Random Access Memory (CRAM) File provides high-speed random or sequential processing of data for NCR 315 computer systems. The data recording is done on magnetic cards 3.65" x 14", each containing 144 recording tracks. Each track has a storage capacity of 1,500 6-bit alphanumeric characters. The removable cartridge houses 384 magnetic cards, providing a total storage capacity of 82,944,000 6-bit alphanumeric characters.

"Any card from a cartridge can be dropped to read/write position within 125 milliseconds, providing throughput of 5 cards per second. Data is transferred to the processor at the rate of 50,000 alphanumeric characters per second.

"Up to sixteen CRAM Handlers may be connected to the processor. This provides an online file capacity of over 1,327,104,000 alphanumeric characters (over 1,990,656,000 digits)." (Management Report: MASS RANDOM ACCESS FILES from NCR, n.d., p. 1).
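The CRAM capacity figures quoted above are internally consistent, as a short calculation shows (the 1.5-digits-per-character factor is our inference from the quoted totals, consistent with two 4-bit digits packed per 6-bit-and-a-half pair on the NCR 315's 12-bit "slab"):

```python
# Verifying the NCR CRAM capacity arithmetic from the quoted figures.
chars_per_track = 1_500
tracks_per_card = 144
cards_per_cartridge = 384
handlers = 16

per_card = chars_per_track * tracks_per_card        # 216,000 characters
per_cartridge = per_card * cards_per_cartridge
print(per_cartridge)              # 82944000 characters per cartridge

online = per_cartridge * handlers
print(online)                     # 1327104000 characters on line
print(int(online * 1.5))          # 1990656000 digits (1 char = 1.5 digits)
```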

6.147a "Enthusiasts of Bell Telephone Labs' recently-patented single-wall domain magnetic memory claim it may some day obsolete the disk. By controlling the magnetic domains, millions of bits can be stored in a diameter less than a micron. The action can actually be seen through a microscope, according to one source.

"Developed by William Shockley, Andrew Bobeck, and H. E. Scoville, the memory works on the spin moments between electrons and the nucleus in a magnetoplumbite material containing rare earth orthoferrites." (Data Proc. Mag. 11, No. 8, 19 (Aug. 1969).)

6.148 "Magnetic recording bit and track densities, each an order of magnitude higher than those now used, have been demonstrated in the laboratory. Twenty thousand bits per inch and one thousand tracks per inch have been reported. The practical application of these experiments requires considerable development of magnetic heads, recording media, and track location techniques." (Bonn, 1966, p. 1868).

"Ferroxcube Corporation has announced the development of a monocrystalline ferrite material for use in magnetic recording heads. The new material, with its increased wear resistant characteristics, is expected to find wide use in video and high density tape-recording applications where recording head wear is a significant problem. Recording heads made of the new material are expected to increase the head life in these applications by a factor of 10 times or more, thereby reducing the service costs of users.

"The practical process for growing single-crystal manganese-zinc ferrite has been developed by using a technique similar to that used in producing synthetic gem stones. This material is not the conventionally designated 'monocrystalline' ferrite, which, though composed of large-size crystals, is actually polycrystalline.

"The single-crystal ferrite, as the name implies, is a single, completely homogeneous crystal, with no grain boundaries to permit crystal pullout which is frequently responsible for the familiar crumbling or wear of the contact face and gap edges. The superior mechanical properties of this new ferrite are further enhanced by proprietary glass-bonding or metal-bonding processes. Heads fabricated from single-crystal ferrite and bonded by this means present a monolithic contact surface of extremely high density and very low porosity in which the magnetic gap can be controlled to within ± 5 microinches or less and original sharp edges and machined profiles preserved intact through thousands of hours of operation.

"Characteristics include an initial permeability (μo) of 2250 ± 250 at 100 kHz, 350 ± 50 at 5 MHz." (Comp. Design 8, No. 1, 30 (Jan. 1969)).

6.149 "There are severe problems in locating and tracking information stored at very high density. Servo-techniques (also being pursued for higher track density in magnetic recording) based upon track-seeking principles are essential for beam scanning approaches." (Hoagland, 1967, p. 259).

6.150 "Adapting videotape recording methods to computer systems may increase the capacity of bulk, random access computer memories a thousandfold. According to Dr. William A. Gross, Ampex vice president, an experimental system now in the lab stores 50 billion bits on single 10- by 1/2-inch reels of magnetic tape, or about 1000 times the capacity of reels currently in use. Information can be accessed and transferred in less than 10 seconds.

"A finished memory based on these developments would enable a user to place all of his digital records on line, ready for random access. This would eliminate shelf storage and delays when a disk pack must be located and placed in the system.

"This videotape recorder increases recording density by using four recording and playback heads mounted on a small metal disk that rotates perpendicularly across the moving tape. This rotary head increases tape-to-head speed by six times that of the fastest fixed-head device and enables the recording of tv pictures or, when applied to coded data, increases the density." (Data Proc. Mag. 11, No. 2, 14 (Feb. 1969).)

6.151 "A light sensitive recording process called Photocharge uses a photoelectric potential of material in the film to produce images. It was invented by Dr. Joseph Gaynor and Gordon Sewell [G. E. Advanced Technology Labs.]. Its main advantage lies in the fact that no external electrical charges or fields are required, as in conventional electrophotographic processes.

"Light and heat alone produce a completely developed picture which consists of minute deformations on the surface of the film.

"Recorded images can be erased by reheating the film. It is then ready to receive and record another picture. . . .

"Since the material is limited in the range of light to which it is sensitive and requires brighter light than conventional film, Dr. Gaynor anticipates that early applications probably will be directed toward memories for graphic and pictorial information and high resolution, inexpensive image reproduction." (Bus. Automation 12, No. 7, 50 (1965).)

6.152 "A technological development prerequisite to the widespread exploitation of large, multiaccess data-base systems is the large, low-cost, fast random-access mass store. Kuehler & Kerby describe an IBM 'Photo-Digital' storage device, which is a step in this direction. The article gives no cost information, which makes evaluation difficult, and the long random-access delay is a handicap; the other properties of the device are nevertheless rather exciting. Briefly, a modest configuration can store 10^12 bits of data, accepting them at a rate of about 0.2 megabits per second and sequentially reading them at a rate of about 1 megabit per second. Random access requires a maximum 3-second delay for acquisition of the proper 'cartridge.' A cartridge is roughly equivalent in capacity to a reel of magnetic tape; the reading rate after acquisition is also roughly equivalent to that of a high-performance tape drive. It would require nearly two months of continuous writing at the maximum rate to fill a 10^12-bit configuration." (Mills, 1967, p. 226).
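Mills's "nearly two months" estimate checks out directly from the figures he quotes:

```python
# Time to fill a 10^12-bit store writing continuously at ~0.2 Mbit/s.
capacity_bits = 1e12
write_rate_bps = 0.2e6      # 0.2 megabits per second

seconds = capacity_bits / write_rate_bps
days = seconds / 86_400
print(round(days, 1))       # about 57.9 days, i.e., nearly two months
```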

Recent IBM papers describe error detection and correction and data recovery techniques for this system. For example: "In a high-density photo-digital storage system, contamination and other defects can easily obliterate a group of data bits. To operate successfully in spite of this problem in the IBM Photo-Digital Storage System . . . a powerful error-correction code is used . . .

"The problem of effectively implementing a code of this complexity has been solved by a number of innovations. Most important is the use of hardware for encoding, calculation of the power sums, and error detection, while using a control processor, on a time-sharing basis, for error correction. Another important feature is that single-character error correction is tried first; if this is not sufficient, further correction activities can be tried. Other important features are use of a 'trial and recheck' method of error correction, selection of a symmetric code polynomial, use of a table of logarithms for multiplication and division in a Galois field of 64 elements, et cetera." (Oldham et al., 1968, p. 422).
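The logarithm-table trick mentioned by Oldham et al. is easy to sketch. The illustration below (Python; it assumes the primitive polynomial x^6 + x + 1, which the article does not specify, and omits the Reed-Solomon decoder proper) shows how multiplication and division in GF(64) reduce to adding and subtracting logarithms modulo 63:

```python
# Log/antilog tables for GF(2^6): every nonzero element is a power of a
# primitive element alpha, so a*b = alpha^(log a + log b mod 63).

POLY = 0b1000011   # x^6 + x + 1, a primitive polynomial over GF(2)

exp_table = [0] * 63   # exp_table[i] = alpha^i
log_table = [0] * 64   # log_table[alpha^i] = i
x = 1
for i in range(63):
    exp_table[i] = x
    log_table[x] = i
    x <<= 1                 # multiply by alpha (i.e., by x)
    if x & 0b1000000:
        x ^= POLY           # reduce modulo the field polynomial

def gf_mul(a: int, b: int) -> int:
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % 63]

def gf_div(a: int, b: int) -> int:
    if b == 0:
        raise ZeroDivisionError("division by zero in GF(64)")
    if a == 0:
        return 0
    return exp_table[(log_table[a] - log_table[b]) % 63]

print(gf_mul(exp_table[1], exp_table[62]))   # alpha * alpha^62 = alpha^63 = 1
print(gf_div(gf_mul(5, 7), 7))               # 5
```

A small table lookup plus an addition replaces polynomial multiplication and reduction, which is precisely what made time-shared correction on a control processor practical.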

"The chance for error in a line of data read by the world's largest computer storage system is only 0.000075 percent. This statistic is based on test data in 'Error Detection and Correction in a Photo-Digital Memory System,' an article in the November issue of the 'IBM Journal of Research and Development.' The journal article describes the trillion-bit system's error-correction techniques and shows how the system can correct errors rapidly enough for real-time operation. The Photo-Digital Storage System was built by the International Business Machines Corporation under a special contract for the U.S. Atomic Energy Commission. The AEC contract specified the allowable error rate as no more than one 300-bit line with uncorrected errors in 2,700,000 lines read. The system has demonstrated a much better average of one line with uncorrected errors in 13,500,000 lines read. Constructed by the IBM Systems Development Laboratory, San Jose, Calif., the system functions in a network of computers at the Lawrence Radiation Laboratory in Livermore, Calif. A second Photo-Digital System, using the same error-correction techniques and having approximately one-third the storage capacity of the first, has been installed at the Lawrence Radiation Laboratory in Berkeley, Calif." (BEMA News Bull., Dec. 16, 1968, p. 6).

"The theory of error detection and correction codes is finding its way into practice. With these codes, machines can reconstruct data that have been destroyed. The first practical implementation of a powerful and complex code was achieved in the trillion-bit IBM Photo-Digital Storage System, which uses a Reed-Solomon code for correcting up to 5 error characters in a line of 50 data characters.
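The quoted parameters are self-consistent: a Reed-Solomon code correcting t symbol errors needs 2t check symbols, and 50 six-bit GF(64) characters make exactly the 300-bit line of the AEC specification quoted above. In outline:

```python
# Back-of-envelope on the quoted Reed-Solomon parameters: t = 5 correctable
# symbol errors in a 50-character line of 6-bit GF(64) symbols.

n, t = 50, 5
check = 2 * t              # check symbols required for t-error correction
data = n - check           # data symbols remaining per line
redundancy = check / n     # fraction of the line spent on protection
line_bits = n * 6          # 6 bits per GF(64) symbol

print(data, redundancy, line_bits)   # 40 data symbols, 20% overhead, 300 bits/line
```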

"With a correction facility able to correct bursts of errors in a line, a recorded bit need not be larger than the average flaws in the recording medium. In this system flaws cannot be eliminated by pretesting the medium, because the recording is permanent on silver halide photographic film. However, since the film has high resolution, there is a tradeoff between the cost of the correction facility and the cost savings of high-density recording." (Griffith, 1969, p. 456).

6.153 "The principle of the UNICON Computer Mass Memory is derived from the UNICON Coherent Light Data Processing System to create and detect (record and reproduce) information elements in two dimensions by means of signal-modulated coherent laser radiation. . . .

"Continuous readout of the UNICON system utilizes a lightguide surrounding the imaging circle of the rotating objective, carrying the laser radiation transmitted through the unidensity film to a central photomultiplier. Hence, any coherently illuminated information bit is photoelectrically detected within a few nanoseconds. . . .

"Width of the information-carrying area of the 16 mm Unidensity film is 8 mm. Information packing density is 6.45 x 10^8 bits per square inch. Rate of information processing is in the megabits-per-second range. Total capacity of one UNICON memory system is 88 x 10^9 bits for a 16 mm Unidensity film reel of 100 feet." (Becker, 1966, pp. 711-712).

"The Laser Recording Unit designed and developed by Precision Instrument Company provides a means for reliably and permanently recording and reproducing digital data.

"The Laser Recording Unit uses a new type of permanent recording process which employs a laser to vaporize minute holes in the metallic surface of the recording medium. In this manner digital information is recorded in parallel data tracks along the length of a recording medium strip. The tracks are spaced on the order of five to ten microns (micrometers), center-to-center; each track is composed of bit cells three to five microns in size. For recording or reproducing sequential tracks of digital data, the maximum transfer rate of the Laser Recording Unit is approximately four million bits per second, with an average unrecoverable error rate of one in 10^8 to 10^9 bits, depending on the data density selected.
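The quoted track pitch and bit-cell sizes imply an areal density on the order of 10^7 bits per square inch. A rough calculation (illustrative only; it ignores guard bands, formatting overhead, and any geometry the actual unit imposed):

```python
# Areal density implied by the quoted geometry: tracks 5-10 microns apart,
# bit cells 3-5 microns long along the track.

UM_PER_INCH = 25_400   # microns per inch

def bits_per_sq_inch(track_pitch_um, cell_um):
    """Tracks per inch times bits per inch of track."""
    return (UM_PER_INCH / track_pitch_um) * (UM_PER_INCH / cell_um)

dense = bits_per_sq_inch(5, 3)     # tightest quoted geometry, ~4.3e7 bits/in^2
sparse = bits_per_sq_inch(10, 5)   # loosest quoted geometry, ~1.3e7 bits/in^2
print(f"{sparse:.2e} to {dense:.2e} bits per square inch")
```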

"The Laser Recording Unit includes a programmable Recorder Control Subsystem which can be designed to provide a hardware and software interface compatible with a specified computer system.

"The major benefits offered by the Laser Recording Unit as a mass digital-data storage unit are summarized below:

(1) Permanent Storage: Data do not degrade over a period of time of the order of years.

(2) Compact Storage: Data are stored at a density approximately 250 times greater than that of digital magnetic tape.

(3) Unlimited Readout: Data can be repeatedly read out for long periods of time without reduction in quality or damage to the record.

(4) Recording Verification: Essentially error-free data records result from the simultaneous read-while-write verification capability that is unique to the laser hole-vaporization method of permanent recording.

(5) Low Error Rates: The average unrecoverable error rate is approximately one in 10^8 to 10^9 bits.

(6) Economical Data Storage: Recording of large quantities of data on the Laser Recording Unit and permanent storage of the data in PI Record Strips significantly decreases the cost-per-bit of recording and storage imposed by existing methods." (McFarland and Hashiguchi, 1968, p. 1369).

See also notes 6.18, 6.19, 6.20.

6.154 "A Photo-Optical Random Access Mass Memory (FM 390) with multibillion bit capacity, announced by Foto-Mem, Inc., can be used to replace or supplement magnetic tape, disk or drum units. Used separately or combined into one system, the FM 390 uses a Photo-Data Card (PDC™) for data storage. Advantages over magnetic storage are in cost and space saving. A typical Foto-Data Cell™ with 100 PDC's stores 3 billion bits of information, allowing a typical installation to hold several trillion bits of data on line." (Data Proc. Mag. 11, No. 8, 73 (Aug. 1969)).

6.155 "Thin dielectric films which exhibit sustained electronic bombardment induced conductivity (SEBIC) appear to satisfy the control layer requirements for high sensitivity and storage.

"Thin films of cadmium sulfide which exhibit SEBIC were first developed by the Hughes Research Laboratories. . . .

"SEBIC layers can store information in the form of two dimensional conductivity modulations with almost photographic resolution. In addition, they can be excited with brief pulses of high energy electron beams, and they are reusable because they can be erased almost instantaneously. In a sense, they may be thought of as a form of real time photographic film; the principal remaining problem concerns the readout of information." (Lehrer and Ketchpel, 1966, p. 533).

6.156 "This program is devoted to the preparation and investigation of novel kinds of data storage elements of about micron size, and high-density regular arrays thereof, to be addressed with an electron beam of diameter comparable to the element size. Such storage mosaics are formed by developing and adapting appropriate thin-film deposition and micromachining techniques. The latter is based on the use of an electron beam probe to expose an electron-sensitive resist. A storage capacity of about 10^8 bits is believed to be realizable and accessible with an electron beam, without mechanical movement of the storage surface. Currently we are investigating two kinds of elements. The first one is an electrically isolated microcapacitor . . . at the bottom of a hole in a metal-dielectric-metal film sandwich. The other consists of an isolated washer or ring of metal embedded in the dielectric of a multilayer metal and dielectric film sandwich. Ultimately, elements 1/4 μ in diameter or smaller, spaced approximately 1/2 μ center to center, are expected to be feasible, representing ultimate packing densities up to 4 x 10^8/cm^2." (Rogers and Kelly, 1967, p. 1).
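The quoted ultimate packing density follows directly from the half-micron center-to-center spacing: one element per (0.5 μm)^2 cell of the mosaic.

```python
# The Rogers and Kelly figure of 4 x 10^8 elements/cm^2 is just the
# square-lattice count at 0.5-micron center-to-center spacing.

UM_PER_CM = 10_000              # microns per centimeter
spacing_um = 0.5                # quoted center-to-center spacing

elements_per_cm2 = (UM_PER_CM / spacing_um) ** 2
print(f"{elements_per_cm2:.1e} elements per cm^2")   # 4.0e+08
```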

6.157 "The selection of materials and thicknesses for the recording media and substrates is based on obtaining maximum sensitivity to a minimal power density in the recording spot, while maintaining adequate contrast for the readout means selected and stability to the anticipated environment. This basis for selection implies that an increase in the absorption efficiency of the medium is useful only if it leads to a more sensitive media and/or improved contrast. Excellent recordings have been achieved at high rates with coatings having less than 20 percent absorption at the laser wavelength. Additional coating and substrate considerations are: adequate adhesion to one another, abrasion resistance, permanency, cost, etc., and special considerations, e.g., the use of a mica substrate to help obtain certain unique properties in the thin films of MnBi which are used for Curie point magnetic recording." (Carlson and Ives, 1968, p. 1).

6.158 "The results of the studies described in this paper have established laser heat-mode recording as a very high resolution real-time recording process capable of using a wide variety of thin film recording media. The best results were obtained with images which are compatible with microscope-type optics. The signals are in electronic form prior to recording and can receive extensive processing before the recording process occurs. In fact, the recordings can be completely generated from electronic input." (Carlson and Ives, 1968, p. 7).

6.159 "Instead of recording a bit as a hole in a card, it is recorded on the file as a grating pattern of a certain spacing . . . A number of different grating patterns with different spacings can be superposed and when light passes through, each grating bends the light its characteristic amount, with the result that the pattern decodes itself . . . The new system allows for larger areas on the film to be used and lessens dust sensitivity and the possibility of dirt and scratch hazards." (Commun. ACM 9, No. 6, 467 (June 1966)).

6.160 "Both black-and-white and color video recordings have recently been made on magnetic film plated discs. . . .

"Permanent memory systems employing silver-halide film exposed by electron or laser beams. It is possible to record a higher density with beams. Readout at an acceptable error rate is the major problem." (Gross, 1967, p. 5).

"A recently conceived memory which uses optical readout. Instead of recording bits as pulses, bits are recorded as frequencies. An electron beam, intensity modulated with the appropriate frequencies, strikes the electron-sensitive silver-halide film moving transverse to the direction of tape motion . . ." (Gross, 1967, p. 8).

"For recording analog information Ampex has focussed efforts on silver-halide film . . . [which] can be made sensitive to either electron or laser beams . . . packing density is an order of magnitude greater than the most dense magnetic recording." (Gross, 1967, p. 6).

"Recent work at Ampex indicates that the Kerr magneto-optic effect is likely to be practical for reading digital information. Recording on a reflective plated tape for magneto-optic reproducing can be done by local heating with a laser or electron beam." [Erasable, potential density 1 x 10^8.] (Gross, 1967, p. 8).

6.161 "At first glance, machining with electron beams, or adding ions, appear to be suitable for recording digital information. However, problems in obtaining sufficient linearity in the transfer function (the dynamic range and signal-to-noise limits), and accurately positioning the electron beam for reading make it impossible to read out the potential recording density with acceptable error rates.

The reduced packing density necessary for acceptable error rates causes these approaches to suffer by comparison with magnetic recording." (Gross, 1967, p. 6).

6.162 "The advantages of electron beams over light are a thousandfold increase in energy density, easy control of intensity and position, and a substantial increase in resolution. To offset these advantages, there are the complications of a demountable vacuum system." (Thornley et al., 1964, p. 36).

6.163 ". . . Some [media], like thermoplastics, involve nearly reversible changes, and the noise content therefore rises with use." (Gross, 1967, p. 2).

6.164 "The standing-wave read-only memory is based on the Lippmann process . . . [in which] a panchromatic photographic emulsion is placed in contact with a metallic mirror . . . Sufficiently coherent light passes normally through the emulsion, reflects from the mirror, and returns through the emulsion. This sets up standing waves with a node at the metallic mirror surface. Developable silver ions form in the regions of the antinodes of the standing wave . . . If several anharmonic waves are used to expose the same region of the emulsion, each will set up a separate layer structure . . . Conceivably, n color sources spaced appropriately over the band of sensitivity could provide n information bits, one per color, at each location". (Fleisher et al., 1965, p. 1).

Advantage would then be taken of ". . . the Bragg effect, which causes the reflected light to shift to shorter wavelength as the angle of incidence increases . . . With this method, a monochromatic light source, say of violet color, could read out the violet bit at normal incidence and the red bit at the appropriate angle from normal. Hence, a single monochromatic source, such as a laser, could be used to read out all bits . . ." (Fleisher et al., 1965, p. 2).
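The shift Fleisher et al. exploit is the standard Bragg condition for reflection from the Lippmann layer stack (a textbook relation, not taken from the source): for layer spacing d, refractive index n, diffraction order m, and internal angle θ measured from the normal,

```latex
m\lambda = 2nd\cos\theta
\qquad\Longrightarrow\qquad
\lambda(\theta) = \lambda(0)\cos\theta \quad (\text{fixed } m,\ d)
```

so the reflected wavelength shortens as the incidence angle grows, which is exactly what lets one monochromatic source read out bits recorded at several colors.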

Further, "random word selection requires a summation of various injection lasers . . . or the use of a white light source in which all colors are present. This source is then deflected to the selected location by the electro-optical deflector. The output from the memory plane is then separated into the various colors by means of a prism or other dispersive medium for a parallel bit readout". (Fleisher et al., 1965, p. 19).

6.165 "A feature of the SWROM [standing-wave read-only memory] which appears to be unique is its capability of storing both digital and video (analog) information. This feature, combined with the capability of the memory for simultaneous, multibit readout with minimal cross talk, will give the SWROM an even wider range of application." (Fleisher et al., 1965, p. 25).

6.166 "Parallel word selection . . . could be accomplished by fiber-optic light splitting. It could also be accomplished by flooding the area to be read out with monochromatic light whose frequency is that of the bit or series of bits to be selected. This type of word selection would be useful for associative word selection." (Fleisher et al., 1965, p. 21).

7 Debugging, On-Line Diagnosis, Instrumentation, and Problems of Simulation

7.1 "The quantity and quality of debugging must be faced up to in facility design. This is perhaps the area which has been given more lip service and less attention than any other." (Wagner and Granholm, 1965, p. 284).

"Software checkout still remains an unstructured art and leaves a lot to be desired for the production of perfect code." (Weissman, 1967, p. 31).

"Debugging, regardless of the language used, is one of the most time consuming elements in the software process." (Rich, 1968, p. 34).

"It has been suggested that . . . we are now entering an era in which computer use is 'debugging-limited'." (Evans and Darley, 1966, p. 49).

7.2 "As computing systems increase in size and complexity, it becomes both more important and more difficult to determine precisely what is happening within a computer. The two sorts of performance measurements which are readily available are not very useful; they are the micro-timing information provided by the manufacturer (.4 microseconds/floating add) and the macro-timing information provided by the user ("why does it take three days to get my job back?"). The relationship, if any, between them is obscured by the intricate bulk of the operating system; if it is a multi-programming or time-sharing system, the obscurity is compounded.

"The tools available to the average installation for penetrating this maze are few and inadequate. Simulation is not particularly helpful: the information which is lacking is the very information necessary for the construction of an accurate model. Trace routines interfere excessively with the operation of the system, distorting storage requirements as well as relative timing information. Hardware monitors are not generally available, and though a wondrous future is foreseen for certain of them, they have yet to demonstrate their capabilities in an operational environment; furthermore, they are certain to be too costly for permanent installation, and perhaps too cumbersome for temporary use. The peripheral processor of the Control Data 6000 series computers, however, provides some installations with an easily utilized, programmable hardware monitor for temporary use at no extra cost." (Stevens, 1968, p. C34).

"Without instrumentation, the user is swimming against the tide of history. It is commonly thought that a good programmer naturally achieves at least 80% of the maximum potential efficiency for a program. But while systems have increased greatly in size and complexity, the average expertise of programmers has decreased. In fact, it is axiomatic that virtually any program can be speeded up 25 to 50% without significant redesign! Unless monitored and measured, a program's efficiency may easily be as low as 25%. What is worse, multiprogramming, multiprocessing, real time, and other present-day methods have created such a jumble of interactions and interferences that without instrumentation it would be impossible to know where effort applied for change would yield the best return. One tries to mine the highgrade ore first, while it still exists." (Bemer and Ellison, 1968, p. C40).

7.3 For example, "another practical problem, which is now beginning to loom very large indeed and offers little prospect of a satisfactory solution, is that of checking the correctness of a large program." (Gill, 1965, p. 203).

"With the introduction of time-sharing systems, the conventional tools have become almost worthless. This has forced a reappraisal of debugging procedures. It has become apparent that a new type of debugging tool is needed to handle the special problems created by a time-sharing system. These special problems result from the dynamic character of a time-sharing system: a system in which the program environment is continually changing, a system in which a user is unaware of the actions of other users, a system in which program segments are rolled in and out of storage locations, and a system in which one copy of code can be shared by many users. To debug in this dynamic environment, the programmer needs a debugging support system: a set of debugging programs that operate largely independently from the operating system they service. . . .

"What is needed for time-sharing is a debugging support system that meets the following requirements:

The system should permit a system programmer at a user terminal to debug system programs associated with his task. When used in this manner, the support system should operate in a time-sliced mode.

When used to debug a separate task, the support system should provide the facility to modify a system program in relation to that task, without affecting the program as executed in relation to other tasks.

When a system program bug cannot be located and repaired from a user terminal, the support system should permit a skilled system programmer at a central location to suspend time-sharing activity until the error is located and repaired. The support system should then permit time-sharing activity to be


resumed as though there had been no interruption. The support system should permit a system programmer to monitor the progress of any task from a remote terminal or from the user's terminal.

The support system should contain the facility to remain dormant until activated by a specified condition. When activated by the condition, the system should be able to gather specified data automatically and then permit processing to continue.

In its dormant state, the support system should not impact the performance of the parent time-sharing system.

The support system should use a minimum of main storage and reside primarily on high-speed external storage.

The support system should be completely independent of the time-sharing system (that is, it must use none of the facilities of the parent system), and it must be simple enough to eliminate any requirement for a support system of its own.

"An effort is currently under way to produce a time-sharing support system that meets these requirements." (Bernstein and Owens, 1968, pp. 7, 9).

"There has been far too little concern on the part of the hardware system designers with the problems of debugging of complex programs. Hardware aids to program debugging would be among the most important hardware aids to software production. On-line debugging is essential. It should be possible to monitor the performance of software on a cathode ray tube console, without interfering with the performance of the software. It should be possible to examine areas of peripheral storage as well as areas of core storage." (Rosen, 1968, p. 1448).

Further, "the error reporting rate from a program system of several million instructions is sufficient to occupy a staff larger than most computing installations possess." (Steel, 1965, p. 233).

7.4 "By online debugging we mean program debugging by a programmer in direct communication with a computer (through, typically, a typewriter or teletype), making changes, testing his program, making further changes, etc., all with a reasonably short response time from the computer, until a satisfactory result is achieved." (Evans and Darley, 1965, p. 321).

7.5 "Another area of contact between hardware and debugging is involved with trapping . . . The user may ask for a trap on any combination of a number of conditions, such as a store into a specified register, execution of an instruction at a specified location, or execution of any skip or jump instruction. The debugging program handles the interrupt and reports the relevant information to the user." (Evans and Darley, 1966, p. 44).
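The trap conditions Evans and Darley list (a store into a specified register, execution at a specified location) map directly onto modern tracing hooks. A toy analogue using Python's sys.settrace, trapping each store of a new value into a watched variable; the variable name and the traced program are hypothetical examples, not anything from the 1966 system:

```python
# "Trap on store into a specified register," rendered as a settrace hook
# that records each change to a watched local variable.
import sys

traps = []   # records (variable, new_value, line_number) at each change

def watch(varname):
    last = {"value": object()}          # sentinel so the first store also traps
    def tracer(frame, event, arg):
        if event == "line" and varname in frame.f_locals:
            value = frame.f_locals[varname]
            if value != last["value"]:
                traps.append((varname, value, frame.f_lineno))
                last["value"] = value
        return tracer
    return tracer

def program():                          # hypothetical program under test
    total = 0
    for i in range(3):
        total += i                      # each store into `total` can fire the trap
    return total

sys.settrace(watch("total"))
result = program()
sys.settrace(None)
print(result, [v for _, v, _ in traps])   # 3 [0, 1, 3]
```

As in the quoted design, the traced program runs unmodified; the "debugging program" observes from outside and reports the relevant information.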

It is to be noted that although these authors, as of 1966, were concerned that "very little data seems to exist on the relative efficiency of on-line program debugging versus debugging in a batch-processing mode." (Evans and Darley, 1966, p. 48), by 1968 Sackman et al. could report "on the basis of the concrete results of these experiments, the online conditions resulted in substantially and, by and large, significantly better performance for debug man hours than the offline conditions." (Sackman et al., 1968, p. 8).

7.6 "We have, in general, merely copied the on-line assembly-language debugging aids, rather than design totally new facilities for higher-level languages. We have neither created new graphical formats in which to present the debugging information, nor provided a reasonable means by which users can specify the processing required on any available debugging data.

"These features have been largely ignored because of the difficulty of their implementation. The debugging systems for higher-level languages are much more complex than those for assembly code. They must locate the symbol table, find the beginning and end of source-level statements, and determine some way to extract the dynamic information needed for debugging about the program's behavior, which is now hidden in a sequence of machine instructions rather than being the obvious result of one machine instruction. Is it any wonder that, after all this effort merely to create a minimal environment in which to perform on-line higher-level language debugging, little energy remained for creating new debugging aids that would probably require an increased dynamic information-gathering capability?

"EXDAMS (EXtendable Debugging And Monitoring System) is an attempt to break this impasse by providing a single environment in which users can easily add new on-line debugging aids to the system one-at-a-time without further modifying the source-level compilers, EXDAMS, or their programs to be debugged. It is hoped that EXDAMS will encourage the creation of new methods of debugging by reducing the cost of an attempt sufficiently to make experimentation practical. At the same time, it is similarly hoped that EXDAMS will stimulate interest in the closely related but largely neglected problem of monitoring a program by providing new ways of processing the program's behavioral information and presenting it to the user. Or, as a famous philosopher once almost said, 'Give me a suitable debugging environment and a tool-building facility powerful (and simple) enough, and I will debug the world'." (Balzer, 1969, p. 567).

7.7 "Diagnostics have been almost nonexistent as a part of operating software and very weak as a part of maintenance software. As a result needless time is spent determining the cause of malfunctions; whether they exist in the program, the hardware, the subsets, the facilities or the terminals." (Dantine, 1966, pp. 405-406).

7.8 "Another advantage of computer simulation is that it may enable a system's manager to shrink the anticipated real world life of his system into a relatively short span of simulation time. This capability can provide the manager with a means of examining next week's (month's, year's) production problems this week; thus he can begin to anticipate the points where the operations will require modification. Moreover, he can examine alternative courses of action, prior to their being implemented in the system, to determine which decision is most effective. For example, the manager can increase the processing load in the simulation to determine where the saturation points are. Once these have been determined, he can hold these overloading states constant and vary the other variables (e.g., number of service units, types of devices, methods of operations) to determine how best to increase the system's capacity." (Blunt et al., 1967, p. 76).
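The load-escalation experiment Blunt et al. describe can be sketched in a few lines: drive a single-server model at increasing arrival rates and watch the mean time in system climb toward saturation. A minimal sketch (exponential arrivals and service times are an assumption of this illustration, not something the source specifies):

```python
# Push a single-server queue toward saturation by raising the arrival rate,
# as in the "increase the processing load to find the saturation points"
# experiment described above.
import random

def simulate(arrival_rate, service_rate=1.0, n_jobs=20_000, seed=1):
    """Mean time a job spends in the system (waiting plus service)."""
    rng = random.Random(seed)
    clock = free_at = 0.0
    total_time = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(arrival_rate)       # next job arrives
        start = max(clock, free_at)                  # waits if server is busy
        free_at = start + rng.expovariate(service_rate)
        total_time += free_at - clock                # time in system
    return total_time / n_jobs

for load in (0.5, 0.8, 0.95):
    print(f"load {load:.2f}: mean time in system {simulate(load):.1f}")
```

Raising the load from 0.5 to 0.95 roughly decuples the mean time in system, which is the "saturation point" behavior the quoted passage exploits; holding the load there and varying the number of service units would be the natural next experiment.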

Mazzarese (1965) describes the Air Force Cambridge DX-1 system with a "dual computer concept" that permits investigators to change computer logic and configuration in one machine without interference to programs which run on its interconnected mate, especially for study of real time data filtering operations.

7.9 "A technique for servicing time-shared computers without shutting them down has been developed by Jesse T. Quatse, manager of engineering development in the Computation Center at the Carnegie Institute of Technology. The technique is called STROBES, an acronym for shared-time repair of big electronic systems. It includes a test program to exercise the computer, and modified test gear to detect faults in the system." (Electronics 38, No. 18, 26 (1965)).

7.10 "Diagnostic engineering begins in the initial phases of system design. A maintenance strategy is defined and the system is designed to include features necessary to meet the requirements of this strategy. Special features, known as 'diagnostic handles', are needed for testing the system automatically, and for providing adequate error isolation." (Dent, 1967, p. 100).

"An instantaneous alarm followed by a quick and correct diagnosis in a self-controlling system will limit down-time in many cases to the mere time of repair. Instruments for error detection are unnecessary." (Steinbuch and Piske, 1963, p. 859).

7.11 Further, "when a digital system is partitioned under certain restrictions into subsystems it is possible to achieve self-diagnosis of the system through the mutual diagnosis of its subsystems." (Forbes et al., 1965, p. 1074).

"A diagnostic subsystem is that portion of a digital system capable of effectively diagnosing another portion of the digital system. It has been shown that at least two mutually exclusive diagnostic subsystems are needed in self-diagnosable systems." (Forbes et al., 1965, p. 1081).

7.12 "Systems are used to test themselves by generation of diagnostic programs using predefined data sets and by explicit controls permitting degradation of the environment." (Estrin et al., 1967, p. 645).

"The 'Nightwatchman' experiments are directed toward the maintenance problem. Attempts will be made to structure a maintenance concept that will allow for the remote-automatic-checkout of all the computers in the network from a single point. The concept is an extension of the 'FALT' principle mentioned previously. Diagnostic programs will be sent over the lines, during off-use time, to check components, aggregates of components, complete modules, and the entire system. The 'Sentinel' station of the network will be responsible for the gathering of statistical data concerning the data, the queries, the traffic, and the overall operations." (Hoffman, 1965, pp. 98-100.)

"The Sentinel is the very heart of the experimental network. It is charged with the gathering of the information needed for long range planning, the formulation of data automation requirements, and the structuring of prototype systems. In addition to the gathering of statistical data, the sentinel will be the control center for the network, generating priority, policy, and operational details. The responsibility for the observance of security and proprietary procedures will rest with the sentinel station." (Hoffman, 1965, p. 100.)

"This data was taken by a program written to run as part of the CTSS Supervisory Program. The data-taking program was entered each time the Scheduling Algorithm was entered and thus was able to determine the exact time of user state changes." (Scherr, 1965, pp. 27-28).

"Data taken over the summer of 1964 by T. Hastings . . . indicates that the average program accesses (i.e., reads or writes) approximately 1500 disk words per interaction." (Scherr, 1965, p. 29).

"We can and will develop instrumentation which will be automatically inserted at compile time. A user easily will be able to get a plot of the various running times of his program . . ."

Sutherland also refers to a Stanford University program which "plots the depth of a problem tree versus time" and was used to trace the operation of a Kalah-playing program. (Sutherland, 1965, pp. 12-13).

7.13 "The techniques of fault detection fall into two major categories:

1. Concurrent diagnosis by the application of error-detecting codes and special monitoring circuits. Detection occurs while the system is being used.

2. Periodic diagnosis using diagnostic hardware and/or programs. Use of the system is interrupted for diagnosis." (Avizienis, 1967, p. 734).

"The four principal techniques of correction are:

1. Correction of errors by the use of error-correcting codes and associated special purpose hardware and/or software (including recomputation).


2. Replacement of the faulty element or system by a stand-by spare.

3. Replacement as above, with subsequent maintenance of the replaced part and its return to the stand-by state.

4. Reorganization of the system into a different fault-free configuration which can continue the specified task." (Avizienis, 1967, p. 734).

"The STAR (Self-Testing and Repairing) computer, scheduled to begin experimental operation at the Jet Propulsion Laboratory of the California Institute of Technology this fall, is expected to be one of the first computers with fully automatic self-repair as one of its normal operating functions . . .

There are three 'recovery' functions of the STAR computer: (1) detection of faults; (2) recognition of temporary malfunctions and of permanent failures; and (3) module replacement by power switching. The occurrence of a fault is detected by applying an error-detecting code to all instructions and numbers within the computer. Temporary malfunctions are corrected by repeating a part of the program. If the fault persists, the faulty module is replaced." (Avizienis, 1968, p. 13).

7.14 "Diagnostic routines can check the operating of a computer for the following possible malfunctions: a single continuous malfunction, several continuous malfunctions, and intermittent malfunctions. When the test routine finds an error it can transfer program control to an appropriate malfunction isolation subroutine. This type of diagnostic technique is standard and has been well used by the computer industry for large package replacement." (Jacoby, 1959, p. 7-1).

"Needless to say, in order for any malfunction to be isolated by an automatic program, it is necessary for a minimum amount of equipment to function adequately. One of the functions of this minimum equipment includes the ability to sequence from one instruction to another, and to be able to interpret (correctly) and execute at least one transfer of control instruction so that logical choices can be made. The control functions of a computer can be defined as Boolean algebraic expressions of the instantaneous state of the computer. If we state that a line, or path, common to two control statements contains those components that are activated when either of the statements is true, this line is either a factor of both statements or a factor of terms of both statements. Similarly, if we consider circuit elements activated by one but not both of two ways to accomplish the same control function, we have a picture of two terms in the algebraic statement for the control function separated by the connector OR.

"A Boolean term will appear as a circuit which must be active for any statement, of which it is a factor, to be true. Hence the location of circuit malfunctions may be considered from the point of view of isolating the minimal Boolean term involved." (Jacoby, 1959, p. 7-1).
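Jacoby's idea can be illustrated concretely: express a control function as an OR of AND-terms over state bits, compare the defining expression against the (faulty) circuit over all inputs, and intersect the sets of terms that could explain each disagreement. The surviving term is the minimal Boolean term involved. The terms and the stuck-at fault below are invented for the example.

```python
from itertools import product

# Control function spec: control = T1 OR T2 over machine-state bits (a, b, c).
# Both terms and the fault model are hypothetical, chosen for illustration.
TERMS = {
    "T1": lambda a, b, c: a and b,
    "T2": lambda a, b, c: b and c,
}

def spec(a, b, c):
    """The control function as defined by its Boolean expression."""
    return any(t(a, b, c) for t in TERMS.values())

def faulty_circuit(a, b, c):
    """Hypothetical hardware fault: the gate realizing T2 is stuck at 0."""
    return TERMS["T1"](a, b, c) or False

def isolate_faulty_term():
    """Intersect, over every disagreeing input, the terms true on that input;
    what remains is the minimal Boolean term that explains the malfunction."""
    suspects = set(TERMS)
    for a, b, c in product([False, True], repeat=3):
        if spec(a, b, c) != faulty_circuit(a, b, c):
            suspects &= {name for name, t in TERMS.items() if t(a, b, c)}
    return suspects

print(isolate_faulty_term())  # {'T2'}
```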



7.15 "A program testing method based on the monitoring of object-program instruction addresses (as opposed to a method dependent on, e.g., the occurrence of particular types of instruction, or the use of particular data addresses) would appear to be the most suitable, because the instruction address is the basic variable of this monitoring technique. Monitoring could be made 'selective' by specifying instruction addresses at which it is to start and stop: to start it at an arbitrary instruction address it is only necessary to replace the instruction located there by the first unconditional interrupt inserted, and similarly when monitoring is to stop and restart later. . . .

"Another use in this field would be to include in the Monitor facilities for simulating any instruction, and to supply it with details of particular instructions suspected of malfunctioning. The Monitor could then stop any program just before one of these instructions was to be obeyed, simulate it, allow the program to execute the same instruction in the normal way, and then compare the results obtained by the normal action and by simulation." (Wetherfield, 1966, p. 165).

"Of course, having achieved the aim of being able to trace in advance the exact course of the object program's instructions, the Monitor is then able to simulate their actions to any desired degree, and it is here that the power of the technique can be exploited. The contents of the store, registers, etc., before the execution of any instruction can be inspected by the Monitor if it temporarily replaces that instruction by an unconditional interrupt." (Wetherfield, 1966, p. 162).
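The interrupt-planting technique Wetherfield describes is essentially the software breakpoint. A minimal sketch on a toy register machine: the monitor saves the instruction at a chosen address, overwrites it with a trap, and when the trap fires it inspects state, restores the original instruction, and resumes. The machine, its instruction set, and the Monitor class are all invented for illustration.

```python
class ToyMachine:
    """A tiny accumulator machine: instructions are ("ADD", n), ("TRAP",), ("HALT",)."""
    def __init__(self, program):
        self.mem = list(program)
        self.acc = 0
        self.pc = 0

    def step(self):
        op = self.mem[self.pc]
        if op[0] == "TRAP":
            return "trap"          # unconditional interrupt: control to the monitor
        if op[0] == "ADD":
            self.acc += op[1]
        self.pc += 1
        return "halt" if op[0] == "HALT" else "ok"

class Monitor:
    def __init__(self, machine):
        self.m = machine
        self.saved = {}        # original instructions at planted addresses
        self.snapshots = []    # (address, accumulator) pairs observed

    def plant(self, addr):
        """Replace the instruction at addr with an unconditional interrupt."""
        self.saved[addr] = self.m.mem[addr]
        self.m.mem[addr] = ("TRAP",)

    def run(self):
        while True:
            status = self.m.step()
            if status == "halt":
                return
            if status == "trap":
                addr = self.m.pc
                self.snapshots.append((addr, self.m.acc))  # inspect state
                self.m.mem[addr] = self.saved.pop(addr)    # restore instruction
                # execution resumes with the genuine instruction next step

prog = [("ADD", 1), ("ADD", 2), ("ADD", 3), ("HALT",)]
m = ToyMachine(prog)
mon = Monitor(m)
mon.plant(2)          # start monitoring at address 2
mon.run()
print(mon.snapshots)  # state just before the planted address: [(2, 3)]
print(m.acc)          # program result is unaffected by monitoring: 6
```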

"The monitoring operation can go wrong for any of the following three reasons.

"(1) In the first case one of the planted unconditional interrupt instructions actually overwrites the instruction at which the object program is going to resume (the one at which monitoring started). This would effectively bring things to a standstill since the situation will recur indefinitely. If the rules above have been followed, this situation can only arise when a branching instruction includes itself among its possible destinations, i.e., there is a potential loop stop in the object program. In order to cope with this situation, if it could occur, it may be necessary for the Monitor to simulate the action of the branch instruction completely and make the object program bypass it. The loop stop might still occur, but it would be foreseen.

"(2) The second possible reason for a failure of the monitoring operation occurs if one of the planted instructions overwrites part of the data of the object program, thus affecting the latter's behaviour. This 'data' might be a genuine instruction which is examined, as well as obeyed, by the object program. Alternatively it might be genuine data which happens to be stored in a position which is, by accident or design, a 'redundant' destination of a branching instruction. Both of these dangers can be anticipated by the Monitor, at the cost of a more detailed examination of instructions (to find out which store references by the object program involve a replaced instruction location) and more frequent interrupts.

"The situation savours of 'trick' programming. It is apparent that the monitoring process will be simplified if there is some guarantee that these oddities are absent from object programs." (Wetherfield, 1966, pp. 162-163).

7.16 "MAID (Monroe Automatic Internal Diagnosis) is a program that tells a machine how to measure its circuitry and test performance on sample problems: computer hypochondria." (Whiteman, 1966, p. 67).

7.17 "I have used a program which interprets the program under test and makes a plot of the memory address of the instruction being executed versus time. Such a plot shows the time the program spends doing its various jobs. In one case, it showed me an error which caused a loss of time in a program which nevertheless gave correct answers. . . .

"Think of the thousands of dollars saved by tightening up that one most-used program loop. Instrumentation can identify which loop is the most used." (Sutherland, 1965, pp. 12-13).
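Sutherland's address-versus-time instrumentation amounts to a histogram of executed instruction addresses: the tallest bars are the most-used loop. A toy sketch, with the program and instruction format invented for the example (a real monitor would sample the actual instruction counter):

```python
from collections import Counter

def run_traced(program, steps=1000):
    """Interpret a tiny (opcode, arg) program, counting executions per address.
    Only two opcodes exist in this toy: "NOP" falls through, "JMP" branches."""
    counts = Counter()
    pc = 0
    for _ in range(steps):
        if pc >= len(program):
            break
        counts[pc] += 1                       # instrument: record the address
        op, arg = program[pc]
        pc = arg if op == "JMP" else pc + 1
    return counts

# addresses 1-2 form a loop entered once from address 0
program = [("NOP", 0), ("NOP", 0), ("JMP", 1), ("NOP", 0)]
hot = run_traced(program, steps=100)
print(hot.most_common(2))  # the loop body dominates: [(1, 50), (2, 49)]
```

The profile immediately identifies addresses 1-2 as the loop worth tightening, even though the program's output would look correct either way.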

7.18 "On-Line Instrumentation will bring us better understanding of the interplay of the programs and data within the computer. Simple devices and programs to keep track, on-line, of what the computer does will bring us better understanding of what our information processing systems are actually doing." (Sutherland, 1965, p. 9).

7.19 "The process of building a pilot system configuration and then evaluating it, modifying it, and improving it is very costly both in time and money. Another approach is possible. Before he builds the system, the designer should be able to test his concepts on a simulation model of a document retrieval system. One such model for simulating information storage and retrieval systems was designed by Blunt and his co-workers at HRB-Singer, Inc., under a contract with the Office of Naval Research. In this model, the input parameters for the simulation reflect the configuration of the system, the work schedule of the system, the work schedule of the personnel, equipment availability, the likelihood and effect of errors in processing and the location and availability of the system user. Simulation output provides a study of system response time (both delay time and processing time), equipment and personnel work and idle time and the location and size of the data queues. The systems designer can thus vary the inputs, use the model to simulate the interactions among personnel, equipment, and data at each step of the information processing cycle, and then determine the effect on the system response time." (Borko, 1967, p. 55).
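The kind of output Borko describes (response time split into delay time and processing time) falls out of even a minimal queueing simulation. The sketch below is a single-server model in that spirit; the exponential arrival and service distributions and all parameter values are invented, not those of the HRB-Singer model.

```python
import random

def simulate(n_requests, mean_interarrival, mean_service, seed=1):
    """Single-server retrieval queue: returns (mean delay, mean processing time)."""
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_requests):
        t += rng.expovariate(1.0 / mean_interarrival)  # assumed Poisson arrivals
        arrivals.append(t)
    server_free = 0.0
    delays, services = [], []
    for arr in arrivals:
        start = max(arr, server_free)      # wait in queue if the server is busy
        svc = rng.expovariate(1.0 / mean_service)
        server_free = start + svc
        delays.append(start - arr)         # delay time component
        services.append(svc)               # processing time component
    return sum(delays) / n_requests, sum(services) / n_requests

delay, service = simulate(10000, mean_interarrival=2.0, mean_service=1.0)
print(f"mean delay {delay:.2f}, mean service {service:.2f}")
```

Varying `mean_interarrival` or `mean_service` and re-running is exactly the "vary the inputs, then determine the effect on response time" loop the quotation describes.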

7.20 "Simulation is a tool for investigation and, like any tool, is limited to its inherent potential. Moreover, the realization of this potential is dependent upon economics, the craftsmanship of the designer and the ingenuity of the user. Digital simulation can expedite the analysis of a complex system under various stimuli if the aggregate can be divided into elements whose performance can be suitably described. If the smallest elements into which we can divide a system are themselves unpredictable (even in a probabilistic sense) digital simulation is not feasible. (Conway, et al., 1959, p. 94). This feasibility test uncovers an important limitation in today's simulation technology with respect to information systems. In many respects, some of the more important man-information-system interactions cannot now be described in a formal manner; hence, cannot be characterized for digital simulation. For example, one can calculate the speed and costs of processing an inquiry, but cannot predict if the output will satisfy the user or estimate its impact on his operations.

"This limitation, therefore, (1) restricts simulation applications to examining the more mechanical aspects of data processing, or (2) forces the design engineer to adopt some simplifying assumptions concerning the effects of man's influence on the system. An example of the first point is a data flow simulation examining the rate of data processing without regard to the quality of the types and mixes of equipment and personnel. This capability for examining the resultant effects in varying parameters of the system enables the design engineer to explore more alternatives in less time and at less cost than ever before; e.g., he can develop cost-capability curves for different possible system configurations under both present and anticipated processing needs. Neglecting this aspect of systems analysis has sometimes led to the implementation of a system saturated by later requirements and confronted by an unnecessarily high cost for modification or replacement." (Blunt et al., 1967, pp. 75-76).

"To use simulation techniques in evaluating different computer systems, one must be able to specify formally the expected job mix and constraints under which the simulated system must operate, e.g., operating time per week. Equally important, one must carefully select a set of characteristics on which the competing systems will be judged. For different installations the most important characteristics may well be different. Each system under consideration is modelled, simulation runs are executed, and the results are compared on the selected characteristics.

"Unfortunately, the ideal case seldom occurs. Often the available information about the computer system's expected job mix is very limited. Furthermore, it is a well-known fact that an installation's job mix itself may be strongly influenced both qualitatively and quantitatively by the proposed changes in the system. For example, many of the difficulties with early time-sharing systems can be attributed to the changes in user practices caused by the introduction of the system. When statistics on job mix are available, they are often expressed in averages. Yet, it may be most important to simulate a system's performance under extreme conditions. Finally, it is often difficult to show that a simulation is valid, that is, that it actually does simulate the system in question." (Huesmann and Goldberg, 1967, p. 150).

7.21 "The field of information retrieval has been marked by a paucity of mathematical models, and the basis of present operational computer retrieval systems is essentially heuristic in design." (Baker, 1965, p. 150).

"The semantic and linguistic aspects of information retrieval systems also lend themselves poorly to the rigidity of models and model techniques, for which claims often lack empirical support." (Blunt, 1965, p. 105).

7.22 "There are structures which can easily be defined but which present-day mathematics cannot handle because of the limitations of present-day theory." (Hayes, 1963, p. 284).

"Markov models cannot, in general, be used to represent processes where other than random queuing is used." (Scherr, 1965, p. 32).

"Clearly, we need some mathematical models permitting the derivation of methods which will accomplish the desired results and for which criteria of effectiveness can be determined. Such models do not appear often in the literature." (Bryant, 1964, p. 504).

"First, it will be necessary to construct mathematical models of systems in which content, structure, communication, and decision variables all appear. For example, several cost variables are usually included in a typical operations research model. These are either taken as uncontrollable or as controllable only by manipulating such other variables as quantity purchased or produced, time of purchase or production, number and type of facilities, and allocation of jobs to these facilities. These costs, however, are always dependent on human performance, but the relevant variables dealing with personnel, structure, and communication seldom appear in such models. To a large extent this is due to the lack of operational definitions of many of these variables and, consequently, to the absence of suitable measures in terms of which they can be characterized." (Ackoff, 1961, p. 38).

"Mathematical analysis of complex systems is very often impossible; experimentation with actual or pilot systems is costly and time consuming, and the relevant variables are not always subject to control. . . .

"Simulation problems are characterized by being mathematically intractable and having resisted solution by analytic methods. The problems usually involve many variables, many parameters, functions which are not well-behaved mathematically, and random variables. Thus, simulation is a technique of last resort." (Teichroew and Lubin, 1966, p. 724).

"The complex systems generally encountered in the real world do not lend themselves to neat mathematical formulations and in most cases the operations analyst is forced to reduce the problem to simpler terms to make it tractable." (Clapp, 1967, p. 5).

"Admittedly the degree to which identifiable factors can be measured compared to the influence of unidentifiable factors does help determine whether or not an approach can be scientific. It acts as a limit on the area where scientific methods can be applied. Precision in model building is related to the difficulty of the problem and the state of human knowledge concerning specific techniques and their application." (Kozmetsky and Kircher, 1956, p. 137).

7.23 "There is no guarantee that a model such as latent class analysis, factor analysis, or anything else borrowed from another field will meet the needs of its new context; however, this should not dissuade one from investigating such plausible models." (Baker, 1965, p. 150).

"Models must be used but must never be believed. As T. C. Chamberlain said, 'science is the holding of multiple working hypotheses'." (Tukey and Wilk, 1966, p. 697).

7.24 "System simulation or modeling was subsequently proposed as a substitute for deriving test problems and is still generally accepted as such even though its use introduced the new difficulty of determining and building meaningful models." (Davis, 1965, p. 82).

"The biggest problem in simulation modeling, as in all model building, is to retain all 'essential' detail and remove the nonessential features." (Scherr, 1965, p. 09).

"The fundamental problem in simulation of digital networks is that of economically constructing a mathematical model capable of faithfully replicating the real network's behavior in regard to simulation objectives." (Larsen and Mano, 1965, p. 308).

7.25 "At this time, there exists no special-purpose simulation programming language specifically for use with models of digital computer systems. The general-purpose languages, such as SIMSCRIPT, GPSS, etc., all have faults which render them unsuitable for this type of work." (Scherr, 1965, p. 43).

"The invention of an adequate simulation language promises to be the crowbar needed to bring the programming of operating systems to the level of sophistication of algebraic or commercial programming languages." (Perlis, 1965, p. 192).

"The technique of input simulation . . . can be very expensive. The programs necessary to create the simulated inputs are far from trivial and may well constitute a second system larger than the operational system." (Steel, 1965, p. 232).

7.26 "Those programs which require the simulated computer system and job mix to be specified in algebraic or assembly languages have proved useful; but as general computer systems simulation tools, they require too much difficult recoding to be completely satisfactory. One way to improve upon this situation has been to use languages specifically designed to simulate systems. Teichroew and Lubin in a recent review have listed more than twenty languages, among them GPSS, SIMSCRIPT, SOL, and CSL. These simulation languages allow the modeller to specify the computer configuration and job mix in a more convenient manner." (Huesmann and Goldberg, 1967, p. 152).

7.27 "One of the more exotic applications of digital computers is to simulate a digital computer on another entirely different type of computer. Using a simulation program, application programs developed for the first computer, the source computer, may be executed on a second computer, the object computer.

"Simulation obviously provides many advantages in situations where a computer is replaced by a different computer, for which the applications have not yet been programmed. Simulation techniques enable an installation to continue solving problems using existing programs after the new computer has been installed and the old one removed. . . .

"Another situation in which simulation is advantageous is during the development of a new computer. Once specifications for the new computer have been established, programming of applications for the computer can proceed in parallel with hardware developments. The use of a simulator in this situation enables the users to debug their applications before the hardware is actually available." (Trimble, 1965, p. 18).

"One of the most successful applications of the recent microprogramming technology is in the simulation of computers on computers.

"The microprogram control and the set of microprogram routines are in effect a simulation program that simulates the programmer's instruction set on a computer whose instruction set is the set of elementary operations. It may be equally possible to simulate computers with other programmer instruction sets in terms of the same set of elementary operations. This, slightly oversimplified perhaps, is the idea of hardware assisted simulation that is now usually called emulation." (Rosen, 1968, p. 1444).
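Rosen's description reduces to a dispatch loop: each instruction of the simulated ("programmer's") instruction set is expanded into a routine of elementary operations, and swapping in a different routine table emulates a machine with a different instruction set on the same elementary repertoire. The two-instruction target machine below is invented purely for illustration.

```python
def emulate(program, microcode):
    """Interpret `program` by expanding each opcode via its microprogram routine."""
    state = {"acc": 0, "pc": 0}
    while state["pc"] < len(program):
        opcode, operand = program[state["pc"]]
        for micro_op in microcode[opcode]:   # the microprogram for this opcode
            micro_op(state, operand)
    return state["acc"]

# elementary operations: the "real" machine's repertoire
def add(state, n):      state["acc"] += n
def sub(state, n):      state["acc"] -= n
def advance(state, _):  state["pc"] += 1

# one possible microprogram table: LOAD clears then adds; DECR subtracts
MICROCODE = {
    "LOAD": [lambda s, n: s.update(acc=0), add, advance],
    "DECR": [sub, advance],
}

print(emulate([("LOAD", 10), ("DECR", 3)], MICROCODE))  # 7
```

A second `MICROCODE` table defining a different opcode set over the same `add`/`sub`/`advance` primitives would emulate a different programmer-visible machine, which is the point of the quotation.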

"As a result of simulation's ability to deal with many details, it is a good tool for studying extensive and complicated computer systems. With simulation, one may assess the interaction of several subsystems, the performances of which are modified by internal feedback loops among the subsystems. For instance, in a teleprocessing system where programs are being read from drum storage and held temporarily in main storage, the number of messages in the processing unit depends upon the drum response time, which depends upon the drum access rate, which, in turn, depends upon the number of messages in main storage. In this case, only a system-wide simulation that includes communication lines, processing unit, and I/O subsystems will determine the impact of varying program priorities on main-storage usage. Studies of this nature can become very time consuming unless parameter selections and variations are carefully limited. It is no small problem to determine which are the major variations that affect the system. In this aspect, simulation is not as convenient as algorithmic methods with which many variations can be tabulated quickly and cheaply." (Seaman, 1966, p. 177).

7.28 "The [SIMSCRIPT] notation is an augmented version of FORTRAN, which is acceptable; but this organization does not take advantage of the modularity of digital systems.

"SIMSCRIPT is an event-based language. That is, the simulation is described, event by event, with small programs, one per event. Each event program (or sub-program) must specify the times for the events following it. Conditional scheduling of an event is extremely difficult." (Scherr, 1965, p. 43).

7.29 Edwards points out that ". . . the preparation of so-called scenarios, or sequences of events to occur as inputs to the simulation, is a major problem, perhaps the most important one, in the design of simulations, especially simulations of information-processing systems." (Edwards, 1965, p. 152).

7.30 "Parallel processes can be rendered sequential, for simulation purposes; difficulties then arise when the processes influence each other, leading perhaps to incompatibilities barring a simultaneous development. Difficulties of this type cannot be avoided, as a matter of principle, and the system is thus not deterministic; the only way out would be to restore determinism through recourse to appropriate priority rules. This approach is justified only if it reflects priorities actually inherent in the system." (Caracciolo di Forino, 1965, p. 18).

7.31 "As a programming language, apart from simulation, SIMULA has extensive list processing facilities and introduces an extended co-routine concept in a high-level language." (Dahl and Nygaard, 1966, p. 671).

7.32 "The LOMUSS model of the Lockheed UNIVAC on-line, time-sharing, remote terminal system simulated two critical periods . . . and provided information upon which the design of the 1108 system configuration was based. An effort is continuing which will monitor the level and characteristics of the workload, equipment utilization, turnaround time, etc., for further model validation." (Hutchinson and Maguire, 1965, pp. 166-167).

"A digital computer is being used to simulate the logic, determine parts values, compute subunit loading, write wiring lists, design logic boards, print checkout charts and maintenance charts. Simulating the logic and computing the loading of subunits gives assurance that a computer design will function properly before the fabrication starts. After the logic equations are simulated, it is a matter of hours until all fabrication information and checkout information is available. Examples are given of the use of these techniques on the design and fabrication of a large scale military computer." (Malbrain, 1959, p. 4-1).

"The new EDP Machine Logic and Control Simulator, LOCS, is designed to facilitate the simulation of data processing systems and logic on the IBM 7090 Data Processing System . . . The inputs of LOCS consist of a description of the machine to be simulated, coded in LOCS language, and a set of test programs coded in either the procedure language of the test problems (e.g., FORTRAN) or in the instruction language of the simulated machine . . . The outputs of LOCS consist of the performance statistics, computation results, and diagnostic data which are relevant to both the test programs and the design of the simulated machine." (Zucker, 1965, pp. 403-404).

7.33 ". . . A system of software and hardware which simulates, with a few exceptions, a multiple set of IBM System/360 computing systems (hardware and software) that are simultaneously available to many users." (Lindquist et al., 1966, p. 1775).

7.34 "Another simulation program designed to simulate multiprocessor systems is being developed by R. Goldstein at Lawrence Radiation Laboratory. Written in SIMTRAN, this program is specifically designed to simulate the OCTOPUS computer system at LRL which includes an IBM 7030 STRETCH, two IBM 7094's, two CDC 6600's, one CDC 3600, two PDP 6's, an IBM 1401, and various I/O devices. A parameterized input table specifies the general multiprocessing configuration, the data transmission rates, memory sizes and buffer sizes. Any other hardware variations and operating system characteristics are introduced with SIMTRAN routines. As output the program produces figures on actual memory utilization, graphs of memory access time, graphs of overhead, graphs of response time, and graphs of several other relevant variables." (Huesmann and Goldberg, 1967, p. 153).

7.35 "One of the more exotic applications of digital computers is to simulate a digital computer on another entirely different type of computer. Using a simulation program, application programs developed for the first computer, the source computer, may be executed on a second computer, the object computer." (Trimble, 1965, p. 18).

7.36 "The flexibility of a digital computer enables one to try out complicated picture processing schemes with a relatively small amount of effort. To facilitate this simulation, a digital picture scanner and cathode-ray tube display was constructed. Pictures were scanned with this system, the signal was recorded on a computer magnetic tape, and this tape was used as input to a program that simulated a picture-transmitting system." (Huang and Tretiak, 1965, pp. 45-46).

"Computer simulation of a letter generator using the above grammar is a comparatively straightforward programming task. Such a program, making use of COMPAX, has been written for the CDC 3600 system at the Tata Institute of Fundamental Research, Bombay." (Narasimhan, 1966, p. 171).

At IBM, there has been developed a computer program for image-forming systems simulation (IMSIM/1), so that the photo-optical design engineer can study performance before such systems are actually built (Paris, 1966).

7.37 "The first model developed matches CTSS . . . Next, a simple, first-come, first-served round-robin scheduling procedure will be substituted. Then, a model which incorporates multiprogramming techniques with the CTSS hardware configuration will be developed. Finally, a simple continuous-time Markov model will be used to represent both single-processor and multiple-processor time-shared systems. . . .

"The primary result obtained is that it is possible to successfully model users of interactive computer systems and systems themselves with a good degree of accuracy using relatively simple models." (Scherr, 1965, p. 31).

"The final model to be simulated will represent a system in which swapping and processor operation are overlapped. While a program is being run by the processor, the program which was running previously is dumped and the next program to run is loaded. Since loading and dumping cannot occur simultaneously, there must be room in the core memory for at least two complete user programs: the program being run and the program being dumped or loaded. Should two programs intended to run in sequence not fit together in the core memory, the processor must be stopped to complete the swapping." (Scherr, 1965, pp. 40-41).

Other examples of experiments in computer simulation of multiple access and time-shared systems include the following: "The development of the simulation program now provides a first-come-first-serve queue unloading strategy. Continuing effort, however, will provide for optional strategies, e.g., selecting the data unit with the shortest servicing time, consideration of what flow will minimize idle time at the central processor, etc." (Blunt, 1965, p. 15).
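The effect of the alternative unloading strategy Blunt mentions can be seen in a few lines: given the same batch of servicing times, compare first-come-first-serve order with shortest-servicing-time-first order. The service times are made up; the well-known result is that shortest-first minimizes total (hence mean) waiting time.

```python
def total_wait(service_times):
    """Sum of waiting times when jobs are served in the given order."""
    wait, elapsed = 0, 0
    for s in service_times:
        wait += elapsed   # this job waited for all earlier ones to finish
        elapsed += s
    return wait

batch = [5, 1, 3, 2]                # arrival (first-come-first-serve) order
fcfs = total_wait(batch)            # waits 0 + 5 + 6 + 9 = 20
sjf = total_wait(sorted(batch))     # waits 0 + 1 + 3 + 6 = 10
print(fcfs, sjf)                    # 20 10
```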

"Project MAC is using a display to show the dynamic activity of jobs within its scheduling algorithm. Watching this display, one can see jobs moving to higher and lower priority queues as time passes." (Sutherland, 1965, p. 13).

7.38 "A great advance also is the wide application of digital computers for the simulation of recognizing and adaptive systems. Digital simulation extraordinarily facilitates and accelerates conducting experiments in this realm by permitting extremely effective experimental investigations without expenditure of materials and with little time spent." (Kovalevsky, 1965, p. 42).


"Third, computer simulation may serve as a heuristic in the search for models. The effort of getting a computer to perform a given task may lead to illuminating psychological hypotheses, even if no behavioral evidence has been taken into account. Moreover, a program which solves problems is by that sole virtue a candidate for a model and deserves the psychologists' attention. After all, proving theorems or recognizing patterns was until recently uniquely human or animal." (Frijda, 1967, p. 59).


Appendix B. Bibliography

Abbas, S. A., H. F. Koehler, T. C. Kwei, H. O. Leilich and R. H. Robinson, Design Considerations for the Chain Magnetic Storage Array, IBM J. Res. & Dev. 11, 302-311 (May 1967).

Abraham, C., G. N. Lance and T. Pearcey, Multiplexing of Slow Peripherals, Commun. ACM 9, No. 12, 877-878 (Dec. 1966).

Abrahams, P. W., J. A. Barnett, E. Book, D. Firth, S. L. Kameny, C. Weissman, L. Hawkinson, M. I. Levin and R. A. Saunders, The LISP 2 Programming Language and System, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 661-676 (Spartan Books, Washington, D.C., 1966).

Ackoff, R. L., Systems, Organizations, and Interdisciplinary Research, in Systems: Research and Design, Proc. First Systems Symp. at Case Institute of Technology, Cleveland, O., Apr. 1960, Ed. D. P. Eckman, pp. 26-42 (Wiley, New York, 1961).

Adams, C. W., Responsive Time-Shared Computing in Business - Its Significance and Implications, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 483-488 (Spartan Books, Washington, D.C., 1965).

Aines, A. A., The Convergence of the Spearheads, in Toward a National Information System, Second Annual National Colloquium on Information Retrieval, Philadelphia, Pa., April 23-24, 1965, Ed. M. Rubinoff, pp. 3-13 (Spartan Books, Washington, D.C., 1965).

Alberga, C. N., String Similarity and Misspellings, Commun. ACM 10, No. 5, 302-313 (May 1967).

Alt, F. L., The Standardization of Programming Languages, Proc. 19th National Conf., ACM, Philadelphia, Pa., Aug. 25-27, 1964, pp. B 2-1 to B 2-6 (Assoc. for Computing Machinery, 1964).

Alt, F. L. and M. Rubinoff, Eds., Advances in Computers, Vol. 7, 303 p. (Academic Press, New York, 1966).

Amdahl, G. M., Multi-Computers Applied to On-Line Systems, in On-Line Computing Systems, Proc. Symp. sponsored by the Univ. of California, Los Angeles, and Informatics, Inc., Los Angeles, Calif., Feb. 2-4, 1965, Ed. E. Burgess, pp. 38-42 (American Data Processing, Inc., Detroit, Mich., 1965).

Ammon, G. J. and C. Neitzert, An Experimental 65-Nanosecond Thin Film Scratchpad Memory System, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 649-659 (Spartan Books, Washington, D.C., 1965).

Anacker, W., G. F. Bland, P. Pleshko and P. E. Stuckert, On the Design and Performance of a Small 60-Nsec Destructive Readout Magnetic Film Memory, IBM J. Res. & Dev. 10, No. 1, 41-50 (Jan. 1966).

Anderson, R., E. Marden and B. Marron, File Organization for a Large Chemical Information System, NBS Tech. Note 285, 17 p. (U.S. Govt. Print. Off., Washington, D.C., Apr. 1966).

Andrews, D. H., An Array Processor Using Large Scale Integration, Computer Design 8, No. 1, 34-43 (Jan. 1969).

Andrews, M. C., Multifont Print Recognition, in Optical Character Recognition, Ed. G. L. Fischer, Jr., et al., pp. 287-304 (Spartan Books, Washington, D.C., 1962).

Applebaum, E. L., Implications of the National Register of Microfilm Masters, as Part of a National Preservation Program, Lib. Res. & Tech. Serv. 9, 489-494 (1965).

Armstrong, J. A., Fresnel Holograms: Their Imaging Properties and Aberrations, IBM J. Res. & Dev. 9, No. 3, 171-178 (May 1965).

Armstrong, R., H. Conrad, P. Ferraiolo and P. Webb, Systems Recovery from Main Frame Errors, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 409-411 (Thompson Books, Washington, D.C., 1967).

Aron, J. D., Real-Time Systems in Perspective, IBM Sys. J. 6, 49-67 (1967).

Aron, J. D., Information Systems in Perspective, in IBM Proc. Inf. Systems Symp., Washington, D.C., Sept. 4-6, 1968, pp. 3-37 (IBM Corp., 1968).

Arora, B. M., D. L. Bitzer, H. G. Slottow and R. H. Wilson, The Plasma Display Panel - A New Device for Information Display and Storage, Rept. No. R-346, 24 p. (Coordinated Science Laboratory, Univ. of Illinois, Urbana, Ill., April 1967).

Aschenbrenner, R. A., M. J. Flynn and G. A. Robinson, Intrinsic Multiprocessing, AFIPS Proc. Spring Joint Computer Conf., Vol. 30, Atlantic City, N.J., Apr. 18-20, 1967, pp. 81-86 (Thompson Books, Washington, D.C., 1967).

Asendorf, R. H., The Remote Reconnaissance of Extraterrestrial Environments, in Pictorial Pattern Recognition, Proc. Symp. on Automatic Photointerpretation, Washington, D.C., May 31-June 2, 1967, Ed. G. C. Cheng et al., pp. 223-238 (Thompson Book Co., Washington, D.C., 1968).

Ash, W. L. and E. H. Sibley, TRAMP: An Interpretive Associative Processor with Deductive Capabilities, Proc. 23rd National Conf. ACM, Las Vegas, Nev., Aug. 27-29, 1968, pp. 143-156 (Brandon/Systems Press, Inc., Princeton, N.J., 1968).

Auerbach Corporation, Source Data Automation, Final Rept. 1392-200-TR-1, 1 v. (Philadelphia, Pa., Jan. 16, 1967).

Austin, C. J., Dissemination of Information and Problems of Graphic Presentation, Proc. 1965 Congress F.I.D. 31st Meeting and Congress, Vol. II, Washington, D.C., Oct. 7-16, 1965, pp. 241-245 (Spartan Books, Washington, D.C., 1966).

Avižienis, A., Design of Fault-Tolerant Computers, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 733-743 (Thompson Books, Washington, D.C., 1967).

Avižienis, A., A Digital Computer with Automatic Self-Repair and Majority Vote, Computers & Automation 17, No. 5, 13 (May 1968).

Avram, H. D., R. S. Freitag and K. D. Guiles, A Proposed Format for a Standardized Machine-Readable Catalog Record, ISS Planning Memo., Preliminary Draft, 110 p. (Library of Congress, Washington, D.C., June 1965).

Baker, C. E. and A. D. Rugari, A Large-Screen Real-Time Display Technique, Inf. Display 3, No. 2, 37-46 (Mar./Apr. 1966).

Baker, F. B., Latent Class Analysis as an Association Model for Information Retrieval, in Statistical Association Methods for Mechanized Documentation, Symp. Proc., Washington, D.C., March 17-19, 1964, NBS Misc. Pub. 269, Ed. M. E. Stevens et al., pp. 149-155 (U.S. Govt. Print. Off., Washington, D.C., Dec. 15, 1965).

Baker, F. T. and W. E. Triest, Advanced Computer Organization, Rept. No. RADC-TR-66-148, 1 v. (Rome Air Development Center, Griffiss Air Force Base, N.Y., May 1966).

Baker, J. D., From the Diet of Worms to the Bucket of Worms: A Protest Concerning Existing Display Dogma for Information Systems, in Second Congress on the Information System Sciences, The Homestead, Hot Springs, Va., Nov. 1964, Ed. J. Spiegel and D. E. Walker, pp. 429-433 (Spartan Books, Washington, D.C., 1965).

Baker, J. U. and W. A. Kane, Make EDP Controls Work for You, in Data Processing, Vol. X, Proc. 1966 International Data Processing Conf., Chicago, Ill., June 21-24, 1966, pp. 97-100 (Data Processing Management Assoc., 1966).

Balzer, R. M., EXDAMS - EXtendable Debugging and Monitoring System, AFIPS Proc. Spring Joint Computer Conf., Vol. 34, Boston, Mass., May 14-16, 1969, pp. 567-580 (AFIPS Press, Montvale, N.J., 1969).

Baran, P., On Distributed Communications: IV. Priority, Precedence, and Overload, Rept. No. RM-3638-PR, 63 p. (The RAND Corp., Santa Monica, Calif., Aug. 1964).

Baran, P., On Distributed Communications: V. History, Alternative Approaches, and Comparisons, Rept. No. RM-3097-PR, 51 p. (The RAND Corp., Santa Monica, Calif., Aug. 1964).

Baran, P., On Distributed Communications: VII. Tentative Engineering Specifications and Preliminary Design for a High-Data-Rate Distributed Network Switching Node, Rept. No. RM-3763-PR, 85 p. (The RAND Corp., Santa Monica, Calif., Aug. 1964).

Baran, P., On Distributed Communications: VIII. The Multiplexing Station, Rept. No. RM-3764-PR, 103 p. (The RAND Corp., Santa Monica, Calif., Aug. 1964).

Baran, P., On Distributed Communications: IX. Security, Secrecy, and Tamper-Free Considerations, Rept. No. RM-3765-PR, 39 p. (The RAND Corp., Santa Monica, Calif., Aug. 1964).

Barnum, A. R., Reliability Central Data Management System, in Toward a National Information System, Second Annual National Colloquium on Information Retrieval, Philadelphia, Pa., April 23-24, 1965, Ed. M. Rubinoff, pp. 45-61 (Spartan Books, Washington, D.C., 1965).

Barton, R. S., A Critical Review of the State of the Programming Art, AFIPS Proc. Spring Joint Computer Conf., Vol. 23, Detroit, Mich., May 1963, pp. 169-177 (Spartan Books, Baltimore, Md., 1963).

Baruch, J. J., A Medical Information System: Some General Observations, in Information System Science and Technology, papers prepared for the Third Cong., Scheduled for Nov. 21-22, 1966, Ed. D. E. Walker, pp. 145-150 (Thompson Book Co., Washington, D.C., 1967).

Bauer, W. F., On-Line Systems - Their Characteristics and Motivations, in On-Line Computing Systems, Proc. Symp., Los Angeles, Calif., Feb. 2-4, 1965, Ed. E. Burgess, pp. 14-24 (American Data Processing, Inc., Detroit, Mich., 1965).

Becker, C. H., UNICON Computer Mass Memory System, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 711-716 (Spartan Books, Washington, D.C., 1966).

Becker, J. and R. M. Hayes, Information Storage and Retrieval: Tools, Elements, Theories, 448 p. (John Wiley & Sons, New York, N.Y., 1963).

Bemer, R. W. and A. L. Ellison, Software Instrumentation Systems for Optimum Performance, Proc. IFIP Congress 68, Edinburgh, Scotland, Aug. 5-10, 1968, Booklet C, pp. C 39-C 42 (North-Holland Pub. Co., Amsterdam, 1968).

Bennett, E., Flexibility of Military Information Systems, in Military Information Systems - The Design of Computer-Aided Systems for Command, Ed. E. Bennett et al., pp. 88-109 (Frederick A. Praeger, Pub., N.Y., 1964).

Bennett, E., J. Degan and J. Spiegel, Eds., Military Information Systems - The Design of Computer-Aided Systems for Command, 180 p. (Frederick A. Praeger, Pub., N.Y., 1964).

Bennett, E., E. C. Haines and J. K. Summers, AESOP: A Prototype for On-Line User Control of Organizational Data Storage, Retrieval and Processing, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 435-455 (Spartan Books, Washington, D.C., 1965).

Bennett, S. J. and J. W. C. Gates, Holography of Diffusely Reflecting Objects Using a Double Focus Lens, Nature 221, 1234-1235 (Mar. 29, 1969).

Bernstein, M. I. and T. G. Williams, An Interactive Programming System for the Casual User, Computers and Automation 17, 26-29 (Feb. 1968).

Bernstein, M. I. and T. G. Williams, A Two Dimensional Programming System, Proc. IFIP Congress 68, Edinburgh, Scotland, Aug. 5-10, 1968, Booklet C, pp. C 84-C 89 (North-Holland Pub. Co., Amsterdam, 1968).

Bernstein, W. A. and J. T. Owens, Debugging in a Time-Sharing Environment, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 1, San Francisco, Calif., Dec. 9-11, 1968, pp. 7-14 (Thompson Book Co., Washington, D.C., 1968).

Bialer, M., A. A. Hastbacka and T. J. Matcovich, Chips Are Down in New Way to Build Large Microsystems, Electronics 38, No. 20, 102-109 (Oct. 4, 1965).

Bilous, O., I. Feinberg and J. L. Langdon, Design of Monolithic Circuit Chips, IBM J. Res. & Dev. 10, No. 5, 370-376 (Sept. 1966).

Birch, R. L., Packaging, Labeling, and Finding Evaluated Technical Data, in Data/Information Availability, Ed. R. I. Cole, pp. 163-175 (Thompson Book Co., Washington, D.C., 1966).

Birmingham, D. J., Planning for Data Communications, Data Proc. Mag. 6, No. 10, 36-38 (Oct. 1964).

Bittman, E. E., A 16k-Word, 2-MC Magnetic Thin-Film Memory, AFIPS Proc. Fall Joint Computer Conf., Vol. 26, San Francisco, Calif., Oct. 1964, pp. 93-106 (Spartan Books, Baltimore, Md., 1964).

Bleier, R. E. and A. H. Vorhaus, File Organization in the SDC Time-Shared Data Management System (TDMS), Proc. IFIP Congress 68, Edinburgh, Scotland, Aug. 5-10, 1968, Booklet F, pp. F 92-F 97 (North-Holland Pub. Co., Amsterdam, 1968).

Bloembergen, N., New Horizons in Quantum Electronics, IEEE Spectrum 4, 83-86 (July 1967).

Bloom, A. L., Gas Lasers, Proc. IEEE 54, 1262-1276 (Oct. 1966).

Blunt, C. R., An Information Retrieval System Model, Rept. No. 352.14-R-1, 150 p. (HRB-Singer, Inc., State College, Pa., Oct. 1965).

Blunt, C., R. Duquet and P. Luckie, Simulation of Information Systems - Some Advantages, Limitations and An Example of a General Information Systems Simulator, in Levels of Interaction Between Man and Information, Proc. Am. Doc. Inst. Annual Meeting, Vol. 4, New York, N.Y., Oct. 22-27, 1967, pp. 75-79 (Thompson Book Co., Washington, D.C., 1967).

Bobrow, D. G., Problems in Natural Language Communication with Computers, Rept. No. Scientific-5, BBN-1439, AFCRL 66-620, 19 p. (Bolt, Beranek and Newman, Inc., Cambridge, Mass., Aug. 1966).

Bonin, E. L. and J. R. Baird, What's New in Semiconductor Emitters and Sensors, Electronics 38, 98-104 (Nov. 1965).

Bonn, T. H., Mass Storage: A Broad Review, Proc. IEEE 54, 1861-1870 (Dec. 1966).

Borko, H., Design of Information Systems and Services, in Annual Review of Information Science and Technology, Vol. 2, Ed. C. A. Cuadra, pp. 35-61 (Interscience Pub., New York, 1967).

Borko, H., Ed., Automated Language Processing, 386 p. (Wiley, New York, 1967).

Bowers, D. M., W. T. Lennon, Jr., W. F. Jordan, Jr., and D. G. Benson, TELLERTRON - A Real-Time Updating and Transaction Processing System for Savings Banks, 1962 IRE Int. Conv. Rec., Pt. 4, pp. 101-113.

Boyd, R., Monitor Controlled Computer Processing of Bookwork, in Institute of Printing, Computer Typesetting Conf., Rept. of Proc., London Univ., July 1964, pp. 152, 153, 157 (Pub. London, 1965).

Brick, D. B. and G. G. Pick, Microsecond Word-Recognition System, IEEE Trans. Electron. Computers EC-13, 57-59 (Feb. 1964).

Briley, B. E., Picoprogramming: A New Approach to Internal Computer Control, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 93-98 (Spartan Books, Washington, D.C., 1965).

Brookner, E., M. Kolker and R. M. Wilmotte, Deep-Space Optical Communications, IEEE Spectrum 4, 75-82 (Jan. 1967).

Brooks, F. P., Jr., The Future of Computer Architecture, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 87-91 (Spartan Books, Washington, D.C., 1965).

Brown, B. R. and A. W. Lohmann, Computer-Generated Binary Holograms, IBM J. Res. & Dev. 13, No. 2, 160-168 (Mar. 1969).

Brown, G. W., J. G. Miller and T. A. Keenan, Eds., EDUNET - Report of the Summer Study on Information Networks Conducted by the Interuniversity Communications Council (EDUCOM), 440 p. (Wiley, New York, 1967).

Brown, R. M., An Experimental Study of an On-Line Man-Computer System, IEEE Trans. Electron. Computers EC-14, 82-85 (1965).

Bryant, E. C., Redirection of Research into Associative Retrieval, in Parameters of Information Science, Proc. Am. Doc. Inst. Annual Meeting, Vol. 1, Philadelphia, Pa., Oct. 5-8, 1964, pp. 503-505 (Spartan Books, Washington, D.C., 1964).

Bucci, W., PCM: A Global Scramble for Systems Compatibility, Electronics 42, No. 13, 94-102 (June 23, 1969).

Buck, C. P., R. F. Pray, III and G. W. Walsh, Investigation and Study of Graphic-Semantic Composing Techniques, Rept. No. RADC-TR-61-58, Final Rept. Contract AF 30(602)2091, 1 v. (Syracuse Univ., Research Inst., June 1961).

Buckland, L. F., Machine Recording of Textual Information During the Publication of Scientific Journals, Research Proposal to the National Science Foundation, 1 v. (Inforonics, Inc., Maynard, Mass., Dec. 16, 1963).

Bujnoski, F., On-Line Memory Integrity Evaluation and Improvement, Computer Design 7, No. 12, 31-34 (Dec. 1968).

Burdick, D. S. and T. H. Naylor, Design of Computer Simulation Experiments for Industrial Systems, Commun. ACM 9, 329-339 (May 1966).

Burge, W. H., A Reprogramming Machine, Commun. ACM 9, 60-66 (Feb. 1966).

Burger, J. B., High Speed Display of Chemical Nomenclature, Molecular Formula and Structural Diagram, Rept. No. C105-R-4, 19 p. (General Electric Co., Huntsville, Ala., Dec. 31, 1964).

Burgess, E., Ed., On-Line Computing Systems, Proc. Symp. sponsored by the Univ. of California, Los Angeles, and Informatics, Inc., Los Angeles, Calif., Feb. 2-4, 1965, 152 p. (American Data Processing, Inc., Detroit, Mich., 1965).

Burkhardt, W. H., Universal Programming Languages and Processors: A Brief Survey and New Concepts, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 1-21 (Spartan Books, Washington, D.C., 1965).

Bush, G. P., Cost Considerations, in Electronics in Management, Ed. L. H. Hattery and G. P. Bush, pp. 101-111 (The University Press of Washington, Washington, D.C., 1956).

Butler, E. J., Microfilm and Company Survival Plans, Proc. Eleventh Annual Meeting and Convention, Vol. XI, Washington, D.C., Apr. 25-27, 1962, Ed. V. D. Tate, pp. 57-68 (The National Microfilm Assoc., Annapolis, Md., 1962).

Cain, A. M. and I. H. Pizer, The SUNY Biomedical Communication Network: Implementation of an On-Line, Real-Time, User-Oriented System, in Levels of Interaction Between Man and Information, Proc. Am. Doc. Inst. Annual Meeting, Vol. 4, New York, N.Y., Oct. 22-27, 1967, pp. 258-262 (Thompson Book Co., Washington, D.C., 1967).

Cameron, S. H., D. Ewing and M. Liveright, DIALOG: A Conversational Programming System with a Graphical Orientation, Commun. ACM 10, 349-357 (June 1967).

Campbell, D. J. and W. J. Heffner, Measurement and Analysis of Large Operating Systems During System Development, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 1, San Francisco, Calif., Dec. 9-11, 1968, pp. 903-914 (Thompson Book Co., Washington, D.C., 1968).

Cantrell, H. N. and A. L. Ellison, Multiprogramming System Performance Measurement and Analysis, AFIPS Proc. Spring Joint Computer Conf., Vol. 32, Atlantic City, N.J., April 30-May 2, 1968, pp. 213-221 (Thompson Book Co., Washington, D.C., 1968).

Caracciolo di Forino, A., Linguistic Problems in Programming Theory, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 223-228 (Spartan Books, Washington, D.C., 1965).

Caracciolo di Forino, A., Special Programming Languages, Centro Studi Calcolatrici Elettroniche, Universita di Pisa, Italy, 1965, 21 p.

Carlson, C. O. and H. D. Ives, Some Considerations in the Design of a Laser Thermal Microimage Recorder, 1968 WESCON Technical Papers, 16/1, Aug. 1968, 8 p.

Carlson, G., Techniques for Replacing Characters That Are Garbled on Input, AFIPS Proc. Spring Joint Computer Conf., Vol. 28, Boston, Mass., April 1966, pp. 189-192 (Spartan Books, Washington, D.C., 1966).

Carr, J. W., III and N. S. Prywes, Satellite Computers as Interpreters, Electronics 38, No. 24, 87-89 (1965).

Castleman, P. A., An Evolving Special-Purpose Time-Sharing System, Computers & Automation 16, 16-18 (Oct. 1967).


Catt, I., E. C. Garth and D. E. Murray, A High-Speed Integrated Circuit Scratchpad Memory, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 315-331 (Spartan Books, Washington, D.C., 1966).

Chang, J. T., J. F. Dillon, Jr. and U. F. Gianola, Magneto-Optical Variable Memory Based upon the Properties of a Transparent Ferrimagnetic Garnet at its Compensation Temperature, Proc. 10th Conf. on Magnetism and Magnetic Materials, Minneapolis, Minn., Nov. 16-19, 1964, Ed. I. S. Jacobs and E. G. Spencer, appeared as Part 2 of the Journal of Applied Physics 36, No. 3, 1110-1111 (Mar. 1965).

Chapman, R. E. and M. J. Fisher, A New Technique for Removable Media, Read-Only Memories, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 371-379 (Thompson Books, Washington, D.C., 1967).

Chase, E. N., Computer-Driven Line-Drawing Displays - A User's View, Data Proc. Mag. 10, 24-27 (Jan. 1968).

Cheng, G. C., R. S. Ledley, D. K. Pollock and A. Rosenfeld, Eds., Pictorial Pattern Recognition, Proc. Symp. on Automatic Photointerpretation, Washington, D.C., May 31-June 2, 1967, 521 p. (Thompson Book Co., Washington, D.C., 1968).

Cheydleur, B. F., SHIEF: A Realizable Form of Associative Memory, Am. Doc. 14, No. 1, 56-57 (Jan. 1963).

Cheydleur, B. F., Ed., Colloquium on Technical Preconditions for Retrieval Center Operations, Proc. National Colloquium on Information Retrieval, Philadelphia, Pa., April 24-25, 1964, 156 p. (Spartan Books, Washington, D.C., 1965).

Chong, C. F., R. Mosenkis and D. K. Hanson, Engineering Design of a Mass Random Access Plated Wire Memory, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 363-370 (Thompson Books, Washington, D.C., 1967).

Christensen, C. and E. N. Pinson, Multi-Function Graphics for a Large Computer System, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 697-711 (Thompson Books, Washington, D.C., 1967).

Chu, Y., A Destructive-Readout Associative Memory, IEEE Trans. Electron. Computers EC-14, No. 4, 600-605 (Aug. 1965).

Clancy, J. C. and M. S. Fineberg, Digital Simulation Languages: A Critique and A Guide, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 23-36 (Spartan Books, Washington, D.C., 1965).

Clapp, L. C., Some Brainware Problems in Information Systems and Operations Analysis, in Information System Science and Technology, papers prepared for the Third Cong., Scheduled for Nov. 21-22, 1966, Ed. D. E. Walker, pp. 3-6 (Thompson Book Co., Washington, D.C., 1967).

Clem, P. L., Jr., AMTRAN - A Conversational-Mode Computer System for Scientists and Engineers, in Proc. IBM Scientific Computing Symp. on Computer-Aided Experimentation, Yorktown Heights, N.Y., Oct. 11-13, 1965, pp. 115-150 (IBM Corp., White Plains, N.Y., 1966).

Climenson, W. D., File Organization and Search Techniques, in Annual Review of Information Science and Technology, Vol. 1, Ed. C. A. Cuadra, pp. 107-135 (Interscience Pub., New York, 1966).

Clippinger, R. F., Programming Implications of Hardware Trends, IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 207-212 (Spartan Books, Washington, D.C., 1965).

Coffman, E. G., Jr., and L. Kleinrock, Computer Scheduling Methods and Their Countermeasures, AFIPS Proc. Spring Joint Computer Conf., Vol. 32, Atlantic City, N.J., Apr. 30-May 2, 1968, pp. 11-21 (Thompson Book Co., Washington, D.C., 1968).

Coggan, B. B., The Design of a Graphic Display System, Rept. No. 67-36, 135 p. (Dept. of Engineering, Univ. of California, Los Angeles, Aug. 1967).

Cohen, J., A Use of Fast and Slow Memories in List-Processing Languages, Commun. ACM 10, No. 2, 82-86 (Feb. 1967).

Cohen, M. I., B. A. Unger and J. F. Milkosky, Laser Machining of Thin Films and Integrated Circuits, Bell Sys. Tech. J. 47, 385-405 (Mar. 1968).

Cohler, E. U. and H. Rubenstein, A Bit-Access Computer in a Communication System, AFIPS Proc. Fall Joint Computer Conf., Vol. 26, San Francisco, Calif., Oct. 1964, pp. 175-185 (Spartan Books, Baltimore, Md., 1964).


Cole, R. I., Ed., Data/Information Availability, 183 p. (Thompson Book Co., Washington, D.C., 1966).

Collier, R. J., Some Current Views on Holography, IEEE Spectrum 3, 67-74 (July 1966).

Collila, R. A., Time-Sharing and Multiprocessing Terminology, Datamation 12, No. 3, 49-51 (Mar. 1966).

Connolly, R., Itek EDP System Utilizes Laser, Emulsion Techniques, Electronic News 10, No. 475, 4 (Feb. 15, 1965).

Constantine, L. L., Integral Hardware/Software Design, Part 1: The Characteristics of a Modern Data System, Modern Data Systems 1, No. 2, 50-59 (Apr. 1968).

Conway, M. E. and L. M. Spandorfer, A Computer System Designer's View of Large Scale Integration, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 1, San Francisco, Calif., Dec. 9-11, 1968, pp. 835-845 (Thompson Book Co., Washington, D.C., 1968).

Conway, R. W., B. M. Johnson and W. I. Maxwell, Some Problems of Digital Systems Simulation, Management Sci. 6, No. 1, 92-110 (Jan. 1959).

Conway, R. W., J. J. Delfausse, W. L. Maxwell and W. E. Walker, CLP - The Cornell List Processor, Commun. ACM 8, No. 4, 215-216 (Apr. 1965).

Cooper, B., Optical Communications in the Earth's Atmosphere, IEEE Spectrum 3, 83-88 (July 1966).

Cooper, W. W., H. J. Leavitt and M. W. Shelly, II, Eds., New Perspectives in Organization Research, 606 p. (Wiley, New York, 1964).

Corbato, F. J. and V. A. Vyssotsky, Introduction and Overview of the Multics System, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 185-196 (Spartan Books, Washington, D.C., 1965).

Cornew, R. W., A Statistical Method of Spelling Correction, Inf. & Control 12, 79-93 (Feb. 1968).

Craig, J. A., S. C. Berezner, H. C. Carney and C. R. Longyear, DEACON: Direct English Access and Control, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 365-380 (Spartan Books, Washington, D.C., 1966).

Crawford, D. J., R. L. Moore, J. A. Parisi, J. K. Picciano and W. D. Pricer, Design Considerations for a 25-Nanosecond Tunnel Diode Memory, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 627-636 (Spartan Books, Washington, D.C., 1965).

Crosby, W. S., A Systems Design for the Integrated Bibliography Pilot Study, in Integrated Bibliography Pilot Study Progress Report, Ed. L. Sawin and R. Clark, pp. 33-49 (Univ. of Colorado, Boulder, June 1965).

Croxton, F. E., Identification of Technical Reports, in The Production and Use of Technical Reports, Proc. Workshop on the Production and Use of Technical Reports, conducted at the Catholic Univ. of America, April 13-18, 1953, Ed. B. M. Fry and J. J. Kortendick, pp. 121-135 (The Catholic Univ. of America Press, Washington, D.C., 1955).

Crutcher, W. C., Common Creation and Ownership of Massive Data Banks, Computers & Automation 18, No. 5, 24-26 (May 1969).

Cuadra, C. A., Ed., Annual Review of Information Science and Technology, Vol. 1, 389 p. (Interscience Pub., New York, 1966).

Cuadra, C. A., Ed., Annual Review of Information Science and Technology, Vol. 2, 484 p. (Interscience Pub., New York, 1967).

Cunningham, J. F., The Need for ADP Standards in the Federal Community, Datamation 15, No. 2, 26-28 (Feb. 1969).

Cutrona, L. J., Recent Developments in Coherent Optical Technology, in Optical and Electro-Optical Information Processing, Ed. J. R. Tippett et al., pp. 83-123 (M.I.T. Press, Cambridge, Mass., 1965).

Dahl, O. J. and K. Nygaard, SIMULA - An ALGOL-Based Simulation Language, Commun. ACM 9, No. 9, 671-678 (Sept. 1966).

Dahm, D. M., F. H. Gerbstadt and M. M. Pacelli, A System Organization for Resource Allocation, Commun. ACM 10, No. 12, 772-779 (Dec. 1967).

Daley, R. C. and J. B. Dennis, Virtual Memory, Processes, and Sharing in MULTICS, Commun. ACM 11, No. 5, 306-312 (May 1968).

Daly, J., R. D. Joseph and P. M. Kelly, Self-Organizing Logic Systems, 27 p. (Astropower, Inc., Costa Mesa, Calif., Jan. 1962).

Damron, S., J. Lucas, J. Miller, E. Salbu and M. Wildmann, A Random Access Terabit Magnetic Memory, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 2, San Francisco, Calif., Dec. 9-11, 1968, pp. 1381-1387 (Thompson Book Co., Washington, D.C., 1968).

Dantine, D. J., Communications Needs of the User for Management Information Systems, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 403-411 (Spartan Books, Washington, D.C., 1966).

Davenport, W. C., CAS Computer-Based Information Services, Datamation 14, No. 3, 33-39 (Mar. 1968).

Davenport, W. P., Efficiency and Error Control in Data Communications, Pt. 2, Data Proc. Mag. 8, 30-35 (Sept. 1966).

David, E. E., Jr., Prologue - Computers for All Seasons, in The Human Use of Computing Machines, Proc. Symp. concerned with Diverse Ways of Enhancing Perception and Intuition, Bell Telephone Laboratories, Murray Hill, N.J., June 20-21, 1966, pp. 1-3 (Bell Telephone Labs., 1966).

David, E. E., Jr., Sharing a Computer, Int. Sci. Tech. 54, 38-47 (1966).

Davis, R. M., Information Control in Command-Control Systems, in New Perspectives in Organization Research, Ed. W. W. Cooper et al., pp. 464-478 (Wiley, New York, 1964).

Davis, R. M., Military Information Systems Design Techniques, in Military Information Systems - The Design of Computer-Aided Systems for Command, Ed. E. Bennett et al., pp. 19-28 (Frederick A. Praeger, Pub., New York, 1964).

Davis, R. M., Classification and Evaluation of Information System Design Techniques, in Second Cong. on the Information System Sciences, The Homestead, Hot Springs, Va., Nov. 1964, Ed. J. Spiegel and D. E. Walker, pp. 77-83 (Spartan Books, Washington, D.C., 1965).

Davis, R. M., Man-Machine Communication, in Annual Review of Information Science and Technology, Vol. 1, Ed. C. A. Cuadra, pp. 221-254 (Interscience Pub., New York, 1966).

Davis, R. M., Information Control in an Information System, draft of lecture delivered to the Washington, D.C., Chapter, The Institute of Management Sciences, Oct. 18, 1967, 49 p.

Davis, R. M., Communication Technology and Planning in Support of Network Implementation, unpublished memorandum, 1968, 6 p.

Denning, P. J., The Working Set Model for Program Behavior, Commun. ACM 11, No. 5, 323-333 (May 1968).

Denning, P. J., Thrashing: Its Causes and Prevention, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 1, San Francisco, Calif., Dec. 9-11, 1968, pp. 915-922 (Thompson Book Co., Washington, D.C., 1968).

Dennis, J. B., Segmentation and the Design of Multiprogrammed Computer Systems, J. ACM 12, 589-602 (Oct. 1965).

Dennis, J. B., A Position Paper on Computing and Communications, Commun. ACM 11, No. 5, 370-377 (May 1968).

Dennis, J. B. and E. L. Glaser, The Structure of On-Line Information Processing Systems, in Second Cong. on the Information System Sciences, Hot Springs, Va., Nov. 1964, Ed. J. Spiegel and D. E. Walker, pp. 5-14 (Spartan Books, Washington, D.C., 1965).

Dennis, J. B. and E. C. Van Horn, Programmed Semantics for Multiprogrammed Computations, Rept. No. MAC-TR-23, 46 p. (Massachusetts Institute of Technology, Cambridge, Dec. 1965).

Dent, J., Diagnostic Engineering, IEEE Spectrum 4, 99-104 (July 1967).

DeParis, J. R., Random Access, Data Proc. Mag. 7, No. 2, 30-31 (Feb. 1965).

DeParis, J. R., Desk Top Teleprinter, Data Proc. Mag. 7, No. 10, 48-49 (Oct. 1965).

Dertouzos, M. L., PHASE PLOT: An On-Line Graphical Display Technique, IEEE Trans. Electron. Computers EC-16, 203-209 (April 1967).

Dillon et al., 1964 - See Chang et al., 1964.

Dimeff, J., W. D. Gunter, Jr., and R. J. Hruby, Spectral Dependence of Deep-Space Communications Capability, IEEE Spectrum 4, No. 9, 98-104 (Sept. 1967).

Dodd, G. G., APL - A Language for Associative Data Handling in PL/1, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 677-684 (Spartan Books, Washington, D.C., 1966).

Dorff, E. K., Computers and Communications: Complementing Technologies, Computers & Automation 18, No. 5, 22-23 (May 1969).

Dorion, G. H., R. W. Roth, J. J. Stafford and G. Cox, CRT Phosphor Activation of Photochromic Film, Inf. Display 3, 56-58 (Mar./Apr. 1966).

Drew, D. L., R. K. Summit, R. I. Tanaka and R. B. Whitely, An On-Line Technical Library Reference Retrieval System, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 2, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 341-342 (Spartan Books, Washington, D.C., 1966). Also in Am. Doc. 17, No. 1, 3-7 (Jan. 1966).

Dreyer, L., Principles of a Two-Level Memory Computer, Computers & Automation 17, No. 5, 40-42 (May 1968).

Duffy, G. F. and W. D. Timberlake, A Business-Oriented Time-Sharing System, AFIPS Proc. Spring Joint Computer Conf., Vol. 28, Boston, Mass., April 1966, pp. 265-275 (Spartan Books, Washington, D.C., 1966).

Duggan, M. A., Software Protection, Datamation 15, No. 6, 113, 116 (June 1969).

Duncan, C. J., General Comment, in Advances in Computer Typesetting, Proc. Int. Computer Typesetting Conf., Sussex, England, July 14-18, 1966, Ed. W. P. Jaspert, pp. x-xi (The Institute of Printing, London, 1967).

Duncan, C. J., Advanced Computer Printing Systems, The Penrose Annual 59, 254-267 (1966).

Dunn, E. S., Jr., The Idea of a National Data Center and the Issue of Personal Privacy, The Amer. Statistician 21, 21-27 (Feb. 1967).

Dunn, R. S., The Case for Bipolar Semiconductor Memories, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 596-598 (Thompson Books, Washington, D.C., 1967).

Ebersole, J. L., North American Aviation's National Operating System, in Toward a National Information System, Second Annual National Colloquium on Information Retrieval, Philadelphia, Pa., April 23-24, 1965, Ed. M. Rubinoff, pp. 169-198 (Spartan Books, Washington, D.C., 1965).

Ebersole, J. L., An Operating Model of a National Information System, Am. Doc. 17, No. 1, 33-40 (Jan. 1966).

Eckman, D. P., Ed., Systems: Research and Design, Proc. First Systems Symp. at Case Institute of Technology, Cleveland, Ohio, April 1960, 310 p. (Wiley, New York, 1961).

Edwards, A. W. and R. L. Chambers, Can A Priori Probabilities Help in Character Recognition, J. ACM 11, No. 4, 465-470 (Oct. 1964).

Edwards, W., Probabilistic Information Processing System for Diagnosis and Action Selection, in Second Cong. on the Information System Sciences, Hot Springs, Va., Nov. 1964, Ed. J. Spiegel and D. E. Walker, pp. 141-155 (Spartan Books, Washington, D.C., 1965).

Emerson, M., The 'Small' Computer versus Time-Shared Systems, Computers & Automation 14, No. 9, 18-20 (Sept. 1965).

Estrin, G., D. Hopkins, B. Coggan and S. D. Crocker, SNUPER COMPUTER - A Computer in Instrumentation Automation, AFIPS Proc. Spring Joint Computer Conf., Vol. 30, Atlantic City, N.J., April 18-20, 1967, pp. 645-656 (Thompson Books, Washington, D.C., 1967).

Estrin, G. and L. Kleinrock, Measures, Models and Measurements for Time-Shared Computer Utilities, Proc. 22nd National Conf., ACM, Washington, D.C., Aug. 29-31, 1967, pp. 85-96 (Thompson Book Co., Washington, D.C., 1967).

Evans, G. J., Jr., Experience Gained from the American Airlines SABRE System Control Program, Proc. 22nd National Conf., ACM, Washington, D.C., Aug. 29-31, 1967, pp. 77-83 (Thompson Book Co., Washington, D.C., 1967).

Evans, T. G. and D. L. Darley, DEBUG - An Extension to Current Online Debugging Techniques, Commun. ACM 8, No. 5, 321-326 (May 1965).

Evans, T. G. and D. L. Darley, On-Line Debugging Techniques: A Survey, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 37-50 (Spartan Books, Washington, D.C., 1966).


Fano, R. M. and F. J. Corbato, Time-Sharing on Computers,Scient. American 215, No. 3, 129-140 (1966).

Fedde, G. A., Plated Magnetic Cylindrical Thin Film MainMemory Systems, AMPS Proc. Fall Joint Computer Conf.,Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 594-596(Thompson Books, Washington, D.C., 1967).

Feldman, J. A., Aspects of Associative Processing, Tech. Note1965-13, 47 p. (Lincoln Laboratory, M.I.T., Lexington, MassApr. 21, 1965).

Fernbach, S., Computers in the U.S.A.-Today and Tomorrow,in Information Processing 1965, Proc. IFIP Congress 65,Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich,pp. 77-85 (Spartan Books, Washington, D.C., 1965).

Fife, D. W, and R. S. Rosenberg, Queuing in a Memory SharedComputer, Proc. 19th National Conf., ACM, Philadelphia, Pa.,Aug. 25-27, 1964, pp. H1-1 to H1-12 (Assoc. for ComputingMachinery, New York, 1964).

Fine, G. H., C. W. Jackson and P. V. McIsaac, Dynamic ProgramBehavior Under Paging, Proc. 21st National Conf. ACM,Los Angeles, Calif., Aug. 30 -Sent, 1, 1966, pp. 223-228(Thompson Book Co., Washington, D.C., 1966). Also in Rept.No, SP-2397, 19 p. (System Development Corp., Santa Monica,Calif., Jerre 1966).

Fischer, G. L., Jr., D. K. Pollock, B. Radack and M. E. Stevens,Eds., Optical Character Recognition, 412 p. (Spartan Books,Washington, D.C., 1962).

Fischler, M. A. and A. Reiter, Variable Topology Random Access Memory Organization, AFIPS Proc. Spring Joint Computer Conf., Vol. 34, Boston, Mass., May 14-16, 1969, pp. 381-391 (AFIPS Press, Montvale, N.J., 1969).

Fisher, D. L., Data, Documentation and Decision Tables, Commun. ACM 9, No. 1, 26-31 (Jan. 1966).

Fleet, J. J., How Far Away Are Cheap Mass Memories?, Data & Control 3, No. 8, 28-29 (Aug. 1965).

Fleisher, A., P. Pengelly, J. Reynolds, R. Schools and G. Sincerbox, An Optically Accessed Memory Using the Lippman Process for Information Storage, in Optical and Electro-Optical Information Processing, Ed. J. R. Tippett et al., pp. 1-30 (M.I.T. Press, Cambridge, Mass., 1965).

Flynn, M. J., A Prospectus on Integrated Electronics and Computer Architecture, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 97-103 (Spartan Books, Washington, D.C., 1966).

Forbes, R. E., D. H. Rutherford, C. B. Stieglitz and L. H. Tung, A Self-Diagnosable Computer, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 1073-1086 (Spartan Books, Washington, D.C., 1965).

Forgie, J. W., A Time- and Memory-Sharing Executive Program for Quick-Response On-Line Applications, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 599-609 (Spartan Books, Washington, D.C., 1965).

Fossum, E. G. and G. Kaskey, Optimization of Information Retrieval Language and Systems, Final Rept. under Contract AF 49(638)1194, 87 p. (UNIVAC, Blue Bell, Pa., Jan. 28, 1966).

Foster, J. M., List Processing, 54 p. (American Elsevier Pub., New York, 1967).

Fox, R. S., R. L. McDaniel, C. N. Mooers and W. J. Sanders, A Technique for Overcoming Lack of Standardization in Information Network Operation, in Progress in Information Science and Technology, Proc. Am. Doc. Inst. Annual Meeting, Vol. 3, Santa Monica, Calif., Oct. 3-7, 1966, pp. 157-165 (Adrianne Press, 1966).

Frank, A. L., B-Line, Bell Line Drawing Language, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 1, San Francisco, Calif., Dec. 9-11, 1968, pp. 179-191 (Thompson Book Co., Washington, D.C., 1968).

Frank, W. L., On-Line CRT Displays: User Technology and Software, in On-Line Computing Systems, Proc. Symp. sponsored by the Univ. of California, Los Angeles, and Informatics, Inc., Feb. 1965, Ed. E. Burgess, pp. 50-62 (American Data Proc., Inc., Detroit, Mich., 1965).

Franks, E. W., A Data Management System for Time-Shared File Processing Using a Cross-Index File and Self-Defining Entries, AFIPS Proc. Spring Joint Computer Conf., Vol. 28, Boston, Mass., April 1966, pp. 79-86 (Spartan Books, Washington, D.C., 1966).


French, M. B., Disk and Drum Memories, Modern Data 2, No. 5, 42-45, 48-50 (May 1969).

Frijda, N. H., Problems of Computer Simulation, Behav. Sci. 12, 59-67 (Jan. 1967).

Fry, B. M. and J. J. Kortendick, Eds., The Production and Use of Technical Reports, Proc. Workshop on the Production and Use of Technical Reports, conducted at the Catholic Univ. of America, April 13-18, 1953, 175 p. (The Catholic Univ. of America Press, Washington, D.C., 1955).

Fubini, E. G., The Opening Address, in Second Cong. on the Information System Sciences, Hot Springs, Va., Nov. 1964, Ed. J. Spiegel and D. E. Walker, pp. 1-4 (Spartan Books, Washington, D.C., 1965).

Fuller, R. H., Content-Addressable Memory Systems, Rept. No. 63:725, 2 v. (Dept. of Engr., Univ. of Calif., Los Angeles, Calif., 1963).

Fuller, R. H. and R. M. Bird, An Associative Parallel Processor with Application to Picture Processing, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 105-116 (Spartan Books, Washington, D.C., 1965).

Fulton, R. L., Visual Input to Computers, Datamation 9, No. 8, 37-40 (Aug. 1963).

Gabor, D., Associative Holographic Memories, IBM J. Res. & Dev. 13, No. 2, 156-159 (Mar. 1969).

Gaines, R. S. and C. Y. Lee, An Improved Cell Memory, IEEE Trans. Electron. Computers EC-14, No. 1, 72-75 (Feb. 1965).

Gall, R. G., Hybrid Associative Computer Study, Rept. No. RADC-TR-65-445, Vol. 1, 109 p. (Rome Air Development Center, Griffiss Air Force Base, N.Y., July 1966).

Gallenson, L. and C. Weissman, Time-Sharing Systems: Real and Ideal, Rept. No. SP-1272, 20 p. (System Development Corp., Santa Monica, Calif., Mar. 1965).

Gamblin, R. L., Some Problems of a High Capacity Read-Only Digital Information Storage System, 1968 WESCON Technical Papers, 16/3, Aug. 1968, 6 p.

Garvin, P. L., Ed., Natural Language and the Computer, 398 p. (McGraw-Hill Book Co., Inc., New York, 1963).

Gaver, D. P., Jr., Probability Models for Multiprogramming Computer Systems, J. ACM 14, No. 3, 423-438 (July 1967).

Geldermans, P., H. O. Leilich and T. R. Scott, Characteristics of the Chain Magnetic Film Storage Element, IBM J. Res. & Dev. 11, 291-301 (May 1967).

Gelernter, H. L., Realization of a Geometry Theorem Proving Machine, in UNESCO Information Processing, Proc. Int. Conf., Paris, June 15-20, 1959, pp. 273-282 (Oldenbourg, Munich; Butterworths, London, 1960).

Gentle, E. C., Jr., Data Communications in Business, 163 p. (American Telephone and Telegraph Co., New York, 1965).

Gibbs, G. and J. MacPhail, Philco Corporation, session on optical character page readers, in Research and Engineering Council of the Graphic Arts Industry, Inc., Proc. 14th Annual Conf., Rochester, N.Y., May 18-20, 1964, pp. 95-106 (Washington, D.C., 1964).

Gibson, D. H., Considerations in Block-Oriented Systems Design, AFIPS Proc. Spring Joint Computer Conf., Vol. 30, Atlantic City, N.J., April 18-20, 1967, pp. 75-80 (Thompson Books, Washington, D.C., 1967).

Gill, S., The Changing Basis of Programming, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 201-206 (Spartan Books, Washington, D.C., 1965).

Glaser, E. L., J. F. Couleur and G. A. Oliver, System Design of a Computer for Time Sharing Applications, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 197-202 (Spartan Books, Washington, D.C., 1965).

Glaser, E., D. Rosenblatt and M. K. Wood, The Design of a Federal Statistical Data Center, The American Statistician 21, No. 1, 12-20 (Feb. 1967).

Gluck, S. E., Impact of Scratchpads in Design: Multifunctional Scratchpad Memories in the Burroughs B8500, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 661-666 (Spartan Books, Washington, D.C., 1965).

Goettel, H. J., Bell System Business Communications Seminar, in Data Processing, Vol. X, Proc. 1966 International Data Processing Conf., Chicago, Ill., June 21-24, 1966, pp. 188-197 (Data Processing Management Assoc., 1966).

Gordon, J. P., Optical Communication, Int. Sci. & Tech. 44, 60-69 (Aug. 1965).

Gorn, S., Advanced Programming and the Aims of Standardization, Commun. ACM 9, No. 3, 232 (Mar. 1966).

Gould, R. L., GPSS/360 - An Improved General Purpose Simulator, IBM Sys. J. 8, No. 1, 16-27 (1969).

Grocer, F. and R. A. Myers, Graphic Computer-Assisted Design of Optical Filters, IBM J. Res. & Dev. 13, No. 2, 172-178 (Mar. 1969).

Graham, R. F., Semiconductor Memories: Evolution or Revolution?, Datamation 15, No. 6, 99-101, 103-104 (June 1969).

Graham, R. M., Protection in an Information Processing Utility, Commun. ACM 11, No. 5, 365-369 (May 1968).

Greenberger, C. B., The Automatic Design of a Data Processing System, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 277-282 (Spartan Books, Washington, D.C., 1965).

Greenberger, M., A Multipurpose System for the Structuring On-Line of Information and Decision Processes, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 2, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 347-348 (Spartan Books, Washington, D.C., 1966).

Griffith, R. L., Data Recovery in a Photo-Digital Storage System, IBM J. Res. & Dev. 13, No. 4, 456-467 (July 1969).

Gross, W. A., Information Storage and Retrieval, A State-of-the-Art Report, Ampex READOUT, special issue, 9 p. (Ampex Corp., Redwood City, Calif., 1967).

Günther, A., Microphotography in the Library, UNESCO Bull. Lib. 16, 1-22 (1962).

Gurk, H. M. and J. Minker, The Design and Simulation of an Information Processing System, J. ACM 8, 260-270 (1961).


Hagan, T. G., R. J. Nixon and L. J. Schaefer, The Adage Graphics Terminal, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 1, San Francisco, Calif., Dec. 9-11, 1968, pp. 747-755 (Thompson Book Co., Washington, D.C., 1968).

Halpern, M., The Case for Natural-Language Programming, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 639-649 (Spartan Books, Washington, D.C., 1966).

Halpern, M., The Foundations of the Case for Natural-Language Programming, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 639-649 (Spartan Books, Washington, D.C., 1966). Another version appears in IEEE Spectrum 4, 140-149 (Mar. 1967).

Hanlon, A. G., W. C. Myers and C. O. Carlson, Photochromic Micro-Image Technology, A Review, paper presented at the Equipment Manual Symp., jointly sponsored by the American Ordnance Assoc. and the Army Material Command, Detroit, Mich., Nov. 30-Dec. 2, 1965, 35 p. (The National Cash Register Co., Hawthorne, Calif., 1965).

Hansen, M. H. and J. L. McPherson, Potentialities and Problems of Electronic Data Processing, in Electronics in Management, Ed. L. H. Hattery and G. P. Bush, pp. 53-66 (The Univ. Press of Washington, Washington, D.C., 1956).

Harder, E. L., The Expanding World of Computers, Commun. ACM 11, 231-239 (Apr. 1968).

Haring, D. R., The Beam Pen: A Novel High Speed Input/Output Device for Cathode-Ray-Tube Display Systems, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 847-855 (Spartan Books, Washington, D.C., 1965).

Haring, D. R., A Display Console for an Experimental Computer-Based Augmented Library Catalog, Proc. 23rd National Conf., ACM, Las Vegas, Nev., Aug. 27-29, 1968, pp. 35-43 (Brandon/Systems Press, Inc., Princeton, N.J., 1968).

Haring, D. R., Computer-Driven Display Facilities for an Experimental Computer-Based Library, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 1, San Francisco, Calif., Dec. 9-11, 1968, pp. 255-265 (Thompson Book Co., Washington, D.C., 1968).


Hart, U. L., Photochromic Materials for Data Storage and Display, Second Annual Report on Contract Nonr 4583(00), 36 p. (UNIVAC Defense Systems Division, Sperry Rand Corp., St. Paul, Minn., May 15, 1966).

Hartsuch, P. J., Graphic Arts Progress ... in 1967, Graphic Arts Monthly 40, 52-57, 61 (Jan. 1968).

Haskell, J. W., Design of a Printed Card Capacitor Read-Only Store, IBM J. Res. & Dev. 10, No. 2, 142-157 (Mar. 1966).

Hattery, L. H. and G. P. Bush, Eds., Electronics in Management, 207 p. (The University Press of Washington, Washington, D.C., 1956).

Hayes, R. M., Mathematical Models for Information Retrieval, in Natural Language and the Computer, Ed. P. L. Garvin, pp. 268-309 (McGraw-Hill, New York, 1963).

Hayes, R. M., Information Retrieval: An Introduction, Datamation 14, No. 3, 22-26 (Mar. 1968).

Head, R. V., The Programming Gap in Real-Time Systems, Datamation 9, 39-41 (Feb. 1963).

Heiner, J. A., Jr. and R. O. Leishman, Generalized Random Extract Device, Proc. 21st National Conf., ACM, Los Angeles, Calif., Aug. 30-Sept. 1, 1966, pp. 339-345 (Thompson Book Co., Washington, D.C., 1966).

Hellerman, H., Some Principles of Time-Sharing Scheduler Strategies, IBM Sys. J. 8, No. 2, 94-117 (1969).

Henle, R. A. and L. O. Hill, Integrated Computer Circuits - Past, Present, and Future, Proc. IEEE 54, 1849-1860 (Dec. 1966).

Hennis, R. B., Recognition of Unnurtured Characters in a Multifont Application, Tech. Pub. 07.212, 13 p. (IBM Systems Development Div. Lab., Rochester, Minn., May 9, 1967).

Henry, W. R., Hierarchical Structure for Data Management, IBM Sys. J. 8, No. 1, 2-15 (1969).

Hershey, A. V., Calligraphy for Computers, NWL Rept. No. 2101, 1 v. (U.S. Naval Weapons Lab., Dahlgren, Va., Aug. 1, 1967).

Hickey, P. R., The A-1200: A Narrow Band System Test Vehicle, in Data Processing, Vol. X, Proc. 1966 Int. Data Proc. Conf., Chicago, Ill., June 21-24, 1966, pp. 175-187 (Data Proc. Management Assoc., 1966).

Hillegass, J. R., NCR's New Bid: The Century Series, Data Proc. Mag. 10, No. 4, 46-49, 52-54 (Apr. 1968).

Hillegass, J. R. and L. F. Melick, A Survey of Data Collection Systems, Data Proc. Mag. 9, No. 6, 50-56 (June 1967).

Hirsch, P. M., J. A. Jordan, Jr. and L. B. Lesem, Digital Construction of Holograms, Proc. IFIP Congress 68, Edinburgh, Scotland, Aug. 5-10, 1968, Booklet H, pp. H 104-H 109 (North-Holland Pub. Co., Amsterdam, 1968).

Hoagland, A. S., Storing Computer Data, Int. Sci. & Tech. No. 37, 52-58 (Jan. 1965).

Hoagland, A. S., Mass Storage Revisited, Proc. 22nd National Conf., ACM, Washington, D.C., Aug. 29-31, 1967, pp. 255-260 (Thompson Book Co., Washington, D.C., 1967).

Hobbs, L. C., The Impact of Hardware in the 1970's, Datamation 12, No. 3, 36-44 (Mar. 1966).

Hobbs, L. C., Display Applications and Technology, Proc. IEEE 54, 1870-1884 (Dec. 1966).

Hoffman, A., The Information and Data Exchange Experimental Activities (IDEEA) Program and Its Relation to the National Interests, in Toward a National Information System, Second Annual National Colloquium on Information Retrieval, Philadelphia, Pa., April 23-24, 1965, Ed. M. Rubinoff, pp. 87-104 (Spartan Books, Washington, D.C., 1965).

Holland, G., Reactive Terminal Service from ITT, Data Proc. 11, No. 1, 65-67 (Jan.-Feb. 1969).

Holland, J., A Universal Computer Capable of Executing an Arbitrary Number of Sub-Programs Simultaneously, Proc. Eastern Joint Computer Conf., Vol. 16, Boston, Mass., Dec. 1-3, 1959, pp. 108-113 (Eastern Joint Computer Conf., 1959).

Hormann, A. M., Introduction to ROVER, an Information Processor, Rept. No. FN-3487, 51 p. (System Development Corp., Santa Monica, Calif., Apr. 25, 1960).

Horne, W. H., A. S. Sabin and J. D. Wells, Random Access Communications for the Safety Services, in Law Enforcement Science and Technology, Vol. 1, Proc. First National Symp. on Law Enforcement Science and Technology, Chicago, Ill., March 1967, Ed. S. A. Yefsky, pp. 115-123 (Thompson Book Co., Washington, D.C., 1967).


Horvath, V. V., J. M. Holeman and C. Q. Lemmond, Fingerprint Recognition by Holographic Techniques, in Law Enforcement Science and Technology, Vol. 1, Proc. First National Symp. on Law Enforcement Science and Technology, Chicago, Ill., March 1967, Ed. S. A. Yefsky, pp. 485-492 (Thompson Book Co., Washington, D.C., 1967).

Housden, R. J. W., The Definition and Implementation of LSIX in BCL, The Computer J. 12, No. 1, 15-23 (Feb. 1969).

Howe, W., High-Speed Logic Circuit Considerations, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 505-510 (Spartan Books, Washington, D.C., 1965).

Huang, T. S. and O. J. Tretiak, Research in Picture Processing, in Optical and Electro-Optical Information Processing, Ed. J. R. Tippett et al., pp. 45-57 (M.I.T. Press, Cambridge, Mass., 1965).

Hudson, D. M., The Applications and Implications of Large-Scale Integration, Computer Design 7, 38-42, 47-48 (June 1968).

Huesmann, L. R. and R. P. Goldberg, Evaluating Computer Systems Through Simulation, The Computer J. 10, 150-156 (Aug. 1967).

Hughes Dynamics, Inc., Methodologies for System Design, Final Report for Contract No. AF30(602)-2620, Rept. No. RADC-TDR-63-486, Vol. 1, 1 v. (Los Angeles, Calif., Feb. 24, 1964).

Hughes Dynamics, Inc., The Organization of Large Files, Parts I-VI (Advance Inf. System Div., Hughes Dynamics, Inc., Sherman Oaks, Calif., Apr. 1964).

Huskey, H. D., On-Line Computing Systems: A Summary, in On-Line Computing Systems, Proc. Symp. sponsored by the Univ. of California, Los Angeles, and Informatics, Inc., Feb. 1965, Ed. E. Burgess, pp. 139-142 (American Data Proc., Inc., Detroit, Mich., 1965).

Hutchinson, G. K. and J. N. Maguire, Computer Systems Design and Analysis Through Simulation, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 161-167 (Spartan Books, Washington, D.C., 1965).

IBM System/360, Data Proc. 7, No. 5, 290-302 (Sept.-Oct. 1965).

The Impact of Integrated Circuits on the Computer Field, Computers & Automation 14, No. 7, 9-10 (1965).

Impact of LSI on the Next Generation of Computers (Panel Discussion), Participants: J. P. Eckert, G. Hollander, M. Palevsky and B. Pollard, Computer Design 8, No. 6, 48, 50-59 (June 1969).

Investigation of Inorganic Phototropic Materials as a Bi-Optic Element Applicable in High Density Storage Computer Memories, Rept. No. ASD-TDR-62-305, 50 p. (Aeronautical Systems Div., Electronic Technology Lab., Wright-Patterson AFB, Ohio, Apr. 1962).

Israel, D. R., System Engineering Experience with Automated Command and Control Systems, in Information System Science and Technology, papers prepared for the Third Cong., scheduled for Nov. 21-22, 1966, Ed. D. E. Walker, pp. 193-213 (Thompson Book Co., Washington, D.C., 1967).

Jacobellis, B. R., Impact of Computer Technology on Communications, Proc. ACM 19th National Conf., Philadelphia, Pa., Aug. 25-27, 1964, pp. N2.1-1 to N2.1-4 (Assoc. for Computing Machinery, New York, N.Y., 1964).

Jacobs, I. S. and E. G. Spencer, Eds., Proc. 10th Conference on Magnetism and Magnetic Materials, Minneapolis, Minn., Nov. 16-19, 1964. Appeared as Part 2 of the Journal of Applied Physics 36, No. 3, 877-1280 (Mar. 1965).

Jacobs, J. F., Communication in the Design of Military Information Systems, in Military Information Systems, Ed. E. Bennett et al., pp. 29-45 (Frederick A. Praeger, Pub., New York, 1964).

Jacoby, K., Isolation of Control Malfunctions in a Digital Computer, preprints of summaries of papers presented at the 14th National Meeting, ACM, Cambridge, Mass., Sept. 1-3, 1959, pp. 7-1 to 7-4 (Assoc. for Computing Machinery, New York, 1959).

Jaspert, W. P., Ed., Advances in Computer Typesetting, Proc. Int. Computer Typesetting Conf., Sussex, England, July 14-18, 1966, 306 p. (The Institute of Printing, London, 1967).


Jensen, P. A., A Graph Decomposition Technique for Structuring Data, Rept. No. 106-3, 1 v. (Computer Command and Control Co., Washington, D.C., Sept. 1, 1967).

Johnson, H. R., Computers and the Public Welfare, Law Enforcement, Social Services and Data Banks, preprint, paper for UCLA Conf. on Computers and Communications, 21 p. (Office of Telecommunications Management, Executive Office of the President, Washington, D.C., 1967).

Jones, A. H., Time-Sharing - An Accepted Technique, Data Proc. 11, No. 1, 68-69 (Jan.-Feb. 1969).

Jones, J. D., Electronic Data Communications, in Data Processing Yearbook 1965, pp. 63-66 (American Data Processing, Inc., Detroit, Mich., 1964).

Jones, R. D., The Public Planning Information System and the Computer Utility, Proc. 22nd National Conf., ACM, Washington, D.C., Aug. 29-31, 1967, pp. 553-564 (Thompson Book Co., Washington, D.C., 1967).

Jones, R. H. and E. E. Bittmann, The B8500 Half-Microsecond Thin-Film Memory, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 347-352 (Thompson Books, Washington, D.C., 1967).

Joseph, E. C., Computers: Trends Toward the Future, Proc. IFIP Congress 68, Edinburgh, Scotland, Aug. 5-10, 1968, Vol. 1, Invited Papers, pp. 145-157 (North-Holland Pub. Co., Amsterdam, 1968).

Joyce, J. D. and M. J. Cianciolo, Reactive Displays: Improving Man-Machine Graphical Communication, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 713-721 (Thompson Books, Washington, D.C., 1967).

Justice, B. and F. B. Liebold, Jr., Photochromic Glass - A New Tool for the Display System Designer, Inf. Display 2, 23-28 (Nov.-Dec. 1965).

Kalenich, W. A., Ed., Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, 304 p. (Spartan Books, Washington, D.C., 1965).

Kalenich, W. A., Ed., Information Processing 1965, Proc. IFIP Congress 65, Vol. 2, New York, N.Y., May 24-29, 1965, pp. 305-648 (Spartan Books, Washington, D.C., 1966).

Kaplan, D. C. and C. F. Kooi, Magnetostatic Echoes, Tech. Rept. No. 8 (Lockheed Palo Alto Research Lab., Palo Alto, Calif., Aug. 1, 1966).

Karush, W., On the Use of Mathematics in Behavioral Research, in Natural Language and the Computer, Ed. P. L. Garvin, pp. 67-83 (McGraw-Hill, New York, 1963).

Kaufman, B. A., P. B. Ellinger and H. J. Kuno, A Rotationally Switched Rod Memory with a 100-Nanosecond Cycle Time, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 293-304 (Spartan Books, Washington, D.C., 1966).

Kautz, W. H., K. N. Levitt and A. Waksman, Cellular Interconnection Arrays, IEEE Trans. Computers C-17, 443-451 (May 1968).

Kay, M., F. Valadez and T. Ziehe, The Catalog Input/Output System, Memo. RM-4540-PR, 64 p. (The RAND Corp., Santa Monica, Calif., Mar. 1966).

Kay, R. H., The Management and Organization of Large Scale Software Development Projects, AFIPS Proc. Spring Joint Computer Conf., Vol. 34, Boston, Mass., May 14-16, 1969, pp. 425-433 (AFIPS Press, Montvale, N.J., 1969).

Kent, A., Ed., Information Retrieval and Machine Translation, Pt. I, Proc. Int. Conf. for Standards on a Common Language for Machine Searching and Translation, Western Reserve Univ., Cleveland, Ohio, Sept. 6-12, 1959, 686 p. (International Pub. Inc., New York, 1960).

Kesselman, M. L., How Do We Stand on the Big Board?, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 161-167 (Thompson Books, Washington, D.C., 1967).

Kessler, M. M., An Experimental Communication Center for Scientific and Technical Information, 19 p. (Lincoln Lab., Massachusetts Institute of Technology, Lexington, Mar. 31, 1960).

King, G. W., Ed., Automation and the Library of Congress, a survey sponsored by the Council on Library Resources, Inc., 88 p. (Library of Congress, Washington, D.C., 1963).

King, G. W., The Library of Congress Project, Lib. Res. & Tech. Serv. 9, No. 1, 90-93 (Winter 1965).

Knowlton, K. C., A Programmer's Description of L6, Commun. ACM 9, No. 8, 616-625 (Aug. 1966).

Knuth, D. E. and J. E. McNeley, SOL: A Symbolic Language for General Purpose Systems Simulation, IEEE Trans. Electron. Computers EC-13, No. 4, 401 (Aug. 1964).

Kochen, M., Ed., Some Problems in Information Science, 309 p. (The Scarecrow Press, Inc., New York, 1965).

Kohn, G., Future of Magnetic Memories, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 131-136 (Spartan Books, Washington, D.C., 1965).

Kohn, G., W. Jutzi, Th. Mohr and D. Seitzer, A Very-High-Speed, Nondestructive-Read Magnetic Film Memory, IBM J. Res. & Dev. 11, No. 2, 162-168 (Mar. 1967).

Kosonocky, W. F. and R. H. Cornely, Lasers for Logic Circuits, 1968 WESCON Technical Papers, 16/4, Aug. 1968, 10 p.

Kovalevsky, V. A., Present and Future of Pattern-Recognition Theory, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 37-43 (Spartan Books, Washington, D.C., 1965).

Kozmetsky, G. and P. Kircher, Electronic Computers and Management Control, 296 p. (McGraw-Hill, New York, 1956).

Kroger, M. G., Introduction to Tactical Information Systems, in Second Cong. on the Information System Sciences, Hot Springs, Va., Nov. 1964, Ed. J. Spiegel and D. E. Walker, pp. 267-273 (Spartan Books, Washington, D.C., 1965).

Kump, H. J. and P. T. Chang, Thermostrictive Recording on Permalloy Films, IBM J. Res. & Dev. 10, No. 3, 255-260 (May 1966).

Kuney, J. H., Analysis of the Role of the Computer in the Reproduction and Distribution of Scientific Papers, in Automation and Scientific Communication, Short Papers, Pt. 2, papers contributed to the Theme Sessions of the 26th Annual Meeting, Am. Doc. Inst., Chicago, Ill., Oct. 6-11, 1963, Ed. H. P. Luhn, pp. 249-250 (Am. Doc. Inst., Washington, D.C., 1963).

Kuney, J. H. and B. G. Lazorchak, Machine-Set Chemical Structures, in Parameters of Information Science, Proc. Am. Doc. Inst. Annual Meeting, Vol. 1, Philadelphia, Pa., Oct. 5-8, 1964, pp. 303-305 (Spartan Books, Washington, D.C., 1964).

Lang, C. A., R. B. Polansky and D. T. Ross, Some Experiments with an Algorithmic Graphical Language, Rept. No. ESL-TM-220, 55 p. (Massachusetts Institute of Technology, Cambridge, Aug. 1965).

Larsen, R. P. and M. M. Mano, Modeling and Simulation of Digital Networks, Commun. ACM 8, No. 5, 308-312 (May 1965).

"Laser Applied to Molecular Kinetics Studies", NBS Tech. News Bull. 52, No. 11, 242-257 (Nov. 1968).

Laurance, N., A Compiler Language for Data Structures, Proc. 23rd National Conf., ACM, Las Vegas, Nev., Aug. 27-29, 1968, pp. 387-394a (Brandon/Systems Press, Inc., Princeton, N.J., 1968).

The Lavish Laser, Vectors (Hughes Aircraft Co.) VIII, 15-17, Fourth Quarter, 1966.

Lawson, H. W., Jr., PL/I List Processing, Commun. ACM 10, 358-367 (June 1967).

Ledley, R. S., J. Jacobsen and M. Belson, BUGSYS: A Programming System for Picture Processing - Not for Debugging, Commun. ACM 9, 79-84 (Feb. 1966).

Lee, R. W. and R. W. Worral, Eds., Electronic Composition in Printing, Proc. Symp., Gaithersburg, Md., June 15-16, 1967, NBS Special Pub. 295, 128 p. (U.S. Govt. Print. Off., Washington, D.C., Feb. 1968).

Lehrer, N. H. and R. D. Ketchpel, Recent Progress on a High-Resolution, Meshless, Direct-View Storage Tube, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 531-539 (Spartan Books, Washington, D.C., 1966).

Leith, E. N., A. Kozma and J. Upatnieks, Coherent Optical Systems for Data Processing, Spatial Filtering, and Wavefront Reconstruction, in Optical and Electro-Optical Information Processing, Ed. J. R. Tippett et al., pp. 143-158 (M.I.T. Press, Cambridge, Mass., 1965).



Lesem, L. B., P. M. Hirsch and J. A. Jordan, Jr., Holographic Display of Digital Images, Proc. 22nd National Conf., ACM, Washington, D.C., Aug. 29-31, 1967, pp. 41-47 (Thompson Book Co., Washington, D.C., 1967).

Lesem, L. B., P. M. Hirsch and J. A. Jordan, Jr., The Kinoform: A New Wavefront Reconstruction Device, IBM J. Res. & Dev. 13, No. 2, 150-155 (Mar. 1969).

Levien, R. and M. E. Maron, Relational Data File: A Tool for Mechanized Inference Execution and Data Retrieval, Rept. No. RM-4793-PR, 89 p. (The RAND Corp., Santa Monica, Calif., Dec. 1965).

Lewin, M. H., H. R. Beelitz and J. Guarracini, Fixed Resistor-Card Memory, IEEE Trans. Electron. Computers EC-14, 428-434 (1965).

Licklider, J. C. R., Artificial Intelligence, Military Intelligence, and Command and Control, in Military Information Systems, Ed. E. Bennett et al., pp. 118-133 (Frederick A. Praeger, Pub., New York, 1964).

Licklider, J. C. R., Man-Computer Interaction in Information Systems, in Toward a National Information System, Second Annual National Colloquium on Information Retrieval, Philadelphia, Pa., April 23-24, 1965, Ed. M. Rubinoff, pp. 63-75 (Spartan Books, Washington, D.C., 1965).

Licklider, J. C. R., Libraries of the Future, 219 p. (M.I.T. Press, Cambridge, Mass., 1965).

Licklider, J. C. R., Man-Computer Partnership, Int. Sci. & Tech. 41, 18-26 (1965).

Licklider, J. C. R., Interactive Information Processing, in Computer and Information Sciences-II, Proc. 2nd Symp. on Computer and Information Sciences, Columbus, Ohio, Aug. 22-24, 1966, Ed. J. T. Tou, pp. 1-13 (Academic Press, New York, 1967).

Licklider, J. C. R. and W. E. Clark, On-Line Man-Computer Communications, AFIPS Proc. Spring Joint Computer Conf., Vol. 21, San Francisco, Calif., May 1-3, 1962, pp. 113-128 (National Press, Palo Alto, Calif., 1962).

Linde, R. R. and P. E. Chaney, Operational Management of Time-Sharing Systems, Proc. 21st National Conf., ACM, Los Angeles, Calif., Aug. 30-Sept. 1, 1966, pp. 149-159 (Thompson Book Co., Washington, D.C., 1966).

Lindquist, A. B., R. R. Seeber and L. W. Comeau, A Time-Sharing System Using an Associative Memory, Proc. IEEE 54, 1774-1779 (Dec. 1966).

Lo, A. W., High-Speed Logic and Memory - Past, Present and Future, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 2, San Francisco, Calif., Dec. 9-11, 1968, pp. 1459-1465 (Thompson Book Co., Washington, D.C., 1968).

Lock, K., Structuring Programs for Multiprogram Time-Sharing On-Line Applications, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 457-472 (Spartan Books, Washington, D.C., 1965).

Loomis, H. H., Jr., Graphical Manipulation Techniques Using the Lincoln TX-2 Computer, Rept. No. 51-G-0017, 27 p. (Lincoln Lab., Massachusetts Institute of Technology, Lexington, Nov. 10, 1960).

Luhn, H. P., Ed., Automation and Scientific Communication, Short Papers, Pt. 2, papers contributed to the Theme Sessions of the 26th Annual Meeting, Am. Doc. Inst., Chicago, Ill., Oct. 6-11, 1963, pp. 129-352 (Am. Doc. Inst., Washington, D.C., 1963).

Lynch, W. C., Description of a High Capacity, Fast Turnaround University Computing Center, Commun. ACM 9, No. 2, 117-123 (Feb. 1966).

MacDonald, N., A Time Shared Computer System - The Disadvantages, Computers and Automation 14, No. 9, 21-22 (Sept. 1965).

Machover, C., Graphic CRT Terminals - Characteristics of Commercially Available Equipment, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 149-159 (Thompson Books, Washington, D.C., 1967).

Madnick, S. E., Multi-Processor Software Lockout, Proc. 23rd National Conf., ACM, Las Vegas, Nev., Aug. 27-29, 1968, pp. 19-24 (Brandon/Systems Press, Inc., Princeton, N.J., 1968).

Magnino, J. J., Jr., IBM Technical Information Retrieval Center - Normal Text Techniques, in Toward a National Information System, Second Annual National Colloquium on Information Retrieval, Philadelphia, Pa., April 23-24, 1965, Ed. M. Rubinoff, pp. 199-215 (Spartan Books, Washington, D.C., 1965).

Mahan, R. E., A State of the Art Survey of the Data Display Field, Rept. No. BNWL-725, AEC Research and Development Report, 1 v. (Battelle Northwest, Richland, Wash., May 1968).

Maiman, T. H., Stimulated Optical Radiation in Ruby, Nature 187, 493-494 (Aug. 1960).

Malbrain, J. P., Automated Computer Design, in Preprints of Summaries of Papers Presented at the 14th National Meeting, ACM, Cambridge, Mass., Sept. 1-3, 1959, pp. 4-1 to 4-2 (Assoc. for Computing Machinery, New York, 1959).

Marill, T. and L. G. Roberts, Toward a Cooperative Network of Time-Shared Computers, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 425-431 (Spartan Books, Washington, D.C., 1966).

Markus, J. V., State of the Art of Published Indexes, Am. Doc. 13, No. 1, 15-30 (Jan. 1962).

Markuson, B. E., Ed., Libraries and Automation, Proc. Conf. held at Airlie Foundation, Warrenton, Va., May 26-30, 1963, 368 p., under sponsorship of the Library of Congress, the National Science Foundation, and the Council on Library Resources (Library of Congress, Washington, D.C., 1964).

Markuson, B. E., Automation in Libraries and Information Centers, in Annual Review of Information Science and Technology, Vol. 2, Ed. C. A. Cuadra, pp. 255-284 (Interscience Pub., New York, 1967).

Matick, R. E., P. Pleshko, C. Sie and L. M. Terman, A High-Speed Read-Only Store Using Thick Magnetic Films, IBM J. Res. & Dev. 10, No. 4, 333-342 (July 1966).

Mazzarese, N. J., Experimental System Provides Flexibility and Continuity, Data Proc. Mag. 7, No. 5, 68-70 (1965).

McCallister, J. P. and C. F. Chong, A 500-Nanosecond Main Computer Memory Utilizing Plated-Wire Elements, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 305-314 (Spartan Books, Washington, D.C., 1966).

McCamy, C. S., Photographic Standardization and Research at the National Bureau of Standards, Applied Optics 6, No. 1, 27-30 (Jan. 1967).

McCarthy, J., Problems in the Theory of Computation, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 219-222 (Spartan Books, Washington, D.C., 1965).

McCarthy, J., S. Boilen, E. Fredkin and J. C. R. Licklider, A Time-Sharing Debugging System for a Small Computer, AFIPS Proc. Spring Joint Computer Conf., Vol. 23, Detroit, Mich., May 1963, pp. 51-57 (Spartan Books, Baltimore, Md., 1963).

McClure, R. M., TMG - A Syntax Directed Compiler, Proc. 20th National Conf., ACM, Cleveland, Ohio, Aug. 24-26, 1965, pp. 262-274 (Lewis Winner, New York, 1965).

McDermid, W. L. and H. E. Petersen, A Magnetic Associative Memory System, IBM J. Res. & Dev. 5, No. 1, 59-62 (Jan. 1961).

McFarland, K. and M. Hashiguchi, Laser Recording Unit for High Density Permanent Digital Data Storage, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 2, San Francisco, Calif., Dec. 9-11, 1968, pp. 1369-1380 (Thompson Book Co., Washington, D.C., 1968).

McGee, W. C., File Structures for Generalized Data Management, Proc. IFIP Congress 68, Edinburgh, Scotland, Aug. 5-10, 1968, Booklet F, pp. F 68-F 73 (North-Holland Pub. Co., Amsterdam, 1968).

McGee, W. C. and H. E. Petersen, Microprogram Control forthe Experimental Sciences, AFIPS Proc. Fall Joint ComputerConf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965,pp. 77-91 (Spartan Books, Washington, D.C., 1965).

McMains, H. J., Electrical Communications in the Future,Datamation 12, No. 11, 28-30 (Nov. 1966).

Meadow, C. T., The Analysis of Information Systems: A Programmer's Introduction to Information Retrieval, 301 p. (Wiley, New York, 1967).

Meddaugh, S. A. and K. L. Pearson, A 200-Nanosecond ThinFilm Main Memory System, AFIPS Proc. Fall Joint ComputerConf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp.281-292 (Spartan Books, Washington, D.C., 1966).


Melick, L. F., The Computer Talks Back, Data Proc. Mag. 8, 58-62 (Oct. 1966).

Menkhaus, E. J., The Ways and Means of Moving Data, Bus. Automation 14, 30-37 (Mar. 1967).

Merryman, J. D., Making Light of the Noise Problem, Electronics 38, No. 15, 52-56 (1965).

Miller, H. S., Resolving Multiple Responses in an AssociativeMemory, IEEE Trans. Electron. Computer EC-13, No. 5,614-616 (Oct. 1964).

Miller, L., J. Minker, W. G. Reed and W. E. Shindle, A Multi-Level File Structure for Information Processing, Proc. Western Joint Computer Conf., Vol. 17, San Francisco, Calif., May 3-5, 1960, pp. 53-59 (Pub. by Western Joint Computer Conf., San Francisco, Calif., 1960).

Mills, R. G., Communications Implications of the Project MAC Multiple-Access Computer System, 1965 IEEE International Conv. Rec., Pt. 1, pp. 237-241.

Mills, R. G., Man-Computer Interaction - Present and Future, 1966 IEEE International Conv. Rec., Pt. 6, pp. 196-198.

Mills, R. G., Man-Machine Communication and Problem Solving, in Annual Review of Information Science and Technology, Vol. 2, Ed. C. A. Cuadra, pp. 223-254 (Interscience Pub., New York, 1967).

Minker, J. and J. Sable, File Organization and Data Management, in Annual Review of Information Science and Technology, Vol. 2, Ed. C. A. Cuadra, pp. 123-160 (Interscience Pub., New York, 1967).

Minnick, R. C., A Survey of Microcellular Research, J. ACM 14,No. 2, 203-241 (Apr. 1967).

Mooers, C. N., Information Retrieval Selection Study, Part II:Seven System Models, Rept. No. ZTB-133, Part II, 39 p.(Zator Co., Cambridge, Mass., Aug. 1959).

Mooers, C. N., TRAC, A Procedure-Describing Language for theReactive Typewriter, Commun. ACM 9, No. 3, 215-219(Mar. 1966).

Mooers, C. N. and L. P. Deutsch, TRAC, A Text Handling Language, Proc. 20th National Conf., ACM, Cleveland, Ohio, Aug. 24-26, 1965, pp. 229-246 (Lewis Winner, New York, 1965).

Moravec, A. F., Basic Concepts for Planning an Electronic Data Processing System, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 169-184 (Spartan Books, Washington, D.C., 1965).

Morenoff, E. and J. B. McLean, A Code for Non-numeric Information Processing Applications in Online Systems, Commun. ACM 10, No. 1, 19-22 (Jan. 1967).

Morenoff, E. and J. B. McLean, On the Standardization of Com-puter Systems, Rept. No. RADC-TR-67-165, 14 p. (Rome AirDevelopment Center, Griffiss AFB., New York, Mar. 1967).

Morris, D., F. H. Sumner and M. T. Wyld, An Appraisal of the Atlas Supervisor, Proc. 22nd National Conf., ACM, Washington, D.C., Aug. 29-31, 1967, pp. 67-75 (Thompson Book Co., Washington, D.C., 1967).

Moulton, P. G. and M. E. Muller, DITRAN- A Compiler Empha-sizing Diagnostics, Commun. ACM 10, No. 1, 45-52 (Jan.1967).

Muckler, F. A. and R. W. Obermayer, Information Display, Int. Sci. & Tech. 44, 34-40 (Aug. 1965).

Murrill, D. P., Microfilming and Encoding Laboratory Notebooksat the Philip Morris Research Center, in Progress in Informa-tion Science and Technology, Proc. Am. Doc. Inst. AnnualMeeting, Vol. 3, Santa Monica, Calif., Oct. 3-7, 1966, pp.51-56 (Adrianne Press, 1966).

Narasimhan, R., Syntax-Directed Interpretation of Classes of Pictures, Commun. ACM 9, No. 3, 166-173 (March 1966).

Nathan, M. I., Semiconductor Lasers, Proc. IEEE 54,1276-1290(Oct. 1966).

Naur, P., The Place of Programming in a World of Problems,Tools and People, in Information Processing 1965, Proc. IFIPCongress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed.W. A. Kalenich, pp. 195-199 (Spartan Books, Washington,D.C., 1965).

Newberry, S. P., An Electron Optical Technique for Large-Capacity Random-Access Memories, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 717-728 (Spartan Books, Washington, D.C., 1966).

Newell, A., The Search for Generality, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 17-24 (Spartan Books, Washington, D.C., 1965).

Newell, A. and H. A. Simon, The Simulation of Human Thought, Rept. No. RM-2506, 40 p. (The RAND Corp., Santa Monica, Calif., Dec. 28, 1959).

Newman, S. M., R. W. Swanson and K. C. Knowlton, A Notation System for Transliterating Technical and Scientific Texts for Use in Data Processing Systems, in Information Retrieval and Machine Translation, Pt. I, Proc. Int. Conf. for Standards on a Common Language for Machine Searching and Translation, Western Reserve Univ., Cleveland, Ohio, Sept. 6-12, 1959, Ed. A. Kent, pp. 345-376 (International Pub., Inc., New York, 1960).

Newman, W. M., A System for Interactive Graphical Programming, AFIPS Proc. Spring Joint Computer Conf., Vol. 32, Atlantic City, N.J., Apr. 30-May 2, 1968, pp. 47-54 (Thompson Book Co., Washington, D.C., 1968).

Ninke, W. H., Graphic 1- A Remote Graphical Display Console,AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, LasVegas, Nev., Nov. 30-Dec. 1, 1965, pp. 839-855 (Spartan Books,Washington, D.C., 1965).

Nisenoff, N., Hardware for Information Processing Systems: Today and in the Future, Proc. IEEE 54, 1820-1835 (Dec. 1966).

Nolan, J. and L. Yarbrough, An On-Line Computer Drawing andAnimation System, Proc. IFIP Congress 68, Edinburgh,Scotland, Aug. 5-10, 1968, Booklet C, pp. C 103-C 108 (North-Holland Pub. Co., Amsterdam, 1968).

North, A., Typewriter-to-Computer Roster Publication andMaintenance, in Electronic Composition in Printing, Proc.Symp., Gaithersburg, Md., June 15-16, 1967, NBS SpecialPub. 295, Ed. R. W. Lee and R. W. Worral, pp. 107-111 (U.S.Govt. Print. Off., Washington, D.C., Feb. 1968).

Nugent, W. R., A Machine Language for Documentation and Information Retrieval, in Preprints of Summaries of Papers Presented at the 14th National Meeting, ACM, Cambridge, Mass., Sept. 1-3, 1959, pp. 15-1 to 15-4 (Assoc. for Computing Machinery, New York, 1959).

Nugent, W. R., A Machine Language for Document Transliteration, Preprint of paper presented at the 14th National Meeting, ACM, Cambridge, Mass., Sept. 1-3, 1959, 33 p.

Oettinger, A. G., Automatic Processing of Natural and Formal Languages, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 9-16 (Spartan Books, Washington, D.C., 1965).

Ohlman, H., State-of-the-Art: Remote Interrogation of StoredDocumentary Material, in Automation and Scientific Com-munication, Short Papers, Pt. 2, papers contributed to theTheme Sessions of the 26th Annual Meeting, Am. Doc. Inst.,Chicago, Ill., Oct. 6-11, 1963, Ed. H. P. Luhn, pp. 193-194(Am. Doc. Inst., Washington, D.C., 1963).

Ohringer, L., Accumulation of Natural Language Text forComputer Manipulation, in Parameters of Information Science,Proc. Am. Doc. Inst., Annual Meeting, Vol. 1, Philadelphia,Pa., Oct. 5-8, 1964, pp. 311-313 (Spartan Books, Washington,D.C., 1964).

Oldham, I. B., R. T. Chien and D. T. Tang, Error Detection andCorrection in a Photo-Digital Storage System, IBM J. Res.& Dev. 12, No. 6, 422-430 (Nov. 1968).

Olsen, T. M., Philco/IBM Translation at Problem-Oriented,Symbolic and Binary Levels, Commun. ACM 8, No. 12,762-768 (Dec. 1965).

Ophir, D., B. J. Shepherd and R. J. Spinrad, Three-DimensionalComputer Display, Commun. ACM 12, No. 6, 309-310(June 1969).

Opler, A., Dynamic Flow of Programs and Data Through Hier-archical Storage, in Information Processing 1965, Proc. IFIPCongress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed.W. A. Kalenich, pp. 273-276 (Spartan Books, Washington,D.C., 1965).

Opler, A., Procedure-Oriented Language Statements to FacilitateParallel Processing, Commun. ACM 8, No. 5, 306-307 (May1965).

Opler, A., Requirements for Real-Time Languages, Commun. ACM 9, No. 3, 196-199 (Mar. 1966).

Opler, A., New Directions in Software 1960-1966, Proc. IEEE 54, 1757-1763 (Dec. 1966).

Oppenheimer, G. and N. Weizer, Resource Management for a Medium Scale Time-Sharing Operating System, Commun. ACM 11, No. 5, 313-322 (May 1968).

Orchard-Hays, W., Operating Systems for Job-to-Job Running and for Special Applications: Differences and Similarities, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 237-242 (Spartan Books, Washington, D.C., 1965).

O'Sullivan, T. C., Exploiting the Time-Sharing Environment,Proc. 22nd National Conf., ACM, Washington, D.C., Aug.29-31, 1967, pp. 169-175 (Thompson Book Co., Washington,D.C., 1967).

O'Toole, G., Preparing for Data Communications, Computers &Automation 18, No. 5, 33-35 (May 1969).

Pankhurst, R. J., GULP - A Compiler-Compiler for Verbal and Graphic Languages, Proc. 23rd National Conf., ACM, Las Vegas, Nev., Aug. 27-29, 1968, pp. 405-421 (Brandon/Systems Press, Inc., Princeton, N.J., 1968).

Paris, D. P., Digital Simulation of Image-Forming Systems, IBMJ. Res. & Dev. 10, No. 5, 407-411 (Sept. 1966).

Parker, D. C. and M. F. Wolff, Remote Sensing, Int. Sci. Tech.43, 20-31 (1965).

Parnas, D. L., A Language for Describing the Functions ofSynchronous Systems, Commun. ACM 9, No. 2, 72-76 (Feb.1966).

Patrick, R. L., So You Want to Go Online?, Datamation 9, No. 10, 25-27 (Oct. 1963).

Patrick, R. L. and D. V. Black, Index Files: Their Loading and Organization for Use, in Libraries and Automation, Proc. Conf. on Libraries and Automation, Airlie Foundation, Warrenton, Va., May 26-30, 1963, Ed. B. E. Markuson, pp. 29-48 (Library of Congress, Washington, D.C., 1964).

Pedler, C. S., New Variables in the Data Processing Equation,Computers & Automation 18, No. 5, 28-30 (May 1969).

Penner, A. R., Problems of Basic Research in the Integrated Bibliography Pilot Study, in Integrated Bibliography Pilot Study Progress Report, Ed. L. Sawin and R. Clark, pp. 18-22 (Univ. of Colorado, Boulder, June 1965).

Perlis, A. J., Construction of Programming Systems Using Re-mote Editing Facilities, Abstract in Information Processing1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May24-29, 1965, Ed. W. A. Kalenich, p. 229 (Spartan Books,Washington, D.C., 1965).

Perlis, A. J., Procedural Languages, in Second Cong. on theInformation System Sciences, Hot Springs, Va., Nov. 1964,Ed. J. Spiegel and D. E. Walker, pp. 189-210 (Spartan Books,Washington, D.C., 1965).

Perlis, A. J., The Synthesis of Algorithmic Systems, Proc. 21stNational Conf., ACM, Los Angeles, Calif., Aug. 30-Sept. 1,1966, pp. 1-6 (Thompson Book Co., Washington, D.C., 1966).Also in J. ACM 14, No. 1, 1-9 (Jan. 1967).

Perlman, J. A., Digital Data Transmission: The User's View, Proc. Eastern Joint Computer Conf., Vol. 20, Washington, D.C., Dec. 12-14, 1961, pp. 209-212 (Macmillan Co., New York, 1961).

Perry, M. N., Handling Very Large Programs, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 243-247 (Spartan Books, Washington, D.C., 1965).

Petersen, H. E. and R. Turn, System Implications of InformationPrivacy, AFIPS Proc. Spring Joint Computer Conf., Vol. 30,Atlantic City, N.J., April 18-20, 1967, pp. 291-300 (ThompsonBooks, Washington, D.C., 1967).

Petritz, R. L., Technological Foundations and Future Directionsof Large-Scale Integrated Electronics, AFIPS Proc. Fall JointComputer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10,1966, pp. 65-87 (Spartan Books, Washington D.C., 1966).

Petritz, R. L., Current Status of Large Scale Integration Tech-nology, Proc. 22nd National Conf., ACM, Washington, D.C.,Aug. 29-31, 1967, pp. 65-85 (Thompson Book Co., Washington,D.C., 1967).


Petschauer, R. J., Magnetics - Still the Best Choice for Computer Main Memory, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 598-600 (Thompson Books, Washington, D.C., 1967).

Pfaltz, J. L., J. W. Snively, Jr. and A. Rosenfeld, Local andGlobal Picture Processing by Computer, in Pictorial PatternRecognition, Proc. Symp. on Automatic Photointerpretation,Washington, D.C., May 31-June 2, 1967, Ed. G. C. Chenget al., pp. 353-371 (Thompson Book Co., Washington, D.C.,1968).

Pick, G. G. and D. B. Brick, A Read-Only Multi-Megabit Parallel Search Associative Memory, in Automation and Scientific Communication, Short Papers, Pt. 2, papers contributed to the Theme Sessions of the 26th Annual Meeting, Am. Doc. Inst., Chicago, Ill., Oct. 6-11, 1963, Ed. H. P. Luhn, pp. 245-246 (Am. Doc. Inst., Washington, D.C., 1963).

Pick, G. G., S. B. Gray and D. B. Brick, The Solenoid Array- ANew Computer Element, IEEE Trans. Electron. ComputersEC-13, 27-35 (Feb. 1964).

Porter, J. W. and L. E. Johnson, United Air Lines' Electronic Information System (EIS), in Data Processing, Vol. X, Proc. 1966 Int. Data Processing Conf., Chicago, Ill., June 21-24, 1966, pp. 74-82 (Data Processing Management Assoc., 1966).

Potter, R. J. and A. A. Axelrod, Optical Input/Output Systems for Computers, 1968 WESCON Technical Papers, 16/2, Aug. 1968, 6 p.

Pravikoff, V. V., Program Documentation, Data Proc. Mag. 7,44-45 (Oct. 1965).

Press, L. I. and M. S. Rogers, IDEA-A Conversational, HeuristicProgram for Inductive Data Exploration and Analysis, Proc.22nd National Conf., ACM, Washington, D.C., Aug. 29-31,1967, pp. 35-40 (Thompson Book Co., Washington, D.C., 1967).

Prince, M. D., Man-Computer Graphics for Computer-AidedDesign, Proc. IEEE 54, 1698-1708 (Dec. 1966).

Probst, L. A., Communications Data Processing Systems:Design Considerations, Computers & Automation 17, No.5, 18-21 (May 1968).

Prywes, N. S., A Storage and Retrieval System for Real-TimeProblem Solving, Rept. No. 66-05, 47 p. (Moore School ofEngineering, Univ. of Pennsylvania, Philadelphia, June 1,1965).

Pugh, E. W., V. T. Shahan and W. T. Siegle, Device and ArrayDesign for a 120-Nanosecond Magnetic Film Main Memory,IBM J. Res. & Dev. 11, 169-178 (Mar. 1967).

Pyke, T. N., Jr., Computer Technology: A Forward Look, NBS Tech. News Bull. 51, No. 8, 161-163 (Aug. 1967). Also in Yale Scientific, pp. 14-15, 28, Oct. 1967.

Raffel, J. I., A. H. Anderson, T. S. Crowther, T. O. Herndon and C. E. Woodward, A Progress Report on Large Capacity Magnetic Film Memory Development, AFIPS Proc. Spring Joint Computer Conf., Vol. 32, Atlantic City, N.J., Apr. 30-May 2, 1968, pp. 259-265 (Thompson Book Co., Washington, D.C., 1968).

Rajchman, J. A., Integrated Magnetic and Superconductive Memories - A Survey of Techniques, Results and Prospects, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 123-129 (Spartan Books, Washington, D.C., 1965).

Ramsey, K. and J. C. Strauss, A Real Time Priority Schedule,Proc. 21st National Conf., ACM, Los Angeles, Calif., Aug.30-Sept. 1, 1966, pp. 161-166 (Thompson Book Co., Wash-ington, D.C., 1966).

Raphael, B., The Structure of Programming Languages, Commun.ACM 9, No. 2, 67-71 (Feb. 1966).

Rath, G. J. and D. J. Werner, Infosearch: Studying the Remote Use of Libraries by Medical Researchers, in Levels of Interaction Between Man and Information, Proc. Am. Doc. Inst. Annual Meeting, Vol. 4, New York, N.Y., Oct. 22-27, 1967, pp. 58-62 (Thompson Book Co., Washington, D.C., 1967).

Ray, L. C., Keypunching Instructions for Total Text Input, in Machine Indexing: Progress and Problems, Proc. Third Institute on Information Storage and Retrieval, Washington, D.C., Feb. 13-17, 1961, pp. 50-57 (American Univ., Washington, D.C., 1962).

"R and D for Tomorrow's Computers", Data Systems, 52-53(Mar. 1969).

Reagan, F. H., Jr., Data Communications -What It's All About,Data Proc. Mag. 8, 20-24, 26, 66-67 (Apr. 1966).

Reagan, F. H., Jr., Viewing the CRT Display Terminals, DataProc. Mag. 9, 32-37 (Feb. 1967).

Reed, D. M. and D. J. Hillman, Document Retrieval Theory, Relevance, and the Methodology of Evaluation, Rept. No. 4, Canonical Decomposition, 33 p. (Center for the Information Sciences, Lehigh Univ., Bethlehem, Pa., Aug. 1966).

Reich, A. and G. H. Dorion, Photochromic, High-Speed, LargeCapacity, Semirandom Access Memory, in Optical and Electro-Optical Information Processing, Ed., J. R. Tippett et al., pp.567-580 (M.I.T. Press, Cambridge, Mass., 1965).

Reimann, O. A., On All-Optical Computer Techniques, in Optical and Electro-Optical Information Processing, Ed. J. R. Tippett et al., pp. 247-252 (M.I.T. Press, Cambridge, Mass., 1965).

Reimann, O. A. and W. F. Kosonocky, Progress in Optical Computer Research, IEEE Spectrum 2, No. 3, 181-195 (Mar. 1965).

The (R)evolution in Book Composition, Special Report, Part I, Computers Are Here - What Now? Book Production Mag. 79, 54-60 (Feb. 1964); Part II, Computers in '64, Year of Transition from Theory to Practice, 79, 44-50 (Mar. 1964); Part III, What's Ahead for Computers, 79, 55-61 (Apr. 1964); Part IV, The Systems Concept - Key to Computer Profits, 79, 67-73 (May 1964).

Rich, D. S., Multiplying the Software Dollar, Software Age 2,30-35 (Mar. 1968).

Rider, B., Data Integrity in Communications Circuits, in Law Enforcement Science and Technology, Vol. 1, Proc. First National Symp. on Law Enforcement Science and Technology, Chicago, Ill., March 1967, Ed. S. A. Yefsky, pp. 133-138 (Thompson Book Co., Washington, D.C., 1967).

Rifenburgh, R. P., A Glimpse at the Future of Data Communica-tions, Computers & Automation 18, No. 5, 36, 53 (May 1969).

Riley, W. B., Time-Sharing: One Machine Serving Many Masters,Electronics 38, No. 24, 72-78 (1965).

Ring, E. M., H. L. Fox and L. C. Clapp, A Quantum Optical Phenomenon: Implications for Logic, in Optical and Electro-Optical Information Processing, Ed. J. R. Tippett et al., pp. 31-43 (M.I.T. Press, Cambridge, Mass., 1965).

Rippy, D. E., D. E. Humphries and J. A. Cunningham, MAGIC-A Machine for Automatic Graphics Interface to a Computer,AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, LasVegas, Nev., Nov. 30-Dec. 1, 1965, pp. 819-830 (SpartanBooks, Washington, D.C., 1965).

Roberts, L. G., Graphical Communication and Control Languages,in Second Cong. on the Information System Sciences, HotSprings, Va., Nov. 1964, Ed. J. Spiegel and D. E. Walker, pp.211-217 (Spartan Books, Washington, D.C., 1965).

Roberts, L. G., A Graphical Service System with VariableSyntax, Commun. ACM 9, No. 3, 173-176 (Mar. 1966).

Rogers, K. T. and J. Kelly, High-Information-Density StorageSurfaces, Ninth Quarterly Report, Rept. No. ECOM-01261-9,26 p. (Stanford Research Institute, Menlo Park, Calif., Oct.1967).

Roos, D., An Integrated Computer System for Engineering Problem Solving, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 423-433 (Spartan Books, Washington, D.C., 1965).

Rosa, J., Command Control Display Technology Since 1962, inSecond Cong. on the Information System Sciences, HotSprings, Va., Nov. 1964, Ed. J. Spiegel and D. E. Walker, pp.411-414 (Spartan Books, Washington, D.C., 1965).

Rose, G. A., "Light-Pen" Facilities for Direct View StorageTubes -An Economical Solution for Multiple Man-MachineCommunication, IEEE Trans. Electron. Computers EC-14,637-639 (Aug. 1965).

Rosen, S., Programming Systems and Languages: A HistoricalSurvey, AFIPS, Proc. Spring Joint Computer Conf., Vol. 25,Washington, D.C., April 1964, pp. 1-15 (Spartan Books,Baltimore, Md., 1964).

Rosen, S., Hardware Design Reflecting Software Requirements,AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 2, SanFrancisco, Calif., Dec. 9-11, 1968, pp. 1443-1449 (ThompsonBook Co., Washington, D.C., 1968).

Rosin, R. F., An Approach to Executive System Maintenance inDisk-Based Systems, The Computer J. 9, 242-247 (Nov. 1966).

Ross, D. T. and C. G. Feldman, Verbal and Graphical Languagefor the AED System: A Progress Report, Rept. No. MAC-TR-4,26 p. (Massachusetts Institute of Technology, Cambridge,Mass., May 6, 1964).

Rothery, B., The Check Digit, Data Proc. Mag. 9, No. 6, 58-59(June 1967).

Rothman, S., Centralized Government Information Systems andPrivacy, paper prepared for the President's Crime Commission,TRW Systems, Sept. 22, 1966, 18 p.

Rubinoff, M., Ed., Toward a National Information System, Second Annual National Colloquium on Information Retrieval, Philadelphia, Pa., April 23-24, 1965, 242 p. (Spartan Books, Washington, D.C., 1965).

Rux, P. T., Evaluation of Three Content-Addressable MemorySystems Using Glass Delay Lines, Rept. No. CC-67-9, 58 p.(Computer Center, Oregon State Univ., Corvallis, Oregon,July 13, 1967).

Sackman, H., Time-Sharing versus Batch Processing: TheExperimental Evidence, AFIPS Proc. Spring Joint ComputerConf., Vol. 32, Atlantic City, N.J., Apr. 30-May 2, 1968, pp.1-10 (Thompson Book Co., Washington, D.C., 1968).

Sackman, H., Current Methodological Research, [position paper for sessions on managing the economics of computer programming], Proc. 23rd National Conf., ACM, Las Vegas, Nev., Aug. 27-29, 1968, pp. 349-352 (Brandon/Systems Press, Inc., Princeton, N.J., 1968).

Sackman, H., W. J. Erikson and E. E. Grant, Exploratory Experimental Studies Comparing Online and Offline Programming Performance, Commun. ACM 11, No. 1, 3-11 (Jan. 1968).

Salton, G., Data Manipulation and Programming Problems in Automatic Information Retrieval, Commun. ACM 9, No. 3, 204-210 (Mar. 1966).

Saltzer, J. H., Traffic Control in a Multiplexed Computer System,Rept. No. MAC-TR-30, 79 p. (Massachusetts Institute ofTechnology, Cambridge, Mass., July 1966).

Samuel, A. L., Time-Sharing on a Multiconsole Computer, Rept.No. MAC-TR-17, 23 p. (Massachusetts Institute of Technology,Cambridge, Mass., Mar. 1965).

Sass, A. R., W. C. Stewart and L. S. Cosentino, Cryogenic Random-Access Memories, IEEE Spectrum 4, 91-98 (July 1967).

Savitt, D. A., H. H. Love, Jr., and R. E. Troop, ASP: A New Concept in Language and Machine Organization, AFIPS Proc. Spring Joint Computer Conf., Vol. 30, Atlantic City, N.J., April 18-20, 1967, pp. 87-102 (Thompson Books, Washington, D.C., 1967).

Sawin, L., The Integrated Bibliography Pilot Study: A Progress Report, in Integrated Bibliography Pilot Study Progress Report, Ed. L. Sawin and R. Clark, pp. 87-105 (Univ. of Colorado, Boulder, 1965).

Sawin, L. and R. Clark, Eds., Integrated Bibliography PilotStudy Progress Report, 153 p. (Univ. of Colorado, Boulder,June 1965).

Sayer, J. S., Do Present Information Services Serve the Engineer?Data Proc. Mag. 7, 24-25, 64-65 (Feb. 1965).

Sayer, J. S., The Economics of a National Information System, in Toward a National Information System, Second Annual National Colloquium on Information Retrieval, Philadelphia, Pa., April 23-24, 1965, Ed. M. Rubinoff, pp. 135-146 (Spartan Books, Washington, D.C., 1965).

Scarrott, G. G., The Efficient Use of Multilevel Storage, inInformation Processing 1965, Proc. IFIP Congress 65, Vol. 1,New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp.137-141 (Spartan Books, Washington, D.C., 1965).

Schatzoff, M., R. Tsao and R. Wiig, An Experimental Comparisonof Time Sharing and Batch Processing, Commun. ACM 10,No. 5, 261-265 (May 1967).

Schecter, G., Ed., Information Retrieval- A Critical View, 282 p.(Thompson Book Co., Washington, D.C., 1967).

Scherr, A. L., An Analysis of Time-Shared Computer Systems,Ph. D. Dissertation, Rept. No. MAC-TR-18, 178 p. (Mass.Inst. of Tech., Cambridge, June 1965).

Schon, D. A., The Clearinghouse for Federal Scientific and Technical Information, in Toward a National Information System, Second Annual National Colloquium on Information Retrieval, Philadelphia, Pa., April 23-24, 1965, Ed. M. Rubinoff, pp. 27-34 (Spartan Books, Washington, D.C., 1965).

Schultz, L., Language and the Computer, in Automated LanguageProcessing, Ed. H. Borko, pp. 11-31 (Wiley, New York, 1967).

Schwartz, J. I., E. G. Coffman and C. Weissman, Potentials of a Large-Scale Time-Sharing System, in Second Cong. on the Information System Sciences, Hot Springs, Va., Nov. 1964, Ed. J. Spiegel and D. E. Walker, pp. 15-32 (Spartan Books, Washington, D.C., 1965).

Schwartz, J. I. and C. Weissman, The SDC Time-Sharing System Revisited, Proc. 22nd National Conf., ACM, Washington, D.C., Aug. 29-31, 1967, pp. 263-271 (Thompson Book Co., Washington, D.C., 1967).

Seaman, P. H., On Teleprocessing System Design, Part VI- TheRole of Digital Simulation, IBM Sys. J. 5, No. 3, 175-189 (1966).

Seltzer, D., An Experimental Word Decode and Drive Systemfor a Magnetic Film Memory with 20-ns READ-Cycle Time,IEEE Trans. Electron. Computers EC-16, 172-179 (Apr. 1967).

Senzig, D. N. and R. V. Smith, Computer Organization for Array Processing, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 117-128 (Spartan Books, Washington, D.C., 1965).

Serchuk, A., Laser, Hologram: Storage Team, Electronic News,April 17, 1967, p. 34.

Shafritz, A. B., The Use of Computers in Message SwitchingNetworks, Proc. 19th National Conf., ACM, Philadelphia,Pa., Aug. 25-27, 1964, pp. N2.3-1 to N2.3-6 (Assoc. forComputing Machinery, New York, 1964).

Shah, B. R. and K. L. Konnerth, Optical Interconnections inComputers, 1968 WESCON Technical Papers, 16/5, Aug.1968, 6 p.

Sharp, D. S. and J. E. McNulty, On a Study of InformationStorage and Retrieval, 18 p. (The Moore School of ElectricalEngr., Univ. of Pennsylvania, Philadelphia, May 15, 1964).

Sharp, J. R., Content Analysis, Specification, and Control, inAnnual Review of Information Science and Technology, Vol. 2,Ed. C.A. Cuadra, pp. 87-122 (Interscience Pub., New York,1967).

Shaw, A. C., A Formal Picture Description Scheme as a Basisfor Picture Processing Systems, Inf. & Control 14, No. 1,9-52 (Jan. 1969).

Shaw, R. R., Parameters for Machine Handling of AlphabeticInformation, Am. Doc. 13, No. 3, 267-269 (July 1962).

Shively, R., A Silicon Monolithic Memory Utilizing a New Storage Element, AFIPS Proc. Fall Joint Computer Conf., Vol. 27, Pt. 1, Las Vegas, Nev., Nov. 30-Dec. 1, 1965, pp. 637-647 (Spartan Books, Washington, D.C., 1965).

Short, R. A., The Attainment of Reliable Digital Systems Throughthe Use of Redundancy- A Survey, Computer Group News,IEEE 2, 2-17 (Mar. 1968).

Sibley, E. H., R. W. Taylor and D. G. Gordon, Graphical Systems Communication: An Associative Memory Approach, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 1, San Francisco, Calif., Dec. 9-11, 1968, pp. 545-555 (Thompson Book Co., Washington, D.C., 1968).

Simkins, Q. W., Planar Magnetic Film Memories, AFIPS Proc.Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 593-594 (Thompson Books, Washington, D.C.,1967).

Simmons, R. F., Storage and Retrieval of Aspects of Meaningin Directed Graph Structures, Commun. ACM 9, No. 3,211-215 (March 1966).

Simms, R. L., Jr., Trends in Computer/Communication Systems,Computers & Automation 17, No. 5, 22-25 (May 1968).

Simpson, W. D., Design of a Small Multiturn Magnetic ThinFilm Memory, AFIPS Proc. Fall Joint Computer Conf., Vol.33, Pt. 2, San Francisco, Calif., Dec. 9-11, 1968, pp. 1291-1223 (Thompson Book Co., Washington, D.C., 1968).

Sklar, B. and R. Shively, IBP- A Tool for Memory System Evalu-ation, Computer Design 8, No. 1, 54, 56-57 (Jan. 1969).

Slade, A. E. and H. O. McMahon, A Cryotron Catalogue Memory System, Proc. Eastern Joint Computer Conf., Vol. 10, Theme: New Developments in Computers, New York, N.Y., Dec. 10-12, 1956, pp. 115-120 (American Inst. of Electrical Engineers, New York, 1957).

Slotnick, D. L., W. C. Borck and R. C. McReynolds, The Solomon Computer, AFIPS Proc. Fall Joint Computer Conf., Vol. 22, Philadelphia, Pa., Dec. 1962, pp. 97-107 (Spartan Books, Washington, D.C., 1962).

Smith, F. R. and S. O. Jones, Five Years in Focus - The Douglas Aircraft Company Mechanized Information System, in Progress in Information Science and Technology, Proc. Am. Doc. Inst. Annual Meeting, Vol. 3, Santa Monica, Calif., Oct. 3-7, 1966, pp. 185-191 (Adrianne Press, 1966).

Smith, G. P., Chameleon in the Sun- Photochromic Glass, IEEESpectrum 3, No. 12, 39-47 (Dec. 1966).

Smith, M. G. and W. A. Notz, Large-Scale Integration from theUser's Point of View, AFIPS Proc. Fall Joint Computer Conf.,Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 87-94 (Thomp-son Books, Washington, D.C., 1967).

Smith, W. V., Computer Applications of Lasers, Proc. IEEE54, 1295-1300 (Oct. 1966).

Soref, R. A. and D. H. McMahon, Bright Hopes for Display Systems: Flat Panels and Light Deflectors, Electronics 38, No. 24, 56-62 (1965).

Sparks, D. E., M. M. Chodrow, G. M. Walsh and L. L. Laine, AMethodology for the Analysis of Information Systems, Rept.No. R-4003-1, Final Rept. on Contract NSF-C-370, 1 v.(Information Dynamics Corp., Wakefield, Mass., May 1965),Appendices, May 1965, 1 v.

Spiegel, J. and D. E. Walker, Eds., Second Congress on theInformation System Sciences, Hot Springs, Va., Nov. 1964,525 p. (Spartan Books, Washington, D.C., 1965).

Steel, T. B., Jr., The Development of Very Large Programs, in Information Processing 1965, Proc. IFIP Congress 65, New York, N.Y., May 24-29, 1965, Vol. 1, Ed. W. A. Kalenich, pp. 231-235 (Spartan Books, Washington, D.C., 1965).

Steinbuch, K. and U. A. W. Piske, Learning Matrices and Their Applications, IEEE Trans. Electron. Computers EC-12, 846-862 (Dec. 1963).

Stephens, E. D., Application of High-Speed Holographic Photomicroscopy, Research Rev., Office of Aerospace Research 6, No. 8, 25-26 (Aug. 1967).

Stephenson, A., Planning Interactive Communication Systems, Modern Data 1, No. 9, 54-56 (Nov. 1968).

Stevens, D. F., System Evaluation on the Control Data 6600, Proc. IFIP Congress 68, Edinburgh, Scotland, Aug. 5-10, 1968, Booklet C, pp. C34-C38 (North-Holland Pub. Co., Amsterdam, 1968).

Stevens, M. E., Nonnumeric Data Processing in Europe: A Field Trip Report, August-October 1966, NBS Tech. Note 462, 63 p. (U.S. Govt. Print. Off., Washington, D.C., Nov. 1968).

Stevens, M. E., V. E. Giuliano and L. B. Heilprin, Eds., Statistical Association Methods for Mechanized Documentation, Symp. Proc., Washington, D.C., March 17-19, 1964, NBS Misc. Pub. 269, 261 p. (U.S. Govt. Print. Off., Washington, D.C., Dec. 15, 1965).

Stevens, M. E. and J. L. Little, Automatic Typographic-Quality Composition Techniques: A State-of-the-Art Report, NBS Monograph 99, 98 p. (U.S. Govt. Print. Off., Washington, D.C., Apr. 7, 1967).

Stone, H. S., Associative Processing for General Purpose Computers Through the Use of Modified Memories, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 2, San Francisco, Calif., Dec. 9-11, 1968, pp. 949-955 (Thompson Book Co., Washington, D.C., 1968).

Stotz, R. H., Directions in Time-Sharing Terminals, Computer Group News, IEEE 2, 12-18 (May 1968).

Stratton, W. D., Investigation of an Analog Technique to Decrease Pen-Tracking Time in Computer Displays, Rept. No. MAC-TR-25, 74 p. (Mass. Inst. of Tech., Cambridge, Mass., March 1966).

Stroke, G. W., Lensless Photography, Int. Sci. & Tech. 41, 52-60 (1965).

Strom, R., Methodology for Research in Concept-Learning, in Some Problems in Information Science, Ed. M. Kochen, pp. 105-116 (The Scarecrow Press, Inc., New York, 1965).

Sutherland, I. E., The Future of On-Line Systems, in On-Line Computing Systems, Proc. Symp. Sponsored by the Univ. of California, Los Angeles, and Informatics, Inc., Los Angeles, Calif., Feb. 2-4, 1965, Ed. E. Burgess, pp. 9-13 (American Data Processing, Inc., Detroit, Mich., 1965).

Sutherland, W. R., Language Structure and Graphical Man-Machine Communication, in Information System Science and Technology, papers prepared for the Third Cong., scheduled for Nov. 21-22, 1966, Ed. D. E. Walker, pp. 29-31 (Thompson Book Co., Washington, D.C., 1967).

Sutherland, W. R., J. W. Forgie and M. V. Morello, Graphics in Time-Sharing: A Summary of the TX-2 Experience, AFIPS Proc. Spring Joint Computer Conf., Vol. 34, Boston, Mass., May 14-16, 1969, pp. 629-636 (AFIPS Press, Montvale, N.J., 1969).

Swanson, R. W., Move the Information . . . A Kind of Missionary Spirit, Rept. No. AFOSR 67-1247, 912 p. (Office of Aerospace Research, USAF, Arlington, Va., June 1967).

Swanson, R. W., Information System Networks - Let's Profit from What We Know, in Information Retrieval - A Critical View, Ed. G. Schecter, pp. 1-152 (Thompson Book Co., Washington, D.C., 1967).

System Development Corporation, Research & Technology Division Report for 1967, Rept. No. TM-530/011/00, 1 v. (Santa Monica, Calif., 1968).

Tang, D. T. and R. T. Chien, Coding for Error Control, IBM Sys. J. 8, No. 1, 48-86 (1969).

Tate, V. D., Ed., Proceedings of the Eleventh Annual Meeting and Convention, Vol. XI, Washington, D.C., Apr. 25-27, 1962, 360 p. (The National Microfilm Assoc., Annapolis, Md., 1962).

Tauber, A. S., Document Retrieval System Analysis and Design, in Progress in Information Science and Technology, Proc. Am. Doc. Inst., Annual Meeting, Vol. 3, Santa Monica, Calif., Oct. 3-7, 1966, pp. 273-281 (Adrianne Press, 1966).

Tauber, A. S. and W. C. Myers, Photochromic Micro-Images: A Key to Practical Microdocument Storage and Dissemination, Proc. Eleventh Annual Meeting and Convention, Vol. XI, Washington, D.C., Apr. 25-27, 1962, Ed. V. D. Tate, pp. 257-269 (The National Microfilm Assoc., Annapolis, Md., 1962). Also in Am. Doc. 13, No. 4, 403-409 (Oct. 1962).

Teichroew, D. and J. F. Lubin, Computer Simulation - Discussion of the Technique and Comparison of Languages, Commun. ACM 9, No. 10, 723-741 (Oct. 1966).

Teitelman, W., PILOT: A Step Toward Man-Computer Symbiosis, Project MAC, Rept. No. MAC-TR-32, 193 p. (Mass. Inst. of Tech., Cambridge, Sept. 1966).

Tell, B. V., Auditing Procedures for Information Retrieval Systems, Proc. 1965 Congress F.I.D., 31st Meeting and Congress, Vol. II, Washington, D.C., Oct. 7-16, 1965, pp. 119-124 (Spartan Books, Washington, D.C., 1966).

Terlet, R. H., The CRT Display Subsystem of the IBM 1500 Instructional System, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 169-176 (Thompson Books, Washington, D.C., 1967).

Thomas, R. B. and M. Kassler, Advanced Recognition Techniques Study, Rept. No. 1, First Quarterly Progress Report, 1 Jan. 1963 to 31 Mar. 1963, Contract No. DA 36-039 AMC-00112(E), 48 p., plus Appendix (RCA, Data Systems Center, Bethesda, Md., 1963).

Thomas, R. B. and M. Kassler, Character Recognition in Context, Inf. & Control 10, No. 1, 43-64 (Jan. 1967).

Thompson, F. B., English for the Computer, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 349-356 (Spartan Books, Washington, D.C., 1966).

Thornley, R. F. M., A. V. Brown and A. J. Speth, Electron Beam Recording of Digital Information, IEEE Trans. Electron. Computers EC-13, 36-40 (Feb. 1964).

Tinker, J. F., Imprecision in Indexing, Part II, Am. Doc. 19, No. 3, 322-330 (July 1968).

Tippett, J. R., D. A. Berkowitz, L. C. Clapp, C. J. Koester and A. Vanderburgh, Jr., Eds., Optical and Electro-Optical Information Processing, 780 p. (M.I.T. Press, Cambridge, Mass., 1965).

Tou, J. T., Ed., Computer and Information Sciences - II, Proc. 2nd Symp. on Computer and Information Sciences, Columbus, O., Aug. 22-24, 1966, 368 p. (Academic Press, New York, 1967).

Travis, L. E., Analytic Information Retrieval, in Natural Language and the Computer, Ed. P. L. Garvin, pp. 310-353 (McGraw-Hill Book Co., New York, 1963).

Trimble, G. R., Jr., Using a Computer to Simulate a Computer, Data Proc. Mag. 7, 18-23 (Oct. 1965).


Trueswell, R. W., A Quantitative Measure of User Circulation Requirements and Its Possible Effect on Stack Thinning and Multiple Copy Determination, Am. Doc. 16, No. 1, 20-25 (Jan. 1965).

Tukey, J. W. and M. B. Wilk, Data Analysis and Statistics: An Expository Overview, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 695-709 (Spartan Books, Washington, D.C., 1966).

Turoff, M., Immediate Access and the User Revisited, Datamation 15, No. 5, 65-67 (May 1969).

Uber, G. T., P. E. Williams, B. L. Hisey and R. G. Siekert, The Organization and Formatting of Hierarchical Displays for the On-Line Input of Data, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 1, San Francisco, Calif., Dec. 9-11, 1968, pp. 219-226 (Thompson Book Co., Washington, D.C., 1968).

Valassis, J. G., Modular Computer Design With Picoprogrammed Control, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 611-619 (Thompson Books, Washington, D.C., 1967).

Van Dam, A., Computer Driven Displays and Their Use in Man/Machine Interaction, in Advances in Computers, Vol. 7, Ed. F. L. Alt and M. Rubinoff, pp. 239-290 (Academic Press, New York, 1966).

Van Dam, A. and J. C. Michener, Hardware Developments and Product Announcements, in Annual Review of Information Science and Technology, Vol. 2, Ed. C. A. Cuadra, pp. 187-222 (Interscience Pub., New York, 1967).

Van Geffen, L. M. H. J., A Review of Keyboarding Skills, in Advances in Computer Typesetting, Proc. Int. Computer Typesetting Conf., Sussex, England, July 14-18, 1966, Ed. W. P. Jaspert, pp. 2-11 (The Institute of Printing, London, 1967).

Veaner, A. B., Developments in Copying Methods and Graphic Communication 1965, Lib. Res. & Tech. Serv. 10, 199-210 (Spring 1966).

Vilkomerson, D. H. R., R. S. Mezrich and D. I. Bostwick, Holographic Read-Only Memories Accessed by Light-Emitting Diodes, AFIPS Proc. Fall Joint Computer Conf., Vol. 33, Pt. 2, San Francisco, Calif., Dec. 9-11, 1968, pp. 1197-1204 (Thompson Book Co., Washington, D.C., 1968).

Vlahos, P., The Three-Dimensional Display: Its Cues and Techniques, Inf. Display 2, No. 6, 10-20 (Nov.-Dec. 1965).

Vlannes, P., Requirements for Information Retrieval Networks, in Colloquium on Technical Preconditions for Retrieval Center Operations, Proc. National Colloquium on Information Retrieval, Philadelphia, Pa., April 24-25, 1964, Ed. B. F. Cheydleur, pp. 3-6 (Spartan Books, Washington, D.C., 1965).

Vollmer, J., Applied Lasers, IEEE Spectrum 4, 66-70 (June 1967).

Vorthmann, E. A. and J. T. Maupin, Solid State Keyboard, AFIPS Proc. Spring Joint Computer Conf., Vol. 34, Boston, Mass., May 14-16, 1969, pp. 149-159 (AFIPS Press, Montvale, N.J., 1969).

Vossler, C. M. and N. M. Branston, The Use of Context for Correcting Garbled English Text, Proc. 19th National Conf., ACM, Philadelphia, Pa., Aug. 25-27, 1964, pp. D2.4-1 to D2.4-13 (Assoc. for Computing Machinery, New York, 1964).

Wade, R. D., G. P. Cawsey and R. A. K. Verber, A Teleprocessing Approach Using Standard Equipment, IBM Sys. J. 8, No. 1, 28-47 (1969).

Wagner, F. V. and J. Granholm, Design of a General-Purpose Scientific Computing Facility, in Information Processing 1965, Proc. IFIP Congress 65, Vol. 1, New York, N.Y., May 24-29, 1965, Ed. W. A. Kalenich, pp. 283-289 (Spartan Books, Washington, D.C., 1965).

Waldo, W. H. and M. DeBacker, Printing Chemical Structures Electronically: Encoded Compounds Searched Generically with IBM-702, in National Academy of Sciences - National Research Council, Proc. Int. Conf. on Scientific Information, Vol. 1, Washington, D.C., Nov. 16-21, 1958, pp. 711-730 (NAS-NRC, Washington, D.C., 1959). Preprints, Area 4, 1958, pp. 49-68.

Walker, D. E., Ed., Information System Science and Technology, Papers Prepared for the Third Congress, Scheduled for Nov. 21-22, 1966, 406 p. (Thompson Book Co., Washington, D.C., 1967).

Walter, C. J., A. B. Walter and M. J. Bohl, Setting Characteristics for Fourth Generation Computer Systems, Part III - LSI, Computer Design 7, No. 10, 48-55 (Oct. 1968).

Ward, J. E., Systems Engineering Problems in Computer-Driven CRT Displays for Man-Machine Communication, IEEE Trans. Systems Sci. & Cybernetics SSC-3, 47-54 (June 1967).

Weaver, W., Science and Complexity, Amer. Scientist 36, 536-544 (Oct. 1948).

Weber, J. H. and L. A. Gimpelson, UNISIM - A Simulation Program for Communications Networks, AFIPS Proc. Fall Joint Computer Conf., Vol. 26, San Francisco, Calif., Oct. 1964, pp. 233-249 (Spartan Books, Baltimore, Md., 1964).

Wegner, P., Machine Organization for Multiprogramming, Proc. 22nd National Conf., ACM, Washington, D.C., Aug. 29-31, 1967, pp. 135-150 (Thompson Book Co., Washington, D.C., 1967).

Weinstein, H., L. Onyshkevych, K. Karstad and R. Shahbender, Sonic Film Memory, AFIPS Proc. Fall Joint Computer Conf., Vol. 29, San Francisco, Calif., Nov. 7-10, 1966, pp. 333-347 (Spartan Books, Washington, D.C., 1966).

Weissman, C., Programming Protection: What Do You Want to Pay?, SDC Mag. 10, No. 7 & 8, 30-31 (July-Aug. 1967).

Weizenbaum, J., ELIZA - A Computer Program for the Study of Natural Language Communication Between Man and Machine, Commun. ACM 9, No. 1, 36-45 (Jan. 1966).

Werner, G. E., R. M. Whalen, N. F. Lockhart and R. C. Maker, A 110-Nanosecond Ferrite Core Memory, IBM J. Res. & Dev. 11, 153-161 (Mar. 1967).

Wetherfield, M. R., A Technique for Program Monitoring by Interruption, The Computer J. 9, 161-166 (Aug. 1966).

Whiteman, I. R., New Computer Languages, Int. Sci. & Tech. 52, 62-68 (1966).

Wieder, H., R. V. Pole and P. F. Heindrich, Electron Beam Writing of Spatial Filters, IBM J. Res. & Dev. 13, No. 2, 169-171 (Mar. 1969).

Wiesner, J. B., Communication Sciences in a University Environment, IBM J. Res. & Dev. 2, 268-275 (Oct. 1958).

Wigington, R. L., A Machine Organization for a General Purpose List Processor, IEEE Trans. Electron. Computers EC-12, 707-714 (Dec. 1963).

Wigington, R. L., Graphics as Computer Input and Output, in 1966 IEEE Int. Conv. Record, Vol. 14, Pt. 3, Computers, IEEE Int. Conv., New York, N.Y., Mar. 21-25, 1966, pp. 86-90 (The Inst. of Electrical and Electronics Engineers, New York, 1966).


Wilkes, M. V., Lists and Why They Are Useful, Proc. 19th National Conf., ACM, Philadelphia, Pa., Aug. 25-27, 1964, pp. F1-1 to F1-3 (Assoc. for Computing Machinery, New York, 1964).

Wilkes, M. V., Slave Memories and Dynamic Storage Allocation, IEEE Trans. Electron. Computers EC-14, No. 2, 270-271 (April 1965).

Wilkes, M. V., The Design of Multiple-Access Computer Systems, The Computer J. 10, No. 1, 1-9 (May 1967).

Wilkes, M. V., Computers Then and Now, J. ACM 15, No. 1, 1-7 (Jan. 1968).

Wilkes, M. V. and R. M. Needham, The Design of Multiple-Access Computer Systems: Part 2, The Computer J. 10, 315-320 (Feb. 1968).

Williams, J. H., Jr., Results of Classifying Documents with Multiple Discriminant Functions, in Statistical Association Methods for Mechanized Documentation, Symp. Proc., Washington, D.C., March 17-19, 1964, NBS Misc. Pub. 269, Ed. M. E. Stevens et al., pp. 217-224 (U.S. Govt. Print. Off., Washington, D.C., Dec. 15, 1965).

Wilson, D. M. and D. J. Moss, CAT: A 7090-3600 Computer-Aided Translation, Commun. ACM 8, No. 12, 777-781 (Dec. 1965).

Witt, B. I., M65MP: An Experiment in OS/360 Multiprocessing, Proc. 23rd National Conf., ACM, Las Vegas, Nev., Aug. 27-29, 1968, pp. 691-703 (Brandon/Systems Press, Inc., Princeton, N.J., 1968).

Wooster, H., Long Range Research in the Information Sciences, Rept. No. AFOSR-1571, 24 p. (U.S. Air Force, Office of Scientific Research, presentation at the Science and Engineering Symp., San Francisco, Calif., Oct. 3-4, 1961).

Wunderlich, M. C., Sieving Procedures on a Digital Computer, J. ACM 14, No. 1, 10-19 (Jan. 1967).

Wyle, H. and G. J. Burnett, Some Relationships Between Failure Detection Probability and Computer System Reliability, AFIPS Proc. Fall Joint Computer Conf., Vol. 31, Anaheim, Calif., Nov. 14-16, 1967, pp. 745-756 (Thompson Books, Washington, D.C., 1967).

Yefsky, S. A., Ed., Law Enforcement Science and Technology, Vol. 1, Proc. First National Symp. on Law Enforcement Science and Technology, Chicago, Ill., March 1967, 985 p. (Thompson Book Co., Washington, D.C., 1967).

Yourdon, E., An Approach to Measuring a Time-Sharing System, Datamation 15, No. 4, 124-126 (Apr. 1969).

Zucker, M. S., LOCS: An EDP Machine Logic and Control Simulator, IEEE Trans. Electron. Computers EC-14, No. 3, 403-416 (June 1965).

U.S. GOVERNMENT PRINTING OFFICE : 1970 O-376-411


THE NATIONAL ECONOMIC GOAL

Sustained maximum growth in a free-market economy, without inflation, under conditions of full employment and equal opportunity.

THE DEPARTMENT OF COMMERCE

The historic mission of the Department is "to foster, promote and develop the foreign and domestic commerce" of the United States. This has evolved, as a result of legislative and administrative additions, to encompass broadly the responsibility to foster, serve and promote the nation's economic development and technological advancement. The Department seeks to fulfill this mission through these activities:

MISSION AND FUNCTIONS OF THE DEPARTMENT OF COMMERCE

"to foster, serve and promote the nation's economic development and technological advancement"

Participating with other government agencies in the creation of national policy, through the President's Cabinet and its subdivisions: Cabinet Committee on Economic Policy; Urban Affairs Council; Environmental Quality Council.

Promoting progressive business policies and growth: Business and Defense Services Administration; Office of Field Services.

Assisting states, communities and individuals toward economic progress: Economic Development Administration; Regional Planning Commissions; Office of Minority Business Enterprise.

Strengthening the international economic position of the United States: Bureau of International Commerce; Office of Foreign Commercial Services; Office of Foreign Direct Investments; United States Travel Service; Maritime Administration.

Assuring effective use and growth of the nation's scientific and technical resources: Environmental Science Services Administration; Patent Office; National Bureau of Standards; Office of Telecommunications.

Acquiring, analyzing and disseminating information concerning the nation and the economy to help achieve increased social and economic benefit: Bureau of the Census; Office of Business Economics.

NOTE: This schematic is neither an organization chart nor a program outline for budget purposes. It is a general statement of the Department's mission in relation to the national goal of economic development.


NBS TECHNICAL PUBLICATIONS

PERIODICALS

JOURNAL OF RESEARCH reports National Bureau of Standards research and development in physics, mathematics, chemistry, and engineering. Comprehensive scientific papers give complete details of the work, including laboratory data, experimental procedures, and theoretical and mathematical analyses. Illustrated with photographs, drawings, and charts.

Published in three sections, available separately:

Physics and Chemistry

Papers of interest primarily to scientists working in these fields. This section covers a broad range of physical and chemical research, with major emphasis on standards of physical measurement, fundamental constants, and properties of matter. Issued six times a year. Annual subscription: Domestic, $9.50; foreign, $11.75*.

Mathematical Sciences

Studies and compilations designed mainly for the mathematician and theoretical physicist. Topics in mathematical statistics, theory of experiment design, numerical analysis, theoretical physics and chemistry, logical design and programming of computers and computer systems. Short numerical tables. Issued quarterly. Annual subscription: Domestic, $5.00; foreign, $6.25*.

Engineering and Instrumentation

Reporting results of interest chiefly to the engineer and the applied scientist. This section includes many of the new developments in instrumentation resulting from the Bureau's work in physical measurement, data processing, and development of test methods. It will also cover some of the work in acoustics, applied mechanics, building research, and cryogenic engineering. Issued quarterly. Annual subscription: Domestic, $5.00; foreign, $6.25*.

TECHNICAL NEWS BULLETIN

The best single source of information concerning the Bureau's research, developmental, cooperative and publication activities, this monthly publication is designed for the industry-oriented individual whose daily work involves intimate contact with science and technology: for engineers, chemists, physicists, research managers, product-development managers, and company executives. Annual subscription: Domestic, $3.00; foreign, $4.00*.

*Difference in price is due to extra cost of foreign mailing.

NONPERIODICALS

Applied Mathematics Series. Mathematical tables, manuals, and studies.

Building Science Series. Research results, test methods, and performance criteria of building materials, components, systems, and structures.

Handbooks. Recommended codes of engineering and industrial practice (including safety codes) developed in cooperation with interested industries, professional organizations, and regulatory bodies.

Special Publications. Proceedings of NBS conferences, bibliographies, annual reports, wall charts, pamphlets, etc.

Monographs. Major contributions to the technical literature on various subjects related to the Bureau's scientific and technical activities.

National Standard Reference Data Series. NSRDS provides quantitative data on the physical and chemical properties of materials, compiled from the world's literature and critically evaluated.

Product Standards. Provide requirements for sizes, types, quality and methods for testing various industrial products. These standards are developed cooperatively with interested Government and industry groups and provide the basis for common understanding of product characteristics for both buyers and sellers. Their use is voluntary.

Technical Notes. This series consists of communications and reports (covering both other agency and NBS-sponsored work) of limited or transitory interest.

Federal Information Processing Standards Publications. This series is the official publication within the Federal Government for information on standards adopted and promulgated under Public Law 89-306, and Bureau of the Budget Circular A-86, entitled Standardization of Data Elements and Codes in Data Systems.

CLEARINGHOUSE

The Clearinghouse for Federal Scientific and Technical Information, operated by NBS, supplies unclassified information related to Government-generated science and technology in defense, space, atomic energy, and other national programs. For further information on Clearinghouse services, write:

Clearinghouse
U.S. Department of Commerce
Springfield, Virginia 22151

Order NBS publications from:

Superintendent of Documents
Government Printing Office
Washington, D.C. 20402