Lecture 11 Designing Trusted Operating Systems
Transcript of Lecture 11 Designing Trusted Operating Systems
COMP 424
Lecture 11: Designing Trusted Operating Systems
Trust
● An operating system is trusted if we have confidence that it consistently and effectively provides:
– Memory protection
– File protection
– General object access control
– User authentication
Policy
– Every system can be described by its requirements.
– This set of requirements is the "policy" of the system.
– This information is used as a basis for judging whether or not the system is performing as intended.
● E.g. files must be accessible by only the owner.
● Must support simultaneous usage.
● Must not allow passwords to be compromised remotely.
Model
● Models are designed as a means to achieve the specifications of the policy.
● Different models can be compared and evaluated as to which satisfies the policy requirements best.
Design
● Once a model has been selected, designs choose a method for implementing the model.
● A design includes both concepts: "what is it supposed to do" (the policy) and "how will it achieve it."
Trust
● The level of confidence that we as consumers have that a particular system satisfies the policy that we believe it should.
Dichotomy of the term “Secure”
● The term secure reflects a dichotomy:
– Either something IS secure, or
– It is NOT.
● There is no middle ground.
● If it is secure then it should withstand all attacks... now, in the future, and forever.
● Typically an unachievable standard.
Quality
● The overall evaluation of how good something is.
● Security is only a single facet of measurement of quality.
● Other factors can be of more importance than security.
● It is common to have high quality products that are not as secure as we would like.
Trust
● Since tradeoffs may need to be made in order to obtain quality, and it is unreasonable to guarantee "security" most of the time...
● Many security professionals prefer to speak in terms of "trust" rather than "secure."
Trust
● Not a dichotomy. Provides for a spectrum of "gray" areas.
● A characteristic that often grows (or decreases) over time in accordance with evidence and experience.
● Trust is an evaluation made by the recipient of the product, not by its presenter.
● Viewed in context of use.
Security Policy
● A statement of the security we expect a system to enforce.
● Military usage:
– Broken down into levels of sensitivity:
● Unclassified
● Restricted
● Confidential
● Secret
● Top Secret
Need-to-Know Basis
● Access to information is restricted to only those individuals that require the knowledge to perform their duties.
● The decision as to whether somebody needs knowledge is controlled by higher authorities in the military context.
Compartments
● In addition to grouping by security level, information is grouped according to compartments.
● "Codenames"
● Access is granted to only those members that are associated with the compartment.
Classification
● The combination of a security level and a compartment is designated as a "classification."
● Denoted by: <rank; compartments>, where rank is the security level rank.
● Clearance is an indication that an individual is authorized to access information up to a certain level within a set of compartments.
Domination
● Both people and information can be given a classification. (People then appear no different from information.)
● s <= o iff rank_s <= rank_o and compartments_o is a superset of compartments_s.
● Used to limit access to only those individuals that dominate the object being accessed.
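The dominance relation above can be sketched in a few lines of Python. This is a minimal illustration, assuming ranks are mapped to integers and compartments are sets of codenames; the names `RANKS` and `dominates` are illustrative, not from the lecture.

```python
# Sketch of the military dominance relation.
RANKS = {"unclassified": 0, "restricted": 1, "confidential": 2,
         "secret": 3, "top secret": 4}

def dominates(subject, obj):
    """Subject <rank, compartments> dominates object <rank, compartments>
    iff its rank is at least the object's rank and its compartment set
    contains every compartment of the object."""
    s_rank, s_comps = subject
    o_rank, o_comps = obj
    return RANKS[s_rank] >= RANKS[o_rank] and set(o_comps) <= set(s_comps)

analyst = ("secret", {"crypto", "sigint"})
report  = ("confidential", {"crypto"})
print(dominates(analyst, report))   # True: rank and compartments both suffice
print(dominates(report, analyst))   # False: rank too low, compartments missing
```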
Dominance demystified
● Basically, in order to have access to information the following is required:
– The clearance level of the subject is at least as high as that of the object.
– The subject has a need-to-know for all compartments in which the information is classified.
● This is equivalent to saying the subject dominates the object.
Commercial Security Policies
● Usually less rigid and less hierarchical than military policies.
● Still have valid worries.
● Arguably subject to more crackers as more obvious targets.
Security Levels
● Public: information that anyone may have access to.
– May also include information that must be available to the public, such as stock performance, insider trading, and membership information.
● Proprietary: information that competitors should not be aware of or that the public does not need.
● Internal: information that should never be known outside of the company.
Lack of Dominance Function
● The decisions as to who has access to what in a commercial environment are typically more fluid.
● Such decisions are made on the spot, and the duration of a decision can be highly variable.
● Thus, there is no well-defined dominance function for most commercial security models. (Though your boss may think about "dominance" in less appealing terms.)
Integrity and availability
● Classification and hierarchies can be used to control confidentiality.
● Integrity and availability are of equal importance.
● Security Policies do a better job of securing these types of access.
Clark-Wilson Policy
● Security is established through the authorization of transactions carried out by transformation procedures.
● Security is modeled through the use of access triples: <userid, TP_i, {CDI_j, CDI_k, ...}>
● The only way to modify data is to request a transformation procedure to carry out the operation.
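An access-triple check can be sketched as follows. This is a hypothetical illustration, assuming triples are stored as a lookup from (user, TP) to the set of constrained data items (CDIs) that pairing may touch; all names are made up for the example.

```python
# Hypothetical Clark-Wilson access triples: a user may invoke a
# transformation procedure (TP) on a constrained data item (CDI)
# only if the triple <user, TP, {CDIs}> has been authorized.
triples = {
    ("alice", "post_payment"): {"ledger", "accounts"},
    ("bob",   "audit_ledger"): {"ledger"},
}

def may_run(user, tp, cdi):
    """All CDI modification must go through an authorized TP."""
    return cdi in triples.get((user, tp), set())

print(may_run("alice", "post_payment", "ledger"))  # True
print(may_run("bob", "post_payment", "ledger"))    # False: no such triple
```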
Separation of Duty
● A person with authority to perform multiple activities may be able to abuse the intended security.
– If you can issue goods, receive goods, and also write checks, then you can probably steal stuff.
● Separation of duty limits the authority to perform actions and distributes the duties required to carry out a complicated task to separate individuals. Having limited abilities prevents abuse but also creates dependencies.
Dependencies
● Clark-Wilson triples are stateless. Actions performed by the current transformation procedure do not require any knowledge of, or activity by, a prior transformation.
● Separation of duty can accomplish this, usually through the use of dual signatures.
● Generally, though, distinctness is easily implemented if it is stated as a policy requirement.
Chinese Wall Security Policy
● Focuses on commercial needs for information security, especially for areas that deal with "conflict of interest" items.
● Information objects are grouped:
– into company groups,
– then into conflict classes.
● Company groups represent those objects that belong to the same company or organization.
● Groups belong to the same conflict class when knowledge of information from one group would conflict with knowledge of information from another group in that class.
● Subjects are only granted access to objects as long as they have never accessed information from a different company group in the same conflict class. Thus only one piece of knowledge can be gained, and then access to all other (conflicting) knowledge is prohibited.
● Interesting because of its dynamic nature. At time T0 you have certain rights, and at T1 your rights have been dynamically altered as a result of your previous activities.
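The dynamic nature of the Chinese Wall policy can be sketched as follows. This is a minimal illustration, assuming each object is labeled with a (conflict class, company group) pair and access history is tracked per subject; the class and names are illustrative assumptions.

```python
# Minimal sketch of the Chinese Wall policy: access to a company's data
# is denied once the subject has touched a *different* company in the
# same conflict class.
class ChineseWall:
    def __init__(self):
        self.history = {}   # subject -> {conflict_class: company_group}

    def may_access(self, subject, conflict_class, company):
        seen = self.history.setdefault(subject, {})
        prior = seen.get(conflict_class)
        return prior is None or prior == company

    def access(self, subject, conflict_class, company):
        if not self.may_access(subject, conflict_class, company):
            return False
        self.history[subject][conflict_class] = company
        return True

wall = ChineseWall()
print(wall.access("analyst", "banks", "BankA"))   # True at time T0
print(wall.access("analyst", "banks", "BankB"))   # False at T1: conflict
print(wall.access("analyst", "oil", "OilCo"))     # True: different class
```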
Models
● Useful for:
– Testing a particular policy for completeness and consistency
– Documenting a policy
– Helping conceptualize and design an implementation
– Checking whether requirements have been met
Lattice model
● A mathematical model:
– A collection of elements organized by a relation among them.
– Requires transitive and antisymmetric properties.
● Represents natural increasing degrees of sensitivity (the dominance function).
● The military classification policy is a lattice model.
Bell-La Padula Confidentiality
● Used to prevent simultaneous access to information of differing levels of sensitivity.
– A computer may need to process top-secret information at the same time it is handling classified data.
● There are two properties defined for this model:
Bell-La Padula Properties
● Simple security property: A subject s may have read access to an object o only if C(o) <= C(s), where C() is the clearance/classification of a subject or object.
● *-property: A subject s who has read access to an object o may have write access to an object p only if C(o) <= C(p).
● Deals with disclosure of information but doesn't explicitly handle integrity.
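The two properties above can be sketched directly. This is a minimal illustration, assuming clearances C() are plain integers ("no read up, no write down"); the function names and level constants are assumptions, not from the lecture.

```python
# Sketch of the two Bell-La Padula rules with integer clearance levels.
def may_read(c_subject, c_object):
    # Simple security property: read o only if C(o) <= C(s)
    return c_object <= c_subject

def may_write(c_read_object, c_target_object):
    # *-property: having read o, write p only if C(o) <= C(p)
    return c_read_object <= c_target_object

SECRET, CONFIDENTIAL = 3, 2
print(may_read(SECRET, CONFIDENTIAL))    # True: reading down is allowed
print(may_write(SECRET, CONFIDENTIAL))   # False: no writing down
```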
Biba Integrity Model
● Supplements Bell-La Padula with:
● Simple integrity property: Subject s can modify object o only if I(s) >= I(o).
● Integrity *-property: If subject s has read access to object o with integrity level I(o), s can have write access to object p only if I(o) >= I(p).
● Prevents degradation of integrity through untrustworthy sources of information.
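Biba's rules are the dual of Bell-La Padula's, which a short sketch makes plain. Assuming integrity levels I() are integers; function names are illustrative.

```python
# Sketch of the Biba integrity rules: "no write up, no read down"
# in integrity levels.
def may_modify(i_subject, i_object):
    # Simple integrity property: modify o only if I(s) >= I(o)
    return i_subject >= i_object

def may_write_after_read(i_read_object, i_target_object):
    # Integrity *-property: having read o, write p only if I(o) >= I(p)
    return i_read_object >= i_target_object

HIGH, LOW = 2, 1
print(may_modify(LOW, HIGH))             # False: low-integrity subject
print(may_write_after_read(LOW, HIGH))   # False: would taint high-integrity data
```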
Theoretical Models
● Used to establish the feasibility of an approach.
● Similar to Turing machines and the computer science question of computability.
● If we pose a question, we want to know whether we will ever be able to decide the answer.
Graham-Denning Model
● Model consists of:
– Subjects S
– Objects O
– Rights R
– Access control matrix A
● Eight primitive protection rights:
● Create object
● Create subject
● Delete object
● Delete subject
● Read access right
● Grant access right
● Delete access right
● Transfer access right
Harrison-Ruzzo-Ullman (HRU)
● Model is defined by a list of commands and an access control matrix.
● Operations are limited to:
● Create subject s
● Create object o
● Destroy subject s
● Destroy object o
● Enter right r into A[s,o]
● Delete right r from A[s,o]
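The access matrix and a guarded HRU-style command can be sketched as follows. This is an illustrative toy, assuming the matrix is a dict from (subject, object) to a set of rights; the `grant_read` command and all names are hypothetical.

```python
# Sketch of the HRU access-control matrix and two of its primitives.
A = {}  # access matrix: (subject, object) -> set of rights

def enter_right(r, s, o):
    A.setdefault((s, o), set()).add(r)

def delete_right(r, s, o):
    A.get((s, o), set()).discard(r)

# An HRU "command" is a guarded sequence of primitives, e.g. a
# hypothetical grant_read that only an object's owner may execute:
def grant_read(owner, grantee, o):
    if "own" in A.get((owner, o), set()):
        enter_right("read", grantee, o)

enter_right("own", "alice", "file1")
grant_read("alice", "bob", "file1")
print(A[("bob", "file1")])   # {'read'}
```

Note that `grant_read` bundles a condition with a primitive, which is exactly the multi-operation structure that makes the general safety question undecidable.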
Results of HRU
● Any policy that can be modeled in HRU exhibits the following two properties:
– If commands can be restricted to a single operation each, then it is possible to determine whether a given subject can ever obtain a particular right to an object.
– If commands are not restricted to one operation, then it is not always decidable whether a given protection system can confer a given right.
Take-Grant Systems
● Only four primitive operations:
– Create
– Revoke
– Take
– Grant
● Can be illustrated through the use of graphs.
● Assumes that if a subject can grant rights on an object, then they will grant such rights.
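The take and grant rules can be sketched on a directed rights graph. This is an illustrative toy, assuming edges carry sets of rights including "t" (take) and "g" (grant); all names are assumptions, not from the lecture.

```python
# Sketch of Take-Grant rewriting rules on a directed rights graph.
edges = {}  # (src, dst) -> set of rights, e.g. {"t", "g", "read"}

def add(src, dst, rights):
    edges.setdefault((src, dst), set()).update(rights)

def take(s, x, o, rights):
    # If s has "t" over x, and x has the rights over o, s takes them.
    if "t" in edges.get((s, x), set()) and rights <= edges.get((x, o), set()):
        add(s, o, rights)

def grant(x, s, o, rights):
    # If x has "g" over s, and x has the rights over o, x grants them to s.
    if "g" in edges.get((x, s), set()) and rights <= edges.get((x, o), set()):
        add(s, o, rights)

add("alice", "bob", {"t"})        # alice can take from bob
add("bob", "file", {"read"})
take("alice", "bob", "file", {"read"})
print(edges[("alice", "file")])   # {'read'}
```

Sharing and stealing questions then reduce to reachability in this graph, which is why they are decidable.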
Take-Grant Results
● Can we decide whether a given subject can share an object with another subject?
– Yes, we can.
● Can we decide whether a given subject can steal access to an object from another subject?
– Yes, we can.
● Operating systems control the interactions between subjects and objects.
● Because security appears in every aspect of the operating system, the design and implementation of security components cannot be left vague or untested until the rest of the system is working.
– It is incredibly difficult to successfully retrofit security abilities.
Security Principles
– Least privilege: only allow the minimum
– Economy of mechanism: keep it simple
– Open Design: avoid security through obscurity
– Complete mediation: check every access
– Permission based: deny by default
– Separation of privilege: multiple checks
– Least common mechanism: avoid sharing
– Ease of use: protection should be easy to use.
Security Features of ordinary Operating Systems
– Authentication of users
– Protection of memory
– File and I/O access control
– Allocation of general objects
– Enforcement of sharing
– Guarantee of fair service
– Interprocess communication and synchronization
– Protection of operating system protection data
Security Features of trusted operating systems
● It is not enough to implement these features.
● The system must also be capable of assuring that the features are functioning correctly.
Identification and Authentication
● Identification is at the root of much of computer security.
● It involves both identifying who the user is and verifying that the user is who they claim to be.
● Trusted operating systems require secure identification and each individual must be uniquely identified.
Mandatory and Discretionary Access Control
● Mandatory access control: decisions about access must be beyond the control of the user.
● A central authority handles these decisions.
● Discretionary access control: leaves a certain amount of control authority with the user.
● An example is Unix file sharing permissions.
Object Reuse protection
● Operating system goals include efficiency.
● It is often efficient to reuse objects rather than completely destroy them.
● Trusted systems must make sure that security cannot be abused due to the reuse of objects, usually by clearing, or zeroing out, any object before it is allocated to the user.
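Object-reuse protection can be sketched with a toy allocator that zeroes every block before handing it to a new user. The `Allocator` class is an illustrative assumption, not a real OS interface.

```python
# Toy allocator illustrating object-reuse protection: every block is
# cleared before being given to a new user, so no stale data leaks.
class Allocator:
    def __init__(self, n_blocks, block_size):
        self.free = [bytearray(block_size) for _ in range(n_blocks)]

    def allocate(self):
        block = self.free.pop()
        block[:] = bytes(len(block))   # zero out residue from the last owner
        return block

    def release(self, block):
        self.free.append(block)        # contents deliberately left behind

alloc = Allocator(1, 8)
b = alloc.allocate()
b[:6] = b"secret"                      # first user writes sensitive data
alloc.release(b)
b2 = alloc.allocate()                  # second user gets the same block
print(bytes(b2))                       # all zeros: the secret did not leak
```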
Complete Mediation
● Highly trusted systems perform access checks for each and every access attempt.
● This can become complicated and costly as the number of access paths to a system increases.
Trusted Path
● Trusted operating systems must provide trusted paths for access to, and modification of, critical data.
● Users must be able to tell whether they are using a trusted path in order to gain access to an object.
● Windows 2000 password entry is capable of a trusted path: hit Ctrl-Alt-Del twice rapidly. Nothing but the operating system can provide this sequence.
Accountability and Audit
● Trusted systems must maintain secure logs documenting the security relevant events that have occurred.
● This provides a mechanism by which audits can occur.
Audit Log Reduction
● Audit logs typically become very large– Needle in haystack type problems arise
● Trusted systems take steps to reduce audit logs to only those events that are meaningful and are the result of some activity that violates, or attempts to violate, the security policies that are in place.
Intrusion Detection
● Trusted systems should be capable of detecting intrusion.
● Capable of alerting on, and obtaining a response to, a detected intrusion.
● Provide forensic evidence that results from an intrusion.