==Trusted systems in classified information==

A subset of trusted systems ("Division B" and "Division A") implements [[mandatory access control]] (MAC) labels, and as such it is often assumed that they can be used for processing [[classified information]]. However, this is generally untrue. There are four modes in which one can operate a multilevel secure system: multilevel, compartmented, dedicated, and system-high modes. The National Computer Security Center's "Yellow Book" specifies that B3 and A1 systems can be used only for processing a strict subset of security labels, and only when operated according to a particularly strict configuration.

Central to the concept of [[United States Department of Defense|U.S. Department of Defense]]-style trusted systems is the notion of a "[[reference monitor]]", an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is
* tamper-proof,
* always invoked, and
* small enough to be subject to independent testing, the completeness of which can be assured.

The U.S. [[National Security Agency]]'s 1983 [[Trusted Computer System Evaluation Criteria]] (TCSEC), or "Orange Book", defined a set of "evaluation classes" describing the features and assurances that the user could expect from a trusted system. Key to the provision of the highest levels of assurance (B3 and A1) is the dedication of significant system engineering toward minimizing the complexity (not ''size'', as often cited) of the [[trusted computing base]] (TCB), defined as the combination of hardware, software, and firmware that is responsible for enforcing the system's security policy. An inherent engineering conflict would appear to arise in higher-assurance systems: the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB and is therefore untrusted. Although this may lead the more technically naive to sophists' arguments about the nature of trust, the argument confuses the issue of "correctness" with that of "trustworthiness".

TCSEC has a precisely defined hierarchy of six evaluation classes; the highest of these, A1, is featurally identical to B3, differing only in documentation standards. In contrast, the more recently introduced [[Common Criteria]] (CC), which derive from a blend of technically mature standards from various [[NATO]] countries, provide a tenuous spectrum of seven "evaluation classes" that intermix features and assurances in a non-hierarchical manner and that lack the precision and mathematical stricture of the TCSEC. In particular, the CC tolerate very loose identification of the "target of evaluation" (TOE) and support – even encourage – an intermixture of security requirements culled from a variety of predefined "protection profiles". While a case can be made that even the seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest (EAL7) level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.{{Citation needed|date=June 2009}}

The mathematical notions of trusted systems for the protection of classified information derive from two independent but interrelated corpora of work. In 1974, David Bell and Leonard LaPadula of MITRE, under the technical guidance and financial sponsorship of Maj. Roger Schell, Ph.D., of the U.S. Air Force Electronic Systems Division (Hanscom AFB, MA), devised the [[Bell–LaPadula model]], in which a trustworthy computer system is modeled in terms of '''objects''' (passive repositories or destinations for data, such as files, disks, or printers) and '''subjects''' (active entities that cause information to flow among objects, ''e.g.'', users, or system processes or threads operating on behalf of users). The entire operation of a computer system can indeed be regarded as a "history" (in the serializability-theoretic sense) of pieces of information flowing from object to object in response to subjects' requests for such flows.

At the same time, Dorothy Denning at [[Purdue University]] was publishing her Ph.D. dissertation, which dealt with "lattice-based information flows" in computer systems. (A mathematical "lattice" is a [[partially ordered set]], characterizable as a [[directed acyclic graph]], in which, of any two vertices, one either "dominates" the other, "is dominated by" it, or the two are incomparable.) She defined a generalized notion of "labels" attached to entities, corresponding more or less to the full security markings one encounters on classified military documents, ''e.g.'', TOP SECRET WNINTEL TK DUMBO.

Bell and LaPadula integrated Denning's concept into their landmark MITRE technical report, ''Secure Computer System: Unified Exposition and Multics Interpretation''. They stated that labels attached to objects represent the sensitivity of the data contained within the object, while those attached to subjects represent the trustworthiness of the user executing the subject. (However, there can be a subtle semantic difference between the sensitivity of the data within the object and the sensitivity of the object itself.) The concepts are unified by two properties: the "simple security property" (a subject can only read from an object that it ''dominates'' [''is greater than'' is a close, albeit mathematically imprecise, interpretation]) and the "confinement property", or "*-property" (a subject can only write to an object that dominates it). (These properties are loosely referred to as "no read-up" and "no write-down", respectively.) Jointly enforced, these properties ensure that information cannot flow "downhill" to a repository where insufficiently trustworthy recipients might discover it. By extension, assuming that the labels assigned to subjects are truly representative of their trustworthiness, the no-read-up and no-write-down rules rigidly enforced by the reference monitor are sufficient to constrain [[Trojan horse (computing)|Trojan horses]], one of the most general classes of attacks (''viz.'', the popularly reported [[Computer worm|worms]] and [[viruses]] are specializations of the Trojan horse concept).
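The dominance relation and the two Bell–LaPadula properties lend themselves to a compact illustration. The sketch below is a toy model only, not any evaluated implementation; the label scheme (a linear classification level plus a set of compartments) and all of the names in it are assumptions made for exposition.

<syntaxhighlight lang="python">
from dataclasses import dataclass
from typing import FrozenSet

# Hypothetical label lattice: a linear classification level plus a set of
# compartments. Label a dominates label b iff a's level is at least b's
# and a's compartments are a superset of b's.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class Label:
    level: str
    compartments: FrozenSet[str] = frozenset()

def dominates(a: Label, b: Label) -> bool:
    return LEVELS[a.level] >= LEVELS[b.level] and a.compartments >= b.compartments

def may_read(subject: Label, obj: Label) -> bool:
    # Simple security property ("no read-up"): the subject must dominate the object.
    return dominates(subject, obj)

def may_write(subject: Label, obj: Label) -> bool:
    # *-property ("no write-down"): the object must dominate the subject.
    return dominates(obj, subject)

analyst = Label("SECRET", frozenset({"TK"}))
report = Label("CONFIDENTIAL")
archive = Label("TOP SECRET", frozenset({"TK", "WNINTEL"}))

assert may_read(analyst, report)       # reading down is permitted
assert not may_read(analyst, archive)  # no read-up
assert may_write(analyst, archive)     # writing up is permitted
assert not may_write(analyst, report)  # no write-down
</syntaxhighlight>

Because dominance is only a partial order, two labels with disjoint compartments are incomparable, and a subject bearing one can neither read nor write an object bearing the other; this is precisely the lattice structure Denning described.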
The Bell–LaPadula model technically enforces only "confidentiality", or "secrecy", controls; ''i.e.'', it addresses the problem of the sensitivity of objects and the attendant trustworthiness of subjects not to disclose it inappropriately. The dual problem of "integrity" (''i.e.'', the problem of the accuracy, or even the provenance, of objects) and the attendant trustworthiness of subjects not to modify or destroy it inappropriately is addressed by mathematically affine models, the most important of which is named for its creator, [[Biba Model|K. J. Biba]]. Other integrity models include the [[Clark–Wilson model]] and Shockley and Schell's program integrity model, "The SeaView Model".<ref>Lunt, Teresa; Denning, Dorothy; Schell, Roger R.; Heckman, Mark; Shockley, William R. (1990). "The SeaView Security Model". ''IEEE Transactions on Software Engineering''. 16 (6): 593–607. {{doi|10.1109/SECPRI.1988.8114}}. [https://www.researchgate.net/publication/220071090_The_SeaView_Security_Model (Source)]</ref>

An important feature of MACs is that they are entirely beyond the control of any user. The TCB automatically attaches labels to any subjects executed on behalf of users and to the files they access or modify. In contrast, an additional class of controls, termed [[discretionary access control]]s (DACs), ''are'' under the direct control of system users. Familiar protection mechanisms such as [[File system permissions#Numeric notation|permission bits]] (supported by UNIX since the late 1960s and, in a more flexible and powerful form, by [[Multics]] since earlier still) and [[access control list]]s (ACLs) are examples of DACs.
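The discretionary character of such controls is easy to see in code. In the following sketch, which assumes a POSIX-style system (the file name is purely illustrative), the owner alone decides who may read the file and may revise that decision at any time; no label-based policy mediates the choice.

<syntaxhighlight lang="python">
import os
import stat

path = "design-notes.txt"  # illustrative file name
open(path, "w").close()

# Owner read/write, group read, others nothing (mode 640).
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o640

# The owner later revokes group access entirely; under pure DAC nothing
# prevents this change (or the opposite one, granting world access).
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
</syntaxhighlight>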
The behavior of a trusted system is often characterized in terms of a mathematical model, which may be more or less rigorous depending upon applicable operational and administrative constraints. The model takes the form of a [[finite-state machine]] (FSM) with state criteria, state [[transition constraint]]s (a set of "operations" that correspond to state transitions), and a [[multiple single-level#Cross-domain solutions|descriptive top-level specification]] (DTLS), which entails a user-perceptible [[interface (computing)|interface]] such as an [[API]], a set of [[system call]]s in [[UNIX]], or [[exit (system call)|system exit]]s in [[mainframe computer|mainframe]]s. Each element of the aforementioned engenders one or more model operations.
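One deliberately simplified way to picture such a model is as a state machine that refuses any operation whose resulting state would violate its security invariant. The state criterion, the operation, and the names below are illustrative assumptions, not a rendering of any evaluated system's DTLS.

<syntaxhighlight lang="python">
# Toy security model as a finite-state machine: the state records the
# clearance level of whoever currently holds each object, and an
# operation is admitted only if the resulting state still satisfies
# the invariant (the "state criterion").

def invariant(state, classification):
    # State criterion: every object is held at or above its classification.
    return all(level >= classification[obj] for obj, level in state.items())

def transition(state, operation, classification):
    # Transition constraint: tentatively apply the operation, then admit
    # the new state only if the invariant still holds.
    new_state = operation(dict(state))
    if not invariant(new_state, classification):
        raise PermissionError("operation would violate the security policy")
    return new_state

classification = {"payroll": 2}  # the payroll file is classified at level 2
state = {"payroll": 2}           # currently held by a level-2 subject

def grant_to_clerk(s):
    # Models "hand the payroll file to a level-1 clerk".
    s["payroll"] = 1
    return s

try:
    state = transition(state, grant_to_clerk, classification)
except PermissionError as exc:
    print(exc)  # the machine never enters an insecure state
</syntaxhighlight>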