{{short description|Set of all computer components critical to its security}}
{{distinguish|Trusted Computing}}
{{textbook|date=February 2020}}
The '''trusted computing base''' ('''TCB''') of a [[computer system]] is the set of all [[Computer hardware|hardware]], [[firmware]], and/or [[software]] components that are critical to its [[computer security|security]], in the sense that [[Software bug|bugs]] or [[Vulnerability (computing)|vulnerabilities]] occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system that lie outside the TCB must not be able to misbehave in a way that would leak any more [[privilege (computer science)|privilege]]s than are granted to them in accordance with the system's [[security policy]].

The careful design and implementation of a system's trusted computing base is paramount to its overall security. Modern [[operating system]]s strive to reduce the size of the TCB{{Citation needed lead|date=February 2019}} so that an exhaustive examination of its code base (by means of manual or computer-assisted [[software audit review|software audit]] or [[program verification]]) becomes feasible.

==Definition and characterization==
The term goes back to [[John Rushby]],<ref>{{cite conference | first = John | last = Rushby | title = Design and Verification of Secure Systems | book-title = 8th ACM Symposium on Operating System Principles | pages = 12–21 | year = 1981 | location = Pacific Grove, California, US }}</ref> who defined it as the combination of the [[operating system kernel]] and trusted [[Process (computing)|processes]]; the latter are processes that are allowed to violate the system's access-control rules. In the classic paper ''Authentication in Distributed Systems: Theory and Practice'',<ref>B. Lampson, M. Abadi, M. Burrows and E. Wobber, [http://citeseer.ist.psu.edu/lampson92authentication.html Authentication in Distributed Systems: Theory and Practice], [[ACM Transactions on Computer Systems]] 1992, on page 6.</ref> [[Butler Lampson|Lampson]] et al. define the TCB of a [[computer system]] simply as
: ''a small amount of software and hardware that security depends on and that we distinguish from a much larger amount that can misbehave without affecting security.''

Both definitions, while clear and convenient, are neither theoretically exact nor intended to be: a [[network server]] process under a [[UNIX]]-like operating system, for example, might fall victim to a [[security breach]] and compromise an important part of the system's security, yet it is not part of the operating system's TCB. The [[Trusted Computer System Evaluation Criteria|Orange Book]], another classic [[computer security]] reference, therefore provides<ref>[http://csrc.nist.gov/publications/history/dod85.pdf Department of Defense trusted computer system evaluation criteria], DoD 5200.28-STD, 1985. In the glossary under entry '''Trusted Computing Base (TCB)'''.</ref> a more formal definition of the TCB of a computer system as
: ''the totality of protection mechanisms within it, including hardware, firmware, and software, the combination of which is responsible for enforcing a computer security policy.''

In other words, the trusted computing base is the combination of hardware, software, and controls that together form a trusted base for enforcing the system's security policy.

The Orange Book further explains that
: ''<nowiki>[t]</nowiki>he ability of a trusted computing base to enforce correctly a unified security policy depends on the correctness of the mechanisms within the trusted computing base, the protection of those mechanisms to ensure their correctness, and the correct input of parameters related to the security policy.''

In other words, a given piece of hardware or software is part of the TCB if and only if it has been designed to be part of the mechanism that provides the computer system with its security. In [[operating system]]s, this typically consists of the kernel (or [[microkernel]]) and a select set of system utilities (for example, [[setuid]] programs and [[Daemon (computer software)|daemons]] in UNIX systems). In [[programming language]]s designed with built-in security features, such as [[Java (programming language)|Java]] and [[E (programming language)|E]], the TCB is formed of the language runtime and standard library.<ref>M. Miller, C. Morningstar and B. Frantz, [http://www.erights.org/elib/capability/ode/ode-linear.html Capability-based Financial Instruments (An Ode to the Granovetter diagram)], in paragraph ''Subjective Aggregation''.</ref>
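For illustration, the following minimal C sketch (a hypothetical program, not taken from the sources cited above) shows why setuid programs sit inside the TCB: a setuid-root binary starts out with the file owner's authority, so every bug in it runs with that authority, and the customary defensive pattern is to drop the elevated privileges permanently as soon as the single privileged operation is complete.

<syntaxhighlight lang="c">
/* Hypothetical sketch: a setuid-root program dropping privileges.
 * Any defect before the setuid() call runs with root's authority,
 * which is exactly why such binaries count as part of the TCB. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    uid_t invoker = getuid();    /* the real (invoking) user */

    printf("real uid: %d, effective uid: %d\n",
           (int)invoker, (int)geteuid());

    /* ... the one privileged operation would happen here ... */

    /* Permanently drop back to the invoking user's privileges.
     * If this call is omitted or its failure ignored, every later
     * bug in the program can violate the security policy. */
    if (setuid(invoker) != 0) {
        perror("setuid");
        return EXIT_FAILURE;
    }

    printf("privileges dropped; effective uid now: %d\n", (int)geteuid());
    return EXIT_SUCCESS;
}
</syntaxhighlight>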
==Properties==

===Predicated upon the security policy===
As a consequence of the Orange Book definition above, the boundaries of the TCB depend closely upon the specifics of how the security policy is fleshed out. In the network server example above, even though, say, a [[Web server]] that serves a [[multi-user]] application is not part of the operating system's TCB, it has the responsibility of performing [[access control]] so that users cannot usurp each other's identities and privileges. In this sense, it definitely is part of the TCB of the larger computer system comprising the UNIX server, the users' browsers, and the Web application; in other words, breaking into the Web server through, e.g., a [[buffer overflow]] may not be regarded as a compromise of the operating system proper, but it certainly constitutes a damaging [[exploit (computer security)|exploit]] of the Web application.

This fundamental relativity of the boundary of the TCB is exemplified by the concept of the 'target of evaluation' ('TOE') in the [[Common Criteria]] security process: in the course of a Common Criteria security evaluation, one of the first decisions that must be made is the boundary of the audit in terms of the list of system components that will come under scrutiny.
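The following C fragment is an illustrative sketch (hypothetical code, not drawn from any cited source) of the kind of defect discussed above: a classic stack buffer overflow in request-handling code. The process containing it may lie outside the operating system's TCB, yet because it performs access control for its own users, corrupting it defeats the larger system's security policy.

<syntaxhighlight lang="c">
/* Hypothetical request handler with a classic stack buffer overflow. */
#include <string.h>

void handle_request(const char *user_input)
{
    char name[64];

    /* Vulnerable: no bounds check. Input longer than 63 bytes
     * overwrites adjacent stack memory (possibly the return
     * address), the entry point of a typical exploit. */
    strcpy(name, user_input);

    /* A bounded copy avoids the overflow:
     *   snprintf(name, sizeof(name), "%s", user_input);
     */
    (void)name;   /* placeholder for the actual request handling */
}
</syntaxhighlight>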
===A prerequisite to security===
Systems that do not have a trusted computing base as part of their design do not provide security of their own: they are only secure insofar as security is provided to them by external means (e.g. a computer sitting in a locked room without a network connection may be considered secure depending on the policy, regardless of the software it runs). This is because, as [[David J. Farber]] et al. put it,<ref>W. Arbaugh, D. Farber and J. Smith, [http://citeseer.ist.psu.edu/article/arbaugh97secure.html A Secure and Reliable Bootstrap Architecture], 1997, also known as the "aegis papers".</ref>
: ''<nowiki>[i]n</nowiki> a computer system, the integrity of lower layers is typically treated as axiomatic by higher layers.''

As far as computer security is concerned, reasoning about the security properties of a computer system requires being able to make sound assumptions about what it can, and more importantly, cannot do; however, barring any reason to believe otherwise, a computer is able to do everything that a general [[Von Neumann architecture|Von Neumann machine]] can. This obviously includes operations that would be deemed contrary to all but the simplest security policies, such as divulging an [[email]] or [[password]] that should be kept secret; barring special provisions in the architecture of the system, the computer ''could be programmed'' to perform these undesirable tasks. These special provisions, which aim at preventing certain kinds of actions from being executed, in essence constitute the trusted computing base. For this reason, the [[Trusted Computer System Evaluation Criteria|Orange Book]] (still a reference on the design of secure operating systems {{As of|2007|lc=on}}) characterizes the various security assurance levels that it defines mainly in terms of the structure and security features of the TCB.

===Software parts of the TCB need to protect themselves===
As outlined by the aforementioned Orange Book, software portions of the trusted computing base need to protect themselves against tampering to be of any effect. This is due to the [[von Neumann architecture]] implemented by virtually all modern computers: since [[machine code]] can be processed as just another kind of data, it can be read and overwritten by any program unless special [[memory management]] provisions prevent this; such provisions subsequently have to be treated as part of the TCB. Specifically, the trusted computing base must at least prevent its own software from being written to.

In many modern [[CPU]]s, the protection of the memory that hosts the TCB is achieved by a specialized piece of hardware called the [[memory management unit]] (MMU), which is programmed by the operating system to allow or deny a running program access to specific ranges of the system memory. Naturally, the operating system also denies such programming to all other programs. This technique is called [[supervisor mode]]; compared to cruder approaches (such as storing the TCB in [[Read-only memory|ROM]] or, equivalently, using the [[Harvard architecture]]), it has the advantage of allowing security-critical software to be upgraded in the field, although allowing secure upgrades of the trusted computing base poses bootstrap problems of its own.<ref>[http://citeseer.ist.psu.edu/article/arbaugh97secure.html A Secure and Reliable Bootstrap Architecture], ''op. cit.''</ref>
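This mechanism is visible to ordinary programs through system calls such as POSIX <code>mprotect()</code>. The following sketch (assuming a POSIX-like system; <code>MAP_ANONYMOUS</code> is a widespread but non-standard flag) shows the kernel, acting as part of the TCB, programming the MMU on a process's behalf so that a page of memory can no longer be written:

<syntaxhighlight lang="c">
/* Sketch: MMU-backed memory protection as exposed to user code.
 * Only the kernel (part of the TCB) can program the MMU; user
 * programs merely request protections via system calls. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);

    /* Obtain one page of readable, writable memory. */
    char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(buf, "written while the page was writable");

    /* Ask the kernel to revoke write access. From now on the MMU
     * faults on any store to this page and the process receives
     * SIGSEGV; the TCB keeps its own code and data non-writable
     * by the same mechanism. */
    if (mprotect(buf, page, PROT_READ) != 0) { perror("mprotect"); return 1; }

    printf("%s\n", buf);  /* reads are still permitted */
    /* buf[0] = 'X'; */   /* would terminate the process with SIGSEGV */

    munmap(buf, page);
    return 0;
}
</syntaxhighlight>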
===Trusted vs. trustworthy===
As stated [[#A prerequisite to security|above]], [[Trusted system|trust]] in the trusted computing base is required to make any progress in ascertaining the security of the computer system. In other words, the trusted computing base is "trusted" first and foremost in the sense that it ''has'' to be trusted, not necessarily in the sense that it is trustworthy. Real-world operating systems routinely have security-critical bugs discovered in them, which attests to the practical limits of such trust.<ref>[[Bruce Schneier]], [http://www.schneier.com/crypto-gram-0103.html#1 The security patch treadmill] (2001)</ref>

The alternative is formal [[software verification]], which uses mathematical proof techniques to show the absence of bugs. Researchers at [[NICTA]] and its spinout [[Open Kernel Labs]] have performed such a formal verification of [[seL4]], a member of the [[L4 microkernel|L4 microkernel family]], proving functional correctness of the C implementation of the kernel.<ref name="Klein_EHACDEEKNSTW_09">{{cite conference | first = Gerwin | last = Klein | first2 = Kevin | last2 = Elphinstone | first3 = Gernot | last3 = Heiser | author3-link = Gernot Heiser | first4 = June | last4 = Andronick | first5 = David | last5 = Cock | first6 = Philip | last6 = Derrin | first7 = Dhammika | last7 = Elkaduwe | first8 = Kai | last8 = Engelhardt | first9 = Rafal | last9 = Kolanski | first10 = Michael | last10 = Norrish | first11 = Thomas | last11 = Sewell | first12 = Harvey | last12 = Tuch | first13 = Simon | last13 = Winwood | title = seL4: Formal verification of an OS kernel | book-title = 22nd ACM Symposium on Operating System Principles | pages = 207–220 | date = October 2009 | location = Big Sky, Montana, US | url = http://www.sigops.org/sosp/sosp09/papers/klein-sosp09.pdf }}</ref> This makes seL4 the first operating-system kernel to close the gap between trust and trustworthiness, assuming the mathematical proof is free from error.

===TCB size===
Because of the aforementioned need to apply costly techniques such as formal verification or manual review, the size of the TCB has immediate consequences for the economics of the TCB assurance process and for the trustworthiness of the resulting product (in terms of the [[expected value|mathematical expectation]] of the number of bugs not found during the verification or review). To reduce costs and security risks, the TCB should therefore be kept as small as possible. This is a key argument in the debate favoring [[microkernel]]s over [[monolithic kernel]]s.<ref>[[Andrew S. Tanenbaum]], [http://www.cs.vu.nl/~ast/reliable-os/ Tanenbaum-Torvalds debate, part II] (12 May 2006)</ref>

==Examples==
[[AIX operating system|AIX]] implements the trusted computing base as an optional component in its install-time package management system.<ref>[https://web.archive.org/web/20160624012547if_/redbooks.ibm.com/pubs/pdfs/redbooks/sg245962.pdf AIX 4.3 Elements of Security], August 2000, chapter 6.</ref>

==See also==
* [[Black box]]
* [[Trusted Computer System Evaluation Criteria|Orange Book]]
* [[Trust anchor]]
* [[Hardware security]]

==References==
{{Reflist}}

{{DEFAULTSORT:Trusted Computing Base}}
[[Category:Computer security procedures]]