===Security by design===
{{Main|Secure by design}}
Security by design, or alternatively secure by design, means that the software has been designed from the ground up to be secure. In this case, security is treated as a main feature. The UK government's National Cyber Security Centre separates secure cyber design principles into five sections:<ref>{{Cite web |title=Cyber security design principles |url=https://www.ncsc.gov.uk/collection/cyber-security-design-principles/cyber-security-design-principles |access-date=2023-12-11 |website=www.ncsc.gov.uk |language=en}}</ref>

# Before a secure system is created or updated, companies should ensure they understand the fundamentals and the context of the system they are trying to build, and identify any weaknesses in it.
# Companies should design and centre their security around techniques and defences that make attacking their data or systems inherently more challenging for attackers.
# Companies should ensure that core services which rely on technology are protected so that those systems remain continuously available.
# Although systems can be built to withstand many kinds of attack, that does not mean attacks will not be attempted. However strong its defences, every company's systems should be able to detect attacks as soon as they occur, to ensure the most effective response.
# Companies should design systems so that any successful attack has minimal severity.

These design principles of security by design can include some of the following techniques:
* The [[principle of least privilege]], where each part of the system has only the privileges that are needed for its function. That way, even if an [[Hacker (computer security)|attacker]] gains access to that part, they have only limited access to the whole system.
* [[Automated theorem proving]] to prove the correctness of crucial software subsystems.
* [[Code review]]s and [[unit testing]], approaches to make modules more secure where formal correctness proofs are not possible.
* [[Defense in depth (computing)|Defense in depth]], where the design is such that more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds.
* Default secure settings, and design to ''fail secure'' rather than ''fail insecure'' (see [[fail-safe]] for the equivalent in [[safety engineering]]). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure; the first sketch after this list illustrates a default-deny check.
* [[Audit trail]]s that track system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks; the second sketch after this list illustrates one tamper-evidence technique.
* [[Full disclosure (computer security)|Full disclosure]] of all vulnerabilities, to ensure that the ''window of vulnerability'' is kept as short as possible when bugs are discovered.
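As an illustration of the ''default secure'' and least-privilege items above, the following is a minimal Python sketch, not a definitive implementation: the component names and the <code>GRANTS</code> table are hypothetical, and the point is only that anything not explicitly granted is denied, so unknown inputs fail secure.

<syntaxhighlight lang="python">
from enum import Flag, auto

class Permission(Flag):
    """Rights a component may hold; combined with bitwise OR."""
    NONE = 0
    READ = auto()
    WRITE = auto()
    ADMIN = auto()

# Hypothetical grant table: every component starts with no rights and
# must be given each permission it needs (principle of least privilege).
GRANTS = {
    "report-generator": Permission.READ,
    "ingest-service": Permission.READ | Permission.WRITE,
}

def is_allowed(component: str, requested: Permission) -> bool:
    """Fail-secure check: anything not explicitly granted is denied."""
    granted = GRANTS.get(component, Permission.NONE)  # unknown -> default deny
    return requested & granted == requested

assert is_allowed("report-generator", Permission.READ)
assert not is_allowed("report-generator", Permission.WRITE)  # never granted
assert not is_allowed("unknown-plugin", Permission.READ)     # unknown: denied
</syntaxhighlight>

Because denial is the default path, a forgotten grant produces a visible failure rather than a silent over-permission, which is the behaviour the fail-secure principle calls for.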
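To make the audit-trail item concrete, below is a minimal sketch of a hash-chained log, one common tamper-evidence technique. The <code>AuditLog</code> class and its events are hypothetical, and the remote, append-only storage the bullet describes is assumed rather than shown.

<syntaxhighlight lang="python">
import hashlib
import json
import time

class AuditLog:
    """Append-only audit trail: each record stores the hash of its
    predecessor, so altering or removing any earlier record breaks
    the chain and can be detected."""

    def __init__(self) -> None:
        self._records: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: str, actor: str) -> None:
        record = {
            "time": time.time(),
            "actor": actor,
            "event": event,
            "prev": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._records.append(record)

    def verify(self) -> bool:
        """Recompute the chain; False means a record was altered."""
        prev = "0" * 64
        for record in self._records:
            if record["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
        return True

log = AuditLog()
log.append("login", actor="alice")
log.append("read /etc/passwd", actor="alice")
assert log.verify()
log._records[0]["actor"] = "mallory"  # an intruder edits history...
assert not log.verify()               # ...and the broken chain reveals it
</syntaxhighlight>

Chaining each record to the hash of its predecessor means an intruder who edits an earlier entry invalidates every later link, so covering one's tracks would require rewriting the entire remainder of the log, which is not possible when the log is stored append-only on a remote host.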