Data remanence
==Complications==

===Inaccessible media areas===
Storage media may have areas which become inaccessible by normal means. For example, [[Magnetic storage|magnetic disks]] may develop new [[bad sector]]s after data has been written, and tapes require inter-record gaps. Modern [[hard disk]]s often feature automatic reallocation of marginal sectors or tracks, handled transparently so that the [[operating system]] never needs to be involved. The problem is especially significant in [[solid-state drive]]s (SSDs), which rely on relatively large relocated bad block tables. Attempts to counter data remanence by [[#Overwriting|overwriting]] may not be successful in such situations, as data remnants may persist in these nominally inaccessible areas.

===Advanced storage systems===
Data storage systems with more sophisticated features may make [[#Overwriting|overwriting]] ineffective, especially on a per-file basis. For example, [[journaling file system]]s increase the integrity of data by recording write operations in multiple locations and applying [[Transaction processing|transaction]]-like semantics; on such systems, data remnants may exist in locations "outside" the nominal file storage location. Some file systems also implement [[copy-on-write]] or built-in [[revision control]], with the intent that writing to a file never overwrites data in place. Furthermore, technologies such as [[RAID]] and [[File system fragmentation|anti-fragmentation]] techniques may result in file data being written to multiple locations, either by design (for [[Fault-tolerant design|fault tolerance]]) or as data remnants. [[Wear leveling]] can also defeat data erasure, by relocating blocks between the time when they are originally written and the time when they are overwritten.
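Why per-file overwriting fails on such systems can be illustrated with a minimal sketch. The classic in-place overwrite (the approach taken by tools such as <code>shred</code>) assumes that writing to the same file offset reaches the same physical blocks, an assumption that journaling, copy-on-write, and wear leveling all break. The file name below is purely illustrative.

```python
import os

def overwrite_in_place(path, passes=1):
    """Best-effort in-place overwrite of a file's contents.

    On a simple in-place file system this replaces the data on disk;
    on journaling, copy-on-write, or wear-leveled storage the old
    blocks may survive elsewhere, so this is NOT a guarantee of erasure.
    """
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))  # replace logical contents with random bytes
            f.flush()
            os.fsync(f.fileno())         # push the write through OS caches

# The file's logical contents change, but physical remnants may remain.
with open("secret.txt", "wb") as f:
    f.write(b"sensitive data")
overwrite_in_place("secret.txt")
```

The sketch only changes what the file system reports as the file's contents; whether the original bytes still exist on the medium depends entirely on the storage stack beneath it, which is the point of this section.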
For this reason, some security protocols tailored to operating systems or other software featuring automatic wear leveling recommend conducting a free-space wipe of a given drive and then copying many small, easily identifiable "junk" files, or files containing other nonsensitive data, to fill as much of that drive as possible, leaving only the amount of free space necessary for satisfactory operation of system hardware and software. As storage and system demands grow, the "junk data" files can be deleted as necessary to free up space; even if their deletion is not secure, their initial nonsensitivity means that recovery of remanent data from them has almost no consequences.{{Citation needed|date=August 2014}}

===Optical media===
As [[Optical disc|optical media]] are not magnetic, they are not erased by conventional [[#Degaussing|degaussing]]. [[Write Once Read Many|Write-once]] optical media ([[CD-R]], [[DVD-R]], etc.) also cannot be purged by overwriting. Rewritable optical media, such as [[CD-RW]] and [[DVD-RW]], may be receptive to [[#Overwriting|overwriting]]. Methods for successfully sanitizing optical discs include [[Delamination|delaminating]] or abrading the metallic data layer, shredding, incinerating, destructive electrical arcing (as by exposure to microwave energy), and submersion in a polycarbonate solvent (e.g., [[acetone]]).

===Data on solid-state drives===
Research from the Center for Magnetic Recording and Research, University of California, San Diego, has uncovered problems inherent in erasing data stored on [[solid-state drive]]s (SSDs). Researchers discovered three problems with file storage on SSDs:<ref name="SSD">{{cite journal|date=February 2011|title=Reliably Erasing Data From Flash-Based Solid State Drives|url=http://www.usenix.org/events/fast11/tech/full_papers/Wei.pdf|author1=Michael Wei|author2=Laura M. Grupp|author3=Frederick E. Spada|author4=Steven Swanson}}</ref>

{{quote|First, built-in commands are effective, but manufacturers sometimes implement them incorrectly. Second, overwriting the entire visible address space of an SSD twice is usually, but not always, sufficient to sanitize the drive. Third, none of the existing hard drive-oriented techniques for individual file sanitization are effective on SSDs.<ref name="SSD"/>{{rp|page=1}}}}

Solid-state drives, which are flash-based, differ from hard-disk drives in two ways: first, in the way data is stored; and second, in the algorithms used to manage and access that data. These differences can be exploited to recover previously erased data. SSDs maintain a layer of indirection between the logical addresses used by computer systems to access data and the internal addresses that identify physical storage. This layer of indirection hides idiosyncratic media interfaces and enhances SSD performance, reliability, and lifespan (see [[wear leveling]]), but it can also produce copies of the data that are invisible to the user and that a sophisticated attacker could recover. For sanitizing entire disks, sanitize commands built into the SSD hardware have been found to be effective when implemented correctly, and software-only techniques for sanitizing entire disks have been found to work most, but not all, of the time.<ref name="SSD"/>{{rp|section 5}} In testing, none of the software techniques were effective for sanitizing individual files.
These included well-known algorithms such as the [[Gutmann method]], [[National Industrial Security Program|US DoD 5220.22-M]], RCMP TSSIT OPS-II, Schneier 7 Pass, and Secure Empty Trash on macOS (a feature included in OS X versions 10.3–10.9).<ref name="SSD"/>{{rp|section 5}}

The [[Trim (computing)|TRIM]] feature in many SSD devices, if properly implemented, will eventually erase data after it is deleted,<ref>{{Cite journal|last=Homaidi|first=Omar Al|date=2009|title=Data Remanence: Secure Deletion of Data in SSDs|url=https://www.diva-portal.org/smash/record.jsf?dswid=-8239&pid=diva2%3A832529}}</ref>{{citation needed|reason=This doesn't appear to be a secure method for deletion/sanitization|date=April 2017}} but the process can take some time, typically several minutes. Many older operating systems do not support this feature, and not all combinations of drives and operating systems work.<ref>{{cite web|url=http://forensic.belkasoft.com/en/why-ssd-destroy-court-evidence |title=Digital Evidence Extraction Software for Computer Forensic Investigations |publisher=Forensic.belkasoft.com |date=October 2012 |access-date=2014-04-01}}</ref>

=== {{Anchor|RAM}}Data in RAM ===
Data remanence has been observed in [[static random-access memory]] (SRAM), which is typically considered volatile (''i.e.'', the contents degrade with loss of external power). In one study, [[data retention]] was observed even at room temperature.<ref name="skorobogatov">{{cite journal|title=Low temperature data remanence in static RAM|author=Sergei Skorobogatov|publisher=University of Cambridge, Computer Laboratory|date=June 2002|doi=10.48456/tr-536 |url=http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-536.html}}</ref>

Data remanence has also been observed in [[dynamic random-access memory]] (DRAM).
Modern DRAM chips have a built-in self-refresh module, as they not only require a power supply to retain data, but must also be periodically refreshed to prevent their contents from fading from the capacitors in their integrated circuits. A study found data remanence in DRAM with data retention of seconds to minutes at room temperature and "a full week without refresh when cooled with liquid nitrogen."<ref name="Halderman" /> The study's authors were able to use a [[cold boot attack]] to recover cryptographic [[key (cryptography)|keys]] for several popular [[full disk encryption]] systems, including Microsoft [[BitLocker Drive Encryption|BitLocker]], Apple [[FileVault]], [[dm-crypt]] for Linux, and [[TrueCrypt]].<ref name="Halderman" />{{rp|page=12}} Despite some memory degradation, they were able to take advantage of redundancy in the way keys are stored after they have been expanded for efficient use, such as in [[key scheduling]]. The authors recommend that computers be powered down, rather than left in a "[[power management|sleep]]" state, when not in the physical control of the owner. In some cases, such as certain modes of the software program BitLocker, they recommend that a boot password or a key on a removable USB device be used.<ref name="Halderman">{{cite journal|title=Lest We Remember: Cold Boot Attacks on Encryption Keys|author=J. Alex Halderman|author-link=J. Alex Halderman|date=July 2008|url=https://www.usenix.org/legacy/event/sec08/tech/full_papers/halderman/halderman.pdf|display-authors=etal}}</ref>{{rp|page=12}}

[[TRESOR]] is a [[kernel (operating system)|kernel]] [[patch (software)|patch]] for Linux specifically intended to prevent [[cold boot attack]]s on RAM by ensuring that encryption keys are not accessible from user space and are stored in the CPU rather than system RAM whenever possible.
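Recommendations to limit key lifetime in RAM are often paired with explicit zeroization: overwriting a key buffer as soon as it is no longer needed narrows the window in which a cold boot attack can capture it. A minimal, hedged sketch in Python follows; the key value is hypothetical, a mutable <code>bytearray</code> is used because immutable <code>bytes</code> objects cannot be scrubbed in place, and a garbage-collected runtime still offers no hard guarantee that no other copies exist.

```python
import ctypes

def zeroize(buf: bytearray) -> None:
    """Overwrite a mutable key buffer in place.

    ctypes.memset writes through to the bytearray's backing memory.
    This does not undo copies the interpreter may already have made,
    so it only shortens the exposure window rather than eliminating it.
    """
    ctypes.memset((ctypes.c_char * len(buf)).from_buffer(buf), 0, len(buf))

key = bytearray(b"\x13\x37" * 16)   # hypothetical 32-byte session key
# ... use the key for encryption or decryption here ...
zeroize(key)                        # scrub as soon as the key is no longer needed
```

Lower-level implementations typically also pin the buffer (e.g., with <code>mlock</code>) so it cannot be swapped to disk, which is a separate remanence channel from the RAM effects described above.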
Newer versions of the disk encryption software [[VeraCrypt]] can encrypt in-RAM keys and passwords on 64-bit Windows.<ref>{{cite web|url=https://www.veracrypt.fr/en/Release%20Notes.html|title=VeraCrypt Release Notes, version 1.24}}</ref>