== Advantages and disadvantages ==

The main advantage of reference counting over [[tracing garbage collection]] is that objects are reclaimed ''as soon as'' they can no longer be referenced, and in an incremental fashion, without long pauses for collection cycles and with clearly defined lifetime of every object. In real-time applications or systems with limited memory, this is important to maintain responsiveness. Reference counting is also among the simplest forms of memory management to implement. It also allows for effective management of non-memory resources such as operating system objects, which are often much scarcer than memory (tracing garbage collection systems use [[finalizer]]s for this,{{Citation needed|date=October 2013}} but the delayed reclamation may cause problems). [[#Weighted reference counting|Weighted reference counts]] are a good solution for garbage collecting a distributed system.

[[File:Cassidy.1985.025.gif|thumb|300px|Circular list example from a 1985 master's thesis.<ref>{{cite thesis | type=Master's thesis | url=https://commons.wikimedia.org/wiki/File:The_feasibility_of_automatic_storage_reclamation_with_concurrent_program_execution_in_a_LISP_environment._(IA_feasibilityofaut00cass).pdf | author=Kevin G. Cassidy | title=The Feasibility of Automatic Storage Reclamation with Concurrent Program Execution in a LISP Environment | institution=Naval Postgraduate School, Monterey/CA | date=Dec 1985 }} Here: p.25</ref> Rectangles denote [[cons|cons pairs]], with reference counts. Even if the incoming top left pointer is removed, all counts remain >0.]]

Tracing garbage collection cycles are triggered too often if the set of live objects fills most of the available memory;{{Citation needed|date=October 2013}} it requires extra space to be efficient.{{Citation needed|date=October 2013}} Reference counting performance does not deteriorate as the total amount of free space decreases.<ref>{{cite conference |first=Paul R. |last=Wilson |title=Uniprocessor Garbage Collection Techniques |year=1992 |book-title=Proceedings of the International Workshop on Memory Management |pages=1–42 |publisher=Springer-Verlag |location=London, UK |url=ftp://ftp.cs.utexas.edu/pub/garbage/gcsurvey.ps |isbn=3-540-55940-X}} Section 2.1.</ref>

Reference counts are also useful as input to other runtime optimizations. For example, systems that depend heavily on [[immutable object]]s, such as many [[functional programming language]]s, can suffer an efficiency penalty due to frequent copies.{{Citation needed|date=October 2013}} However, if the compiler (or [[runtime system]]) knows that a particular object has only one reference (as most objects do in many systems), and that the reference is lost at the same time that a similar new object is created (as in the string append statement <code>str ← str + "a"</code>), it can replace the operation with a mutation on the original object (a sketch of this sole-owner optimization appears below).

Reference counting in naive form has three main disadvantages over tracing garbage collection, all of which require additional mechanisms to ameliorate:

* The frequent updates it involves are a source of inefficiency. While tracing garbage collectors can impact efficiency severely via [[context switch]]ing and cache line faults, they collect relatively infrequently, whereas reference counts must be adjusted continually as objects are accessed. Also, less importantly, reference counting requires every memory-managed object to reserve space for a reference count. In tracing garbage collectors, this information is stored implicitly in the references that refer to that object, saving space, although tracing garbage collectors, particularly incremental ones, can require additional space for other purposes.
* The naive algorithm described above cannot handle ''{{visible anchor|reference cycles|reference cycle}}'', that is, objects which refer directly or indirectly to themselves. A mechanism relying purely on reference counts will never consider cyclic chains of objects for deletion, since their reference count is guaranteed to stay nonzero (cf. picture). Methods for dealing with this issue exist but can also increase the overhead and complexity of reference counting – on the other hand, these methods need only be applied to data that might form cycles, often a small subset of all data. One such method is the use of [[weak reference]]s (see the cycle example below), while another involves using a [[Mark-and-sweep|mark-sweep]] algorithm that gets called infrequently to clean up.
* In a concurrent setting, all updates of the reference counts and all pointer modifications must be [[atomic operations]], which incurs an additional cost. There are three reasons for the atomicity requirements. First, a reference count field may be updated by multiple threads, and so an adequate atomic instruction, such as a (costly) compare-and-swap, must be used to update the counts (a sketch of such atomic count maintenance appears below). Second, it must be clear which object loses a reference so that its reference count can be adequately decremented. But determining this object is non-trivial in a setting where multiple threads attempt to modify the same reference (i.e., when data races are possible). Finally, there exists a subtle race in which one thread gains a pointer to an object, but before it increments the object's reference count, all other references to this object are deleted concurrently by other threads and the object is reclaimed, causing that thread to increment the reference count of a reclaimed object.

In addition to these, if the memory is allocated from a free list, reference counting suffers from poor locality. Reference counting alone cannot move objects to improve cache performance, so high-performance collectors implement a tracing garbage collector as well. Most implementations (such as the ones in PHP and Objective-C) suffer from poor cache performance since they do not implement copying of objects.<ref>{{cite conference | author = Rifat Shahriyar, Stephen M. Blackburn, Xi Yang and Kathryn S. McKinley | year = 2013 | title = Taking Off the Gloves with Reference Counting Immix | conference = [[OOPSLA]] 2013 | book-title = 24th ACM SIGPLAN conference on Object Oriented Programming Systems, Languages and Applications | doi = 10.1145/2509136.2509527 | url = http://research.microsoft.com/pubs/202163/rcix-oopsla-2013.pdf }}</ref>
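For illustration, the sole-owner optimization mentioned above can be sketched in C++, with <code>std::shared_ptr::use_count()</code> standing in for the reference count; the function name <code>append</code> and the calling convention (the caller hands over its reference with <code>std::move</code>, mirroring "the reference is lost as the new value is created") are purely illustrative, not part of any particular runtime:

<syntaxhighlight lang="cpp">
#include <memory>
#include <string>
#include <utility>

// Illustrative append: if the incoming reference was the only one
// (count == 1), mutate the buffer in place; otherwise leave the shared
// value untouched and allocate a new string.
std::shared_ptr<std::string> append(std::shared_ptr<std::string> s,
                                    const std::string& suffix) {
    if (s.use_count() == 1) {
        *s += suffix;                                    // sole owner: mutate in place
        return s;
    }
    return std::make_shared<std::string>(*s + suffix);   // shared: copy
}

int main() {
    auto str = std::make_shared<std::string>("abc");
    str = append(std::move(str), "a");   // corresponds to str ← str + "a"
}
</syntaxhighlight>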
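The cycle problem from the second bullet can likewise be illustrated with a minimal C++ sketch, using <code>std::shared_ptr</code> as a stand-in for a naively counted reference and <code>std::weak_ptr</code> for a [[weak reference]]; the <code>Node</code> and <code>WeakNode</code> types are illustrative only:

<syntaxhighlight lang="cpp">
#include <memory>

struct Node {
    std::shared_ptr<Node> next;    // strong (counted) reference
};

struct WeakNode {
    std::shared_ptr<WeakNode> next;
    std::weak_ptr<WeakNode> prev;  // weak back-pointer: not counted
};

int main() {
    {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;
        b->next = a;   // cycle: each node's count is now 2
    }                  // a and b leave scope; counts drop to 1, never 0, so both nodes leak

    {
        auto a = std::make_shared<WeakNode>();
        auto b = std::make_shared<WeakNode>();
        a->next = b;
        b->prev = a;   // back edge is weak, so a's count stays 1
    }                  // both nodes are reclaimed normally
}
</syntaxhighlight>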
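Finally, a minimal sketch of the atomic count maintenance required in a concurrent setting; the <code>RefCounted</code> type and the <code>retain</code>/<code>release</code> names are illustrative, it uses atomic add/subtract rather than compare-and-swap, and it addresses only the first of the three concurrency issues listed above:

<syntaxhighlight lang="cpp">
#include <atomic>

// Each count update is an atomic read-modify-write, which is considerably
// more expensive than the plain increment/decrement a single-threaded
// reference counter could use.
struct RefCounted {
    std::atomic<unsigned> refs{1};   // starts with one reference, held by the creator
    virtual ~RefCounted() = default;
};

void retain(RefCounted* p) {
    p->refs.fetch_add(1, std::memory_order_relaxed);
}

void release(RefCounted* p) {
    // The thread performing the final decrement must observe every write
    // made through the other references before it destroys the object,
    // hence the acquire/release ordering.
    if (p->refs.fetch_sub(1, std::memory_order_acq_rel) == 1) {
        delete p;
    }
}
</syntaxhighlight>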