==Dealing with inefficiency of updates==
Incrementing and decrementing reference counts every time a reference is created or destroyed can significantly impede performance. Not only do the operations take time, but they damage [[CPU cache|cache]] performance and can lead to [[pipeline bubble]]s. Even read-only operations like calculating the length of a list require a large number of reads and writes for reference updates with naive reference counting.

One simple technique is for the compiler to combine a number of nearby reference updates into one. This is especially effective for references which are created and quickly destroyed. Care must be taken, however, to put the combined update at the right position so that a premature free is avoided.

The Deutsch–Bobrow method of reference counting capitalizes on the fact that most reference count updates are in fact generated by references stored in local variables. It ignores these references, counting only references in data structures, but before an object with reference count zero can be deleted, the system must verify with a scan of the stack and registers that no other reference to it still exists.

Another technique devised by [[Henry Baker (computer scientist)|Henry Baker]] involves '''deferred increments''',<ref>{{cite journal | author = Henry Baker |date=September 1994 | title = Minimizing Reference Count Updating with Deferred and Anchored Pointers for Functional Data Structures | journal = [[ACM SIGPLAN Notices]] | volume = 29 | issue = 9 | pages = 38–43 | doi = 10.1145/185009.185016 |citeseerx=10.1.1.25.955 |s2cid=14448488 |author-link=Henry Baker (computer scientist) }}</ref> in which references stored in local variables do not immediately increment the corresponding reference count, but instead defer this until it is necessary. If such a reference is destroyed quickly, then there is no need to update the counter. This eliminates a large number of updates associated with short-lived references (such as in the list-length example above). However, if such a reference is copied into a data structure, then the deferred increment must be performed at that time. It is also critical to perform the deferred increment before the object's count drops to zero, to avoid a premature free.
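The following simplified sketch illustrates the idea of deferred increments. The <code>LocalRef</code> and <code>HeapField</code> names and the overall structure are invented for illustration and are not taken from Baker's paper: only references held in heap data structures are counted, and the increment is deferred until a local reference is stored into one of them.

<syntaxhighlight lang="cpp">
#include <cstddef>

// Illustrative sketch only: type names and structure are invented here and do
// not come from the cited paper; error handling and thread safety are omitted.

struct Object {
    std::size_t rc = 0;   // counts only references held in heap data structures
    // ... payload ...
};

// A reference held in a local variable: creating or destroying it never
// touches the count, so short-lived locals cost nothing.
struct LocalRef {
    Object* ptr;
    explicit LocalRef(Object* p) : ptr(p) {}   // no increment (deferred)
};

// A reference stored inside a heap data structure: storing a local reference
// here is the point at which the deferred increment must finally be performed.
struct HeapField {
    Object* ptr = nullptr;

    void assign(const LocalRef& src) {
        Object* old = ptr;
        ptr = src.ptr;
        if (ptr) ++ptr->rc;            // the deferred increment happens now
        if (old && --old->rc == 0) {
            // A real collector would first scan stacks and registers for
            // remaining local references before reclaiming; elided here.
            delete old;
        }
    }
};
</syntaxhighlight>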
A dramatic decrease in the overhead of counter updates was obtained by Levanoni and [[Erez Petrank|Petrank]].<ref name="LevPet1">{{cite conference | author = Yossi Levanoni, [[Erez Petrank]] | year = 2001 | title = An on-the-fly reference-counting garbage collector for java | url = https://www.cs.technion.ac.il/~erez/Papers/refcount.ps | conference = [[OOPSLA]] 2001 | book-title = Proceedings of the 16th ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications | pages = 367–380 | doi = 10.1145/504282.504309 | url-access = subscription }}</ref><ref name="LevPet2">{{cite journal | author = Yossi Levanoni, [[Erez Petrank]] | year = 2006 | title = An on-the-fly reference-counting garbage collector for java | url = https://www.cs.technion.ac.il/~erez/Papers/refcount.ps | journal = ACM Trans. Program. Lang. Syst. | pages = 31–69 | doi = 10.1145/1111596.1111597 | volume = 28 | citeseerx = 10.1.1.15.9106 | s2cid = 14777709 }}</ref> They introduced the '''update coalescing''' method, which coalesces many of the redundant reference count updates. Consider a pointer that in a given interval of the execution is updated several times. It first points to an object <code>O1</code>, then to an object <code>O2</code>, and so forth, until at the end of the interval it points to some object <code>On</code>. A reference counting algorithm would typically execute <code>rc(O1)--</code>, <code>rc(O2)++</code>, <code>rc(O2)--</code>, <code>rc(O3)++</code>, <code>rc(O3)--</code>, ..., <code>rc(On)++</code>. But most of these updates are redundant: to have the reference count correct at the end of the interval, it is enough to perform <code>rc(O1)--</code> and <code>rc(On)++</code> (as illustrated in the sketch below). Levanoni and Petrank showed in 2001 how to use such update coalescing in a reference counting collector. When using update coalescing with an appropriate treatment of new objects, more than 99% of the counter updates are eliminated for typical Java benchmarks. Update coalescing also eliminates the need to employ [[atomic operations]] during pointer updates in a concurrent setting, thus solving the third problem of naive reference counting (i.e., its costly overhead in a concurrent setting). Levanoni and Petrank also presented an enhanced algorithm that may run concurrently with multithreaded applications, employing only fine synchronization.<ref>{{cite web|url=https://www.cs.technion.ac.il/%7Eerez/Papers/refcount.pdf |title=An On-the-Fly Reference-Counting Garbage Collector for Java |website=Cs.technion.ac.il |access-date=2017-06-24}}</ref>

Blackburn and McKinley's ''ulterior reference counting'' method in 2003<ref>{{cite conference |author1=Stephen Blackburn |author2=Kathryn McKinley | year = 2003 | title = Ulterior Reference Counting: Fast Garbage Collection without a Long Wait | url = http://www.cs.utexas.edu/users/mckinley/papers/urc-oopsla-2003.pdf | conference = [[OOPSLA]] 2003 | book-title = Proceedings of the 18th annual ACM SIGPLAN conference on Object-oriented programing, systems, languages, and applications | pages = 344–358 | doi = 10.1145/949305.949336 | isbn = 1-58113-712-5 }}</ref> combines deferred reference counting with a copying nursery, observing that the majority of pointer mutations occur in young objects. This algorithm achieves throughput comparable with the fastest generational copying collectors, with the low bounded pause times of reference counting.
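Returning to the update-coalescing example above, the following simplified sketch shows how only <code>rc(O1)--</code> and <code>rc(On)++</code> need to be applied per modified pointer slot at the end of an interval. The <code>write_barrier</code> and <code>flush_interval</code> names and the logging structure are invented for illustration and are greatly simplified relative to the cited collector; reclamation and concurrency are omitted.

<syntaxhighlight lang="cpp">
#include <unordered_map>

// Illustrative sketch only: the slot log and interval structure are invented
// here and greatly simplified relative to the cited collector.

struct Object { long rc = 0; };
struct Slot   { Object* ptr = nullptr; };

// For each slot modified during the current interval, remember the object it
// referenced when the interval began; intermediate targets are never counted.
std::unordered_map<Slot*, Object*> modified_slots;

void write_barrier(Slot& slot, Object* new_target) {
    modified_slots.try_emplace(&slot, slot.ptr);  // log old value once per interval
    slot.ptr = new_target;                        // plain store, no per-update rc work
}

// End of interval: apply only rc(O1)-- and rc(On)++ for each modified slot.
void flush_interval() {
    for (auto& [slot, old_target] : modified_slots) {
        Object* current = slot->ptr;              // On
        if (old_target == current) continue;      // net change is zero
        if (current) ++current->rc;               // rc(On)++
        if (old_target) --old_target->rc;         // rc(O1)--  (reclamation elided)
    }
    modified_slots.clear();
}
</syntaxhighlight>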