Reinforcement
===Simple schedules===
[[File:Schedule of reinforcement.png|thumb|right|A chart demonstrating the different response rates of the four simple schedules of reinforcement; each hatch mark designates the delivery of a reinforcer]]
Simple schedules have a single rule to determine when a single type of reinforcer is delivered for a specific response.
* '''Ratio schedule''' – the reinforcement depends only on the number of responses the organism has performed.
* '''Continuous reinforcement (CRF)''' – a schedule of reinforcement in which every occurrence of the instrumental response (desired response) is followed by the reinforcer.<ref name=Miltenberger/>{{rp|86}}
* ''Fixed ratio'' (FR) – reinforcement is delivered after every ''n''th response.<ref name=Miltenberger/>{{rp|88}} An FR 1 schedule is synonymous with a CRF schedule.
#(ex. Every three times a rat presses a button, the rat receives a slice of cheese)
* ''Variable ratio'' (VR) – reinforcement is delivered on average every ''n''th response, but not always on the ''n''th response.<ref name=Miltenberger/>{{rp|88}}
#(ex. Gamblers win on average 1 out of every 10 turns on a slot machine, but they could hypothetically win on any given turn)
* ''Fixed interval'' (FI) – the first response after ''n'' amount of time has elapsed is reinforced.
#(ex. Every 10 minutes, a rat receives a slice of cheese when it presses a button. Eventually, the rat learns to ignore the button until each 10-minute interval has elapsed)
* ''Variable interval'' (VI) – reinforced after an average of ''n'' amount of time has elapsed, but not always exactly ''n'' amount of time.<ref name=Miltenberger/>{{rp|89}}
#(ex. A radio host gives away concert tickets approximately every hour, but the exact minute may vary)
* ''Fixed time'' (FT) – provides a reinforcing stimulus at a fixed time since the last reinforcer delivery, regardless of whether the subject has responded. In other words, it is a non-contingent schedule.
* ''Variable time'' (VT) – provides reinforcement at an average variable time since the last reinforcer delivery, regardless of whether the subject has responded.

Simple schedules are used in many differential reinforcement<ref>{{cite journal | vauthors = Vollmer TR, Iwata BA | title = Differential reinforcement as treatment for behavior disorders: procedural and functional variations | journal = Research in Developmental Disabilities | volume = 13 | issue = 4 | pages = 393–417 | date = 1992 | pmid = 1509180 | doi = 10.1016/0891-4222(92)90013-v }}</ref> procedures:
* ''Differential reinforcement of alternative behavior'' (DRA) – a conditioning procedure in which an undesired response is decreased by placing it on [[Extinction (psychology)|extinction]] or, less commonly, by providing contingent punishment, while simultaneously providing reinforcement contingent on a desirable alternative response. An example would be a teacher attending to a student only when the student raises their hand, while ignoring the student when they call out.
* ''Differential reinforcement of other behavior'' (DRO) – also known as an omission training procedure, an instrumental conditioning procedure in which a positive reinforcer is periodically delivered only if the participant does something other than the target response. An example would be reinforcing any hand action other than nose picking.<ref name="Miltenberger" />{{rp|338}}
* ''Differential reinforcement of incompatible behavior'' (DRI) – used to reduce a frequent behavior without [[punishment (psychology)|punishing]] it by reinforcing an incompatible response. An example would be reinforcing clapping to reduce nose picking.
* ''Differential reinforcement of low response rate'' (DRL) – used to encourage low rates of responding. It is like an interval schedule, except that premature responses reset the time required between behaviors.
* ''Differential reinforcement of high rate'' (DRH) – used to increase high rates of responding. It is like an interval schedule, except that a minimum number of responses is required within the interval to receive reinforcement.

====Effects of different types of simple schedules====
* Fixed ratio: activity slows after the reinforcer is delivered, then response rates increase until the next reinforcer delivery (post-reinforcement pause).
* Variable ratio: rapid, steady rate of responding; most resistant to [[Extinction (psychology)|extinction]].
* Fixed interval: responding increases towards the end of the interval; poor resistance to extinction.
* Variable interval: steady rate of responding; good resistance to extinction.
* Ratio schedules produce higher rates of responding than interval schedules when the rates of reinforcement are otherwise similar.
* Variable schedules produce higher rates and greater resistance to [[extinction (psychology)|extinction]] than most fixed schedules. This is known as the partial reinforcement extinction effect (PREE).
* The variable ratio schedule produces both the highest rate of responding and the greatest resistance to extinction (for example, the behavior of [[gambler]]s at [[slot machine]]s).
* Fixed schedules produce "post-reinforcement pauses" (PRP), in which responses briefly cease immediately following reinforcement, though the pause is a function of the upcoming response requirement rather than the prior reinforcement.<ref>{{cite journal | vauthors = Derenne A, Flannery KA | date = 2007 | title = Within Session FR Pausing | journal = The Behavior Analyst Today | volume = 8 | issue = 2 | pages = 175–86 | doi = 10.1037/h0100611 }}</ref>
** The PRP of a fixed interval schedule is frequently followed by a "scallop-shaped" accelerating rate of response, while fixed ratio schedules produce a more "angular" response.
*** Fixed interval scallop: the pattern of responding that develops under a fixed interval reinforcement schedule; performance on a fixed interval reflects the subject's accuracy in telling time.
* Organisms whose schedules of reinforcement are "thinned" (that is, requiring more responses or a longer wait before reinforcement) may experience "ratio strain" if thinned too quickly. This produces behavior similar to that seen during extinction.
** Ratio strain: the disruption of responding that occurs when a fixed ratio response requirement is increased too rapidly.
** Ratio run: the high and steady rate of responding that completes each ratio requirement. A higher ratio requirement usually causes longer post-reinforcement pauses.
* Partial reinforcement schedules are more resistant to extinction than continuous reinforcement schedules.
** Ratio schedules are more resistant than interval schedules, and variable schedules are more resistant than fixed ones.
** Momentary changes in reinforcer value lead to dynamic changes in behavior.<ref>{{cite journal | vauthors = McSweeney FK, Murphy ES, Kowal BP | title = Dynamic changes in reinforcer value: Some misconceptions and why you should care | journal = The Behavior Analyst Today | date = 2001 | volume = 2 | issue = 4 | pages = 341–349 | doi = 10.1037/h0099952 }}</ref>
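The four simple schedules are algorithmic rules and can be sketched directly. The following is a hypothetical illustration, not from the cited sources: the function names and parameters are invented for clarity, and each returned function reports whether a given response (or a response at a given time) earns a reinforcer.

```python
import random

def fixed_ratio(n):
    """FR n: reinforce every nth response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True  # reinforcer delivered
        return False
    return respond

def variable_ratio(n, rng=None):
    """VR n: reinforce each response with probability 1/n (every nth on average)."""
    rng = rng or random.Random(0)
    def respond():
        return rng.random() < 1.0 / n
    return respond

def fixed_interval(t):
    """FI t: reinforce the first response made after t time units have elapsed."""
    last = 0.0
    def respond(now):
        nonlocal last
        if now - last >= t:
            last = now
            return True
        return False
    return respond

def variable_interval(mean_t, rng=None):
    """VI mean_t: like FI, but the required wait varies around mean_t."""
    rng = rng or random.Random(0)
    last, wait = 0.0, None
    def respond(now):
        nonlocal last, wait
        if wait is None:
            wait = rng.expovariate(1.0 / mean_t)
        if now - last >= wait:
            last = now
            wait = rng.expovariate(1.0 / mean_t)
            return True
        return False
    return respond

# An FR 1 schedule reinforces every response, i.e. continuous reinforcement (CRF):
crf = fixed_ratio(1)
```

Note how the ratio schedules depend only on the response count, while the interval schedules depend only on the time elapsed since the last reinforcer; the non-contingent FT and VT schedules would drop the response requirement entirely and deliver on a timer.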