Dynamic changes in reinforcement contingencies of a choice situation: Is steady-state concurrent performance required?


CARLOS F. APARICIO
IGNACIO A. BARAJAS

Abstract

The present experiment evaluated the idea that concurrent performance adjusts rapidly when reinforcement contingencies change rapidly. A concurrent schedule with two random-interval components provided food according to a switching-lever procedure. The overall rate of reinforcement was constant across conditions, but different probabilities of reinforcement were associated with the two operative levers to produce seven reinforcement ratios. Each day, a different reinforcement ratio provided seventy food pellets contingent upon lever pressing. Across ratios, the distribution of responses on the levers favoured the alternative associated with the higher reinforcement probability. The generalized matching law successfully described response ratios as a function of reinforcement ratios. Sensitivity to reinforcement increased gradually with the successive reinforcers obtained in each component. In the last block of sessions, sensitivity to reinforcement was greater than 1.0, indicating overmatching. This finding suggests that experience with dynamic reinforcing environments contributes to the control of concurrent performance. We conclude that both local and molar analyses are required to account for all aspects of concurrent performance.
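For context, the generalized matching law referred to above relates the log ratio of responses to the log ratio of obtained reinforcers, log(B1/B2) = a·log(R1/R2) + log c, where a is sensitivity to reinforcement (a > 1.0 indicates overmatching, a < 1.0 undermatching) and c is bias. The Python sketch below is not part of the original study: the response and reinforcer counts and the variable names are hypothetical, and it only illustrates how sensitivity and bias are typically estimated by least-squares regression on the log-transformed ratios.

import numpy as np

# Hypothetical counts of responses (B1, B2) and obtained reinforcers (R1, R2)
# for seven reinforcement ratios; the numbers are illustrative only.
B1 = np.array([310, 270, 220, 180, 140, 100,  60])
B2 = np.array([ 60, 100, 140, 180, 220, 270, 310])
R1 = np.array([ 62,  53,  44,  35,  26,  17,   8])
R2 = np.array([  8,  17,  26,  35,  44,  53,  62])

# Generalized matching law: log(B1/B2) = a * log(R1/R2) + log c
x = np.log10(R1 / R2)          # log reinforcement ratios
y = np.log10(B1 / B2)          # log response ratios

a, log_c = np.polyfit(x, y, 1) # slope = sensitivity a, intercept = log bias c
print(f"sensitivity a = {a:.2f}, bias log c = {log_c:.2f}")

Fitting the same regression to each block of sessions, as the abstract describes, would show the slope rising across blocks; a slope above 1.0 in the final block corresponds to the overmatching reported here.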

Article Details

How to Cite
APARICIO, C. F., & BARAJAS, I. A. (2011). Dynamic changes in reinforcement contingencies of a choice situation: Is steady-state concurrent performance required? Mexican Journal of Behavior Analysis, 28(1), 67–90. https://doi.org/10.5514/rmac.v28.i1.23552