Schedule Combinations and Choice: Experiment and Theory

J. E. R. Staddon

Abstract

This chapter has been about implicit and explicit choice. Implicit choice refers to the processes that determine the proportions of time that animals spend on different activities, the factors that maintain that distribution, and the effects of disturbing it by blocking activities or making access to one activity contingent on the performance of another. Explicit choice refers to special experimental procedures that pit two similar responses, such as pecking Left and pecking Right, against one another.

The first part of the chapter discussed temporal and stimulus control in the context of explicit choice between complex concurrent (choice) schedules. The first section showed how temporal control (in the form of proportional or scalar timing) and stimulus control combine in well-trained animals to produce the effect known as conditioned reinforcement on chain schedules. I also showed how conditioned reinforcers act as aids to memory when animals learn to respond on delayed-reinforcement schedules, and how memory limitations may underlie the effects of second-order schedules. Proportional timing seems to determine performance even on ratio schedules. I discussed in some detail a variety of experimental results on simple and concurrent chain procedures. The discussion showed that most, perhaps all, of the concurrent effects do not represent choice in the usual sense at all. The animals do not seem to be comparing alternatives, but rather seem to treat each alternative as if it occurred in isolation. I was able to derive quite complex patterns of apparent preference and preference shift from an "ideal pigeon" that behaves according to proportional timing. I also showed how this analysis relates to the optimal policy on chain schedules, i.e., the pattern of responding that maximizes food rate: it turns out that proportional timing almost always produces a close-to-optimal pattern of choice. I also showed the similarities between the optimal policy for animals on chain reinforcement schedules and the predictions of optimal foraging theory about diet selection. The last part of this section discussed the self-control problem, preference for small-immediate vs. large-delayed rewards, and showed how the same proportional-timing rule applies here also.
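To make the self-control discussion concrete, here is a minimal Python sketch of the kind of delay-based comparison described above. It assumes, purely for illustration, that the strength of each alternative is proportional to reward amount divided by expected time to reward (a simplification in the spirit of proportional timing, not the chapter's exact derivation); the function names, amounts, and delays are hypothetical.

```python
# Illustrative sketch: preference reversal between a small-immediate and a
# large-delayed reward under a simple delay-based value rule.
# Assumption (not the chapter's exact model): value = amount / expected delay.

def value(amount: float, delay: float) -> float:
    """Strength of an alternative: reward amount scaled by time to reward."""
    return amount / delay

def preferred(small=(1.0, 2.0), large=(2.0, 6.0), common_delay=0.0) -> str:
    """Compare the two alternatives after adding a common delay to both."""
    v_small = value(small[0], small[1] + common_delay)
    v_large = value(large[0], large[1] + common_delay)
    return "small-immediate" if v_small > v_large else "large-delayed"

if __name__ == "__main__":
    # Close to the rewards: the small-immediate option wins.
    print(preferred(common_delay=0.0))   # -> small-immediate
    # Add 10 s to both delays: preference reverses to the large-delayed option,
    # the kind of shift typically reported in self-control experiments.
    print(preferred(common_delay=10.0))  # -> large-delayed
```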

The last half of the chapter discussed implicit choice, the factors that determine the distribution of activities under free conditions. We saw that under many conditions the activity distribution is stable, and the organism resists in various ways perturbations that threaten to change the distribution from its paired-baseline level. The first attempt to understand these effects was made by David Premack, who concluded that higher-probability activities always reinforce lower-probability activities. This molar principle was extended first by the qualitative principle of response deprivation and then by a variety of quantitative optimality and economic analyses. The first of these, the minimum-distance model, gave a special status to the paired-baseline levels, or bliss point.
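As a concrete illustration of the minimum-distance idea, the sketch below finds the point on a ratio-schedule constraint line that lies closest, in squared-deviation terms, to the paired-baseline (bliss) point. The activity rates, the ratio requirement, and the equal weighting of the two activities are illustrative assumptions, not values from the chapter.

```python
# Illustrative sketch of a minimum-distance ("bliss point") analysis.
# A ratio schedule constrains behavior to the line y = x / m: the animal must
# emit m instrumental responses (x) for each unit of the contingent activity (y).
# The model predicts the point on that line closest to the free (paired-baseline)
# levels (x0, y0). Equal weights on both activities are assumed for simplicity.

def minimum_distance_point(x0: float, y0: float, m: float) -> tuple[float, float]:
    """Closest point (x, y) on the constraint line y = x / m to the bliss point."""
    x = m * (m * x0 + y0) / (m ** 2 + 1)
    return x, x / m

if __name__ == "__main__":
    # Hypothetical baseline: 10 responses/min of the instrumental activity,
    # 40 units/min of the contingent one, on a ratio requirement of m = 5.
    x, y = minimum_distance_point(x0=10.0, y0=40.0, m=5.0)
    print(f"predicted instrumental rate: {x:.1f}, contingent rate: {y:.1f}")
    # The schedule pushes the instrumental rate above, and the contingent rate
    # below, their baseline values: the usual reinforcement effect.
```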

Optimality analysis is a general tool that can be applied to any adaptive system. It has allowed us to see common principles underlying implicit choice and explicit choice. Robust experimental findings such as the matching law turn out to be generally consistent with optimality models. Similar adaptive principles, such as diminishing marginal utility of reward frequency and amount, seem to underlie both the situations studied by Premack and more conventional schedules of operant reinforcement.
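For readers unfamiliar with the matching law mentioned above, the following sketch states the standard relation: relative response rate matches relative obtained reinforcement rate. The numeric rates are made up for illustration.

```python
# The matching law: on concurrent VI VI schedules, the proportion of responses
# allocated to an alternative matches the proportion of reinforcers obtained
# from it:  B1 / (B1 + B2) = R1 / (R1 + R2).

def matching_allocation(r1: float, r2: float) -> float:
    """Predicted proportion of responses to alternative 1, given obtained
    reinforcement rates r1 and r2 (e.g., reinforcers per hour)."""
    return r1 / (r1 + r2)

if __name__ == "__main__":
    # Hypothetical obtained rates: 40/hr on the left key, 20/hr on the right.
    p_left = matching_allocation(40.0, 20.0)
    print(f"predicted proportion of responses on the left key: {p_left:.2f}")  # 0.67
```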

Despite their many successes, all optimality models fail under some conditions, because they are functional models, not models of mechanism. Animals and people are rarely, if ever, literal optimizers, systematically comparing the long-term payoffs associated with different policies. Thus, while matching on concurrent VI VI schedules fits in with a number of optimal policies, matching on concurrent VI VR does not. I described a number of other experiments in which animals clearly behave nonoptimally. The last part of the chapter therefore looked at the mechanisms of choice and behavioral allocation. The first conclusion was that marginal changes in molar variables probably do not have any direct effect on behavior, underlining the conclusion that even good optimality models, particularly molar optimality models, only describe what animals achieve, not how they achieve it. The focus then shifted to molecular mechanisms of behavioral allocation. I discussed three: momentary maximizing, melioration, and linear waiting. The first and the last make very similar predictions in choice situations, but linear waiting promises to be more general. Quite apart from the quantitative details, it is clear that the expected time to the reinforcer, assessed through a memory-constrained timing mechanism, plays a dominant role in all the complex patterns of behavior generated by a variety of reinforcement schedules.
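To illustrate the linear-waiting mechanism mentioned in the closing paragraph, here is a minimal simulation sketch. It assumes a commonly cited form of the rule, in which the pause before responding in each interfood interval is a linear function of the preceding interfood interval; the schedule value, the parameters a and b, and the trial count are illustrative assumptions.

```python
# Minimal simulation of linear waiting on a fixed-interval (FI) schedule.
# Assumed rule: pause on interval n+1 = a * (interfood interval n) + b.
# On an FI T schedule, food is set up T s after the last food and delivered at
# the first response thereafter, so the obtained interval is roughly max(pause, T)
# if response latencies after the initial pause are ignored.

def simulate_linear_waiting(T: float = 60.0, a: float = 0.3, b: float = 1.0,
                            n_intervals: int = 10) -> list[float]:
    """Return the sequence of pauses over successive interfood intervals."""
    pauses = []
    last_interval = T               # start from the programmed interval
    for _ in range(n_intervals):
        pause = a * last_interval + b
        last_interval = max(pause, T)   # obtained interfood interval
        pauses.append(pause)
    return pauses

if __name__ == "__main__":
    for i, p in enumerate(simulate_linear_waiting(), start=1):
        print(f"interval {i}: pause = {p:.1f} s")
    # With a < 1 the pause settles at roughly a*T + b, i.e., a fixed fraction
    # of the interval: the familiar fixed-interval pause.
```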

Article Details

How to Cite
Staddon, J. E. R. (2011). Schedule Combinations and Choice: Experiment and Theory. Mexican Journal of Behavior Analysis, 21(3), 163–274. https://doi.org/10.5514/rmac.v21.i1.ESP.25423