471

Accounting for Behavioral Contrast: Recent Interpretations

Snyder, Ronald L. 01 May 1983
Behavioral contrast has been interpreted as a function of either (1) a reduction in the frequency of reinforcement in one component of a multiple schedule or (2) the suppression of responding in one component regardless of reinforcement frequency. These explanations are discussed in terms of their adequacy in accounting for several recent experimental results. Two alternative explanations are considered. First, contrast is interpreted as a function of the relative summation of the excitatory and inhibitory effects of stimuli. Second, contrast is discussed as a possible function of a switch from a response-reinforcer contingency to a stimulus-reinforcer contingency, as seen in auto-pecking. Both avenues appear promising for accounting for behavioral contrast.
472

The Comparative Effects of Two Reinforcement Schedules Applied to Groups in Teaching Arithmetic Skills

Bennett, Ronald C. 01 May 1972
A behavioral approach to teaching in the public school system is difficult because of the challenge of finding positive reinforcers and administering them simultaneously to large groups of students. This study applied the same tangible reinforcers to two groups of students under different schedules of reinforcement. The students were in special classes termed "learning adjustment" classes because of their failure to perform at grade level in regular classroom settings. One group was on a continuous schedule of reinforcement using tokens and Gold Strike stamps as reinforcers. The second group was also on a continuous schedule of reinforcement, but with a punishment contingency added; the reinforcers were the same as for the first group. The third group served as a comparison group. Performance rates were studied under the above schedules of reinforcement, and the number of arithmetic units completed increased for each group. Change in mathematics achievement level, as measured by the mathematics section of the California Achievement Test, was a second major aspect of this study. Although there was a definite difference in the number of arithmetic units completed by the three groups, there was no corresponding difference in the change in achievement level.
473

A Quantitative Analysis of Response Elimination and Resurgence Using Rich, Lean, and Thinning Schedules of Alternative Reinforcement

Sweeney, Mary M. 01 May 2012
A common approach to the treatment of instrumental problem behavior is the introduction of an acceptable alternative source of reinforcement. However, when alternative reinforcement is removed or reduced, the target behavior tends to relapse. The relapse of a target response following the removal of alternative reinforcement has been termed resurgence. Shahan and Sweeney developed a quantitative model of resurgence, based on behavioral momentum theory, that captures both the disruptive and strengthening effects of alternative reinforcement on the target response. The model suggests that although higher rates of alternative reinforcement result in faster response elimination, lower rates of alternative reinforcement result in less relapse when removed. The present study was designed to examine whether effective target response suppression and less relapse could both be achieved by beginning with a higher (rich) rate of alternative reinforcement and gradually thinning it, such that a lower (lean) rate of alternative reinforcement is ultimately removed. The data obtained were also intended to provide insight into how thinning rates of alternative reinforcement might be incorporated into the quantitative model of resurgence. Results suggest that rich rates of alternative reinforcement were more effective than lean or thinning rates at suppressing the target response during treatment, but when alternative reinforcement was discontinued, the group that experienced rich rates exhibited a substantial increase in target responding. Although lean and thinning rates of alternative reinforcement were not as effective at response suppression during treatment, they still produced substantial decreases in the target response, and removal of lean rates did not result in a substantial increase in the target response. Advantages and disadvantages of rich, lean, and thinning alternative reinforcement rates are discussed with respect to target response suppression and sensitivity to the end of treatment, as are alternative response rates. Although a small modification to the quantitative model accounted similarly for data produced by rich, lean, and thinning alternative reinforcement, the model as it currently stands cannot account for the finding that alternative reinforcement may not always serve as a disruptor relative to a no-alternative-reinforcement control.
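
The structure the abstract describes can be sketched compactly. The Python snippet below is a minimal illustration of a momentum-based resurgence model in which alternative reinforcement both disrupts target responding and adds to its behavioral "mass"; the functional form follows common presentations of Shahan and Sweeney's account, but the parameter names and all numeric values are illustrative assumptions, not the thesis's fitted model (which further extends the account to thinning schedules).

```python
def predicted_target_rate(t, r_base, r_alt, alt_on,
                          B0=1.0, c=1.0, d=0.001, k=0.02, b=0.5):
    """Proportion-of-baseline target response rate at session t of extinction.

    While alternative reinforcement is delivered (alt_on), k * r_alt adds to
    the disruption in the numerator; its contribution to behavioral mass,
    (r_base + r_alt) ** b, persists either way. All parameter values are
    assumed for illustration.
    """
    disruption = c + d * r_base + (k * r_alt if alt_on else 0.0)
    mass = (r_base + r_alt) ** b
    return B0 * 10 ** (-t * disruption / mass)

r_base = 60.0  # assumed baseline reinforcers per hour
for label, r_alt in (("rich", 120.0), ("lean", 10.0)):
    end_treatment = predicted_target_rate(10, r_base, r_alt, alt_on=True)
    first_test = predicted_target_rate(11, r_base, r_alt, alt_on=False)
    print(f"{label}: end of treatment {end_treatment:.3f}, "
          f"first test session {first_test:.3f}")
```

With these assumed values, the rich schedule suppresses responding further during treatment but rebounds sharply once the alternative-reinforcement disruptor is removed, while the lean schedule barely increases, which is the trade-off motivating the thinning manipulation.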
474

Adaptive Fuzzy Reinforcement Learning for Flock Motion Control

Qu, Shuzheng 06 January 2022
The flock-guidance problem has a challenging structure in which multiple optimization objectives are solved simultaneously. This usually necessitates different control approaches for different objectives, such as guidance, collision avoidance, and cohesion. Guidance schemes in particular have long suffered from complex tracking-error dynamics. Furthermore, techniques based on linear feedback or output-feedback strategies obtained at equilibrium conditions may not hold, or may degrade, when applied to uncertain dynamic environments, and approaches that rely on potential functions embedded within pre-tuned fuzzy inference architectures lack robustness under dynamic disturbances. This thesis introduces two adaptive distributed approaches for the autonomous control of multi-agent systems. The first technique is structured as an online fuzzy reinforcement learning Value Iteration scheme that is precise and flexible. This distributed adaptive control system simultaneously targets several flocking objectives, namely: 1) tracking the leader, 2) keeping a safe distance from neighboring agents, and 3) reaching a velocity consensus among the agents (see the sketch below). In addition to its resilience in the face of dynamic disturbances, the algorithm requires no more than the agent's position as a feedback signal. The effectiveness of the proposed method is validated in two simulation scenarios and benchmarked against a similar technique from the literature. The second technique takes the form of an online fuzzy recursive least squares-based Policy Iteration control scheme, which employs a recursive least squares algorithm to estimate the weights in the leader-tracking subsystem as a substitute for the actor-critic reinforcement learning scheme adopted in the first technique. The recursive least squares algorithm demonstrates faster convergence of the approximation weights. The time-invariant communication graph used in the fuzzy reinforcement learning method is also replaced with time-varying graphs, which can smoothly guide the agents to a speed consensus. The fuzzy recursive least squares-based technique is simulated in several scenarios and benchmarked against the fuzzy reinforcement learning method. The scenarios are simulated in CoppeliaSim for better visualization and more realistic results.
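
As a rough illustration of the three flocking objectives listed above, the following Python sketch implements a simple hand-tuned distributed update. This is not the thesis's method: the control law, gain values, and scenario are assumptions for illustration only, whereas the thesis learns such behavior online via fuzzy reinforcement learning from position feedback alone.

```python
import numpy as np

def agent_accel(i, pos, vel, leader_pos, neighbors,
                d_safe=1.0, k_track=0.5, k_damp=0.7, k_sep=2.0, k_cons=0.3):
    """Acceleration command for agent i computed from local information only."""
    # 1) Track the leader (PD-style: pull toward it, damp own velocity).
    a = k_track * (leader_pos - pos[i]) - k_damp * vel[i]
    for j in neighbors:
        offset = pos[i] - pos[j]
        dist = np.linalg.norm(offset)
        if dist < d_safe:
            # 2) Keep a safe distance: repel neighbors closer than d_safe.
            a += k_sep * (d_safe - dist) * offset / max(dist, 1e-6)
        # 3) Velocity consensus: match the neighbors' velocities.
        a += k_cons * (vel[j] - vel[i])
    return a

# One simple scenario: four agents converging on a static leader position.
rng = np.random.default_rng(0)
pos = rng.uniform(-2.0, 2.0, size=(4, 2))
vel = np.zeros((4, 2))
leader = np.array([5.0, 5.0])
dt = 0.05
for _ in range(400):
    acc = np.array([agent_accel(i, pos, vel, leader,
                                [j for j in range(4) if j != i])
                    for i in range(4)])
    vel = vel + dt * acc
    pos = pos + dt * vel
print("mean final distance to leader:",
      np.linalg.norm(pos - leader, axis=1).mean())
```

A fixed-gain law like this degrades under the dynamic disturbances discussed above, which is precisely the gap the adaptive fuzzy schemes are meant to close.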
475

A Defender-Aware Attacking Guidance Policy for the TAD Differential Game

English, Jacob T. January 2020
No description available.
476

Introducing New Energy Dissipation Mechanisms for Steel Fiber Reinforcement in Ultra-High Performance Concrete

Scott, Dylan Andrew 08 December 2017
By adding annealed plain carbon steel fibers and stainless steel fibers to Ultra-High Performance Concrete (UHPC), we increased UHPC's toughness through optimized thermal processing and alloy selection of the steel fiber reinforcement. Currently, the steel fiber reinforcements used in UHPCs are extremely brittle and offer limited energy dissipation, mainly through debonding due to matrix crumbling, with some pullout. Implementing optimized heat treatments and selecting suitable alternative alloys can drastically improve the post-yield load-carrying capacity of UHPCs in static and dynamic applications through plastic deformation, phase transformations, and fiber pullout. Using a phase-transformable stainless steel increased the ultimate flexural strength from 32.0 MPa to 42.5 MPa (33%) and decreased the post-impact (residual) projectile velocity by an average of 31.5 m/s for 2.54 cm and 5.08 cm thick dynamic impact panels.
477

Reinforcement learning in the presence of rare events

Frank, Jordan William, 1980- January 2009
No description available.
478

Acquisition and extinction of lever-pressing for food and for brain stimulation compared.

Blevings, George James. January 1968
No description available.
479

Attitudinal reinforcement in a verbal conditioning paradigm.

Edwards, John R. January 1970
No description available.
480

Self-administration of brain stimulation: an exploration of a model of drug self-administration

Lepore, Marino January 1990
No description available.
