  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
501

Making Sense of Serotonin Through Spike Frequency Adaptation

Harkin, Emerson 04 December 2023 (has links)
What does serotonin do? Just as the diffuse axonal arbours of midbrain serotonin neurons touch nearly every corner of the forebrain, so too is this ancient neuromodulator involved in nearly every aspect of learning and behaviour. The role of serotonin in reward processing has received increasing attention in recent years, but there is little agreement about how the perplexing responses of serotonin neurons to emotionally salient stimuli should be interpreted, and essentially nothing is known about how they arise. Here I approach these two aspects of serotonergic function in reverse order. In the first part of this thesis, I construct an experimentally constrained spiking neural network model of the dorsal raphe nucleus (DRN), the main source of forebrain serotonergic input, and characterize its signal processing features. I show that potent spike-frequency adaptation deeply shapes DRN output while other aspects of its physiology are relatively less important. Overall, this part of my work suggests that in vivo serotonergic activity patterns arise from a temporal-derivative-like computation. But the temporal derivative of what? In the second part, I consider the possibility that the DRN is driven by an input that represents cumulative future reward, a quantity called state value in reinforcement learning theory. The resulting model reproduces established tuning features of serotonin neurons, including phasic activation by reward-predicting cues and punishments, reward-specific surprise tuning, and tonic modulation by reward and punishment context. Because these features are the basis of many and varied existing serotonergic theories, these results show that my theory, which I call value prediction, provides a unifying perspective on serotonergic function.
Finally, in an empirical test of the theory, I re-analyze data from an in vivo trace conditioning experiment and find that value prediction accounts for the firing rates of serotonin neurons to a precision ≪0.1 Hz, outperforming previous models by a large margin. Here I establish serotonin as a new neural substrate of prediction and reward, a significant step towards understanding the role of serotonin signalling in the brain.
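The abstract's core mechanism, that spike-frequency adaptation turns a sustained input into a transient, derivative-like output, can be illustrated with a toy rate model. This is a generic sketch of the adaptation idea, not the thesis's experimentally constrained DRN network; the time constant and rectification are illustrative assumptions.

```python
import numpy as np

def adapting_rate_response(drive, tau_a=10.0, dt=1.0):
    """Toy rate model: output = drive minus a slow adaptation variable
    that tracks the drive. A step in the drive therefore produces a
    transient, temporal-derivative-like response rather than a
    sustained one."""
    a = 0.0
    out = []
    for x in drive:
        out.append(max(x - a, 0.0))   # rectified output before adaptation updates
        a += (dt / tau_a) * (x - a)   # adaptation variable relaxes toward the drive
    return np.array(out)

# Step input: sustained drive, but the adapted output is transient,
# peaking at the step onset and decaying back toward zero.
drive = np.concatenate([np.zeros(50), np.ones(100)])
r = adapting_rate_response(drive)
```

Under this caricature, the network's output approximates the positive part of the input's temporal derivative, which is why a drive encoding state value would yield phasic responses to value changes.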
502

COMBUSTION SYNTHESIS AND MECHANICAL PROPERTIES OF SiC PARTICULATE REINFORCED MOLYBDENUM DISILICIDE

MANOMAISUPAT, DAMRONGCHAI 11 1900 (has links)
Intermetallic composites of molybdenum disilicide reinforced with various amounts of silicon carbide particulate were produced by combustion synthesis from their elemental powders. Elemental powders were mixed stoichiometrically and then ball-milled. The cold-pressed mixture was then chemically ignited at one end under vacuum at approximately 700°C. The combustion temperature of the process was approximately 1600°C, which was lower than the melting point of molybdenum disilicide. This processing technique allowed the fabrication of the composites at 700°C within a few seconds, instead of sintering at temperatures greater than 1200°C for many hours. The end product was a porous composite, which was densified to >97% of the theoretical density by hot pressing. The grains of the matrix were 8-14 μm in size, surrounded by SiC reinforcement of 1-5 μm. The morphology and structure of the products were studied by x-ray diffraction and scanning electron microscopy (SEM). Samples were prepared for hardness, fracture strength, and toughness testing at room temperature. The mechanical properties of the composites improved with increasing SiC reinforcement. The hardness of the materials increased from 10.1 ± 0.1 GPa (959 ± 13 kg/mm²) to 11.7 ± 0.6 GPa (1102 ± 52 kg/mm²) and 12.7 ± 0.4 GPa (1199 ± 36 kg/mm²) with 10 vol% and 20 vol% SiC reinforcement, respectively. The strength increased from 195 ± 39 MPa to 237 ± 39 MPa with 10 vol% and to 299 ± 43.2 MPa with 20 vol% SiC reinforcement. The fracture toughness increased from 2.79 ± 0.36 MPa·m^1/2 to 3.31 ± 0.41 MPa·m^1/2 with 10 vol% SiC and to 4.08 ± 0.30 MPa·m^1/2 with 20 vol% SiC. The increase in hardness and flexural strength is due to the effective load transfer across the strong interface in the composites. The main toughening mechanism is crack deflection by the residual stress in the materials, induced by the differences in the thermal expansion coefficients and the elastic moduli of the matrix and reinforcement.
/ Thesis / Master of Engineering (ME)
503

An overview of the applications of reinforcement learning to robot programming: discussion on the literature and the potentials

Sunilkumar, Abishek, Bahrpeyma, Fouad, Reichelt, Dirk 13 February 2024 (has links)
There has been remarkable progress in the field of robotics over the past few years, whether in stationary robots that perform dynamically changing tasks in the manufacturing sector or in automated guided vehicles (AGVs) for warehouse management or space exploration. The use of artificial intelligence (AI), especially reinforcement learning (RL), has contributed significantly to the success of various robotics tasks, proving that the shift toward intelligent control paradigms is successful and feasible. A fascinating aspect of RL is its ability to function as a low-level controller and a high-level decision-making tool at the same time. An example of this is a manipulator robot whose task is to guide itself through an environment with irregular and recurrent obstacles. In this scenario, low-level controllers can receive the joint angles and execute smooth motion using the Joint Trajectory controllers. On a higher level, RL can also be used to define complex paths designed to avoid obstacles and self-collisions. An important aspect of the successful operation of an AGV is the ability to make timely decisions. When convolutional neural network (CNN)-based architectures are combined with RL, agents can direct AGVs to their destinations effectively, mitigating the risk of catastrophic collisions. Even though many of these challenges can be addressed with classical solutions, devising such solutions takes a great deal of time and effort, making this process quite expensive. With an eye on different categories of RL applications to robotics, this study will provide an overview of the use of RL in robotic applications, examining the advantages and disadvantages of state-of-the-art applications. Additionally, we provide a targeted comparative analysis between classical robotics methods and RL-based robotics methods.
Along with drawing conclusions from our analysis, an outline of the future possibilities and advancements that may accelerate the progress and autonomy of robotics in the future is provided.
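The high-level navigation role the overview describes, an RL agent steering an AGV around obstacles, can be sketched with tabular Q-learning on a toy grid. The grid, rewards, and hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import random

# Hypothetical 4x4 grid world: an AGV starts at (0, 0), must reach the
# goal at (3, 3), and cell (1, 1) is blocked. Tabular Q-learning sketch
# of the kind of high-level navigation policy the survey discusses.
N, OBSTACLE, GOAL = 4, (1, 1), (3, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < N and 0 <= c < N) or (r, c) == OBSTACLE:
        return state, -1.0, False      # blocked move: stay put, penalty
    if (r, c) == GOAL:
        return (r, c), 10.0, True      # goal reached
    return (r, c), -0.1, False         # per-step cost favours short paths

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        s, done = (0, 0), False
        for _ in range(50):
            a = (rng.randrange(4) if rng.random() < eps
                 else max(range(4), key=lambda i: Q.get((s, i), 0.0)))
            s2, rew, done = step(s, ACTIONS[a])
            best_next = 0.0 if done else max(Q.get((s2, i), 0.0)
                                             for i in range(4))
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (rew + gamma * best_next - q)
            s = s2
            if done:
                break
    return Q

def greedy_path(Q, max_steps=20):
    """Roll out the learned policy greedily from the start state."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        a = max(range(4), key=lambda i: Q.get((s, i), 0.0))
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path

path = greedy_path(train())
```

After training, the greedy rollout reaches the goal while the blocked cell never appears in the trajectory, the kind of obstacle-avoiding path planning the survey attributes to high-level RL.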
504

Reinforcement Learning Application in Wavefront Sensorless Adaptive Optics System

Zou, Runnan 13 February 2024 (has links)
With the increasing exploration of space and widespread use of communication tools worldwide, near-ground satellite communication has emerged as a promising tool in various fields such as aerospace, military, and microscopy. However, the presence of air and water in the atmosphere causes distortion in the light signal, and thus it is essential for the ground station to retrieve the original signal from the distorted light signal sent from the satellite. Traditionally, Shack-Hartmann sensors or charge-coupled devices are integrated into the system for distortion measurement. To establish a cost-effective system with optimal performance and enhanced response speed, these sensors and charge-coupled devices have been replaced by a photodiode and a single-mode fiber in this project. Since the system has limited observation capability, it requires a powerful controller for optimal performance. To address this issue, we have implemented an off-policy reinforcement learning framework, the soft actor-critic, in the adaptive optics system controller. This integration results in a model-free online controller capable of mitigating wavefront distortion. The soft actor-critic controller processes the acquired data matrix from the photodiode and generates a two-dimensional array control signal for the deformable mirror, which corrects the wavefront distortion induced by the atmosphere and refocuses the signal to maximize the incoming power. The parameters of the soft actor-critic controller have been tuned to achieve optimal system performance. Simulations have been conducted to compare the performance of the proposed controller with respect to wavefront sensor-based methods. The training and verification of the proposed controller have been conducted in both static and semi-dynamic atmospheres, under different atmospheric conditions.
Simulation results demonstrate that, in severe atmospheric conditions, the adaptive optics system with the soft actor-critic controller achieves more than 55% and 30% Strehl ratio on average in static and semi-dynamic atmospheres, respectively. Furthermore, the distorted wavefront's power can be concentrated at the center of the focal plane and the fiber, providing an improved signal.
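The Strehl ratio used to score the controller above can be estimated from the residual wavefront error via the extended Maréchal approximation, a standard adaptive optics relation (the wavelength below is an illustrative choice, not a value from the thesis).

```python
import math

def strehl_marechal(rms_wavefront_error, wavelength):
    """Extended Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2),
    where sigma is the residual RMS wavefront error in the same units
    as the wavelength. Valid for small residual aberrations."""
    phase_rms = 2.0 * math.pi * rms_wavefront_error / wavelength
    return math.exp(-phase_rms ** 2)

# Example: 100 nm residual RMS error at 1550 nm (an assumed
# fibre-communication wavelength) leaves a Strehl ratio near 0.85.
s = strehl_marechal(100e-9, 1550e-9)
```

A perfectly corrected wavefront (zero residual error) gives a Strehl ratio of 1, so the 55% and 30% averages reported above correspond to substantial but incomplete correction of the atmospheric distortion.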
505

Behavioral Training of Reward Learning Increases Reinforcement Learning Parameters and Decreases Depression Symptoms Across Repeated Sessions

Goyal, Shivani 12 1900 (has links)
Background: Disrupted reward learning has been suggested to contribute to the etiology and maintenance of depression. If deficits in reward learning are core to depression, we would expect that improving reward learning would decrease depression symptoms across time. Whereas previous studies have shown that changing reward learning can be done in a single study session, effecting clinically meaningful change in learning requires this change to endure beyond task completion and transfer to real-world environments. With a longitudinal design, we investigate the potential for repeated sessions of behavioral training to create change in reward learning and decrease depression symptoms across time. Methods: 929 online participants (497 depression-present; 432 depression-absent) recruited from Amazon's Mechanical Turk platform completed a behavioral training paradigm and clinical self-report measures for up to eight total study visits. Participants were randomly assigned to one of 12 arms of the behavioral training paradigm, in which they completed a probabilistic reward learning task interspersed with queries about a feature of the task environment (11 learning arms) or a control query (1 control arm). Learning queries trained participants on one of four computationally based learning targets known to affect reinforcement learning (probability, average or extreme outcome values, and value comparison processes). A reinforcement learning model previously shown to distinguish depression-related differences in learning was fit to behavioral responses using hierarchical Bayesian estimation to provide estimates of reward sensitivity and learning rate for each participant on each visit. Reward sensitivity captured participants' value dissociation between high versus low outcome values, while learning rate informed how much participants learned from previously experienced outcomes.
Mixed linear models assessed relationships between model-agnostic task performance, computational model-derived reinforcement learning parameters, depression symptoms, and study progression. Results: Across time, learning queries increased individuals' reward sensitivities in depression-absent participants (β = 0.036, p < 0.001, 95% CI (0.022, 0.049)). In contrast, control queries did not change reward sensitivities in depression-absent participants across time (β = 0.016, p = 0.303, 95% CI (-0.015, 0.048)). Learning rates were not affected across time for participants receiving learning queries (β = 0.001, p = 0.418, 95% CI (-0.002, 0.004)) or control queries (β = 0.002, p = 0.558, 95% CI (-0.005, 0.009)). Of the learning queries, those targeting value comparison processes improved depression symptoms (β = -0.509, p = 0.015, 95% CI (-0.912, -0.106)) and increased reward sensitivities across time (β = 0.052, p < 0.001, 95% CI (0.030, 0.075)) in depression-present participants. Increased reward sensitivities related to decreased depression symptoms across time in these participants (β = -2.905, p = 0.002, 95% CI (-4.75, -1.114)). Conclusions: Multiple sessions of targeted behavioral training improved reward learning for participants with a range of depression symptoms. Improved behavioral reward learning was associated with improved clinical symptoms across time, possibly because learning transferred to real-world scenarios. These results support disrupted reward learning as a mechanism contributing to the etiology and maintenance of depression and suggest the potential of repeated behavioral training to target deficits in reward learning. / Master of Science / Disrupted reward learning has been suggested to be central to depression. Work investigating how changing reward learning affects clinical symptoms has the potential to clarify the role of reward learning in depression.
Here, we address this question by investigating whether multiple sessions of behavioral training change reward learning and decrease depression symptoms across time. We recruited 929 online participants to complete up to eight study visits. On each study visit participants completed a depression questionnaire and one of 12 arms of a behavioral training paradigm, in which they completed a reward learning task interspersed with queries about the task. Queries trained participants on one of four learning targets known to affect reward learning (probability, average or extreme outcome values, and value comparison processes). We used a reinforcement learning model to quantify specific reward learning processes, including how much participants valued high vs. low outcomes (reward sensitivity) and how much participants learned from previously experienced outcomes (learning rates). Across study visits, we found that participants without depression symptoms who completed the targeted behavioral training increased reward sensitivities (β = 0.036, p < 0.001, 95% CI (0.022, 0.049)). Of the queries, those targeting value comparison processes improved both depression symptoms (β = -0.509, p = 0.015, 95% CI (-0.912, -0.106)) and reward sensitivities (β = 0.052, p < 0.001, 95% CI (0.030, 0.075)) across study visits for participants with depression symptoms. These results suggest that multiple sessions of behavioral training can increase reward learning across time for participants with and without depression symptoms. Further, these results support the role of disrupted reward learning in depression and suggest the potential for behavioral training to improve both reward learning and symptoms in depression.
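The two parameters the abstract estimates, reward sensitivity and learning rate, appear in a standard prediction-error update of the form V ← V + α(ρ·r − V). The sketch below uses that common parameterization for illustration; the thesis's exact hierarchical model is not reproduced here.

```python
def update_value(v, reward, alpha, rho):
    """One prediction-error update: the reward is scaled by the reward
    sensitivity rho and learned at rate alpha. This is a generic
    parameterization, not the thesis's fitted model."""
    return v + alpha * (rho * reward - v)

def simulate(rewards, alpha=0.2, rho=1.5):
    """Track the learned value across a sequence of rewards."""
    v, values = 0.0, []
    for r in rewards:
        v = update_value(v, r, alpha, rho)
        values.append(v)
    return values

# For the same rewards and learning rate, a higher reward sensitivity
# yields larger learned values: the value converges toward rho * r.
low = simulate([1.0] * 20, rho=0.5)
high = simulate([1.0] * 20, rho=1.5)
```

This separation is what lets the model attribute depression-related differences either to how strongly outcomes are valued (ρ) or to how quickly past outcomes are integrated (α).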
506

Electrical stimulation of the brain as a reinforcing stimulus

Beninger, Richard J. January 1977 (has links)
507

Automatic Selection of Dynamic Loop Scheduling Algorithms for Load Balancing using Reinforcement Learning

Dhandayuthapani, Sumithra 07 August 2004 (has links)
Scientific applications are large, complex, irregular, and computationally intensive and are characterized by data-parallel loops. The prevalence of independent iterations in these loops makes parallel computing the natural choice for solving these applications. The computational requirements of these problems vary due to variations in problem, algorithmic, and systemic characteristics during parallelization, leading to performance degradation. A considerable amount of research has been dedicated to the development of dynamic scheduling techniques based on probabilistic analysis to address these predictable and unpredictable factors that lead to severe load imbalance. The mathematical foundations of these scheduling algorithms have been previously developed and published in the literature. These techniques have been successfully integrated into scientific applications as well as into runtime systems. Recently, efforts have also been directed to integrate these techniques into dynamic load balancing libraries for scientific applications. Selecting the optimal scheduling algorithm to load balance a specific scientific application in a dynamic parallel computing environment is very difficult without exhaustive testing of all the scheduling techniques. This is a time-consuming process, and therefore, there is a need for developing an automatic mechanism for the selection of dynamic scheduling algorithms. In recent years, extensive work has been dedicated to the development of reinforcement learning, and some of its techniques have addressed load-balancing problems. However, they do not cover a number of aspects regarding the performance of scientific applications.
First, these previously developed techniques address the load balancing problem only at a coarse granularity level (for example, job scheduling), and the reinforcement learning techniques used for load balancing are based on learning from training datasets obtained prior to the execution of the application. Moreover, scientific applications contain parameters whose variations are so irregular that the use of training sets would not be able to accurately capture the entire spectrum of possible characteristics. Finally, algorithm selection using reinforcement learning has only been used for simple sequential problems. This thesis addresses these limitations and provides a novel integrated approach for automating the selection of dynamic scheduling algorithms at a finer granularity level to improve the performance of scientific applications using reinforcement learning. This integrated approach will be tested experimentally on a scientific application that involves a large number of time steps: the Quantum Trajectory Method (QTM). A qualitative and quantitative analysis of the effectiveness of this novel approach will be presented to underscore the significance of its use in improving the performance of large-scale scientific applications.
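The core selection problem, learning online which scheduling algorithm performs best on the running application, can be framed as a bandit-style choice among candidates rewarded by (negative) observed step time. The algorithm names, timing model, and hyperparameters below are illustrative stand-ins, not the techniques or data from the thesis.

```python
import random

# Sketch: epsilon-greedy selection among candidate dynamic loop
# scheduling algorithms, rewarding whichever yields the shortest
# observed application time step. All names and timings are hypothetical.
ALGORITHMS = ["static", "self_scheduling", "factoring", "adaptive_weighted"]
MEAN_STEP_TIME = {"static": 1.4, "self_scheduling": 1.1,
                  "factoring": 0.9, "adaptive_weighted": 0.8}  # seconds

def observed_step_time(algo, rng):
    # Noisy simulated execution time for one application time step.
    return max(0.1, rng.gauss(MEAN_STEP_TIME[algo], 0.1))

def select_scheduler(steps=500, eps=0.1, seed=1):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ALGORITHMS}   # running estimate of -time per algorithm
    n = {a: 0 for a in ALGORITHMS}
    for _ in range(steps):
        a = (rng.choice(ALGORITHMS) if rng.random() < eps
             else max(q, key=q.get))
        reward = -observed_step_time(a, rng)   # faster step = higher reward
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]         # incremental mean update
    return max(q, key=q.get), q

best, estimates = select_scheduler()
```

Selecting online against observed step times, rather than from pre-collected training sets, is precisely what lets such a mechanism track the irregular, execution-time variations the abstract says offline training data cannot capture.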
508

Dispersion structure and properties of nanocomposites

Zhang, Guojun 18 August 2009 (has links)
No description available.
509

ELECTROSPUN NANOFIBERS FOR PROGRAMMABLE DRUG DELIVERY SYSTEM SEQUENTIALLY TARGETING INFLAMMATION AND INFECTION

Hu, Yupeng 14 September 2015 (has links)
No description available.
510

The effectiveness of silence as a reinforcer with the educable mentally retarded and its relationship to the construct of locus of control /

Callahan, William George January 1971 (has links)
No description available.
