341

Exploration of Autobiographical, Episodic, and Semantic Memory: Modeling of a Common Neural Network

Burianová, Hana 15 July 2009 (has links)
The purpose of this thesis was to delineate the neural underpinnings of three types of declarative memory retrieval: autobiographical, episodic, and semantic. Autobiographical memory was defined as the conscious recollection of personally relevant events, episodic memory as the recall of stimuli presented in the laboratory, and semantic memory as the retrieval of factual information and general knowledge about the world. Young adults participated in an event-related fMRI study in which pictorial stimuli were presented as cues for retrieval. By manipulating retrieval demands, autobiographical, episodic, or semantic memories were elicited in response to the same stimulus. The objective of the subsequent analyses was threefold: firstly, to delineate regional activations common across the memory conditions, as well as neural activations unique to each memory type (“condition-specific”); secondly, to delineate a functional network common to all three memory conditions; and, thirdly, to delineate functional networks of brain regions that show condition-specific activity and to assess their overlap with the common functional network. The first analysis showed regional activations common to all three types of memory retrieval in the bilateral inferior frontal gyrus, left middle frontal gyrus, right caudate nucleus, bilateral thalamus, left hippocampus, and left lingual gyrus. Condition-specific activations were also delineated, including medial frontal increases for autobiographical, right middle frontal increases for episodic, and right inferior temporal increases for semantic retrieval. The second set of analyses delineated a functional network common to the three conditions, comprising 21 functionally connected neural areas. The final set of analyses further explored the functional connectivity of the brain regions that showed condition-specific activations, yielding two functional networks: one involving the semantic and autobiographical conditions, and the other involving the episodic and autobiographical conditions. Although each recruited some brain regions unique to the content of the retrieved memories, the two functional networks overlapped to a degree with the common functional network. Together, these findings lend support to the notion of a common network, hypothesized to give rise to different types of declarative memory retrieval (i.e., autobiographical, episodic, or semantic) along a contextual continuum (i.e., highly contextualized or highly decontextualized).
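The core operation behind such functional-network analyses is correlating activity time courses across brain regions. Below is a minimal seed-based sketch in Python, assuming preprocessed regional fMRI time series; the region names and synthetic data are invented, and this generic pairwise correlation is not necessarily the multivariate method used in the thesis.

```python
import numpy as np

# Minimal sketch of seed-based functional connectivity: correlate a seed
# region's fMRI time series with every other region's series. Region names
# and the synthetic data are hypothetical, for illustration only.
rng = np.random.default_rng(0)
n_timepoints = 200
regions = ["L_hippocampus", "L_IFG", "R_caudate", "L_lingual", "R_MFG"]
series = {r: rng.standard_normal(n_timepoints) for r in regions}

seed = series["L_hippocampus"]
for name, ts in series.items():
    if name == "L_hippocampus":
        continue
    r = np.corrcoef(seed, ts)[0, 1]  # Pearson correlation with the seed
    print(f"{name}: r = {r:+.3f}")
```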
342

Investigation in the application of complex algorithms to recurrent generalized neural networks for modeling dynamic systems

Yackulic, Richard Matthew Charles 04 April 2011
Neural networks are mathematical formulations that can be "trained" to perform certain functions. One application of these networks, of interest in this thesis, is to "model" a physical system using only input-output information. The physical system and the neural network are subjected to the same inputs, and the neural network is then trained to produce the same output as the physical system for any input. The neural network model so created is essentially a "black box" representation of the physical system. This approach has been used at the University of Saskatchewan to model a load sensing pump (a component used to create a constant flow rate independent of variations in pressure downstream of the pump). These studies have shown the versatility of neural networks for modeling dynamic and non-linear systems; however, they also indicated challenges associated with the morphology of neural networks and the algorithms used to train them. These challenges were the motivation for this research.

Within the Fluid Power Research group at the University of Saskatchewan, a "global" objective of research in the area of load sensing pumps has been to apply dynamic neural networks (DNN) to the modeling of load sensing systems. To fulfill this objective, a recurrent generalized neural network (RGNN) morphology, along with a non-gradient training approach called the complex algorithm (CA), was chosen to train a load sensing pump neural network model. However, preliminary studies indicated that the combination of RGNNs and complex training proved ineffective for even second-order single-input single-output (SISO) systems when the initial synaptic weights of the neural network were chosen at random.

Because of these initial findings, the focus of this research shifted towards understanding the capabilities and limitations of RGNNs and non-gradient training (specifically, the complex algorithm). A second-order transfer function was considered, from which an approximate RGNN representation was obtained. The network was tested under a variety of initial weight intervals and numbers of weights being optimized. A definite trend was noted: as the initial values of the synaptic weights were set closer to the "exact" values calculated for the system, the robustness of the network and the chance of finding an acceptable solution increased. Two types of training signals were used in the study: step-response and frequency-based training. When the two were compared, step-response training produced a more generalized network.

Another objective of this study was to compare the CA to a proven non-gradient training method, genetic algorithm (GA) training. For the purposes of the studies conducted, two modifications were made to the GA found in the literature. The most significant change was the assurance that the error would never increase during the training of RGNNs using the GA. This led to a collapse of the population around a specific point and limited its ability to obtain an accurate RGNN.

The research produced four conclusions. First, the robustness of training RGNNs using the CA depends on the initial population of weights. Second, when using GAs, a specific algorithm must be chosen that allows the calculation of new population weights to move freely while ensuring a stable output from the RGNN. Third, when the GA was compared to the CA, the CA produced more generalized RGNNs. Fourth, when step-response and frequency-based training data sets were used with the CA and GA, networks trained on step-response data were more generalized in the majority of cases.
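The complex algorithm referred to above is a direct-search optimizer that uses no gradients. As a rough sketch of fitting a small recurrent model to a second-order system's step response by direct search, the example below substitutes SciPy's Nelder-Mead simplex method for the complex algorithm (which SciPy does not implement); the plant coefficients and network structure are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Target: step response of a stable second-order discrete system
# (coefficients are illustrative, not the thesis's pump model).
def plant_step_response(n=100):
    a1, a2, b = 1.5, -0.6, 0.1   # y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k-1]
    y, u = np.zeros(n), np.ones(n)
    for k in range(2, n):
        y[k] = a1 * y[k - 1] + a2 * y[k - 2] + b * u[k - 1]
    return y

# Tiny recurrent model: two output feedback taps, one input tap, tanh unit.
def rnn_step_response(w, n=100):
    w1, w2, w3, w4 = w
    y, u = np.zeros(n), np.ones(n)
    for k in range(2, n):
        h = np.tanh(w1 * y[k - 1] + w2 * y[k - 2] + w3 * u[k - 1])
        y[k] = w4 * h
    return y

target = plant_step_response()

def sse(w):  # sum-of-squares training error on the step response
    return np.sum((rnn_step_response(w) - target) ** 2)

# Direct search from a deliberately reasonable initial guess; as the thesis
# reports, starting far from sensible weights makes success much less likely.
w0 = np.array([1.0, -0.5, 0.1, 1.0])
result = minimize(sse, w0, method="Nelder-Mead", options={"maxiter": 5000})
print("final SSE:", result.fun, "weights:", result.x)
```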
343

Modeling Hedge Fund Performance Using Neural Network Models

Tryphonas, Marinos 23 July 2012 (has links)
Hedge fund performance is modeled from publicly available data using feed-forward neural networks trained with a resilient backpropagation algorithm. The neural networks' performance is then compared with that of linear regression models. Additionally, a stepwise factor regression approach is introduced to reduce the number of inputs supplied to the models in order to increase precision. Three main conclusions are drawn: (1) neural networks effectively model hedge fund returns, illustrating the strong non-linear relationships between economic risk factors and hedge fund performance; (2) while the full group of 25 risk factors we draw variables from can be used to explain hedge fund performance, the best model performance is achieved using different subsets of the 25 risk factors; and (3) out-of-sample model performance degrades during the recent (and still ongoing) financial crisis compared to less volatile time periods, indicating the models' inability to predict severely volatile economic scenarios such as economic crises.
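As a hedged sketch of the modeling setup described, the example below fits a small feed-forward network to synthetic factor data using PyTorch's resilient backpropagation (Rprop) optimizer and compares it with an ordinary least-squares fit; the factor data, network sizes, and training settings are invented, not the thesis's.

```python
import torch

# Synthetic stand-in for monthly hedge fund returns driven by risk factors.
torch.manual_seed(0)
n_months, n_factors = 240, 25          # illustrative sizes, not thesis data
X = torch.randn(n_months, n_factors)
true_beta = torch.randn(n_factors, 1) * 0.1
y = X @ true_beta + 0.05 * torch.tanh(X[:, :1]) + 0.02 * torch.randn(n_months, 1)

# Small feed-forward network, trained full-batch with resilient backprop.
net = torch.nn.Sequential(
    torch.nn.Linear(n_factors, 16),
    torch.nn.Tanh(),
    torch.nn.Linear(16, 1),
)
opt = torch.optim.Rprop(net.parameters())   # Rprop adapts per-weight step sizes
loss_fn = torch.nn.MSELoss()

for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()

# Linear baseline via least squares for comparison.
beta_hat = torch.linalg.lstsq(X, y).solution
linear_mse = loss_fn(X @ beta_hat, y)
print(f"network MSE: {loss.item():.5f}  linear MSE: {linear_mse.item():.5f}")
```

Rprop is a full-batch method, which is why the loop trains on the whole sample at once rather than in mini-batches.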
346

A Neural Reinforcement Learning Approach for Behaviors Acquisition in Intelligent Autonomous Systems

Aislan Antonelo, Eric January 2006 (has links)
In this work, new artificial learning and innate control mechanisms are proposed for application in autonomous behavioral systems for mobile robots. An existing autonomous system for mobile robots is enhanced with respect to its capacity for exploring the environment and avoiding risky configurations (those that lead to collisions with obstacles even after learning). The autonomous system is based on modular hierarchical neural networks. Initially, the system has no knowledge suitable for exploring the environment (and capturing targets, i.e., foraging). After a period of learning, the system generates efficient obstacle avoidance and target seeking behaviors. Two particular deficiencies of the former autonomous system (a tendency to generate unsuitable cyclic trajectories and ineffectiveness in risky configurations) are discussed, and the new learning and control techniques applied to the autonomous system are verified through simulations. The effectiveness of the proposals is shown: the autonomous system is able to detect unsuitable behaviors (cyclic trajectories) and decrease their probability of appearing in the future, and the number of collisions in risky situations is significantly decreased. Experiments also consider maze environments (with targets distant from each other) and dynamic environments (with moving objects).
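As a generic illustration of reinforcement-based behavior acquisition (the thesis's modular hierarchical neural architecture is not reproduced here), the sketch below runs tabular Q-learning on a toy grid in which a simulated robot learns target seeking while collisions are penalized and a small step cost discourages cyclic paths; the layout, rewards, and hyperparameters are invented.

```python
import numpy as np

# Toy grid world: robot learns target seeking and obstacle avoidance by
# trial and error. Layout, rewards, and hyperparameters are illustrative.
GRID = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # 1 = obstacle
    [0, 0, 0, 2],   # 2 = target
])
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
q = np.zeros((*GRID.shape, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.2
rng = np.random.default_rng(1)

for episode in range(2000):
    r, c = 0, 0                       # start in the top-left corner
    while GRID[r, c] != 2:
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(q[r, c]))
        dr, dc = ACTIONS[a]
        nr, nc = r + dr, c + dc
        # Collisions with walls/obstacles are penalized; reaching the target pays off.
        if not (0 <= nr < GRID.shape[0] and 0 <= nc < GRID.shape[1]) or GRID[nr, nc] == 1:
            reward, (nr, nc) = -1.0, (r, c)
        elif GRID[nr, nc] == 2:
            reward = 10.0
        else:
            reward = -0.04            # small step cost discourages cyclic paths
        q[r, c, a] += alpha * (reward + gamma * q[nr, nc].max() - q[r, c, a])
        r, c = nr, nc

print("learned action values at start state:", np.round(q[0, 0], 2))
```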
348

The intertemporal studies of financial crisis prediction model

Kung, Chih-Ming 29 June 2000 (has links)
The purpose of this article is to find the factors that affect corporate financial structure.
349

The Speech Recognition System using Neural Networks

Chen, Sung-Lin 06 July 2002 (has links)
This paper describes an isolated-word, speaker-independent Mandarin digit speech recognition system based on backpropagation neural networks (BPNN). The recognition rate reaches up to 95%, and when the system is adapted to a new user with an adaptive modification method, the recognition rate exceeds 99%. In order to implement the speech recognition system on digital signal processors (DSP), we use a neuron-cancellation rule in accordance with the BPNN: under this rule the system cancels about one third of the neurons and reduces memory size by 20%–40%, while the recognition rate still reaches up to 85%. For the output structure of the BPNN, we present a binary code to supersede the one-to-one (one output neuron per digit) model. In addition, we use a new idea for the endpoint detection algorithm applied to the recorded signals, which avoids disturbance without complex computations.
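The binary-coded output mentioned above replaces one output neuron per digit with a binary index of the digit, shrinking the output layer from 10 neurons to 4. Below is a minimal sketch of the encoding and decoding, with sizes chosen for illustration rather than taken from the paper.

```python
import numpy as np

# Ten digit classes: one-hot needs 10 output neurons, a binary code needs
# only ceil(log2(10)) = 4. Sizes here are illustrative, not from the paper.
n_classes = 10
n_bits = int(np.ceil(np.log2(n_classes)))  # 4

def to_binary_target(label: int) -> np.ndarray:
    """Encode a class label as a 4-bit target vector, e.g. 6 -> [0,1,1,0]."""
    bits = [(label >> i) & 1 for i in reversed(range(n_bits))]
    return np.array(bits, dtype=float)

def from_network_output(output: np.ndarray) -> int:
    """Decode by thresholding each output neuron at 0.5."""
    bits = (output > 0.5).astype(int)
    return int("".join(map(str, bits)), 2)

targets = np.stack([to_binary_target(d) for d in range(n_classes)])
print(targets)                                               # 10 x 4 target matrix
print(from_network_output(np.array([0.1, 0.9, 0.8, 0.2])))   # -> 6
```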
350

Using Local Invariant in Occluded Object Recognition by Hopfield Neural Network

Tzeng, Chih-Hung 11 July 2003 (has links)
In our research, we propose a novel invariant for 2-D image contour recognition based on the Hopfield-Tank neural network. First, we search for feature points, i.e., positions of high curvature and corners on the contour, and use polygonal approximation to describe the image contour. Two patterns are defined: a model pattern and a test pattern. The Hopfield-Tank network is employed to perform feature matching. Our results show that the method handles test patterns under translation, rotation, and scaling transformations, for both single and occluded patterns.
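As a simplified stand-in for the continuous Hopfield-Tank formulation, the sketch below matches model feature points to test feature points by iteratively reinforcing matches whose pairwise distances agree; the data and compatibility function are invented, and unlike the paper's invariant this toy version handles only rotation and translation, not scaling or occlusion.

```python
import numpy as np

# Simplified discrete matching network: entry v[i, j] is the strength of the
# hypothesis that model point i matches test point j. Pairwise distances are
# invariant to rotation and translation, so consistent matches reinforce
# each other across iterations.
rng = np.random.default_rng(2)
model = rng.random((5, 2)) * 10

theta = 0.7                                  # arbitrary rigid transform
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
test = model @ R.T + np.array([3.0, -1.0])   # rotated + translated copy

def dist(pts):
    return np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

dm, dt = dist(model), dist(test)
m, n = len(model), len(test)
v = np.ones((m, n)) / n                      # soft match matrix

for _ in range(50):
    # Support for match (i, j): how well distances from i (in the model) and
    # from j (in the test) agree, weighted by the current match strengths.
    support = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            compat = np.exp(-np.abs(dm[i][:, None] - dt[j][None, :]))  # (m, n)
            support[i, j] = np.sum(compat * v)
    v = support / support.sum(axis=1, keepdims=True)  # row-normalize

print("recovered matches:", v.argmax(axis=1))  # expect [0 1 2 3 4]
```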
