551

Hierarchical Random Boolean Network Reservoirs

Cherupally, Sai Kiran 23 February 2018 (has links)
Reservoir Computing (RC) is an emerging Machine Learning (ML) paradigm. RC systems contain randomly assembled computing devices and can be trained to solve complex temporal tasks. These systems are computationally cheaper to train than other ML paradigms such as recurrent neural networks, and they can also be trained to solve multiple tasks simultaneously. Further, hierarchical RC systems with fixed topologies were shown to outperform monolithic RC systems by up to 40% when solving temporal tasks. Although the performance of monolithic RC networks was shown to improve with increasing network size, building large monolithic networks may be challenging, for example because of signal attenuation. In this research, larger hierarchical RC systems were built using a network generation algorithm. The benefits of these systems are demonstrated by evaluating their accuracy on three temporal problems: pattern detection, food foraging, and memory recall. This work also demonstrates the use of random Boolean networks as reservoirs. Networks with up to 5,000 neurons were used to recall 200 sequences from memory and to identify X or O patterns temporally. Also, a Genetic Algorithm (GA) was used to train different types of hierarchical RC networks to find optimal solutions for food-foraging tasks. This research shows that about 80% of the possible hierarchical configurations of RC systems can outperform monolithic RC systems by up to 60% while solving complex temporal tasks. These results suggest that hierarchical random Boolean network RC systems can be used to solve temporal tasks instead of building large monolithic RC systems.
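
Below is a minimal sketch of a random Boolean network used as a reservoir, in the spirit of the systems described above. The node count, connectivity K, the way the input drives the network, the example pattern-detection target, and the linear readout are illustrative assumptions, not the configuration used in the thesis.

```python
# Random Boolean network reservoir (illustrative sizes and wiring).
import numpy as np

rng = np.random.default_rng(0)

N, K = 100, 3                         # nodes and inputs per node (assumed)
state = rng.integers(0, 2, size=N)    # random initial Boolean state

# Each node reads K randomly chosen nodes and applies a random Boolean table.
inputs = rng.integers(0, N, size=(N, K))
tables = rng.integers(0, 2, size=(N, 2 ** K))

def step(state, drive_bit):
    # Inject the external input by overwriting a fixed subset of nodes,
    # then update every node from its random lookup table.
    state = state.copy()
    state[:5] = drive_bit
    idx = (state[inputs] * (2 ** np.arange(K))).sum(axis=1)
    return tables[np.arange(N), idx]

# Drive the reservoir with a Boolean sequence and collect its states.
sequence = rng.integers(0, 2, size=200)
states = []
for bit in sequence:
    state = step(state, bit)
    states.append(state.copy())
X = np.asarray(states, dtype=float)

# Example temporal task: detect whether the last three input bits were 1,0,1.
y = np.array([float(i >= 2 and tuple(sequence[i - 2:i + 1]) == (1, 0, 1))
              for i in range(len(sequence))])

# Only the linear readout is trained; the random network itself stays fixed.
readout = np.linalg.lstsq(X, y, rcond=None)[0]
```

In this kind of setup only the readout weights are trained; the random Boolean wiring and lookup tables stay fixed, which is what keeps reservoir training cheap compared with training a full recurrent network.
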
552

Automatic Tuning of Integrated Filters Using Neural Networks

Lenz, Lutz Henning 23 July 1993 (has links)
Component values of integrated filters vary considerably due to manufacturing tolerances and environmental changes. Thus it is of major importance that the components of an integrated filter be electronically tunable. The method explored in this thesis is the transconductance-C method. One way to realize higher-order filters is to cascade second-order filters; in this context, a method of tuning second-order filters becomes important. The research objective of this thesis is to determine whether the Neural Network methodology can be used to facilitate the tuning process for a second-order filter realized via the transconductance-C method. Since this thesis is, at least to the knowledge of the author, the first effort in this direction, basic principles of filters and of Neural Networks [1-22] are presented. A control structure is proposed which comprises three parts: the filter, the Neural Network, and a digital spectrum analyzer. The digital spectrum analyzer sends a test signal to the filter and measures the magnitude of the output at 49 frequency samples. The Neural Network part includes a memory that stores the 49 sampled values of the nominal spectrum. A comparator subtracts the latter values from the measured (actual) values and feeds the differences as input to the Neural Network. The outputs of the Neural Network are the percentage tuning amounts. The adjusting device, which is envisioned as a component of the filter itself, translates the output of the Neural Network into adjustments of the filter's transconductances. Experimental results demonstrate that the Neural Network methodology can be usefully applied in this problem context. A feedforward, single-hidden-layer Backpropagation Network reduces manufacturing errors of up to 85% for the pole frequency and up to 41% for the quality factor down to less than approximately 5% each. It is demonstrated that the method can be iterated to further reduce the error.
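
A hedged sketch of the tuning network described above: a feedforward, single-hidden-layer network maps the 49-sample deviation between the measured and nominal magnitude responses to two percentage tuning outputs (pole frequency and quality factor). The hidden-layer size, activation, and training details are assumptions for illustration.

```python
# Single-hidden-layer backpropagation network for filter tuning (assumed sizes).
import numpy as np

rng = np.random.default_rng(1)

N_SAMPLES, N_HIDDEN, N_OUT = 49, 20, 2   # 49 spectrum points -> 2 tuning outputs

W1 = rng.normal(0.0, 0.1, size=(N_HIDDEN, N_SAMPLES))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, size=(N_OUT, N_HIDDEN))
b2 = np.zeros(N_OUT)

def forward(spectrum_error):
    """spectrum_error: measured minus nominal magnitude at the 49 frequencies."""
    h = np.tanh(W1 @ spectrum_error + b1)
    return W2 @ h + b2   # [% adjustment for pole frequency, % adjustment for Q]

def train_step(spectrum_error, target, lr=0.01):
    # One backpropagation step on the squared error between predicted and
    # desired percentage tuning amounts.
    h = np.tanh(W1 @ spectrum_error + b1)
    out = W2 @ h + b2
    err = out - target
    dW2, db2 = np.outer(err, h), err
    dh = (W2.T @ err) * (1.0 - h ** 2)
    dW1, db1 = np.outer(dh, spectrum_error), dh
    for param, grad in ((W2, dW2), (b2, db2), (W1, dW1), (b1, db1)):
        param -= lr * grad   # in-place update of the module-level arrays
```
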
553

Advances in Deep Generative Modeling With Applications to Image Generation and Neuroscience

Loaiza Ganem, Gabriel January 2019 (has links)
Deep generative modeling is an increasingly popular area of machine learning that takes advantage of recent developments in neural networks to estimate the distribution of observed data. In this dissertation we introduce three advances in this area. The first, Maximum Entropy Flow Networks, enables maximum entropy modeling by combining normalizing flows with the augmented Lagrangian optimization method. The second is the continuous Bernoulli, a new [0,1]-supported distribution introduced to fix the pervasive error in variational autoencoders of using a Bernoulli likelihood for non-binary data. The last, Deep Random Splines, is a novel distribution over functions, where samples are obtained by sampling Gaussian noise and transforming it through a neural network to obtain the parameters of a spline. We apply these to model texture images, natural images, and neural population data, respectively, and observe significant improvements over current state-of-the-art alternatives.
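
For reference, a sketch of the continuous Bernoulli log-density mentioned above: the Bernoulli form λ^x (1−λ)^(1−x) on x ∈ [0,1] together with the normalizing constant C(λ) = 2·arctanh(1−2λ)/(1−2λ) (with C(1/2) = 2) that a plain Bernoulli likelihood omits for non-binary data. The numerical-stability handling here is a simplification, not the dissertation's implementation.

```python
# Continuous Bernoulli log-density (simplified numerics).
import numpy as np

def log_norm_const(lam, eps=1e-6):
    # C(lam) = 2 * arctanh(1 - 2*lam) / (1 - 2*lam), with the limit C(0.5) = 2.
    lam = np.clip(lam, eps, 1.0 - eps)
    near_half = np.abs(lam - 0.5) < 1e-3
    safe = np.where(near_half, 0.4, lam)   # dummy value where the limit is used
    c = 2.0 * np.arctanh(1.0 - 2.0 * safe) / (1.0 - 2.0 * safe)
    return np.where(near_half, np.log(2.0), np.log(c))

def continuous_bernoulli_log_pdf(x, lam, eps=1e-6):
    # log p(x | lam) = x*log(lam) + (1-x)*log(1-lam) + log C(lam), x in [0, 1].
    lam = np.clip(lam, eps, 1.0 - eps)
    return x * np.log(lam) + (1.0 - x) * np.log1p(-lam) + log_norm_const(lam)
```

Used as a VAE decoder likelihood, the only change relative to the usual Bernoulli cross-entropy reconstruction term is the added log C(λ) per dimension.
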
554

Simulating drug responses in laboratory test time series with deep generative modeling

Yahi, Alexandre January 2019 (has links)
Drug effects can be unpredictable and vary widely among patients, depending on environmental, genetic, and clinical factors. Randomized control trials (RCTs) are not sufficient to identify adverse drug reactions (ADRs), and the electronic health record (EHR), along with medical claims, has become an important resource for pharmacovigilance. Among all the data collected in hospitals, laboratory tests represent the most documented and reliable data type in the EHR. Laboratory tests are at the core of the clinical decision process and are used by physicians for diagnosis, monitoring, screening, and research. They can be linked to drug effects either directly, with therapeutic drug monitoring (TDM), or indirectly through drug laboratory effects (DLEs) that affect surrogate tests. Unfortunately, very few automated methods use laboratory tests to inform clinical decision making and predict drug effects, partly because of the complexity of these time series, which are irregularly sampled, highly dependent on other clinical covariates, and non-stationary. Deep learning, the branch of machine learning that relies on high-capacity artificial neural networks, has experienced renewed popularity over the past decade and has transformed fields such as computer vision and natural language processing. Deep learning holds the promise of better performance than established machine learning models, although it requires larger training datasets because of its higher degrees of freedom. These models are more flexible with multi-modal inputs and can make sense of large numbers of features without extensive engineering. Both qualities make deep learning models ideal candidates for complex, multi-modal, noisy healthcare datasets. With the development of novel deep learning methods such as generative adversarial networks (GANs), there is an unprecedented opportunity to learn how to augment existing clinical datasets with realistic synthetic data and increase predictive performance. Moreover, GANs have the potential to simulate the effects of individual covariates, such as drug exposures, by leveraging the properties of implicit generative models. In this dissertation, I present a body of work that aims at paving the way for next-generation laboratory test-based clinical decision support systems powered by deep learning. To this end, I organized my experiments around three building blocks: (1) the evaluation of various deep learning architectures on laboratory test time series and their covariates with a forecasting task; (2) the development of implicit generative models of laboratory test time series using the Wasserstein GAN framework; (3) the inference properties of these models for the simulation of drug effects in laboratory test time series, and their application to data augmentation. Each component has its own evaluation: the forecasting task enabled me to explore the properties and performances of different learning architectures; the Wasserstein GAN models are evaluated with both intrinsic metrics and extrinsic tasks, and I always set baselines to avoid reporting results in a "neural-network-only" frame of reference. Applied machine learning, and more so with deep learning, is an empirical science. While the datasets used in this dissertation are not publicly available due to patient privacy regulation, I describe pre-processing steps, hyper-parameter selection, and training processes with reproducibility and transparency in mind.
In the specific context of these studies involving laboratory test time series and their clinical covariates, I found that for supervised tasks, classical machine learning holds up well against deep learning methods. Complex recurrent architectures such as long short-term memory (LSTM) networks do not perform well on these short time series, while convolutional neural networks (CNNs) and multi-layer perceptrons (MLPs) provide the best performance, at the cost of extensive hyper-parameter tuning. Generative adversarial networks, enabled by deep learning models, were able to generate high-fidelity laboratory test time series, and the quality of the generated samples increased with conditional models that use drug exposures as auxiliary information. Interestingly, forecasting models trained exclusively on synthetic data still retain good performance, confirming the potential of GANs in privacy-oriented applications. Finally, conditional GANs demonstrated an ability to interpolate samples from drug-exposure combinations not seen during training, opening the way for laboratory test simulation with larger auxiliary information spaces. In some cases, augmenting real training sets with synthetic data improved performance on the forecasting tasks, an approach that could be extended to other applications where rare cases have high prediction error.
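
As an illustration of the conditional Wasserstein GAN setup described above, the sketch below conditions both the generator and the critic on a drug-exposure vector treated as auxiliary information. Sequence length, noise and conditioning dimensions, and layer sizes are assumptions, and the critic regularization (weight clipping or gradient penalty) is omitted for brevity; this is not the architecture evaluated in the dissertation.

```python
# Conditional Wasserstein GAN sketch for short lab-test series (assumed sizes).
import torch
import torch.nn as nn

SEQ_LEN, NOISE_DIM, COND_DIM = 10, 32, 8   # series length, noise, drug-exposure dims

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + COND_DIM, 64), nn.ReLU(),
            nn.Linear(64, SEQ_LEN),
        )

    def forward(self, z, cond):
        # Noise and drug-exposure conditioning are concatenated at the input.
        return self.net(torch.cat([z, cond], dim=1))

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SEQ_LEN + COND_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=1))

def critic_loss(critic, gen, real, cond):
    # Wasserstein objective: score generated series low and real series high.
    z = torch.randn(real.size(0), NOISE_DIM)
    fake = gen(z, cond).detach()
    return critic(fake, cond).mean() - critic(real, cond).mean()

def generator_loss(critic, gen, cond):
    z = torch.randn(cond.size(0), NOISE_DIM)
    return -critic(gen(z, cond), cond).mean()
```
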
555

Application of computer simulation and artificial intelligence technologies for modeling and optimization of food thermal processing

Chen, Cuiren, 1962- January 2001 (has links)
No description available.
556

Applications of artificial neural networks in epidemiology : prediction and classification

Black, James Francis Patrick, 1959- January 2002 (has links)
Abstract not available
557

Hybrid soft computing : architecture optimization and applications

Abraham, Ajith, 1968- January 2002 (has links)
Abstract not available
558

Catastrophic forgetting and the pseudorehearsal solution in Hopfield networks

McCallum, Simon, n/a January 2007 (has links)
Most artificial neural networks suffer from the problem of catastrophic forgetting, where previously learnt information is suddenly and completely lost when new information is learnt. Memory in real neural systems does not appear to suffer from this unusual behaviour. In this thesis we discuss the problem of catastrophic forgetting in Hopfield networks and investigate various potential solutions. We extend the pseudorehearsal solution of Robins (1995), enabling it to work in this attractor network, and compare the results with the unlearning procedure proposed by Crick and Mitchison (1983). We then explore a familiarity measure based on the energy profile of the learnt patterns. By using the ratio of high-energy to low-energy parts of the network, we can robustly distinguish the learnt patterns from the large number of spurious "fantasy" patterns that are common in these networks. This energy ratio measure is then used to improve the pseudorehearsal solution so that it can store 0.3N patterns in the Hopfield network, significantly more than previously proposed solutions to catastrophic forgetting. Finally, we explore links between the mechanisms investigated in this thesis and the consolidation of newly learnt material during sleep.
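
A hedged sketch (not the thesis implementation) of the two ingredients discussed above: a Hopfield network trained by Hebbian learning, and pseudorehearsal, in which pseudopatterns are generated by relaxing random states into attractors of the current network and are re-learnt alongside each new pattern so that existing attractors are reinforced. Network size, learning rate, and the number of pseudopatterns are assumptions.

```python
# Hopfield network with Hebbian learning and pseudorehearsal (illustrative).
import numpy as np

rng = np.random.default_rng(2)
N = 100
W = np.zeros((N, N))   # symmetric weight matrix of the Hopfield network

def relax(state, steps=500):
    # Asynchronous +/-1 updates until the state has (approximately) settled
    # into an attractor of the current network.
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(N)
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

def hebbian_learn(patterns, lr=1.0 / N):
    for p in patterns:
        W[...] += lr * np.outer(p, p)
        np.fill_diagonal(W, 0)

def learn_with_pseudorehearsal(new_pattern, n_pseudo=10):
    # Generate pseudopatterns from the existing network by relaxing random
    # states, then learn them together with the new pattern so that old
    # attractors are reinforced rather than overwritten.
    pseudo = [relax(rng.choice([-1, 1], size=N)) for _ in range(n_pseudo)]
    hebbian_learn([new_pattern] + pseudo)
```
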
559

Machine and component residual life estimation through the application of neural networks

Herzog, Michael Andreas. January 2006 (has links)
Thesis (M.Eng.)(Mechanical)--University of Pretoria, 2006. / Includes summaries in English and Afrikaans. Includes bibliographical references.
560

Nonlinear neural control with power systems applications

Chen, Dingguo 30 September 1998 (has links)
Extensive studies have been undertaken on the transient stability of large interconnected power systems with flexible AC transmission systems (FACTS) devices installed. A variety of control methodologies has been proposed to stabilize the postfault system, which would otherwise eventually lose stability without proper control. Generally speaking, regular transient stability is well understood, but the mechanism of load-driven voltage instability, or voltage collapse, is not as well understood. The interaction of generator dynamics and load dynamics makes the synthesis of stabilizing controllers even more challenging. There is currently increasing interest in research on neural networks as identifiers and controllers for dealing with dynamic, time-varying, nonlinear systems. This study focuses on the development of novel artificial neural network architectures for identification and control, with application to dynamic electric power systems, so that the stability of interconnected power systems following large disturbances, and/or with the inclusion of uncertain loads, can be greatly enhanced and stable operation guaranteed. The latitudinal neural network architecture is proposed for the purpose of system identification. It may be used for the identification of nonlinear static/dynamic loads, which can in turn be used for static/dynamic voltage stability analysis. The properties associated with this architecture are investigated. A neural network methodology is proposed for load modeling and voltage stability analysis. Based on the neural network models of loads, voltage stability analysis is carried out and modal analysis is performed. Simulation results are also provided. The transient stability problem is studied with consideration of load effects. A hierarchical neural control scheme is developed. A trajectory-following policy is used so that the hierarchical neural controller performs almost as well for non-nominal cases as it does for nominal cases. An adaptive hierarchical neural control scheme is also proposed to deal with the time-varying nature of loads. Further, adaptive neural control, based on the on-line updating of the weights and biases of the neural networks, is studied. Simulations of faulted power systems with unknown loads suggest that the proposed adaptive hierarchical neural control schemes should be useful for practical power applications. / Graduation date: 1999
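
As a generic illustration of the on-line weight and bias updating mentioned above, the sketch below performs one gradient step on a small neural identifier after each new measurement. The network size, learning rate, and input/output shapes are assumptions; this is not the latitudinal or hierarchical architecture developed in the thesis.

```python
# On-line adaptive update of a small neural identifier (generic illustration).
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden = 4, 10
W1 = rng.normal(0.0, 0.1, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, size=n_hidden)
b2 = 0.0

def online_update(x, y_measured, lr=0.01):
    """One adaptive step: nudge the identifier toward the latest measurement."""
    global b2
    h = np.tanh(W1 @ x + b1)
    y_hat = W2 @ h + b2
    err = y_hat - y_measured
    # Gradients of 0.5 * err**2, computed before any parameter is changed.
    dh = err * W2 * (1.0 - h ** 2)
    W2[...] -= lr * err * h
    b2 -= lr * err
    W1[...] -= lr * np.outer(dh, x)
    b1[...] -= lr * dh
    return y_hat
```
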
