  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Estimation and Mapping of Ship Air Wakes using RC Helicopters as a Sensing Platform

Kumar, Anil 24 April 2018 (has links)
This dissertation explores the applicability of RC helicopters as a tool to map wind conditions. It presents the construction of a robust instrumentation system capable of wireless in-situ measurement and mapping of ship airwakes. The system uses an RC helicopter as a carrier platform and exploits the helicopter's dynamics for spatial 3D mapping of wind turbulence. It was tested with a YP676 naval training craft to map the ship airwake generated under controlled heading wind conditions. Novel system modeling techniques were developed that, in conjunction with onboard sensing, estimate the dynamics of an instrumented RC helicopter and, from them, the spatially varying (local) wind conditions. The primary problem addressed in this dissertation is the reliable estimation and separation of pilot-induced dynamics from the system measurements, followed by the use of the dynamics residuals/discrepancies to map the wind conditions. Two different modeling approaches are presented to quantify the ship airwake from helicopter dynamics: the helicopter systems were characterized using both machine learning and analytical aerodynamic modeling. In the machine learning approach, neural networks, along with other models, were trained and then assessed on their ability to predict the dynamics from pilot inputs and other measured helicopter states. The dynamics arising from the wind conditions were fused with the positioning estimates of the helicopter to generate ship airwake maps, which were compared against CFD-generated airwake patterns. In the analytical modeling approach, the dynamic response of an RC helicopter to a spatially varying parameterized wind field was modeled using a 30-state nonlinear ordinary-differential-equation-based dynamic system that captures the essential elements of the helicopter dynamics.
The airwake patterns obtained from both approaches were compared against anemometrically produced wind maps of turbulent wind conditions artificially generated in a controlled indoor environment. Novel hardware architecture was developed to acquire data critical to the operation and calibration of the proposed system. The mechatronic designs of three prototypes of the proposed system were presented and their performance evaluated in experimental testing with a modified YP676 naval training vessel in the Chesapeake Bay area. In closing, a qualitative analysis of these systems, along with potential applications and improvements, concludes the dissertation. / Ph. D.
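The residual-based idea at the heart of this estimation scheme can be sketched in a few lines: predict the helicopter's response from the pilot's inputs using a wind-free model, and attribute the leftover discrepancy to the local wind. This is an illustrative toy, not the dissertation's 30-state model; the linear gain, the sample values, and the gust magnitude are all invented for the example.

```python
def predicted_response(control_input, gain=2.0):
    """Nominal (wind-free) model of the helicopter's acceleration
    response to a pilot input; the linear gain is a stand-in for the
    learned or analytical dynamics model."""
    return gain * control_input

def wind_residual(control_input, measured_accel):
    """Attribute the discrepancy between the nominal model and the
    measurement to the local wind disturbance."""
    return measured_accel - predicted_response(control_input)

# Simulated samples of (pilot input, measured acceleration); the
# middle sample includes a +0.5 m/s^2 gust on top of the nominal response.
samples = [(0.1, 0.2), (0.2, 0.9), (0.3, 0.6)]
residuals = [wind_residual(u, a) for u, a in samples]
print(residuals)  # disturbance estimate at each sample point
```

Fusing such residuals with the helicopter's position estimates is what turns point disturbances into a spatial airwake map.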

Predicción de la resistencia a la compresión del concreto usando redes neuronales artificiales

Bernilla Rodriguez, David Henry January 2024 (has links)
El concreto es el material de construcción más ampliamente utilizado en la actualidad debido a su excepcional capacidad para resistir fuerzas de compresión, comúnmente denominada f'c. La obtención del valor de f'c del concreto involucra la realización de diversos ensayos, siendo el ensayo a compresión simple o uniaxial en probetas de concreto el más comúnmente empleado, evaluando la resistencia a diferentes intervalos de tiempo. Lamentablemente, estas probetas suelen ser desechadas al aire libre, contribuyendo a la contaminación ambiental. Esta investigación se enfoca en la predicción del valor de f'c del concreto a los 28 días mediante un modelo predictivo basado en redes neuronales artificiales. Los datos de entrada comprenden propiedades de los agregados, tipo de cemento y las proporciones de sus componentes, como agua, cemento y agregados. El único dato de salida es el valor real de f'c obtenido en el ensayo de compresión simple. Estos datos se recopilaron de varios laboratorios en el norte del Perú. La red neuronal se construyó utilizando TensorFlow de Google, con dos capas ocultas de 16 y 8 neuronas respectivamente, y se entrenó durante 450 épocas. Se obtuvo una exactitud en la predicción mayor al 90% en el rango de 210 a 335 kg/cm². / Concrete is currently the most widely used construction material due to its exceptional ability to withstand compressive forces, commonly referred to as CS. Determining the CS value of concrete involves conducting various tests, the uniaxial or simple compression test on concrete specimens being the most commonly employed, assessing resistance at different time intervals. Unfortunately, these test specimens are often discarded outdoors, contributing to environmental pollution. This research focuses on predicting the CS value of concrete at 28 days using a predictive model based on artificial neural networks. Input data comprise aggregate properties, cement type, and the mix proportions of components such as water, cement, and aggregates. The sole output is the actual CS value obtained from the simple compression test. The data were collected from multiple laboratories in northern Peru. The neural network was built using Google's TensorFlow, with two hidden layers of 16 and 8 neurons respectively, and trained for 450 epochs. Prediction accuracy exceeded 90% in the range of 210 to 335 kg/cm².
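For illustration, the 16- and 8-neuron hidden-layer architecture described above can be sketched as a from-scratch forward pass. This is a hedged stand-in for the actual TensorFlow model: the five input features, the weight ranges, and the input values are hypothetical, and real training (450 epochs of backpropagation) is omitted.

```python
import random

random.seed(0)

def layer(n_in, n_out):
    # One weight per input plus a trailing bias for each neuron.
    return [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
            for _ in range(n_out)]

def forward(x, weights, relu=True):
    out = []
    for w in weights:
        s = w[-1] + sum(wi * xi for wi, xi in zip(w[:-1], x))
        out.append(max(0.0, s) if relu else s)
    return out

# Hypothetical normalized mix features, e.g. water, cement,
# fine aggregate, coarse aggregate, and an aggregate-property index.
x = [0.5, 1.0, 0.8, 0.3, 0.6]
h1 = forward(x, layer(5, 16))             # first hidden layer, 16 neurons
h2 = forward(h1, layer(16, 8))            # second hidden layer, 8 neurons
y = forward(h2, layer(8, 1), relu=False)  # linear output: predicted f'c
print(len(h1), len(h2), y)
```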

Statistical modelling by neural networks

Fletcher, Lizelle 30 June 2002 (has links)
In this thesis the two disciplines of Statistics and Artificial Neural Networks are combined into an integrated study of a data set from a weather modification experiment. An extensive literature study of artificial neural network methodology revealed the strongly interdisciplinary nature of the research and the applications in this field. As artificial neural networks become increasingly popular with data analysts, statisticians are becoming more involved in the field. A recursive algorithm is developed to optimize the number of hidden nodes in a feedforward artificial neural network, demonstrating how existing statistical techniques such as nonlinear regression and the likelihood-ratio test can be applied in innovative ways to develop and refine neural network methodology. This pruning algorithm is an original contribution to the field of artificial neural network methodology that simplifies the process of architecture selection, thereby reducing the number of training sessions needed to find a model that fits the data adequately. In addition, a statistical model to classify weather modification data is developed using both a feedforward multilayer perceptron artificial neural network and discriminant analysis. The two models are compared and the effectiveness of applying an artificial neural network model to a relatively small data set is assessed. The formulation of the problem, the approach followed to solve it and the novel modelling application all combine to make an original contribution to the interdisciplinary fields of Statistics and Artificial Neural Networks as well as to the discipline of meteorology. / Mathematical Sciences / D. Phil. (Statistics)
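A minimal sketch of the pruning idea, assuming Gaussian errors so that the likelihood-ratio statistic reduces to a function of the sum of squared errors (SSE). The SSE values and the chi-squared critical value below are invented for the example; in the thesis each candidate architecture would be retrained rather than looked up in a table.

```python
from math import log

def lr_statistic(sse_small, sse_full, n):
    """Likelihood-ratio-style statistic comparing a pruned (smaller)
    network against a larger one, under a Gaussian error assumption:
    n * log(SSE_small / SSE_full)."""
    return n * log(sse_small / sse_full)

def prune_hidden_nodes(sse_by_size, n, critical=3.84):
    """Recursively drop hidden nodes while the fit does not degrade
    significantly. sse_by_size maps hidden-node count -> training SSE
    (supplied directly here; in practice each entry means refitting)."""
    sizes = sorted(sse_by_size, reverse=True)
    best = sizes[0]
    for smaller in sizes[1:]:
        if lr_statistic(sse_by_size[smaller], sse_by_size[best], n) < critical:
            best = smaller  # smaller net fits about as well: keep pruning
        else:
            break
    return best

# Hypothetical SSE values: the fit barely degrades down to 3 hidden
# nodes, then collapses at 2.
sse = {6: 10.0, 5: 10.1, 4: 10.3, 3: 10.5, 2: 25.0}
print(prune_hidden_nodes(sse, n=100))  # → 3
```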

Connectionist modelling in cognitive science: an exposition and appraisal

Janeke, Hendrik Christiaan 28 February 2003 (has links)
This thesis explores the use of artificial neural networks for modelling cognitive processes. It presents an exposition of the neural network paradigm, and evaluates its viability in relation to the classical, symbolic approach in cognitive science. Classical researchers have approached the description of cognition by concentrating mainly on an abstract, algorithmic level of description in which the information processing properties of cognitive processes are emphasised. The approach is founded on seminal ideas about computation, and about algorithmic description emanating, amongst others, from the work of Alan Turing in mathematical logic. In contrast to the classical conception of cognition, neural network approaches are based on a form of neurocomputation in which the parallel distributed processing mechanisms of the brain are highlighted. Although neural networks are generally accepted to be more neurally plausible than their classical counterparts, some classical researchers have argued that these networks are best viewed as implementation models, and that they are therefore not of much relevance to cognitive researchers because information processing models of cognition can be developed independently of considerations about implementation in physical systems. In the thesis I argue that the descriptions of cognitive phenomena deriving from neural network modelling cannot simply be reduced to classical, symbolic theories. The distributed representational mechanisms underlying some neural network models have interesting properties such as similarity-based representation, content-based retrieval, and coarse coding which do not have straightforward equivalents in classical systems. Moreover, by placing emphasis on how cognitive processes are carried out by brain-like mechanisms, neural network research has not only yielded a new metaphor for conceptualising cognition, but also a new methodology for studying cognitive phenomena. 
Neural network simulations can be lesioned to study the effect of such damage on the behaviour of the system, and these systems can be used to study the adaptive mechanisms underlying learning processes. For these reasons, neural network modelling is best viewed as a significant theoretical orientation in the cognitive sciences, instead of just an implementational endeavour. / Psychology / D. Litt. et Phil. (Psychology)
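The lesioning methodology mentioned above can be illustrated with a toy network: damage is simulated by zeroing a random fraction of the connections, and the system's behaviour is compared before and after. The weights and inputs here are arbitrary, chosen only to make the sketch concrete.

```python
import random

random.seed(1)

def forward(x, weights):
    # Single linear layer with a step nonlinearity (toy "network").
    return [1 if sum(w * xi for w, xi in zip(row, x)) > 0 else 0
            for row in weights]

def lesion(weights, fraction):
    """Zero out a random fraction of connections to mimic damage."""
    flat = [(i, j) for i, row in enumerate(weights)
            for j in range(len(row))]
    damaged = [row[:] for row in weights]
    for i, j in random.sample(flat, int(fraction * len(flat))):
        damaged[i][j] = 0.0
    return damaged

weights = [[0.8, -0.2, 0.5], [0.1, 0.9, -0.4]]
x = [1.0, 1.0, 1.0]
print(forward(x, weights))               # intact behaviour
print(forward(x, lesion(weights, 0.5)))  # behaviour after damage
```

Graceful degradation under such lesions is one of the brain-like properties that distinguishes distributed representations from classical symbolic systems.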

以羅吉斯與類神經模型辨別台灣選擇權與期貨市場間的有效套利機會 / Distinguishing valid arbitrage opportunities in Taiwan option and future market by logistic regression and artificial neural networks

宋鴻緯, Sung, Hong Wei Unknown Date (has links)
本研究在考慮交易成本的情況下,利用羅吉斯模型、類神經模型以及其兩者的混合模型建立一分類器,用以識別台灣選擇權與期貨市場中違反買權賣權平價等式的套利訊號。由逐筆成交資料的實證結果顯示,無論在金融海嘯(2007)、景氣復甦(2008)或是平穩時期(2012~2014)時,就識別率來說三種模型相差不大,但就獲利性而言混合模型有略優於其他兩者的表現。 / Taking transaction costs into account, we build a binary classifier using logistic regression, artificial neural networks, and a hybrid of the two. The classifier is used to identify valid arbitrage opportunities that violate put-call parity in the Taiwan options and futures markets. Empirical results on tick data show that, whether during the financial crisis (2007), the recovery (2008), or steady periods (2012~2014), the three models differ little in classification accuracy, but the hybrid model slightly outperforms the other two in profitability.
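The put-call parity screen underlying the arbitrage signals can be sketched as follows. This is a simplified version assuming a zero interest rate and options on a future, so parity reduces to C - P = F - K; the quotes and the cost figure are invented for the example.

```python
def parity_gap(call, put, strike, future, cost):
    """Deviation from put-call parity for options on a future
    (zero rate assumed for simplicity): C - P should equal F - K.
    Gaps smaller than the round-trip transaction cost are not
    tradable arbitrage signals, hence the classifier's job of
    separating valid signals from noise."""
    gap = (call - put) - (future - strike)
    return gap if abs(gap) > cost else 0.0

# Hypothetical quotes: (call, put, strike, futures price).
quotes = [
    (120.0, 80.0, 9000.0, 9035.0),  # gap 5 > cost: valid signal
    (100.0, 98.0, 9000.0, 9001.0),  # gap 1 < cost: noise
]
signals = [parity_gap(c, p, k, f, cost=2.0) for c, p, k, f in quotes]
print(signals)  # → [5.0, 0.0]
```

In the thesis the classifiers learn this decision boundary from labelled tick data instead of applying a fixed cost threshold.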

A novel approach to the control of quad-rotor helicopters using fuzzy-neural networks

Poyi, Gwangtim Timothy January 2014 (has links)
Quad-rotor helicopters are agile aircraft which are lifted and propelled by four rotors. Unlike traditional helicopters, they do not require a tail rotor to control yaw, using instead four smaller fixed-pitch rotors. However, without an intelligent control system it is very difficult for a human to successfully fly and manoeuvre such a vehicle. Thus, most recent research has focused on small unmanned aerial vehicles, so that advanced embedded control systems can be developed to control these aircraft. Vehicles of this nature are very useful in situations that require unmanned operation, for instance performing tasks in dangerous and/or inaccessible environments that could put human lives at risk. This research demonstrates a consistent way of developing a robust adaptive controller for quad-rotor helicopters using fuzzy-neural networks, creating an intelligent system that is able to monitor and control the non-linear multi-variable flying states of the quad-rotor, enabling it to adapt to changing environmental situations and learn from past missions. Firstly, an analytical dynamic model of the quad-rotor helicopter was developed and simulated using Matlab/Simulink software, where the behaviour of the quad-rotor under voltage excitation was assessed. Secondly, a 3-D model with the same parameter values as the analytical dynamic model was developed using Solidworks software. Computational Fluid Dynamics (CFD) was then used to simulate and analyse the effects of external disturbances on the control and performance of the quad-rotor helicopter. Verification and validation of the two models were carried out by comparing the simulation results with real flight experiment results.
The need for more reliable and accurate simulation data led to the development of a neural network error compensation system, embedded in the simulation system to correct the minor discrepancies found between the simulation and experiment results. Data obtained from the simulations were then used to train a fuzzy-neural system, made up of a hierarchy of controllers, to control the attitude and position of the quad-rotor helicopter. The success of the project was measured against the quad-rotor's ability to adapt to wind of different speeds and directions by re-arranging the rotor speeds to compensate for any disturbance. The simulation results show that the fuzzy-neural controller is sufficient to achieve attitude and position control of the quad-rotor helicopter in different weather conditions, paving the way for future real-time applications.
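A minimal sketch of the fuzzy-inference step inside such a controller: triangular membership functions over the roll error feed three hand-written rules, defuzzified by a weighted average into a differential rotor-speed command. The rule base, membership breakpoints, and output commands are all hypothetical, not taken from the thesis.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_roll_correction(roll_error):
    """Tiny Mamdani-style rule base mapping roll error (rad) to a
    differential rotor-speed command, defuzzified by weighted average."""
    rules = [  # (membership over the error, crisp output command)
        (triangular(roll_error, -1.0, -0.5, 0.0), +0.3),  # rolled left: speed up left pair
        (triangular(roll_error, -0.5,  0.0, 0.5),  0.0),  # near level: no change
        (triangular(roll_error,  0.0,  0.5, 1.0), -0.3),  # rolled right: speed up right pair
    ]
    num = sum(mu * out for mu, out in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0

print(fuzzy_roll_correction(0.25))  # partway between "level" and "rolled right"
print(fuzzy_roll_correction(0.0))   # → 0.0
```

In the thesis the rule parameters are tuned by the neural component rather than written by hand, which is what makes the controller adaptive.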

Реконфигурабилне архитектуре за хардверску акцелерацију предиктивних модела машинског учења / Rekonfigurabilne arhitekture za hardversku akceleraciju prediktivnih modela mašinskog učenja / Reconfigurable Architectures for Hardware Acceleration of Machine Learning Classifiers

Vranjković Vuk 02 July 2015 (has links)
У овој дисертацији представљене су универзалне реконфигурабилне архитектуре грубог степена гранулације за хардверску имплементацију DT (decision trees), ANN (artificial neural networks) и SVM (support vector machines) предиктивних модела као и хомогених и хетерогених ансамбала. Коришћењем ових архитектура реализоване су две врсте DT модела, две врсте ANN модела, две врсте SVM модела и седам врста ансамбала на FPGA (field programmable gate arrays) чипу. Експерименти, засновани на скуповима из стандардне UCI базе скупова за машинско учење, показују да FPGA имплементација омогућава значајно убрзање (од 1 до 6 редова величине) просечног времена потребног за предикцију, у поређењу са софтверским решењима. / U ovoj disertaciji predstavljene su univerzalne rekonfigurabilne arhitekture grubog stepena granulacije za hardversku implementaciju DT (decision trees), ANN (artificial neural networks) i SVM (support vector machines) prediktivnih modela kao i homogenih i heterogenih ansambala. Korišćenjem ovih arhitektura realizovane su dve vrste DT modela, dve vrste ANN modela, dve vrste SVM modela i sedam vrsta ansambala na FPGA (field programmable gate arrays) čipu. Eksperimenti, zasnovani na skupovima iz standardne UCI baze skupova za mašinsko učenje, pokazuju da FPGA implementacija omogućava značajno ubrzanje (od 1 do 6 redova veličine) prosečnog vremena potrebnog za predikciju, u poređenju sa softverskim rešenjima. / This thesis proposes universal coarse-grained reconfigurable computing architectures for hardware implementation of decision trees (DTs), artificial neural networks (ANNs), support vector machines (SVMs), and homogeneous and heterogeneous ensemble classifiers (HHESs). Using these universal architectures, two versions of DTs, two versions of SVMs, two versions of ANNs, and seven versions of HHESs machine learning classifiers have been implemented in field programmable gate arrays (FPGA). Experimental results, based on datasets from the standard UCI machine learning repository, show that the FPGA implementation provides significant improvement (1–6 orders of magnitude) in average instance classification time, in comparison with software implementations.
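One reason decision trees map so naturally to pipelined FPGA datapaths is that classifying an instance is a fixed, small number of compare-and-branch steps, one per tree level, so each level can become one hardware pipeline stage. A software sketch of that evaluation (the tree shape and thresholds are invented for illustration):

```python
# Each internal node: (feature_index, threshold, left_child, right_child);
# leaves hold class labels directly.
TREE = (0, 5.0,
        (1, 2.5, "A", "B"),
        (1, 7.5, "B", "C"))

def classify(x, node=TREE):
    """Walk the tree: at most depth-many comparisons per instance,
    which is what a one-comparison-per-stage FPGA pipeline exploits."""
    while isinstance(node, tuple):
        feat, thr, left, right = node
        node = left if x[feat] <= thr else right
    return node

print(classify([4.0, 2.0]))  # → A
print(classify([6.0, 9.0]))  # → C
```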

Understanding deep architectures and the effect of unsupervised pre-training

Erhan, Dumitru 10 1900 (has links)
Cette thèse porte sur une classe d'algorithmes d'apprentissage appelés architectures profondes. Il existe des résultats qui indiquent que les représentations peu profondes et locales ne sont pas suffisantes pour la modélisation des fonctions comportant plusieurs facteurs de variation. Nous sommes particulièrement intéressés par ce genre de données car nous espérons qu'un agent intelligent sera en mesure d'apprendre à les modéliser automatiquement; l'hypothèse est que les architectures profondes sont mieux adaptées pour les modéliser. Les travaux de Hinton (2006) furent une véritable percée, car l'idée d'utiliser un algorithme d'apprentissage non-supervisé, les machines de Boltzmann restreintes, pour l'initialisation des poids d'un réseau de neurones supervisé a été cruciale pour entraîner l'architecture profonde la plus populaire, soit les réseaux de neurones artificiels avec des poids totalement connectés. Cette idée a été reprise et reproduite avec succès dans plusieurs contextes et avec une variété de modèles. Dans le cadre de cette thèse, nous considérons les architectures profondes comme des biais inductifs. Ces biais sont représentés non seulement par les modèles eux-mêmes, mais aussi par les méthodes d'entraînement qui sont souvent utilisés en conjonction avec ceux-ci. Nous désirons définir les raisons pour lesquelles cette classe de fonctions généralise bien, les situations auxquelles ces fonctions pourront être appliquées, ainsi que les descriptions qualitatives de telles fonctions. L'objectif de cette thèse est d'obtenir une meilleure compréhension du succès des architectures profondes. Dans le premier article, nous testons la concordance entre nos intuitions---que les réseaux profonds sont nécessaires pour mieux apprendre avec des données comportant plusieurs facteurs de variation---et les résultats empiriques. 
Le second article est une étude approfondie de la question: pourquoi l'apprentissage non-supervisé aide à mieux généraliser dans un réseau profond? Nous explorons et évaluons plusieurs hypothèses tentant d'élucider le fonctionnement de ces modèles. Finalement, le troisième article cherche à définir de façon qualitative les fonctions modélisées par un réseau profond. Ces visualisations facilitent l'interprétation des représentations et invariances modélisées par une architecture profonde. / This thesis studies a class of algorithms called deep architectures. We argue that models that are based on a shallow composition of local features are not appropriate for the set of real-world functions and datasets that are of interest to us, namely data with many factors of variation. Modelling such functions and datasets is important if we are hoping to create an intelligent agent that can learn from complicated data. Deep architectures are hypothesized to be a step in the right direction, as they are compositions of nonlinearities and can learn compact distributed representations of data with many factors of variation. Training fully-connected artificial neural networks---the most common form of a deep architecture---was not possible before Hinton (2006) showed that one can use stacks of unsupervised Restricted Boltzmann Machines to initialize or pre-train a supervised multi-layer network. This breakthrough has been influential, as the basic idea of using unsupervised learning to improve generalization in deep networks has been reproduced in a multitude of other settings and models. In this thesis, we cast the deep learning ideas and techniques as defining a special kind of inductive bias. This bias is defined not only by the kind of functions that are eventually represented by such deep models, but also by the learning process that is commonly used for them. 
This work is a study of the reasons why this class of functions generalizes well, the situations where they should work well, and the qualitative statements one can make about such functions. This thesis is thus an attempt to understand why deep architectures work. In the first of the articles presented, we study how well our intuitions about the need for deep models correspond to the functions they can actually model well. In the second article we perform an in-depth study of why unsupervised pre-training helps deep learning and explore a variety of hypotheses that give us an intuition for the dynamics of learning in such architectures. Finally, in the third article, we want to better understand what a deep architecture models, qualitatively speaking. Our visualization approach enables us to understand the representations and invariances modelled and learned by deeper layers.
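The unsupervised pre-training discussed above rests on contrastive-divergence (CD-1) updates for restricted Boltzmann machines. A toy sketch of one CD-1 step, with biases omitted for brevity and the RBM size, learning rate, and training pattern invented for the example:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(p):
    return 1 if random.random() < p else 0

def cd1_update(W, v0, lr=0.1):
    """One CD-1 step on a tiny binary RBM: sample hidden units from
    the data, reconstruct the visibles, and nudge the weights toward
    <v0 h0> - <v1 h1>."""
    n_v, n_h = len(W), len(W[0])
    h0 = [sample(sigmoid(sum(W[i][j] * v0[i] for i in range(n_v))))
          for j in range(n_h)]
    v1 = [sample(sigmoid(sum(W[i][j] * h0[j] for j in range(n_h))))
          for i in range(n_v)]
    h1 = [sigmoid(sum(W[i][j] * v1[i] for i in range(n_v)))
          for j in range(n_h)]
    for i in range(n_v):
        for j in range(n_h):
            W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])
    return W

# 4 visible units, 2 hidden units, trained on one repeated pattern.
W = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(4)]
for _ in range(100):
    cd1_update(W, [1, 1, 0, 0])
print([[round(w, 2) for w in row] for row in W])
```

In the pre-training scheme of Hinton (2006), stacks of such RBMs initialize the weights of a deep network, which is then fine-tuned with supervised learning.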

Using unsupervised machine learning for fault identification in virtual machines

Schneider, C. January 2015 (has links)
Self-healing systems promise operating cost reductions in large-scale computing environments through the automated detection of, and recovery from, faults. However, at present there appears to be little empirical evidence comparing the different approaches, or demonstrating that such implementations reduce costs. This thesis compares previous and current self-healing approaches before demonstrating a new, unsupervised approach that combines artificial neural networks with performance tests to perform fault identification in an automated fashion, i.e. the correct and accurate determination of which computer features are associated with a given performance-test failure. Several key contributions are made in the course of this research, including an analysis of the different types of self-healing approaches based on their contextual use, a baseline for future comparisons between self-healing frameworks that use artificial neural networks, and a successful automated fault identification in cloud infrastructure, more specifically virtual machines. This approach uses three established machine learning techniques: Naïve Bayes, Baum-Welch, and Contrastive Divergence Learning. The latter minimises human interaction beyond previous implementations by producing a list of potential root causes (i.e. fault hypotheses) in decreasing order of likelihood, bringing the state of the art one step closer to fully self-healing systems. This thesis also examines the impact that different types of faults have on their identification. This helps in understanding the validity of the data presented and how the field is progressing, while examining the differences in identifiability between emulated thread crashes and errant user changes – a contribution believed to be unique to this research.
Lastly, future research avenues and conclusions in automated fault identification are described along with lessons learned throughout this endeavor. This includes the progression of artificial neural networks, how learning algorithms are being developed and understood, and possibilities for automatically generating feature locality data.
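The ranked fault-hypothesis list described above can be illustrated with a Naïve Bayes scorer: each candidate root cause is scored by its prior and the likelihood of the observed anomalous metrics, then sorted so an operator (or a repair routine) works down from the most likely cause. The fault names, priors, and likelihoods below are hypothetical, not taken from the thesis data.

```python
import math

def rank_fault_hypotheses(features, likelihoods, priors):
    """Rank candidate root causes by posterior log-probability of the
    observed feature anomalies, Naive-Bayes style (features assumed
    conditionally independent given the fault)."""
    scores = {}
    for fault, feat_probs in likelihoods.items():
        score = math.log(priors[fault])
        for f in features:
            score += math.log(feat_probs.get(f, 1e-6))
        scores[fault] = score
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical per-fault probabilities of each anomalous metric.
likelihoods = {
    "cpu_thrash":  {"high_cpu": 0.9, "slow_io": 0.2},
    "disk_fault":  {"high_cpu": 0.1, "slow_io": 0.9},
    "user_change": {"high_cpu": 0.4, "slow_io": 0.4},
}
priors = {"cpu_thrash": 0.3, "disk_fault": 0.3, "user_change": 0.4}
ranking = rank_fault_hypotheses(["high_cpu", "slow_io"], likelihoods, priors)
print(ranking)
```

With both metrics anomalous, the fault that explains both moderately well ends up ranked above faults that explain only one of them strongly, which is the behaviour a human troubleshooter would want from the list.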

Modélisation de la dispersion atmosphérique sur un site industriel par combinaison d’automates cellulaires et de réseaux de neurones. / Turbulent atmospheric dispersion modelling on an industrial site using cellular automata and neural networks.

Lauret, Pierre 18 June 2014 (has links)
La dispersion atmosphérique de substances dangereuses est un évènement susceptible d'entrainer de graves conséquences. Sa modélisation est primordiale pour anticiper des situations accidentelles. L'objectif de ce travail fut de développer un modèle opérationnel, à la fois rapide et précis, prenant en compte la dispersion en champ proche sur un site industriel. L'approche développée s'appuie sur des modèles issus de l'intelligence artificielle : les réseaux de neurones et les automates cellulaires. L'utilisation des réseaux de neurones requiert l'apprentissage d'une base de données de dispersion : des simulations CFD k-ϵ dans ce travail. Différents paramètres sont évalués lors de l'apprentissage : échantillonnage et architecture du réseau. Trois méthodologies sont développées : la première méthode permet d'estimer la dispersion continue en champ libre, par réseaux de neurones seuls ; la deuxième méthode utilise le réseau de neurones en tant que règle de transition de l'automate cellulaire pour le suivi de l'évolution d'une bouffée en champ libre ; la troisième méthode sépare la problématique : le calcul de l'écoulement est effectué par les réseaux de neurones alors que le calcul de la dispersion est réalisé par la résolution de l'équation d'advection-diffusion pour le suivi de l'évolution d'un nuage autour d'un obstacle cylindrique. La simulation de cas tests non appris, comparée à des simulations CFD, permet de valider les méthodes développées. Les temps de calcul mis en œuvre pour réaliser la dispersion sont en accord avec la cinétique d'une situation de crise. L'application à des données réelles doit être développée dans la perspective de rendre les modèles opérationnels. / Atmospheric dispersion of hazardous materials is an event that could lead to serious consequences, and modelling it is an important tool for anticipating industrial accidents. The objective of this work was to develop a model that is both fast and accurate, considering dispersion in the near field on an industrial site. The approach developed is based on models from artificial intelligence: neural networks and cellular automata. Using neural networks requires training on a database representative of the phenomenon, here CFD k-ϵ simulations, and involves identifying the important parameters: database sampling and network architecture. Three methodologies are developed. The first estimates continuous dispersion in the free field by neural networks alone. The second uses the neural network as the transition rule of a cellular automaton to estimate puff evolution in the free field. The third divides the problem: the flow calculation is performed by the neural network, while the dispersion is computed by solving the advection-diffusion equation to estimate the evolution of a cloud around a cylindrical obstacle. For all three methods, the generalization capability of the neural network was validated on a test database and on unlearned cases, compared against CFD simulations. Computation times are short relative to the kinetics of a crisis situation. Application to real data should be developed to make these models operational.
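The advection-diffusion update at the heart of the third method can be sketched as the local rule each cell applies from its neighbours' concentrations, which is exactly the structure a cellular automaton exploits. This 1-D explicit scheme with invented parameters is only an illustration of the idea, not the thesis implementation.

```python
def advect_diffuse(c, u, d, dt, dx):
    """One explicit finite-difference step of the 1-D advection-
    diffusion equation dc/dt = -u dc/dx + d d2c/dx2, with upwind
    advection (u > 0) and mirrored end cells."""
    n = len(c)
    new = c[:]
    for i in range(n):
        left = c[i - 1] if i > 0 else c[i]
        right = c[i + 1] if i < n - 1 else c[i]
        adv = -u * (c[i] - left) / dx               # upwind advection
        dif = d * (right - 2 * c[i] + left) / dx ** 2  # diffusion
        new[i] = c[i] + dt * (adv + dif)
    return new

# A puff released in the middle of the domain drifts downwind and spreads.
c = [0.0] * 5 + [1.0] + [0.0] * 5
for _ in range(20):
    c = advect_diffuse(c, u=0.5, d=0.1, dt=0.1, dx=1.0)
print([round(v, 3) for v in c])
```

In the second method the neural network replaces this hand-written rule, learning the transition directly from CFD data.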
