1101

Computer-Aided Diagnoses (CAD) System: An Artificial Neural Network Approach to MRI Analysis and Diagnosis of Alzheimer's Disease (AD)

Padilla Cerezo, Berizohar 01 December 2017
Alzheimer’s disease (AD) is a chronic, progressive, and irreversible syndrome that deteriorates cognitive function. Official death certificates for 2013 reported 84,767 deaths from Alzheimer’s disease, making it the 6th leading cause of death in the United States, and the prevalence of AD is estimated to double by 2050. The neurodegeneration of AD begins decades before symptoms of dementia are evident, so an efficient methodology for early and proper diagnosis can lead to more effective treatments. Neuroimaging techniques such as magnetic resonance imaging (MRI) can detect changes in the brain of living subjects, and medical imaging is the best diagnostic tool for determining brain atrophy; a significant limitation, however, is the level of training, methodology, and experience of the diagnostician. Computer-aided diagnosis (CAD) systems are therefore a promising tool for improving diagnostic outcomes. No publications addressing the use of feedforward Artificial Neural Networks (ANN) with MRI image attributes for the classification of AD were found. Consequently, this study investigates whether MRI images, specifically texture and frequency attributes combined with a feedforward ANN model, can classify individuals with AD. The study also compared the performance of a single MRI view against multiple views. The frequency and texture attributes and the MRI views, in combination with the feedforward artificial neural network, were tested against a clinician benchmark of 78 percent accuracy, 87 percent sensitivity, 71 percent specificity, and 78 percent precision, drawn from a study of 1,073 individuals. The study found that the Discrete Wavelet Transform (DWT) and the low-frequency Fourier Transform (FT) give results comparable to the clinicians; the FT, moreover, outperformed the clinicians with an accuracy of 85 percent, precision of 87 percent, sensitivity of 90 percent, and specificity of 75 percent. For texture, both single features and combinations of two or more features gave results comparable to the clinicians. The gray-level co-occurrence matrix (GLCOM), which combines texture features, was the highest-performing texture method, with 82 percent accuracy, 86 percent sensitivity, 76 percent specificity, and 86 percent precision; combination CII (energy and entropy) outperformed all other combinations with 78 percent accuracy, 88 percent sensitivity, 72 percent specificity, and 78 percent precision. Additionally, combining views can increase performance for certain texture attributes, although the axial view outperformed the sagittal and coronal views for frequency attributes. In conclusion, both texture and frequency characteristics, in combination with a feedforward backpropagation neural network, can perform at the level of the clinician, and even higher depending on the attribute and the view or combination of views used.
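
The texture pipeline described above can be illustrated with a short sketch: gray-level co-occurrence features (energy and entropy, as in the thesis's combination CII) extracted from an MRI slice feed a small feedforward network. The feature set, network size, and synthetic data below are illustrative assumptions, not the thesis's exact configuration; the scikit-image functions are graycomatrix/graycoprops (spelled greycomatrix before scikit-image 0.19).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(slice_2d, levels=64):
    """Energy and entropy from a gray-level co-occurrence matrix."""
    edges = np.linspace(slice_2d.min(), slice_2d.max(), levels)
    img = (np.digitize(slice_2d, edges) - 1).astype(np.uint8)   # quantize to [0, levels)
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    energy = graycoprops(glcm, "energy").mean()
    p = glcm[glcm > 0]
    entropy = -np.sum(p * np.log2(p))                           # GLCM entropy
    return np.array([energy, entropy])

rng = np.random.default_rng(0)
mri_slices = rng.random((20, 64, 64))       # stand-in for preprocessed MRI slices
y = rng.integers(0, 2, size=20)             # stand-in labels: 1 = AD, 0 = control

X = np.stack([glcm_features(s) for s in mri_slices])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```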
1102

Fluid-structure interaction with the application to the non-linear aeroelastic phenomena

Cremades Botella, Andrés 06 November 2023
The complex environmental situation and the legal requirements for decreasing pollutant emissions and fuel consumption have increased the interest in reducing the empty weight and drag of vehicles and in developing renewable energy sources. The aviation industry has therefore proposed new designs integrating high strength-to-weight materials, such as composites, and higher-aspect-ratio wings. These increases in aspect ratio have also reached wind energy generation: wind turbine rotor diameters keep growing, the massive off-shore facilities being a clear example. Larger, lighter structures deform more under aerodynamic loads, and the structural dynamics become strongly coupled to the air-structure interaction. This phenomenon, called aeroelasticity, combines the effects of the external aerodynamic loads, the inertial forces, and the internal elastic stresses of the structure. Their combination may damp the vibrations of the structure or, on the contrary, increase their amplitude, resulting in an instability. Aeroelastic phenomena can be simulated with different approaches: finite element analysis is the most widespread methodology for solving the solid elastic equations, computational fluid dynamics is the principal tool for general aerodynamic problems, and coupling the two yields an aeroelastic simulation. Nevertheless, the computational cost of such simulations is excessive for most engineering cases, so new methodologies are required. This work focuses on developing aeroelastic reduced-order models that compute the coupled phenomena without substantial accuracy losses. First, the complete three-dimensional structure is reduced to an equivalent section that reproduces the physics of the original solid. The equivalent section is coupled with two aerodynamic models: one uses forces calculated with computational fluid dynamics, the other a surrogate model based on artificial neural networks. Both show accurate agreement with the complete three-dimensional simulations in predicting the instability velocity; however, three-dimensional aerodynamic effects, load distributions, orthotropic materials, and structural couplings cannot be represented. To overcome these limitations, a second reduced-order model, based on a beam element solver, is proposed. The algorithm is designed to handle general orthotropic materials and different types of aeroelastic problems. The software is first shown to accurately simulate a composite beam with a square cross-section; the results are validated against the complete three-dimensional simulations, demonstrating the tool's ability to predict instabilities and the structural coupling effects caused by fiber orientation. The algorithm is then used to simulate a wind turbine blade, improving the operating range of the blades without weight penalties. Finally, a resistant membrane wing is simulated, obtaining high accuracy in the predicted flutter velocity compared with the complete coupled simulation; the only limitation of the model is the prediction of the membrane distortion. The work presents a set of reduced-order models that reduce the computational cost of aeroelastic simulations by orders of magnitude, and provides guidelines for selecting the appropriate reduced model for the problem of interest. / This thesis has been funded by the Spanish Ministry of Science, Innovation and Universities through the University Faculty Training (FPU) program, reference FPU19/02201. / Cremades Botella, A. (2023). Fluid-structure interaction with the application to the non-linear aeroelastic phenomena [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/199249
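
The "equivalent section" idea can be sketched with the classic two-degree-of-freedom typical section: pitch and plunge coupled through quasi-steady aerodynamics, with airspeed scanned until an eigenvalue of the aeroelastic system crosses into the right half-plane (divergence or flutter onset). All parameter values below are illustrative assumptions, not the thesis's models.

```python
import numpy as np

# Typical-section (pitch-plunge) parameters, per unit span -- assumed values
m, S, Ia = 50.0, 5.0, 4.0          # mass, static moment, pitch inertia
kh, ka = 2.0e5, 3.0e4              # plunge and pitch stiffness
rho, c, e, CLa = 1.225, 1.0, 0.25, 2 * np.pi  # air density, chord, AC-EA offset, lift slope

M = np.array([[m, S], [S, Ia]])    # inertially coupled structure
K = np.diag([kh, ka])

def eigvals_at(U):
    """Eigenvalues of the aeroelastic state matrix at airspeed U (quasi-steady lift)."""
    q = 0.5 * rho * U**2
    Ka = q * c * CLa * np.array([[0.0, 1.0], [0.0, -e]])       # aerodynamic stiffness
    Ca = q * c * CLa / U * np.array([[1.0, 0.0], [-e, 0.0]])   # aerodynamic damping
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K + Ka), -np.linalg.solve(M, Ca)]])
    return np.linalg.eigvals(A)

for U in np.arange(1.0, 300.0, 1.0):
    if np.max(eigvals_at(U).real) > 0.0:   # eigenvalue in the right half-plane
        print(f"aeroelastic instability onset near U = {U:.0f} m/s")
        break
```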
1103

Using machine learning to approximate the exchange-correlation energy

Cuierrier, Étienne 01 1900
The subject of this thesis is the development of new approximations for the exchange-correlation (XC) energy in Density Functional Theory (DFT). DFT calculates the electronic energy from the electronic density, a considerably simpler quantity than the wave function. DFT was developed during the 1960s and became the most popular method in quantum chemistry during the 1990s, thanks to its favourable cost/precision ratio. In practice, DFT is used by scientists and industry to predict infrared spectra, bond lengths, activation energies, etc. The Kohn-Sham approach in DFT is by far the most popular, since it is exact in theory and only the XC functional has to be approximated. The exact form of the XC functional is unknown, so the development of new approximations to it is an important field of theoretical chemistry. In this thesis, we describe the development of new non-local methods and the use of machine learning to improve the accuracy and efficiency of XC energy calculations. The first article [Cuierrier, Roy, and Ernzerhof, JCP (2021)] concerns the development of non-local approximations of the XC hole. Our research group previously developed the correlation factor approach (CFX) [Pavlíková Přecechtělová, Bahmann, Kaupp, and Ernzerhof, JCP (2015)]. The predictions of CFX for molecular properties compare favourably to other common functionals; however, CFX suffers from one-electron self-interaction error (SIE). Non-local models such as the X factor [Antaya, Zhou, and Ernzerhof, PRA (2014)] can fix the SIE, so the goal of this project is to combine CFX with the X factor to build a non-local XC factor that is exact for one-electron systems while preserving CFX's good molecular predictions. We show that our method is indeed exact for one-electron systems; however, our simple XC factor is not suited to the oscillatory behaviour of the non-local density, and the results for molecules are inferior to hybrid functionals. Our study explains why non-local models, such as the weighted density approximation [Gunnarsson, Jonson, and Lundqvist, PLA (1976)], are not as successful in chemistry as the common DFT functionals (PBE, B3LYP, etc.): the non-local electronic density is an elaborate function, often with many local minima and maxima, and simple XC factors cannot attenuate these oscillations, so a more sophisticated XC factor is required. The second article [Cuierrier, Roy, and Ernzerhof, JCP (2021)] addresses the difficulties observed during the first project. Machine learning (ML) is gaining popularity in many fields of science, including DFT; neural networks (NN) are particularly powerful, since their structure allows considerable flexibility when approximating functions. In this chapter, we use an NN to approximate the X hole by imposing many of its known physical and mathematical constraints during training. The results we obtain for the X energies of atoms reveal the potential of this method for automating the construction of X holes. The second part of the paper shows that an NN can be used to add further constraints to an existing X hole approximation, which would be quite useful for improving CFX; the X hole obtained for a stretched H2 molecule is promising compared to the exact values. Without an NN, it is difficult to find an analytic equation for this task. ML is still a new tool in DFT, and our work shows that it has considerable potential for the construction of XC hole approximations. Finally, the last chapter [Cuierrier, Roy, Wang, and Ernzerhof, JCP (2022)] describes a project that also uses NNs. Previous work by our group showed that the fourth-order term of the expansion of the X hole in the interelectronic distance, Tσ(r), could improve the calculation of the X energy for molecules [Wang, Zhou, and Ernzerhof, PRA (2017)]; however, developing a model that satisfies both the second- and fourth-order terms simultaneously proved difficult. Using the ML expertise developed during the second project, we build a new NN that uses Tσ(r) as an input to approximate the X energy. Starting from the PBE functional, we train the NN to reproduce the X energy of the hybrid functional PBEh. Our results show a considerable improvement over PBE for the calculation of atomization energies, barrier heights, and the prediction of electronic densities. This study confirms that Tσ(r) contains useful information for building functionals in DFT; since Tσ(r) is in principle faster to compute than exact exchange, our new functionals could approach hybrid accuracy at lower cost.
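
A minimal sketch of the third article's recipe: a neural network receives local ingredients (density, reduced gradient, and a stand-in for the fourth-order coefficient Tσ(r)) and learns the correction from a PBE-level exchange energy density toward a hybrid-level target. The data and the target's Tσ dependence are synthetic assumptions for illustration; only the LDA prefactor and the PBE enhancement factor are standard.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
rho = rng.uniform(1e-3, 1.0, n)              # electron density samples
s = rng.uniform(0.0, 3.0, n)                 # reduced density gradient
T_sigma = rng.uniform(-1.0, 1.0, n)          # hypothetical 4th-order hole term

Cx = -(3.0 / 4.0) * (3.0 / np.pi) ** (1.0 / 3.0)
ex_lda = Cx * rho ** (4.0 / 3.0)             # LDA exchange energy density
kappa, mu = 0.804, 0.21951
Fx_pbe = 1.0 + kappa - kappa / (1.0 + mu * s**2 / kappa)  # PBE enhancement factor
ex_pbe = ex_lda * Fx_pbe

# Synthetic "hybrid" target: PBE plus a smooth T_sigma-dependent shift (assumed)
ex_target = ex_pbe * (1.0 + 0.1 * np.tanh(T_sigma))

X = np.column_stack([rho, s, T_sigma])
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
nn.fit(X, ex_target - ex_pbe)                # learn only the correction term
print("train R^2:", nn.score(X, ex_target - ex_pbe))
```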
1104

[en] DESIGN AND ACTIVATION OF A PNEUMATIC GECKO ROBOT WITH APPLICATION OF MACHINE LEARNING / [pt] PROJETO E ACIONAMENTO DE UM ROBÔ LAGARTIXA PNEUMÁTICO COM APLICAÇÃO DE APRENDIZADO COMPUTACIONAL

MATHEUS RODRIGUES GOEBEL 07 November 2022
This work presents the mechanical design of a pneumatic gecko robot capable of moving on surfaces inclined with respect to the ground, using only linear actuators powered by compressed air. As a fundamental part of the mechanical design, a gripper system that generates vacuum mechanically is developed, significantly reducing the robot's energy consumption compared to the commercial accessories generally used for this clamping task. With the concept prototype manufactured and assembled, a series of tests is conducted, and the collected data are used to train an artificial neural network. This network enables computational learning of the robot's movements and thus the optimization of its speed for a given gait. After training the neural network, the prototype is submitted to new experiments to verify the efficiency of the training and its real impact on the robot. Finally, using a camera system, the robot's movements are tracked in several different situations, generating comparative graphs to analyze the repeatability and reliability of the system.
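
A hedged sketch of the learning step described above: a small network is fit on (gait parameters, measured speed) pairs and then used to pick the fastest candidate gait. The parameter set and the toy speed model are assumptions, not the thesis's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# assumed gait parameters: [stride length (m), actuation pressure (bar), phase offset]
gaits = rng.uniform([0.02, 2.0, 0.0], [0.10, 6.0, 1.0], size=(200, 3))
speed = (gaits[:, 0] * gaits[:, 1]            # toy ground truth: longer, stronger,
         * np.sin(np.pi * gaits[:, 2])        # well-phased strides move faster
         + rng.normal(0, 0.005, 200))         # measurement noise

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                     random_state=0).fit(gaits, speed)

# pick the predicted-fastest gait among random candidates
candidates = rng.uniform([0.02, 2.0, 0.0], [0.10, 6.0, 1.0], size=(1000, 3))
best = candidates[np.argmax(model.predict(candidates))]
print("predicted-fastest gait [m, bar, phase]:", np.round(best, 3))
```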
1105

On dysgraphia diagnosis support via the automation of the BVSCO test scoring : Leveraging deep learning techniques to support medical diagnosis of dysgraphia / Om dysgrafi diagnosstöd via automatisering av BVSCO-testpoäng : Utnyttja tekniker för djupinlärning för att stödja medicinsk diagnos av dysgrafi

Sommaruga, Riccardo January 2022
Dysgraphia is a rather widespread learning disorder in today's society. It is well established that an early diagnosis of this writing disorder can lead to improvement in writing skills. However, although there is as yet no comprehensive standard process for the evaluation of dysgraphia, most of the tests used for this purpose must be administered at a physician's office. At the same time, the pandemic triggered by COVID-19 has forced people to stay at home and opened the door to the development of online medical consultations. The present study therefore proposes an automated pipeline to provide pre-clinical diagnosis of dysgraphia. In particular, it investigates the possibility of applying deep learning techniques to the most widely used test for assessing writing difficulties in Italy, the BVSCO-2. This test consists of several writing exercises performed by the child on paper under the supervision of a doctor. To test the hypothesis that children's writing impairments can be recognized even at a distance, an innovative system has been developed. It leverages an already developed customized tablet application that captures the graphemes produced by the child, and an artificial neural network that processes the images and recognizes the handwritten text. The experimental results were analyzed with different methods and compared with the diagnosis a doctor would have provided had the test been carried out normally. Despite a slight fixed bias introduced by the machine for some specific exercises, the results are very promising in terms of both handwritten text recognition and the diagnosis of children with dysgraphia, giving a satisfactory answer to the proposed research question.
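
Once the network has transcribed the child's writing, the scoring step reduces to comparing the transcript with the expected text and timing the exercise. A minimal sketch follows; the exercise format and metrics are illustrative assumptions, not the published BVSCO-2 norms.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between recognized and expected text."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def score_exercise(recognized: str, expected: str, seconds: float) -> dict:
    """Error count plus writing speed in graphemes per minute."""
    errors = levenshtein(recognized, expected)
    speed = len(recognized) / (seconds / 60.0)
    return {"errors": errors, "graphemes_per_min": round(speed, 1)}

# Hypothetical exercise: the child copies a sentence in 45 seconds.
print(score_exercise("la volpe core nel bosco", "la volpe corre nel bosco", 45.0))
```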
1106

Autoencoders for natural language semantics

Bosc, Tom 09 1900
Autoencoders are artificial neural networks that learn representations. In an autoencoder, the encoder transforms an input into a representation, and the decoder tries to recover the input from the representation. This thesis compiles three applications of these models to natural language processing: learning word and sentence representations, and better understanding compositionality. In the first paper, we show that we can autoencode dictionary definitions to learn word vectors, called definition embeddings. We propose a new penalty that allows us to use these definition embeddings as inputs to the encoder itself, and also to blend them with pretrained distributional vectors. The definition embeddings capture semantic similarity better than distributional methods such as word2vec. Moreover, the encoder generalizes to some extent to definitions unseen during training. In the second paper, we analyze the representations learned by sequence-to-sequence variational autoencoders. We find that the encoders tend to memorize the first few words and the length of the input sentence, which drastically limits their usefulness as controllable generative models. We also analyze simpler architectural variants that are agnostic to word order, as well as pretraining-based methods. The representations they learn tend to encode global features such as topic and sentiment more markedly, and this shows in the reconstructions they produce. In the third paper, we use language-emergence simulations to study compositionality. A speaker (the encoder) observes an input and produces a message about it; a listener (the decoder) tries to reconstruct what the speaker talked about from the message. We hypothesize that producing sentences involving several entities, such as "John loves Mary", fundamentally requires perceiving each entity, John and Mary, as a distinct whole. We endow some agents with this ability via an attention mechanism and deprive others of it. We propose various metrics to measure whether the emergent languages are natural in terms of their argument structure, and whether they are more analytic or synthetic. Agents that perceive entities as distinct wholes exchange more natural messages than the other agents.
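
A toy version of the first paper's setup: definitions are encoded into a single vector (the definition embedding) and decoded back, here with a mean-pooled encoder and a bag-of-words decoder far simpler than the paper's sequence models. Vocabulary, data, and architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

vocab = {w: i for i, w in enumerate(
    "<pad> a feline small domestic animal canine loyal".split())}
defs = [("cat", "a small domestic feline animal"),
        ("dog", "a loyal domestic canine animal")]

class DefinitionAutoencoder(nn.Module):
    def __init__(self, v, d=16):
        super().__init__()
        self.emb = nn.Embedding(v, d)
        self.dec = nn.Linear(d, v)              # bag-of-words reconstruction head
    def forward(self, ids):
        z = self.emb(ids).mean(dim=0)           # definition embedding
        return z, self.dec(z)

model = DefinitionAutoencoder(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    loss = 0.0
    for _, definition in defs:
        ids = torch.tensor([vocab[w] for w in definition.split()])
        target = torch.zeros(len(vocab)).scatter_(0, ids, 1.0)
        _, logits = model(ids)
        loss = loss + nn.functional.binary_cross_entropy_with_logits(logits, target)
    opt.zero_grad(); loss.backward(); opt.step()

z_cat, _ = model(torch.tensor([vocab[w] for w in defs[0][1].split()]))
print("definition embedding for 'cat' (first 4 dims):", z_cat.detach()[:4])
```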
1107

Deep Bayesian Neural Networks for Prediction of Insurance Premiums / Djupa Bayesianska neurala nätverk för prediktioner på fordonsförsäkringar

Olsgärde, Nils January 2021
In this project, the problem concerns predicting insurance premiums, particularly vehicle insurance premiums. The predictions were made with the help of Bayesian Neural Networks (BNNs), a type of Artificial Neural Network (ANN). The central concept of BNNs is that the parameters of the network follow distributions, which is beneficial. The modeling was done with TensorFlow's Probability API, where a few models were built and tested on the data provided. The results indicate that predicting insurance premiums is feasible; however, the output distributions obtained here were too wide to be usable. More data, both in volume and in the number of features, and better-structured data are needed. With better data, there is potential to build BNN and other machine learning (ML) models useful for production purposes.
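
A minimal sketch of such a model with TensorFlow Probability, in the spirit described above: DenseFlipout layers place distributions over the weights, and the head outputs a Normal distribution so each premium prediction carries an uncertainty estimate. The features and data are synthetic assumptions, not insurance records.

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)).astype("float32")        # stand-ins for policy features
y = (100 + 30 * X[:, 0] - 20 * X[:, 1]
     + rng.normal(0, 5, 500)).astype("float32")        # synthetic premiums

model = tf.keras.Sequential([
    tfp.layers.DenseFlipout(32, activation="relu"),    # weights follow distributions
    tfp.layers.DenseFlipout(2),                        # parameters of the output Normal
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(t[..., 1:]))),
])
nll = lambda y_true, dist: -dist.log_prob(y_true)      # negative log-likelihood loss
model.compile(optimizer="adam", loss=nll)
model.fit(X, y[:, None], epochs=50, verbose=0)

pred = model(X[:3])                                    # a distribution, not a point
print("mean:", pred.mean().numpy().ravel(), "std:", pred.stddev().numpy().ravel())
```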
1108

3D Dose Prediction from Partial Dose Calculations using Convolutional Deep Learning models / 3D-dosförutsägelser från partiella dosberäkningar med hjälp av konvolutionella Deep Learning-modeller

Liberman Bronfman, Sergio Felipe January 2021
In this thesis, the problem of predicting the full dose distribution from a partially modeled dose calculation is addressed. Two solutions were studied: a vanilla Hierarchically Densely Connected U-net (HDUnet) and a Conditional Generative Adversarial Network (CGAN) with an HDUnet as generator. The CGAN approach is a 3D version of Pix2Pix [1] for image-to-image translation, which we name Dose2Dose. The research question this project tackled is whether Dose2Dose can learn more effective dose transformations than the vanilla HDUnet. To answer this, the models were trained on dose calculations of phantom slabs generated in pairs of inputs (doses without magnetic field) and targets (doses with magnetic field). Once trained, the models were evaluated and compared in various aspects. The evidence gathered suggests that the vanilla HDUnet model learns to generate better dose predictions than the generative model. However, in terms of the resulting dose distributions, the samples generated by Dose2Dose are as likely to belong to the target dose-calculation distribution as those of the vanilla HDUnet. The results contain errors of considerable magnitude and do not pass clinical suitability tests.
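
The Dose2Dose objective can be sketched as a Pix2Pix-style conditional GAN in 3D: the generator maps the dose without magnetic field to the dose with it, trained with an adversarial term plus an L1 fidelity term. Tiny convolutional stand-ins replace the HDUnet generator and the patch discriminator; shapes, weights, and the lambda = 100 setting (taken from Pix2Pix) are illustrative assumptions.

```python
import torch
import torch.nn as nn

G = nn.Sequential(                          # stand-in for the HDUnet generator
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, 3, padding=1))
D = nn.Sequential(                          # patch discriminator on (input, output) pairs
    nn.Conv3d(2, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, 3, padding=1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

x = torch.rand(2, 1, 16, 16, 16)            # dose without B-field (synthetic)
y = torch.rand(2, 1, 16, 16, 16)            # dose with B-field (synthetic)
ones, zeros = torch.ones(2, 1, 16, 16, 16), torch.zeros(2, 1, 16, 16, 16)

for step in range(5):
    # discriminator: real pairs -> 1, fake pairs -> 0
    fake = G(x).detach()
    d_loss = bce(D(torch.cat([x, y], 1)), ones) + \
             bce(D(torch.cat([x, fake], 1)), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: fool D, plus L1 fidelity to the target dose
    fake = G(x)
    g_loss = bce(D(torch.cat([x, fake], 1)), ones) \
             + 100.0 * nn.functional.l1_loss(fake, y)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print("final generator loss:", float(g_loss))
```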
1109

[pt] IDENTIFICAÇÃO NÃO-LINEAR E CONTROLE PREDITIVO DA DINÂMICA DO VEÍCULO / [en] NONLINEAR IDENTIFICATION AND PREDICTIVE CONTROL OF VEHICLE DYNAMICS

LUCAS CASTRO SOUSA 28 March 2023
Automated vehicles must travel in a given environment while detecting, planning, and following a safe path. To be safer than humans, they must be able to perform these tasks as well as or better than human drivers under different critical conditions. An essential part of the study of automated vehicles is the development of representative models that are accurate and computationally efficient. To cope with these problems, the present work applies artificial neural networks and system identification methods to vehicle modeling and trajectory-tracking control. First, neural architectures are used to capture the tire characteristics present in the interaction between lateral and longitudinal vehicle dynamics, reducing the computational cost of predictive controllers. Second, a combination of black-box models is used to improve predictive control. Then, a hybrid approach combines physics-based and data-driven models with black-box modeling of the discrepancies; this approach improves the accuracy of vehicle modeling by proposing a discrepancy model that captures mismatches between vehicle models and measured data. Results are shown for the proposed methods applied to systems with simulated and real data and compared with approaches from the literature, showing an increase in accuracy (up to 40 percent) in terms of error-based metrics with lower computational effort (up to an 88 percent reduction) than conventional predictive controllers.
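
A hedged sketch of the hybrid discrepancy idea: a kinematic bicycle model predicts the next state, and a small network is trained on the residual between measurement and physics, so the combined model captures unmodeled (tire-like) effects. The dynamics, data, and residual features are illustrative assumptions, not the thesis's models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

L_wb, dt = 2.7, 0.05                         # wheelbase [m], time step [s]

def bicycle_step(state, v, delta):
    """Kinematic bicycle model; state = [x, y, yaw]."""
    x, y, yaw = state
    return np.array([x + v * np.cos(yaw) * dt,
                     y + v * np.sin(yaw) * dt,
                     yaw + v / L_wb * np.tan(delta) * dt])

rng = np.random.default_rng(0)
inputs, residuals = [], []
state = np.zeros(3)
for _ in range(1000):
    v, delta = rng.uniform(5, 20), rng.uniform(-0.3, 0.3)
    pred = bicycle_step(state, v, delta)
    # synthetic "measurement": physics plus a tire-like yaw nonlinearity
    meas = pred + np.array([0, 0, -0.02 * np.tanh(3 * delta) * v * dt])
    inputs.append([v, delta, state[2]])
    residuals.append(meas - pred)
    state = meas

disc = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                    random_state=0).fit(np.array(inputs), np.array(residuals))

def hybrid_step(state, v, delta):
    """Physics prediction corrected by the learned discrepancy model."""
    return bicycle_step(state, v, delta) + disc.predict([[v, delta, state[2]]])[0]

print(hybrid_step(np.zeros(3), 10.0, 0.1))
```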
1110

Restructuring of Wetland Communities in Response to a Changing Climate at Multiple Spatial and Taxonomic Scales

Garris, Heath William January 2013
No description available.
